CN113158997A - Grain depot monitoring image denoising method, device and medium based on deep learning - Google Patents

Grain depot monitoring image denoising method, device and medium based on deep learning

Info

Publication number
CN113158997A
Authority
CN
China
Prior art keywords
image
sub
initial
subgraph
grain depot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110576040.0A
Other languages
Chinese (zh)
Other versions
CN113158997B (en)
Inventor
李智慧
甄彤
于虹
吴建军
高辉
张仲凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Technology
Priority to CN202110576040.0A
Publication of CN113158997A
Application granted
Publication of CN113158997B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/30: Noise filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a deep-learning-based method for denoising grain depot monitoring images, comprising the following steps: acquiring an initial monitoring image of a grain depot; decomposing the initial monitoring image by wavelet transform to obtain four sub-images; denoising the high-frequency sub-images with a trained generative adversarial network (GAN) model to obtain denoised high-frequency sub-images that satisfy preset conditions, where the GAN model comprises a generator that produces a denoised sub-image from an input sub-image and a discriminator that judges whether the denoised sub-image is a clean image satisfying the preset conditions; and reconstructing the image to obtain a denoised monitoring image, thereby clarifying the low-quality images captured by the grain depot monitoring system.

Description

Grain depot monitoring image denoising method, device and medium based on deep learning
Technical Field
The invention relates to the field of image processing, in particular to a grain depot monitoring image denoising method, device and medium based on deep learning.
Background
To keep pace with the development of the times and better manage grain safety, construction of intelligent grain depots is advancing rapidly. Intelligent security systems and target identification and tracking equipment are widely used to safeguard grain depots. Intelligent video surveillance systems can monitor the working conditions of important places in a grain depot in real time, such as the main grain inbound and outbound channels, storage areas, operating points, instrument rooms, chemical stores and the like. They can also raise alarms for abnormal behaviour such as crowd gathering, boundary crossing, area intrusion and operator violations, which reduces on-site inspection, lowers the cost of manual grain depot management, and facilitates management and data collection during grain storage operations. However, owing to the hardware limitations of the various image acquisition devices in the grain depot and the harsh environment, noise is inevitably introduced during image acquisition, which adversely affects the accuracy of subsequent image data analysis and processing.
Disclosure of Invention
The invention aims to provide a deep-learning-based grain depot monitoring image denoising method, device and medium, so that low-quality images captured by the grain depot monitoring system can be clarified.
The invention is realized as follows:
The deep-learning-based grain depot monitoring image denoising method comprises the following steps:
acquiring an initial monitoring image of a grain depot;
decomposing the initial monitoring image by wavelet transform to obtain four sub-images, namely an initial LL sub-image, an initial LH sub-image, an initial HL sub-image and an initial HH sub-image;
denoising the initial LH sub-image, the initial HL sub-image and the initial HH sub-image respectively with a trained generative adversarial network (GAN) model to obtain a denoised LH sub-image, a denoised HL sub-image and a denoised HH sub-image that satisfy preset conditions, where the GAN model comprises a generator that produces a denoised sub-image from an input sub-image and a discriminator that judges whether the denoised sub-image is a clean image satisfying the preset conditions;
and reconstructing the denoised LH, HL and HH sub-images that satisfy the preset conditions together with the initial LL sub-image obtained by the wavelet transform to obtain a denoised monitoring image.
Preferably, the generator comprises a multi-scale feature extraction module, a feature fusion module, a residual module and an output module. The multi-scale feature extraction module extracts multiple image features from the input sub-image; the feature fusion module fuses the extracted image features to obtain a fused sub-image; the residual module obtains a noise residual image of the fused sub-image; and the output module obtains the denoised sub-image from the noise residual image and the input sub-image.
Preferably, the multi-scale feature extraction module extracts four image features of the input sub-image using four different convolution kernels.
Preferably, the residual module comprises three dilated convolutional layers.
Preferably, the dilated convolution in each dilated convolutional layer is equivalent to a sparse filter of size Q × Q, where Q = 2r + 1 and r denotes the depth of the dilated convolutional layer.
Preferably, the activation function of the dilated convolutional layers is PReLU, defined as:
f(x) = x,    if x > 0
f(x) = a·x,  if x ≤ 0
where x is the input of the activation function, f(x) is its output, and a is a learnable parameter of the activation function.
Preferably, before the initial monitoring image of the grain depot is acquired, the method further comprises the following steps:
acquiring a sample data set for training the generator of the generative adversarial network model, the sample data set comprising a plurality of grain depot pictures;
performing data enhancement on the grain depot pictures in the sample data set and updating the sample data set for the first time, the amount of data after the first update being a multiple of the amount before the update.
Preferably, after the data enhancement is performed and the sample data set is updated for the first time, the method further comprises the following step:
adding noise to the grain depot pictures in the first-updated sample data set so as to update the sample data set for a second time.
The invention also provides a deep-learning-based grain depot monitoring image denoising device, comprising:
an acquisition module for acquiring an initial monitoring image of the grain depot;
a decomposition module for decomposing the initial monitoring image by wavelet transform into four sub-images, namely an initial LL sub-image, an initial LH sub-image, an initial HL sub-image and an initial HH sub-image;
a denoising module for denoising the initial LH, HL and HH sub-images respectively with the trained generative adversarial network model to obtain denoised LH, HL and HH sub-images that satisfy preset conditions, where the GAN model comprises a generator that produces a denoised sub-image from an input sub-image and a discriminator that judges whether the denoised sub-image is a clean image satisfying the preset conditions;
and a reconstruction module for reconstructing the denoised LH, HL and HH sub-images that satisfy the preset conditions together with the initial LL sub-image obtained by the wavelet transform to obtain the denoised monitoring image.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the deep learning-based grain depot monitoring image denoising method according to any one of claims 1 to 8.
The invention has the following beneficial effects: wavelet transform effectively separates the low-frequency basic image information from the high-frequency noise information, the generative adversarial network generates high-frequency sub-images with sufficiently low noise, and finally the low-frequency sub-image obtained by wavelet decomposition is reconstructed together with the three low-noise high-frequency sub-images produced by the generator, yielding a clearer, higher-quality grain depot monitoring image.
Furthermore, an optimized DnCNN image denoising network is used as the generator. The generator employs a multi-scale feature extraction module to extract sufficient shallow features from the high-frequency sub-images, compensating for the limited target information and narrow feature domain of grain depot monitoring images and increasing the network's adaptability to scale, which improves the quality of the generated high-frequency sub-images and produces monitoring images with sharper texture detail.
Furthermore, the residual module preferably comprises three dilated convolutional layers, which enlarge the receptive field to capture more sub-image information, further enrich the detail texture of the generated high-frequency sub-images and improve the quality of the reconstructed monitoring image, while also simplifying the structure of the DnCNN denoising network.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flowchart of a deep learning-based grain depot monitoring image denoising method according to an embodiment of the present invention;
FIG. 2 is a schematic process diagram of a grain depot monitoring image denoising method based on deep learning according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a generator according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a multi-scale feature extraction module according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a grain depot monitoring image denoising device based on deep learning according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "first", "second", and the like are used for distinguishing between the descriptions and are not to be construed as indicating or implying relative importance.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The environment in a grain depot is relatively complex and daily grain storage operations are rather particular, so images captured by the monitoring system are prone to noise. The causes of image noise in grain depots summarized herein come mainly from the following two aspects:
1) To keep the grain free from insect damage during storage, it must be fumigated regularly. During fumigation, the field of view around the monitoring equipment is dim and unevenly lit, which introduces noise. Fumigation also inevitably produces the corrosive gas phosphine, and long-term chemical attack can corrode the wiring of the monitoring equipment, contaminate the images and generate noise.
2) Daily grain warehousing and ex-warehousing operations produce large amounts of dust, vibration and so on, leaving the monitoring equipment in a harsh, unstable working environment, so the sensors in the grain depot's monitoring equipment may not work normally and noise is generated.
In order to remove noise in grain depot monitoring images, the invention provides a grain depot monitoring image denoising method based on deep learning, which comprises the following steps as shown in fig. 1 and fig. 2:
s101, acquiring an initial monitoring image of a grain depot;
s102, decomposing the initial monitoring image by using wavelet transformation to obtain four sub-images, wherein the four sub-images are an initial LL sub-image, an initial LH sub-image, an initial HL sub-image and an initial HH sub-image respectively;
s103, denoising the initial LH sub-image, the initial HL sub-image and the initial HH sub-image respectively by utilizing the trained generated confrontation network model to obtain a denoised LH sub-image, a denoised HL sub-image and a denoised HH sub-image which meet preset conditions; the generation countermeasure network model comprises a generator for generating a denoising subgraph according to an input subgraph and a discriminator for judging whether the denoising subgraph is a pure picture meeting a preset condition;
and S104, reconstructing by using the de-noised LH sub-image, the de-noised HL sub-image, the de-noised HH sub-image which meet the preset condition and the initial LL sub-image obtained by wavelet transformation to obtain a de-noised monitoring image.
In step S102, the wavelet transform effectively separates the low-frequency basic image information from the high-frequency noise information: the initial LL sub-image contains almost no noise, while the initial LH, HL and HH sub-images mainly carry the details, edges and noise of the noisy image.
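For illustration, the wavelet decomposition of step S102 and the reconstruction of step S104 can be sketched with the PyWavelets library as follows; the wavelet basis (Haar) and the single-level decomposition are assumptions made only for this sketch, since the patent does not specify them.

import numpy as np
import pywt


def decompose(image: np.ndarray):
    """Single-level 2-D DWT: returns the LL, LH, HL and HH sub-images."""
    ll, (lh, hl, hh) = pywt.dwt2(image, "haar")
    return ll, lh, hl, hh


def reconstruct(ll: np.ndarray, lh: np.ndarray, hl: np.ndarray, hh: np.ndarray):
    """Inverse 2-D DWT: rebuilds the image from the four sub-images."""
    return pywt.idwt2((ll, (lh, hl, hh)), "haar")


if __name__ == "__main__":
    noisy = np.random.rand(256, 256).astype(np.float32)  # stand-in for a monitoring frame
    ll, lh, hl, hh = decompose(noisy)
    # In the method, lh, hl and hh would be denoised by the trained generator here.
    restored = reconstruct(ll, lh, hl, hh)
    print(restored.shape)  # (256, 256)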
In step S103, while the generative adversarial network model is being trained, the generator produces sub-images with as little noise as possible in order to deceive the discriminator, whose goal is to distinguish the low-noise sub-images produced by the generator from noise-free sub-images. The generator and the discriminator thus form a dynamic game, and the generator learns to produce sub-images with sufficiently low noise, until the discriminator can no longer reliably tell them apart from noise-free sub-images.
Therefore, after the initial LL sub-image obtained by wavelet decomposition (which carries the basic information of the original image) is reconstructed in step S104 together with the three low-noise high-frequency sub-images produced by the generator, a clearer, higher-quality grain depot monitoring image is obtained.
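A schematic training step for this adversarial game is sketched below in PyTorch; the binary cross-entropy losses, the L1 content term and the assumption that the discriminator outputs probabilities are illustrative choices, not the patent's exact configuration.

import torch
import torch.nn.functional as F


def train_step(generator, discriminator, opt_g, opt_d, noisy_sub, clean_sub):
    # 1) Discriminator: label clean sub-images as real, generated ones as fake.
    opt_d.zero_grad()
    fake_sub = generator(noisy_sub).detach()
    d_real = discriminator(clean_sub)   # assumed to output probabilities in [0, 1]
    d_fake = discriminator(fake_sub)
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # 2) Generator: fool the discriminator and stay close to the clean sub-image.
    opt_g.zero_grad()
    fake_sub = generator(noisy_sub)
    d_fake = discriminator(fake_sub)
    loss_g = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake)) + \
             F.l1_loss(fake_sub, clean_sub)  # content term, assumed for the sketch
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()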
In this embodiment, the generator preferably employs an optimized DnCNN image denoising network. As shown in fig. 3, the generator comprises a multi-scale feature extraction module 201, a feature fusion module 202, a residual module 203 and an output module 204. The multi-scale feature extraction module 201 extracts multiple image features from the input sub-image; the feature fusion module 202 fuses the extracted features into a fused sub-image; the residual module 203 obtains a noise residual image of the fused sub-image; and the output module 204 obtains the denoised sub-image from the noise residual image and the input sub-image.
Because a grain depot is generally spacious and the monitored scene may not change for long periods when no staff are working, relatively little information is available for recovering high-quality images; even when grain depot images are used for network training, the target information in the training set is not comprehensive and the training features are not rich enough. The multi-scale feature extraction module 201 is therefore used to extract sufficient shallow features from the high-frequency sub-images, compensating for the limited target information and narrow feature domain of grain depot monitoring images and increasing the network's adaptability to scale.
Preferably, the multi-scale feature extraction module 201 extracts four image features of the input sub-image by using four different convolution kernels.
In this embodiment, as shown in fig. 4, the convolution kernel sizes are 1 × 1, 3 × 3, 5 × 5 and 7 × 7, with 16 kernels of each size.
Because convolution kernels of different sizes extract different image information, the input image is scanned with the centres of the four kernels kept aligned and synchronized, so that several kinds of feature information are obtained while the output sizes remain identical. The extracted features are then concatenated; batch normalization is applied between the multi-scale convolution kernels and the activation function, and the activation function is PReLU, yielding 64 feature maps.
The activation function PReLU is defined as:
f(x) = x,    if x > 0
f(x) = a·x,  if x ≤ 0
where x is the input of the activation function, f(x) is its output, and a is a learnable parameter of PReLU that is adjusted during training. PReLU adds only a small number of parameters but noticeably improves the training of the image denoising network.
Specifically, the principle of using batch normalization between the multi-scale convolution kernels and the activation function is as follows.
The batch normalization computation is:
BN(k_i) = γ · k_norm + β
where γ and β are learnable parameters and BN(k_i) is the distribution of k_norm after scaling and shifting by γ and β.
k_norm is defined as:
k_norm = (k_i − μ) / sqrt(σ² + ξ)
where k_norm is the normalized result, k_i denotes the i-th pre-activation neuron node in the network, μ is the sample mean, σ² is the sample variance, and ξ is a very small positive number that keeps the denominator well defined.
This computation is applied to the image features extracted by each convolution kernel; normalizing the feature data lets every layer of the network learn the same feature distribution, which speeds up convergence and improves training efficiency.
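A minimal PyTorch sketch of the multi-scale feature extraction module described above, assuming single-channel input sub-images and "same" padding so that the four branch outputs align before concatenation:

import torch
import torch.nn as nn


class MultiScaleFeatureExtraction(nn.Module):
    """Four parallel convolutions (1x1, 3x3, 5x5, 7x7; 16 filters each),
    concatenated to 64 feature maps, then batch normalization and PReLU."""

    def __init__(self, in_channels: int = 1, filters_per_branch: int = 16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, filters_per_branch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5, 7)  # padding keeps all branch outputs the same size
        ])
        self.bn = nn.BatchNorm2d(4 * filters_per_branch)  # normalization before activation
        self.act = nn.PReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.bn(features))  # 64 feature maps


if __name__ == "__main__":
    module = MultiScaleFeatureExtraction()
    out = module(torch.randn(1, 1, 64, 64))  # a 64x64 single-channel sub-image
    print(out.shape)  # torch.Size([1, 64, 64, 64])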
Preferably, the residual module 203 comprises three dilated convolutional layers, which enlarge the receptive field to capture more sub-image information, recover damaged pixels, further enrich the detail texture of the generated high-frequency sub-images and improve the quality of the reconstructed grain depot monitoring image. Compared with the 17-layer structure of the original DnCNN denoising network, the generator (the optimized denoising network) of the invention has fewer layers and a better image processing effect.
In this embodiment, the dilated convolution in each dilated convolutional layer is equivalent to a sparse filter of size Q × Q, where Q = 2r + 1 and r denotes the depth of the dilated convolutional layer; r is set to 1, 2 and 3, respectively, and zero-padding is applied in every layer so that the input and output of the ResNet unit always have the same size.
Specifically, in this embodiment the activation function of the dilated convolutional layers is also PReLU.
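A sketch of such a residual module in PyTorch, assuming 64 feature channels from the fusion stage; the final 3 × 3 convolution that maps the features to a single-channel noise residual is an assumption added here for completeness, not a detail stated in the patent.

import torch
import torch.nn as nn


class DilatedResidualModule(nn.Module):
    """Three 3x3 dilated convolutions with dilation rates r = 1, 2, 3
    (effective filter size Q = 2r + 1), zero-padded so the spatial size
    never changes, each followed by PReLU; outputs a noise residual map."""

    def __init__(self, channels: int = 64, out_channels: int = 1):
        super().__init__()
        layers = []
        for r in (1, 2, 3):
            # padding = r keeps input and output sizes identical (zero-padding)
            layers += [nn.Conv2d(channels, channels, kernel_size=3, dilation=r, padding=r),
                       nn.PReLU()]
        self.body = nn.Sequential(*layers)
        self.to_residual = nn.Conv2d(channels, out_channels, kernel_size=3, padding=1)

    def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
        return self.to_residual(self.body(fused_features))  # estimated noise residual

# The output module would then subtract this residual from the input sub-image
# to obtain the denoised sub-image (residual learning, as in DnCNN).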
in this embodiment, before obtaining the initial monitoring image of the grain depot, the method further includes the following steps:
acquiring a sample data set used for training a generator in a generated confrontation network model; the sample data set comprises a plurality of grain depot pictures;
performing data enhancement processing on the grain depot pictures in the sample data set and updating the sample data set for the first time; the amount of data in the sample data set after the first update is a multiple of the amount of data in the sample data set before the update.
The data enhancement processing comprises horizontal mirroring, vertical mirroring, horizontal vertical mirroring, affine transformation, 30-degree rotation, 60-degree rotation, 90-degree rotation and the like, and the data size in the sample data set can be expanded to 5-8 times of the original data size.
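An illustrative augmentation routine using OpenCV is sketched below; the affine-transform parameters are arbitrary placeholders, since the patent does not specify them.

import cv2
import numpy as np


def augment(image: np.ndarray) -> list:
    """Return the mirrored, rotated and affine-transformed variants of one picture."""
    h, w = image.shape[:2]
    variants = [
        cv2.flip(image, 1),   # horizontal mirror
        cv2.flip(image, 0),   # vertical mirror
        cv2.flip(image, -1),  # combined horizontal-vertical mirror
    ]
    for angle in (30, 60, 90):  # rotations about the image centre
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        variants.append(cv2.warpAffine(image, m, (w, h)))
    # A generic affine transform (control points chosen arbitrarily for illustration).
    src = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
    dst = np.float32([[0, h * 0.05], [w * 0.95, 0], [w * 0.05, h * 0.95]])
    variants.append(cv2.warpAffine(image, cv2.getAffineTransform(src, dst), (w, h)))
    return variants  # the original plus these variants expands the set roughly as described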
In this embodiment, after the data enhancement is performed and the sample data set is updated for the first time, the method further comprises the following step:
adding noise to the grain depot pictures in the first-updated sample data set so as to update the sample data set for a second time.
Specifically, the noise addition may consist of adding Gaussian noise to the grain depot pictures.
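A sketch of this noise-addition step, assuming noisy/clean training pairs are built by adding zero-mean Gaussian noise at a chosen standard deviation (the exact noise levels used for training are an assumption here):

import numpy as np


def add_gaussian_noise(image: np.ndarray, sigma: float = 25.0) -> np.ndarray:
    """Add zero-mean Gaussian noise with standard deviation sigma to an 8-bit image."""
    noise = np.random.normal(0.0, sigma, image.shape)
    return np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)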
Experimental data for the optimized DnCNN image denoising network (the generator) proposed by the invention are given below.
Table 1 lists the average PSNR values of the proposed optimized DnCNN denoising network (MAC-DnCNN), NLM, BM3D and DnCNN on the test set.
Table 2 lists the average SSIM values of MAC-DnCNN, NLM, BM3D and DnCNN on the test set.
Table 1 (unit: dB)
Noise level   NLM     BM3D    DnCNN   MAC-DnCNN
σ = 15        29.41   32.24   32.74   32.79
σ = 25        28.58   29.95   30.42   30.47
σ = 40        27.19   28.01   28.39   28.50
σ = 50        25.27   26.90   27.36   27.47
Table 2
Noise level   NLM     BM3D    DnCNN   MAC-DnCNN
σ = 15        0.8218  0.8753  0.8806  0.8860
σ = 25        0.7747  0.8242  0.8327  0.8386
σ = 40        0.7294  0.7788  0.7908  0.7974
σ = 50        0.6830  0.7266  0.7401  0.7456
As Tables 1 and 2 show, at noise level σ = 15 the average PSNR of the proposed image denoising model (MAC-DnCNN) on the test set is 3.38, 0.55 and 0.05 dB higher than NLM, BM3D and DnCNN respectively, and its average SSIM is 0.0642, 0.0107 and 0.0054 higher. It can therefore be concluded that, on the grain depot image data set, MAC-DnCNN not only achieves higher PSNR/SSIM values than traditional denoising algorithms but also outperforms current algorithms with strong denoising performance. The generator of the generative adversarial network can thus produce higher-quality high-frequency sub-images, so the method generates higher-quality grain depot monitoring images.
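For reference, the reported metrics can be computed as follows with scikit-image (an assumed implementation; the patent does not state which one was used), here for single-channel images:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate(clean: np.ndarray, denoised: np.ndarray):
    """PSNR (dB) and SSIM between a clean reference and a denoised 8-bit grayscale image."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=255)
    ssim = structural_similarity(clean, denoised, data_range=255)
    return psnr, ssim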
Experimental data for the deep-learning-based grain depot monitoring image denoising method are given below.
Table 3 lists the average PSNR values of the proposed deep-learning-based grain depot monitoring image denoising method (SWT-GAN), NLM, BM3D and DnCNN on the test set.
Table 4 lists the average SSIM values of SWT-GAN, NLM, BM3D and DnCNN on the test set.
Table 3 (unit: dB)
Noise level   NLM     BM3D    DnCNN   SWT-GAN
σ = 15        29.41   32.24   32.74   32.83
σ = 25        28.58   29.95   30.42   30.53
σ = 40        27.19   28.01   28.39   28.53
σ = 50        25.27   26.90   27.36   27.51
Table 4
Noise level   NLM     BM3D    DnCNN   SWT-GAN
σ = 15        0.8218  0.8753  0.8806  0.8866
σ = 25        0.7747  0.8242  0.8327  0.8409
σ = 40        0.7294  0.7788  0.7908  0.7997
σ = 50        0.6830  0.7266  0.7401  0.7494
As Tables 3 and 4 show, at noise level σ = 15 the average PSNR of the proposed deep-learning-based grain depot monitoring image denoising method (SWT-GAN) on the test set is 3.42, 0.59 and 0.09 dB higher than NLM, BM3D and DnCNN respectively, and its average SSIM is 0.0648, 0.0113 and 0.0060 higher. It can therefore be concluded that, on the grain depot image data set, SWT-GAN not only achieves higher PSNR and SSIM values than traditional denoising algorithms but also outperforms existing algorithms with strong denoising performance. The method therefore yields higher-quality grain depot monitoring images.
As shown in fig. 5, this embodiment further provides a deep learning-based grain depot monitoring image denoising device, including:
the acquisition module 301 is used for acquiring an initial monitoring image of the grain depot;
a decomposition module 302, configured to decompose the initial monitoring image by using wavelet transform to obtain four sub-images, where the four sub-images are an initial LL sub-image, an initial LH sub-image, an initial HL sub-image, and an initial HH sub-image, respectively;
the denoising module 303 is configured to denoise the initial LH, HL and HH sub-images respectively with the trained generative adversarial network model to obtain denoised LH, HL and HH sub-images that satisfy preset conditions, where the GAN model comprises a generator that produces a denoised sub-image from an input sub-image and a discriminator that judges whether the denoised sub-image is a clean image satisfying the preset conditions;
and the reconstruction module 304 is configured to reconstruct the denoised LH, HL and HH sub-images that satisfy the preset conditions together with the initial LL sub-image obtained by the wavelet transform to obtain the denoised monitoring image.
The embodiment also provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program executes the steps of the deep learning-based grain depot monitoring image denoising method.
Specifically, the storage medium can be a general-purpose storage medium, such as a removable disk or a hard disk; when the computer program on the storage medium is executed, the deep-learning-based grain depot monitoring image denoising method described above can be performed.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of modules is merely a logical division, and there may be other divisions in actual implementation, and for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments provided in the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above examples are only specific embodiments of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing examples, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features, within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be covered by its protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A deep-learning-based grain depot monitoring image denoising method, characterized by comprising the following steps:
acquiring an initial monitoring image of a grain depot;
decomposing the initial monitoring image by wavelet transform to obtain four sub-images, namely an initial LL sub-image, an initial LH sub-image, an initial HL sub-image and an initial HH sub-image;
denoising the initial LH sub-image, the initial HL sub-image and the initial HH sub-image respectively with a trained generative adversarial network model to obtain a denoised LH sub-image, a denoised HL sub-image and a denoised HH sub-image that satisfy preset conditions, wherein the generative adversarial network model comprises a generator that produces a denoised sub-image from an input sub-image and a discriminator that judges whether the denoised sub-image is a clean image satisfying the preset conditions;
and reconstructing the denoised LH, HL and HH sub-images that satisfy the preset conditions together with the initial LL sub-image obtained by the wavelet transform to obtain a denoised monitoring image.
2. The deep-learning-based grain depot monitoring image denoising method according to claim 1, wherein the generator comprises a multi-scale feature extraction module, a feature fusion module, a residual module and an output module; the multi-scale feature extraction module is used for extracting multiple image features of the input sub-image; the feature fusion module is used for fusing the extracted image features to obtain a fused sub-image; the residual module is used for obtaining a noise residual image of the fused sub-image; and the output module is used for obtaining the denoised sub-image from the noise residual image of the fused sub-image and the input sub-image.
3. The deep-learning-based grain depot monitoring image denoising method according to claim 2, wherein the multi-scale feature extraction module extracts four image features of the input sub-image through four different convolution kernels.
4. The deep-learning-based grain depot monitoring image denoising method according to claim 2, wherein the residual module comprises three dilated convolutional layers.
5. The deep-learning-based grain depot monitoring image denoising method according to claim 4, wherein the dilated convolution in each dilated convolutional layer is equivalent to a sparse filter of size Q × Q, where Q = 2r + 1 and r denotes the depth of the dilated convolutional layer.
6. The deep-learning-based grain depot monitoring image denoising method according to claim 5, wherein the activation function of the dilated convolutional layers is PReLU, defined as:
f(x) = x,    if x > 0
f(x) = a·x,  if x ≤ 0
where x is the input of the activation function, f(x) is its output, and a is a parameter of the activation function.
7. The deep-learning-based grain depot monitoring image denoising method according to claim 1, further comprising the following steps before the initial monitoring image of the grain depot is acquired:
acquiring a sample data set for training the generator of the generative adversarial network model, the sample data set comprising a plurality of grain depot pictures;
performing data enhancement on the grain depot pictures in the sample data set and updating the sample data set for the first time, the amount of data in the sample data set after the first update being a multiple of the amount before the update.
8. The deep-learning-based grain depot monitoring image denoising method according to claim 7, further comprising, after the data enhancement is performed on the grain depot pictures and the sample data set is updated for the first time, the following step:
adding noise to the grain depot pictures in the first-updated sample data set so as to update the sample data set for a second time.
9. A deep-learning-based grain depot monitoring image denoising device, characterized by comprising:
an acquisition module for acquiring an initial monitoring image of a grain depot;
a decomposition module for decomposing the initial monitoring image by wavelet transform into four sub-images, namely an initial LL sub-image, an initial LH sub-image, an initial HL sub-image and an initial HH sub-image;
a denoising module for denoising the initial LH, HL and HH sub-images respectively with a trained generative adversarial network model to obtain denoised LH, HL and HH sub-images that satisfy preset conditions, wherein the generative adversarial network model comprises a generator that produces a denoised sub-image from an input sub-image and a discriminator that judges whether the denoised sub-image is a clean image satisfying the preset conditions;
and a reconstruction module for reconstructing the denoised LH, HL and HH sub-images that satisfy the preset conditions together with the initial LL sub-image obtained by the wavelet transform to obtain a denoised monitoring image.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program performs the steps of the deep learning based grain depot monitoring image denoising method according to any one of claims 1 to 8.
CN202110576040.0A 2021-05-22 2021-05-22 Grain depot monitoring image denoising method, device and medium based on deep learning Active CN113158997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110576040.0A CN113158997B (en) 2021-05-22 2021-05-22 Grain depot monitoring image denoising method, device and medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110576040.0A CN113158997B (en) 2021-05-22 2021-05-22 Grain depot monitoring image denoising method, device and medium based on deep learning

Publications (2)

Publication Number Publication Date
CN113158997A true CN113158997A (en) 2021-07-23
CN113158997B CN113158997B (en) 2023-04-18

Family

ID=76877471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110576040.0A Active CN113158997B (en) 2021-05-22 2021-05-22 Grain depot monitoring image denoising method, device and medium based on deep learning

Country Status (1)

Country Link
CN (1) CN113158997B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170365038A1 (en) * 2016-06-16 2017-12-21 Facebook, Inc. Producing Higher-Quality Samples Of Natural Images
US20200408864A1 (en) * 2019-06-26 2020-12-31 Siemens Healthcare Gmbh Progressive generative adversarial network in medical image reconstruction
CN110473154A (en) * 2019-07-31 2019-11-19 西安理工大学 A kind of image de-noising method based on generation confrontation network
CN111047512A (en) * 2019-11-25 2020-04-21 中国科学院深圳先进技术研究院 Image enhancement method and device and terminal equipment
CN111047541A (en) * 2019-12-30 2020-04-21 北京工业大学 Image restoration method based on wavelet transformation attention model
CN111292259A (en) * 2020-01-14 2020-06-16 西安交通大学 Deep learning image denoising method integrating multi-scale and attention mechanism
CN111275640A (en) * 2020-01-17 2020-06-12 天津大学 Image enhancement method for fusing two-dimensional discrete wavelet transform and generating countermeasure network
CN112435164A (en) * 2020-11-23 2021-03-02 浙江工业大学 Method for simultaneously super-resolution and denoising of low-dose CT lung image based on multi-scale generation countermeasure network
CN112801889A (en) * 2021-01-06 2021-05-14 携程旅游网络技术(上海)有限公司 Image denoising method, system, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHIHUI LI et al.: "Grain depot image dehazing via quadtree decomposition and convolutional neural networks", 《ELSEVIER》 *
于虹 (YU HONG): "图像去噪经典算法研究" [Research on Classical Image Denoising Algorithms], 《信息与电脑》 *

Also Published As

Publication number Publication date
CN113158997B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
US11276162B2 (en) Surface defect identification method and apparatus
CN114092386A (en) Defect detection method and apparatus
CN111968095B (en) Product surface defect detection method, system, device and medium
CN111476758B (en) Defect detection method and device for AMOLED display screen, computer equipment and storage medium
CN115713533B (en) Power equipment surface defect detection method and device based on machine vision
CN111275686A (en) Method and device for generating medical image data for artificial neural network training
CN111127387A (en) Method for evaluating quality of non-reference image
CN114266894A (en) Image segmentation method and device, electronic equipment and storage medium
CN114926374B (en) Image processing method, device and equipment based on AI and readable storage medium
CN115358952A (en) Image enhancement method, system, equipment and storage medium based on meta-learning
CN114022475A (en) Image anomaly detection and anomaly positioning method and system based on self-supervision mask
CN110428006A (en) The detection method of computer generated image, system, device
CN112200789B (en) Image recognition method and device, electronic equipment and storage medium
CN117876793A (en) Hyperspectral image tree classification method and device
CN117576042A (en) Wafer defect detection method, system, electronic equipment and storage medium
CN113158997B (en) Grain depot monitoring image denoising method, device and medium based on deep learning
CN117541546A (en) Method and device for determining image cropping effect, storage medium and electronic equipment
CN116309494B (en) Method, device, equipment and medium for determining interest point information in electronic map
CN116205802A (en) Image denoising method and device, storage medium and electronic equipment
CN115601684A (en) Emergency early warning method and device, electronic equipment and storage medium
CN114648520A (en) Method, system, electronic device and storage medium for detecting track defects
CN112598646B (en) Capacitance defect detection method and device, electronic equipment and storage medium
CN112861874A (en) Expert field denoising method and system based on multi-filter denoising result
CN117333740B (en) Defect image sample generation method and device based on stable diffusion model
CN115115537B (en) Image restoration method based on mask training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant