CN113870128A - Digital mural image restoration method based on deep convolutional generative adversarial network - Google Patents

Digital mural image restoration method based on deep convolutional generative adversarial network

Info

Publication number
CN113870128A
CN113870128A (application number CN202111049091.4A)
Authority
CN
China
Prior art keywords
image
mural
generator
discriminator
damaged
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111049091.4A
Other languages
Chinese (zh)
Inventor
曹丽琴
俞雯茜
李治江
胡智博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202111049091.4A priority Critical patent/CN113870128A/en
Publication of CN113870128A publication Critical patent/CN113870128A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/77: Retouching; Inpainting; Scratch removal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/048: Activation functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a mural image restoration method based on a deep convolutional generative adversarial network (DCGAN), proposed to address the problems of incomplete data sets, low resolution of restored images, and missing details in the field of mural restoration. First, a DCGAN is trained with intact mural images; then the generator parameters are updated according to an image-restoration loss function (comprising a prior loss and a context loss) to obtain the restored image. During the loss calculation, a weight matrix is added to focus the network's attention on the damaged region. The method achieves good results in repairing clothing murals and performs well in tone, detail, and other respects.

Description

Digital mural image restoration method based on deep convolutional generative adversarial network
Technical Field
The invention belongs to the field of digital image restoration, and in particular relates to an image restoration method for damaged mural images based on a deep convolutional generative adversarial network.
Background
The mural is an ancient form of painting that carries a wealth of history and culture and has extremely high artistic value. However, over time many ancient Chinese murals have been damaged by natural and human causes, and their restoration brooks no delay. The traditional manual mural-restoration process has many drawbacks: it consumes a great deal of labor, involves harsh working conditions, takes a long time, demands a high level of skill, and produces irreversible results.
With the development of computer technology and the deepening of research on image-completion algorithms, digital image-completion technology provides a brand-new approach to mural restoration. Digital inpainting overcomes the drawbacks of manual restoration, saves a great deal of labor and time, and avoids secondary damage to mural relics. Traditional digital image-inpainting techniques are fast and stable, and fall mainly into two types: repair based on partial differential equations and repair algorithms based on best-matching blocks. The partial-differential-equation approach uses a propagation mechanism to diffuse known information into the region to be repaired. The best-matching-block approach designs a matching criterion and searches globally for the image block most similar to the missing region, using it to fill the missing part. These traditional methods generally extract only shallow information and features of the image; the completed images have low clarity, and semantic repair is difficult. With the development of deep learning, neural networks have been applied to image inpainting. Deep-learning techniques can learn deep features and content information from large numbers of mural images, making the restoration results more accurate and closer to the original mural content.
Deep-learning image restoration and inpainting methods fall roughly into two categories: methods based on auto-encoders (AE) and methods based on generative adversarial networks (GAN).
1) Image restoration method based on AE
In 2016, the Context Encoder proposed by Pathak et al. was the first work to apply deep learning to image inpainting with good results. Although it achieved unsupervised image restoration, the restored images still suffered from poor consistency and low resolution. In 2017, Yang et al. proposed a High-Resolution Image Inpainting method targeting the coarse texture details of the context encoder, splitting the network into two parts: content generation and texture generation. The method handles detail well, but the algorithm is complex and was not applied to repairing irregular damaged regions. Later, Partial Convolution and Gated Convolution better solved the problem of repairing irregular regions.
2) Image restoration method based on GAN
Because a generative adversarial network can globally extract deep features such as the semantic content of an image, the powerful image-generation capability of the generator can be used to complete the inpainting task. Effective examples include methods based on DCGAN (Deep Convolutional Generative Adversarial Networks), the GL (globally and locally consistent) method, and coarse-to-fine restoration models. The DCGAN-based method trains the model on intact images, so it adapts to damage of arbitrary shape and repairs large damaged areas well. However, it depends on the quality of the trained DCGAN model, so it suits data sets with obvious features and simple structure, such as human faces, and gives less ideal results on data sets of complex scenes. The GL method addresses the restoration of complex scenes and irregular damaged regions; its discriminator is split into a global discriminator and a local discriminator. The generator of the coarse-to-fine model connects a coarse network and a fine network in series, used respectively to analyze semantics and refine details, enlarging the network's receptive field.
Although deep-learning methods are widely applied in image restoration and achieve good results, many problems remain unsolved for mural images because of their particular characteristics:
(1) In terms of data sets, most deep-learning image restoration methods use data sets of human faces, cars, or other natural scenes, which generally have simple structures and highly consistent features, very different from mural data sets. The Chinese murals awaiting restoration (such as the Dunhuang Mogao Grottoes murals) comprise works from many periods, and the styles of different periods differ greatly. Moreover, since the scenes depicted in murals are usually complex and large, it is hard to find very similar features even in different parts of the same image. Mural images that are uniform in style and completely clear are therefore scarce, and the data sets available for mural restoration are incomplete.
(2) Approaches that adopt an auto-encoder structure for mural restoration can repair arbitrarily shaped damaged areas, but the results suffer from unclear texture details and poor consistency between the repaired part and the whole image.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a digital mural image restoration method based on a deep convolutional generative adversarial network, realizing the restoration of high-resolution mural images with damage in arbitrary regions. The main content of the invention is as follows:
step 1, collecting mural images, cutting and screening to construct a training set required by an experiment; manually constructing a mask, simulating a damaged mural image, and constructing a test set required by an experiment;
step 2, constructing a digital mural restoration model based on DCGAN, training the model by using a training set until the model generates a real and clear mural image, and storing model parameters;
the DCGAN-based digital mural repair model comprises a generator and a discriminator, wherein the generator is basically symmetrical in structure, the generator comprises a plurality of micro-step convolutions, and the discriminator comprises a plurality of step convolutions and a sigmoid layer;
the input of the digital mural repair model based on the DCGAN is a noise vector, and the output of the generator is a mural image;
step 3, using the model parameters saved in step 2 and the simulated damaged mural images in the test set, updating the model parameters by calculating a loss function and iterating multiple times until the image restoration is complete, wherein the loss function comprises a prior loss and a context loss.
Further, for the mural image data set in step 1, to make the style of images in the data set uniform, the collected mural images are preprocessed as follows:
Step 1: image cropping
Because the images generated by the DCGAN network are all the same size, as is the discriminator's input, the Tang-dynasty mural images are cut into K × K image blocks; overlapping cropping is adopted in this process, increasing the number of image blocks and expanding the data set;
Step 2: data screening
To unify the style after cropping, the blocks are further screened and only the clothing parts of the figures are kept, yielding images of Tang-dynasty figures' clothing that serve as the training set;
Step 3: simulating a damaged mural
A mask is constructed artificially and overlaid on an intact mural image to simulate mural damage.
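The preprocessing steps above can be sketched as follows. This is a minimal illustration assuming `numpy` arrays for images; the 50% overlap stride and the helper names (`overlapping_crops`, `apply_mask`) are my own choices, since the patent only states that overlapping cropping into K × K blocks is used and that masks are overlaid on intact images.

```python
import numpy as np

def overlapping_crops(image, k=64, stride=32):
    """Cut an H x W (x C) image into overlapping k x k blocks.

    A stride smaller than k makes neighbouring blocks overlap, which
    multiplies the number of training patches. The stride value is an
    assumption; the patent only says overlapping cropping is used.
    """
    h, w = image.shape[:2]
    blocks = []
    for top in range(0, h - k + 1, stride):
        for left in range(0, w - k + 1, stride):
            blocks.append(image[top:top + k, left:left + k])
    return blocks

def apply_mask(image, mask):
    """Simulate a damaged mural: zero out pixels where mask == 0.

    `mask` holds 1 for intact pixels and 0 for damaged ones, matching
    the convention the patent uses for M in the loss section.
    """
    return image * mask[..., None] if image.ndim == 3 else image * mask
```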
Furthermore, in the DCGAN-based digital mural restoration model, no BN layer is used after the first strided convolution layer of the discriminator or after the last fractionally-strided convolution layer of the generator; BN layers follow all other layers.
Furthermore, the last fractionally-strided convolution layer of the generator in the DCGAN-based digital mural restoration model uses the tanh function, the other layers in the generator use the ReLU function, and every layer in the discriminator uses the Leaky ReLU function.
Further, the training process of the DCGAN-based digital mural restoration model is as follows:
training the DCGAN network model with the intact mural images in the training set, wherein the input of the network is a 100-dimensional noise vector, the noise vector being a group of random numbers drawn from the standard normal distribution, and the output is a mural image of size 64 × 64;
first, initializing the network parameters of the discriminator and the generator, drawing n samples from the training set, and having the generator produce n samples from noise vectors; fixing the generator parameters and training the discriminator to distinguish real from fake as well as possible; fixing the discriminator and training the generator so that the discriminator cannot distinguish real from fake; after multiple iterations, the discriminator cannot tell whether an image comes from the generator or from the real data set, the generator can generate relatively realistic mural images, and finally the model parameters are saved.
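The alternating training scheme described above can be expressed as a control-flow skeleton. The actual forward/backward passes of the discriminator and generator are framework-specific and are stubbed out here as callables; only the fix-one-train-the-other alternation and the standard-normal noise sampling come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noise(n, z_dim=100):
    """n noise vectors drawn from the standard normal distribution,
    as the patent specifies for the generator input."""
    return rng.standard_normal((n, z_dim))

def train(discriminator_step, generator_step, real_samples, n=16, iters=100):
    """Structural sketch of the alternating DCGAN training loop.

    `discriminator_step(real, z)` and `generator_step(z)` stand in for
    one optimization step of D (with G frozen) and of G (with D
    frozen); their internals depend on the framework and are not shown.
    """
    for _ in range(iters):
        z = sample_noise(n)
        # 1) fix G, train D to tell real samples from generated ones
        discriminator_step(real_samples[:n], z)
        # 2) fix D, train G so that D can no longer tell them apart
        generator_step(sample_noise(n))
```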
Further, the role of the prior loss in the DCGAN-based digital mural restoration model is to make the restored result as close as possible to a real mural image; the calculation formula is as follows:
Loss_p = ln(1 − D(G(z)))  (1)
wherein z is the input noise vector, D denotes the discriminator, G the generator, and G(z) the generated image; before restoration, the discriminator and generator use the model parameters obtained by training on intact mural images;
the role of the context loss is to bring the repair result closer to the damaged image, rather than generating a repaired image that is realistic but unrelated to the original; the context loss is obtained by subtracting the damaged image from the repaired image and multiplying the difference by the weight matrix, with the calculation formula:
Loss_c = ||W ⊙ (G(z) − y)||  (2)
wherein W is a weight matrix of the same size as the input image, and y denotes the damaged image, i.e. the image to be repaired obtained after masking; the weight matrix W is calculated from the mask image M as follows: at damaged positions, where the mask value is 0, the difference between the generated image and the damaged image is not counted in the loss function, i.e. the weight is 0; for each remaining pixel, a 3 × 3 window centred on that pixel is taken, and the more damaged pixels the window contains (i.e. the more mask pixels equal to 0), the larger the weight; apart from pixels in the damaged area, all pixels whose weight is lower than a are assigned the weight a; the calculation formula is as follows:
W_ij = 0, if M_ij = 0  (3)
W_ij = max(a, (1/9) Σ_{(p,q) ∈ N(i,j)} (1 − M_pq)), if M_ij = 1, where N(i,j) is the 3 × 3 window centred on (i, j)  (4)
wherein M_ij is the pixel value of the mask at row i, column j; if the pixel is damaged, M_ij is 0, otherwise 1. The role of the weight matrix is to focus the repair network's attention on the damaged area: the more damaged pixels in the window, the higher the weight and the larger its share of the loss function; a is a constant taking a value between 0 and 0.5.
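A direct reading of the prose description of W yields the following sketch. The exact formula images are not reproduced in the patent text, so this is a reconstruction under the stated convention that M is 1 for intact pixels and 0 for damaged ones, with damaged pixels weighted 0, every intact pixel weighted by the fraction of damaged pixels in its 3 × 3 window, and a floor of a.

```python
import numpy as np

def weight_matrix(mask, a=0.1, window=3):
    """Weight matrix W from a binary mask M (1 = intact, 0 = damaged).

    Damaged pixels get weight 0; every intact pixel gets the fraction
    of damaged pixels in the surrounding `window` x `window` patch,
    floored at `a`. Out-of-image neighbours are treated as intact.
    """
    h, w = mask.shape
    r = window // 2
    padded = np.pad(mask, r, mode="constant", constant_values=1)
    weights = np.zeros_like(mask, dtype=float)
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 0:
                continue  # damaged pixel: excluded from the context loss
            patch = padded[i:i + window, j:j + window]
            frac_damaged = 1.0 - patch.mean()  # share of zeros in the window
            weights[i, j] = max(a, frac_damaged)
    return weights
```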
Compared with the prior art, the advantages and beneficial effects of the invention are as follows. The training process of the DCGAN model is unsupervised, uses complete mural images, and needs no mask, making the method better suited to repairing arbitrarily shaped damaged regions. The model uses a generative adversarial network, so the generated images are more realistic, with high clarity and consistency. During image restoration, a weight matrix is added to the loss-function calculation, focusing the repair network's attention on the damaged area and alleviating the blurring and loss of detail that occur when repairing larger damaged regions.
Drawings
FIG. 1 is a process for artificial mask construction in the practice of the present invention;
FIG. 2 is a specific structure of a DCGAN network generator in the implementation of the present invention;
FIG. 3 is a training flow of a DCGAN network in an embodiment of the present invention;
FIG. 4 is a model architecture and flow for mural image restoration in an implementation of the present invention;
FIG. 5 shows mural image restoration results of the present invention.
Detailed Description
The technical scheme of the invention is further explained below in conjunction with the drawings. The invention provides a digital mural image restoration method based on a deep convolutional generative adversarial network, which specifically comprises the following steps:
(1) Images of figures' clothing from the Tang-dynasty Dunhuang murals are collected and a data set is constructed. The images in the constructed data set are uniform in style, clear, and complete. Meanwhile, masks are constructed manually to simulate damaged murals, which serve as the test set for image restoration. A mask image is drawn artificially and overlaid on the original image to simulate the damage effect; the construction process is shown in FIG. 1.
(2) A DCGAN-based digital mural restoration model is constructed. After training of the image-generation model is complete, the image-restoration process updates the generator parameters through the context loss and the prior loss, yielding a repair result that conforms to the content of the damaged mural. In the context-loss calculation, a weight matrix is added, focusing the generative network's attention on the damaged area.
The construction process of the DCGAN-based mural repair model is as follows:
step 1: and constructing a clear mural image data set with uniform style. The DCGAN network is trained using the intact mural images in the training set. The input to the network is a 100-dimensional noise vector, a set of random numbers drawn from a standard normal distribution, and the output is a 64 x 64 size mural image. And after multiple iterations, until the network can generate a relatively real mural image, and storing the model parameters. The structure of the DCGAN network generator is shown in FIG. 2, and the arbiter and the generator are basically symmetrical. The specific network structure and parameters of the arbiter and generator are as follows:
Figure BDA0003252196110000051
Figure BDA0003252196110000061
the model training process is the same as the conventional training process for generating the countermeasure network: initializing network parameters of a discriminator and a generator; extracting n samples from the training machine, and generating n samples by using noise by using a generator; fixing generator parameters, training a discriminator to distinguish true from false as much as possible; fixing a discriminator and training a generator to make the discriminator unable to distinguish between true and false; after multiple iterations, in an ideal state, the discriminator cannot discriminate whether the image is from the generator or from the real data set, and the model training process is as shown in fig. 3.
Step 2: using the model saved in step 1, input an arbitrary 100-dimensional noise vector z and obtain an output image G(z^(0)) through the generator; G(z^(0)) is then input to the discriminator, where the superscript 0 denotes the generator output image at iteration 0, yielding a true-or-false discrimination result (i.e., 1 or 0).
Step 3: calculate the loss function, comprising the prior loss Loss_p and the context loss Loss_c.
The effect of the prior loss is to make the repaired result as close as possible to the real mural image. The calculation formula is as follows:
Loss_p = ln(1 − D(G(z)))  (1)
wherein z is an input 100-dimensional noise vector, D represents a discriminator, G represents a generator, and G (z) represents a generated image, and the discriminator and the generator before restoration use parameters obtained by training of the undamaged image in the step 1.
The effect of the context loss is to bring the repair result closer to the damaged image, rather than generating a repair image that is true but not related to the original image at all. The context loss can be obtained by subtracting the repaired image from the damaged image (i.e. the image to be repaired obtained after the mask is masked) and multiplying the difference by the weight matrix, and the calculation formula is as follows:
Loss_c = ||W ⊙ (G(z) − y)||  (2)
where W is a weight matrix of the same size as the input image and y denotes the damaged image. The weight matrix W is calculated from the mask image M. For the damaged area (where the mask value is 0), the difference between the generated image and the damaged image is not counted in the loss function, i.e. the weight is 0. For each remaining pixel, a 3 × 3 window centred on that pixel is taken; the more damaged pixels the window contains (i.e. the more mask pixels equal to 0), the larger the weight. Apart from damaged pixels, all pixels whose weight is lower than 0.1 are assigned the weight 0.1. The calculation formula is as follows:
W_ij = 0, if M_ij = 0  (3)
W_ij = max(0.1, (1/9) Σ_{(p,q) ∈ N(i,j)} (1 − M_pq)), if M_ij = 1, where N(i,j) is the 3 × 3 window centred on (i, j)  (4)
where M_ij is the pixel value of the mask at row i, column j; if the pixel is damaged, M_ij is 0, otherwise 1. The role of the weight matrix is to focus the repair network's attention on the damaged area: the more damaged pixels in the window, the higher the weight and the larger its share of the loss function. The boundary value of 0.1 is set so that pixels in intact areas, although carrying a small proportion, are not ignored and still contribute to the loss function. This boundary value can be adjusted to the actual situation; for a 3 × 3 window it can be tuned between 0 and 0.5.
Step 4: compute the loss function, update the model parameters, and keep iterating (i.e., repeat steps 2, 3, and 4) until a repaired result G(z^(n)) similar to the original damaged image is generated, where the superscript n denotes the generator output image at the n-th iteration. The image restoration process is shown in FIG. 4.
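To make the iterative update of step 4 concrete, here is a toy version in which the trained DCGAN generator is replaced by a simple linear map `A @ z`, and only the context loss ||W ⊙ (G(z) − y)|| is minimized by gradient descent; the prior loss is omitted because it requires a trained discriminator. Everything except the loss form and the update-and-iterate scheme is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def inpaint(y, weight, z_dim=100, steps=200, lr=1.0):
    """Toy sketch of the iterative repair loop.

    y      : damaged image (values at damaged pixels carry weight 0)
    weight : weight matrix W, same shape as y
    The "generator" is a linear map A @ z standing in for the DCGAN
    generator, so the parameter update stays a few lines long.
    """
    n = y.size
    z = rng.standard_normal(z_dim)
    A = 0.01 * rng.standard_normal((n, z_dim))  # stand-in generator params
    w2 = weight.ravel() ** 2
    yv = y.ravel()
    for _ in range(steps):
        g = A @ z                            # "generated image"
        grad = np.outer(w2 * (g - yv), z)    # grad of 0.5*||w*(g-y)||^2 w.r.t. A
        A -= (lr / (z @ z)) * grad           # normalized gradient-descent step
    return (A @ z).reshape(y.shape)
```

Pixels with weight 0 (the damaged area) are untouched by this simplified loss; in the full method the prior loss and the generator's learned manifold fill them in plausibly.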
The invention can train and run the network on a computer; it is implemented with the PyTorch deep-learning framework under a Windows operating system. The specific experimental environment configuration is as follows:
[Table: experimental environment configuration]
the specific implementation is as follows:
step 1: and (5) building a data set. The DCGAN network is suitable for an image data set with uniform characteristic styles, but the drawing styles of different generations are greatly different for mural images, so that it is difficult to directly learn the mural data in different periods. Therefore, when the mural images are collected, the Tang fresco in the Dunhuang mural is selected, the style of the images in the data set is kept basically consistent, and the characteristics of the mural images are convenient for network learning. The number of the images collected by the Tang dynasty fresco is 95 in total, the images are different in size, and the number of the horizontal pixels and the vertical pixels is approximately between 500 and 1000. Because the images generated by the DCGAN network are all the same size and the input of the discriminator is the same size, we cut the image of the mural of the generation of the Tang into 64 × 64 image blocks, and the number of the image blocks is increased by overlapping and cutting. And after cutting, screening for unifying styles, and selecting only the clothes parts of the characters to finally obtain 9299 images of the Tang dynasty character clothes fresco, wherein the images are used as a training set. When the test set is constructed, firstly, a mask is constructed, and then a mask image and an original undamaged image are superposed to simulate the damage of the mural. The test set had 950 mural images in total, accounting for 10.2% of the training set.
Step 2: train the DCGAN model. Train the DCGAN network with the clear mural images in the training set and save the model parameters after training. The overall architecture of the DCGAN model is similar to that of a GAN; the structures of the generator and discriminator are improvements built on a CNN.
(1) No pooling operations are used in the network; convolutions are used instead. Fractionally-strided convolutions are used in the generator and strided convolutions in the discriminator.
(2) The fully connected layer after the final convolution layer of DCGAN is removed. Global average pooling would reduce the convergence rate of the DCGAN model, so in this network the generator's output is used directly as the discriminator's input, with no fully connected layer in between. In the discriminator, the output vector of the last convolution layer is flattened and fed into a sigmoid function, yielding a one-dimensional output.
(3) Batch Normalization (BN) is used in the network to make the learning process more stable. BN helps deep models with many layers, effectively handles training problems caused by poor initialization, and prevents the model-collapse problem common in traditional GANs. However, more BN is not always better; overuse also affects model stability to some extent. In DCGAN, therefore, BN is not used in the input layer of the discriminator or the output layer of the generator, which effectively avoids this problem.
(4) In terms of activation functions, the output layer of the DCGAN generator uses tanh function, the other layers in the generator use ReLU function, and each layer in the discriminator uses Leaky ReLU function.
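The three activation functions named in point (4) are, for reference (the 0.2 slope of Leaky ReLU is the common DCGAN default and is not stated in the patent):

```python
import numpy as np

def relu(x):
    """Hidden layers of the generator: zero out negative activations."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.2):
    """Every discriminator layer: small negative slope keeps gradients
    flowing for negative inputs; alpha = 0.2 is an assumed default."""
    return np.where(x >= 0, x, alpha * x)

def tanh_out(x):
    """Generator output layer: squashes activations into [-1, 1],
    matching images normalized to that range."""
    return np.tanh(x)
```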
Step 3: using the damaged mural images simulated in the test set, update the model parameters by calculating the loss function (comprising the prior loss and the context loss, with the weight matrix added in the context-loss calculation) and iterate multiple times until the image restoration is complete. The repair effect is shown in FIG. 5; the detail clarity, tone, and consistency of the restoration results are subjectively good. In terms of detail, the damaged area is restored clearly, with no large-area blurring or other effects that disturb the eye. In terms of tone, the restored image reproduces the colors of the original well, with almost no perceptible deviation. In terms of consistency, the repaired part blends well with the surrounding pixels; the whole image is smooth and coherent, with no abrupt, clearly perceptible steps or color changes. Judging from the results, the DCGAN network completes the repair task well for irregular damaged areas and successfully restores the original image in tone, detail, and other respects. For objective evaluation, the 950 images in the test set were repaired and the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were calculated; the data for the repaired images are as follows:
number of images/ratio of image PSNR mean value Average value of SSIM
950/10.2% 30.4144 0.9637
If the weight matrix is not applied in the loss-function calculation, the experimental data obtained are as follows:
number of images/ratio of image PSNR mean value Average value of SSIM
950/10.2% 28.9608 0.9505
The experimental data show that after the weight matrix is added, both the PSNR and SSIM values of the restored images improve, so the weight matrix plays a positive role in mural image restoration. A PSNR above 30 dB indicates a good restoration result with no obvious image distortion, within the range acceptable to human vision. Likewise, the mean structural-similarity index reaches 0.96, showing high similarity to the original image; the expected repair effect is achieved.
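The two evaluation metrics can be computed as follows; `ssim_global` is a single-window simplification of the usual sliding-window SSIM (constants C1, C2 follow the standard definition), so its values will not exactly match windowed implementations:

```python
import numpy as np

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(a, b, max_val=255.0):
    """SSIM computed over the whole image as one window."""
    a = a.astype(float); b = b.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```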
The specific embodiments described herein are merely illustrative of the invention. Those skilled in the art may make various modifications, additions, or substitutions to the described embodiments without departing from the spirit or scope of the invention as defined in the appended claims.

Claims (6)

1. A digital mural image restoration method based on a deep convolutional generative adversarial network, characterized by comprising the following steps:
step 1, collecting mural images, cutting and screening to construct a training set required by an experiment; manually constructing a mask, simulating a damaged mural image, and constructing a test set required by an experiment;
step 2, constructing a digital mural restoration model based on DCGAN, training the model by using a training set until the model generates a real and clear mural image, and storing model parameters;
the DCGAN-based digital mural repair model comprises a generator and a discriminator, wherein the generator is basically symmetrical in structure, the generator comprises a plurality of micro-step convolutions, and the discriminator comprises a plurality of step convolutions and a sigmoid layer;
the input of the digital mural repair model based on the DCGAN is a noise vector, and the output of the generator is a mural image;
step 3, using the model parameters saved in step 2 and the simulated damaged mural images in the test set, updating the model parameters by calculating a loss function and iterating multiple times until the image restoration is complete, wherein the loss function comprises a prior loss and a context loss.
2. The method for repairing the digital mural image based on the deep convolutional antagonistic network as claimed in claim 1, wherein: in the mural image data set in the step 1, in order to make the image style in the data set uniform, the collected mural images are preprocessed as follows:
step 1: image cropping
Because the images generated by the DCGAN network are all the same in size and the input of the discriminator is also all the same in size, the Tang-generation mural image is cut into K x K image blocks, and the process adopts an overlapping cutting mode, so that the number of the image blocks is increased, and a data set is expanded;
step 2: data screening
After cropping, the data are further screened to unify the style: only the clothing parts of the figures are selected, yielding images of Tang Dynasty figures' clothing as the training set;
and step 3: simulation of damaged mural
A mask is artificially constructed and superimposed on an intact mural image to simulate damage to the mural.
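The preprocessing of claim 2 (overlapping cropping followed by a mask overlay) can be sketched as follows; the 8×8 toy "mural", patch size K=4, and stride 2 are illustrative choices only, not values fixed by the patent:

```python
# Overlapping crop: stride S < K makes consecutive patches share pixels,
# multiplying the number of K×K training blocks extracted from one image.
def overlapping_crops(img, K, S):
    H, W = len(img), len(img[0])
    patches = []
    for top in range(0, H - K + 1, S):
        for left in range(0, W - K + 1, S):
            patches.append([row[left:left + K] for row in img[top:top + K]])
    return patches

# Damage simulation: mask value 0 marks a damaged pixel, which is zeroed;
# intact pixels (mask value 1) keep their original value.
def apply_mask(patch, mask):
    return [[p if m else 0 for p, m in zip(prow, mrow)]
            for prow, mrow in zip(patch, mask)]

img = [[r * 8 + c for c in range(8)] for r in range(8)]   # toy 8×8 mural
patches = overlapping_crops(img, K=4, S=2)                # stride < K → overlap
assert len(patches) == 9                                  # 3 positions per axis

mask = [[0 if (r, c) == (1, 1) else 1 for c in range(4)] for r in range(4)]
damaged = apply_mask(patches[0], mask)
assert damaged[1][1] == 0                                 # simulated hole
```

With non-overlapping cropping (S = K) the same image would yield only 4 patches, so the overlap expands the data set as the claim describes.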
3. The digital mural image restoration method based on a deep convolutional adversarial network as claimed in claim 1, characterized in that: in the DCGAN-based digital mural restoration model, no BN layer is used after the first strided convolution layer of the discriminator or after the last micro-stride convolution layer of the generator; BN layers are used after all other layers.
4. The digital mural image restoration method based on a deep convolutional adversarial network as claimed in claim 1, characterized in that: the last micro-stride convolution layer of the generator in the DCGAN-based digital mural restoration model uses a tanh activation function, the other layers of the generator use ReLU activation functions, and every layer of the discriminator uses a Leaky ReLU activation function.
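The layer sizes implied by the symmetric generator/discriminator pair follow standard convolution-size arithmetic: strided convolutions halve the resolution of the 64×64 input down to a small feature map scored by the sigmoid, while micro-stride (transposed) convolutions double resolution up to the 64×64 output. Kernel 4, stride 2, and padding 1 are assumed here as typical DCGAN choices; the patent itself only fixes the 64×64 output size:

```python
# strided convolution (discriminator): floor((n + 2p - k) / s) + 1
def conv_out(n, k=4, s=2, p=1):
    return (n + 2 * p - k) // s + 1

# micro-stride / transposed convolution (generator): (n - 1)s - 2p + k
def deconv_out(n, k=4, s=2, p=1):
    return (n - 1) * s - 2 * p + k

# discriminator path: 64×64 image → 4×4 feature map via four strided convs
sizes = [64]
for _ in range(4):
    sizes.append(conv_out(sizes[-1]))
assert sizes == [64, 32, 16, 8, 4]

# generator mirrors it (the "basically symmetrical" structure): 4×4 → 64×64
up = [4]
for _ in range(4):
    up.append(deconv_out(up[-1]))
assert up == [4, 8, 16, 32, 64]
```

Per claims 3 and 4, BN would follow every layer except the discriminator's first strided convolution (Leaky ReLU only) and the generator's last micro-stride convolution (tanh only).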
5. The digital mural image restoration method based on a deep convolutional adversarial network as claimed in claim 1, characterized in that the DCGAN-based digital mural restoration model is trained as follows:
the DCGAN network model is trained with the intact mural images in the training set; the input of the network is a 100-dimensional noise vector, a group of random numbers drawn from a standard normal distribution, and the output is a mural image of size 64×64;
first, the network parameters of the discriminator and the generator are initialized, n samples are drawn from the training set, and n samples are generated by the generator from noise vectors; the generator parameters are fixed and the discriminator is trained to distinguish real from fake as well as possible; the discriminator is then fixed and the generator is trained so that the discriminator cannot distinguish real from fake; after multiple iterations, the discriminator cannot tell whether an image comes from the generator or from the real data set, the generator can generate relatively realistic mural images, and finally the model parameters are saved.
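The alternating schedule in claim 5 — fix G and update D on real versus generated samples, then fix D and update G — can be sketched with placeholder update steps; the gradient computations themselves are elided, and all names and sizes here are illustrative:

```python
import random

random.seed(0)

def sample_noise(dim=100):
    # claim 5: z is a group of random numbers from a standard normal distribution
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

log = []

def update_discriminator(real_batch, fake_batch):
    # placeholder for one gradient step pushing D(real) → 1, D(fake) → 0
    log.append("D")

def update_generator(noise_batch):
    # placeholder for one gradient step pushing D(G(z)) → 1
    log.append("G")

training_set = [[0.5] * 8 for _ in range(32)]    # toy "intact mural" samples
n = 4                                            # batch size, illustrative
for it in range(3):
    real = random.sample(training_set, n)        # n samples from the training set
    fake = [sample_noise() for _ in range(n)]    # stands in for G(z) outputs
    update_discriminator(real, fake)             # G fixed, train D
    update_generator([sample_noise() for _ in range(n)])  # D fixed, train G

# each iteration is one D step followed by one G step
assert log == ["D", "G"] * 3
```

Training stops once D can no longer tell generator outputs from real murals, at which point the parameters are saved for the repair stage of step 3.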
6. The digital mural image restoration method based on a deep convolutional adversarial network as claimed in claim 1, characterized in that: the role of the prior loss in the DCGAN-based digital mural restoration model is to make the repaired result as close as possible to a real mural image; its calculation formula is as follows:
Loss_p = ln(1 - D(G(z))) (1)
wherein z is the input noise vector, D denotes the discriminator, G denotes the generator, and G(z) denotes the generated image; before repair, the discriminator and the generator take the model parameters obtained by training on intact mural images;
the role of the context loss is to bring the repair result closer to the damaged image, rather than generating a repaired image that is realistic but unrelated to the original; the context loss is obtained by subtracting the damaged image from the repaired image and multiplying the difference elementwise by the weight matrix, with the calculation formula:
Loss_c = ||W ⊙ (G(z) - y)|| (2)
wherein W is a weight matrix of the same size as the input image, and y denotes the damaged image, i.e. the image to be repaired obtained after masking; the weight matrix W is calculated from the mask image M as follows: for the damaged area (where the mask pixel value is 0), the difference between the generated image and the damaged image is not counted in the loss function, i.e. the weight is 0; for each remaining pixel, a 3×3 window centred on that pixel is taken, and the more damaged pixels the window contains (i.e. the more mask pixels equal to 0), the larger the weight value; outside the damaged area, all pixels whose weight value is lower than a are assigned the weight a; the calculation formulas are as follows:
W_ij = (1/|N_ij|) · Σ_{(p,q)∈N_ij} (1 - M_pq), if M_ij = 1; W_ij = 0, if M_ij = 0 (3)
W_ij = max(W_ij, a), for pixels with M_ij = 1 (4)
where N_ij denotes the 3×3 window centred on pixel (i, j);
wherein M_ij is the mask pixel value at row i, column j: M_ij is 0 if the pixel is damaged, and 1 otherwise; the weight matrix concentrates the attention of the repair network on the damaged area, since the more damaged pixels a window contains, the higher the weight and the larger its share of the loss function; a is a constant with a value between 0 and 0.5.
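A runnable reading of the weight-matrix rule in claim 6. The window-averaging normalisation is an assumption reconstructed from the claim text, and a = 0.2 is an illustrative value in the allowed 0–0.5 range:

```python
# Weight matrix from mask M: damaged pixels (M == 0) get weight 0; an
# intact pixel's weight grows with the fraction of damaged pixels in its
# 3x3 window (clipped at image borders), with floor a for intact pixels.
def weight_matrix(M, a=0.2):
    H, W_ = len(M), len(M[0])
    out = [[0.0] * W_ for _ in range(H)]
    for i in range(H):
        for j in range(W_):
            if M[i][j] == 0:                    # damaged pixel → weight 0
                continue
            win = [(p, q) for p in range(i - 1, i + 2)
                          for q in range(j - 1, j + 2)
                          if 0 <= p < H and 0 <= q < W_]
            w = sum(1 - M[p][q] for p, q in win) / len(win)
            out[i][j] = max(w, a)               # floor of a for intact pixels
    return out

M = [[1, 1, 1, 1],
     [1, 0, 0, 1],
     [1, 1, 1, 1],
     [1, 1, 1, 1]]
W = weight_matrix(M)
assert W[1][1] == 0.0    # damaged pixel contributes nothing to the loss
assert W[0][0] == 0.25   # 1 damaged pixel in its clipped 2x2 window
assert W[3][3] == 0.2    # no damage nearby → floored to a
```

Pixels adjacent to the hole thus dominate the context loss, which is what steers G(z) toward content that continues the surviving mural texture into the damaged region.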
CN202111049091.4A 2021-09-08 2021-09-08 Digital mural image restoration method based on deep convolution impedance network Pending CN113870128A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111049091.4A CN113870128A (en) 2021-09-08 2021-09-08 Digital mural image restoration method based on deep convolution impedance network


Publications (1)

Publication Number Publication Date
CN113870128A true CN113870128A (en) 2021-12-31

Family

ID=78994916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111049091.4A Pending CN113870128A (en) 2021-09-08 2021-09-08 Digital mural image restoration method based on deep convolution impedance network

Country Status (1)

Country Link
CN (1) CN113870128A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511463A (en) * 2022-02-11 2022-05-17 陕西师范大学 Digital image repairing method, device and equipment and readable storage medium
CN114511463B (en) * 2022-02-11 2024-04-02 陕西师范大学 Digital image restoration method, device, equipment and readable storage medium
CN115047721A (en) * 2022-05-31 2022-09-13 广东工业大学 Method for rapidly calculating mask near field by using cyclic consistency countermeasure network
CN115131234A (en) * 2022-06-15 2022-09-30 西北大学 Digital mural repairing method based on two-stage neural network
CN115131234B (en) * 2022-06-15 2023-09-19 西北大学 Digital mural repair method based on two-stage neural network
CN114972129A (en) * 2022-08-01 2022-08-30 电子科技大学 Image restoration method based on depth information
CN116681604A (en) * 2023-04-24 2023-09-01 吉首大学 Qin simple text restoration method based on condition generation countermeasure network
CN116681604B (en) * 2023-04-24 2024-01-02 吉首大学 Qin simple text restoration method based on condition generation countermeasure network

Similar Documents

Publication Publication Date Title
CN113870128A (en) Digital mural image restoration method based on deep convolution impedance network
CN108492281B (en) Bridge crack image obstacle detection and removal method based on generation type countermeasure network
CN111784602B (en) Method for generating countermeasure network for image restoration
CN108230278B (en) Image raindrop removing method based on generation countermeasure network
CN108038906B (en) Three-dimensional quadrilateral mesh model reconstruction method based on image
CN110852267B (en) Crowd density estimation method and device based on optical flow fusion type deep neural network
CN109903236B (en) Face image restoration method and device based on VAE-GAN and similar block search
CN109741268B (en) Damaged image complement method for wall painting
CN111242864B (en) Finger vein image restoration method based on Gabor texture constraint
CN112489164B (en) Image coloring method based on improved depth separable convolutional neural network
CN107944459A (en) A kind of RGB D object identification methods
CN110895795A (en) Improved semantic image inpainting model method
CN110807738A (en) Fuzzy image non-blind restoration method based on edge image block sharpening
CN114463492A (en) Adaptive channel attention three-dimensional reconstruction method based on deep learning
CN112164010A (en) Multi-scale fusion convolution neural network image defogging method
CN116993975A (en) Panoramic camera semantic segmentation method based on deep learning unsupervised field adaptation
CN113034388B (en) Ancient painting virtual repair method and construction method of repair model
CN105488792A (en) No-reference stereo image quality evaluation method based on dictionary learning and machine learning
CN117252782A (en) Image restoration method based on conditional denoising diffusion and mask optimization
CN112348762A (en) Single image rain removing method for generating confrontation network based on multi-scale fusion
Zhang et al. Geometry-Aware Video Quality Assessment for Dynamic Digital Human
CN116188265A (en) Space variable kernel perception blind super-division reconstruction method based on real degradation
CN116051407A (en) Image restoration method
CN113592021B (en) Stereo matching method based on deformable and depth separable convolution
CN115526891A (en) Training method and related device for generation model of defect data set

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination