CN114066744A - Artistic image restoration method and system based on style constraint - Google Patents


Info

Publication number
CN114066744A
CN114066744A (application CN202111180895.8A)
Authority
CN
China
Prior art keywords
image
style
artistic
repaired
features
Prior art date
Legal status
Pending
Application number
CN202111180895.8A
Other languages
Chinese (zh)
Inventor
张玉红 (Zhang Yuhong)
丁建浩 (Ding Jianhao)
余节约 (Yu Jieyue)
翁振雷 (Weng Zhenlei)
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202111180895.8A
Publication of CN114066744A

Classifications

    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06N3/02 Neural networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]


Abstract

The invention discloses an artistic image restoration method and system based on style constraint. The method comprises the following steps: (1) inputting an artistic image data set, and collecting the information of each style in the data set to generate basic information features; (2) extracting features to generate the basic bottom layer features of the artistic images; (3) using the two obtained features as the input-image features of a convolutional neural network to assist in training the network, optimizing a classification loss function during training, learning the style features, and forming the style expression; (4) inputting an image to be repaired into a pre-trained generative adversarial network model, and outputting the repaired image; (5) inputting the repaired image and the real image into a pre-trained VGG19 feature extraction network, extracting features of the two images at different layers, calculating the content loss and style loss between them, and judging whether the repaired image has content and style features consistent with the original image.

Description

Artistic image restoration method and system based on style constraint
Technical Field
The invention belongs to the technical field of image restoration, and particularly relates to an artistic image restoration method and system based on style constraint.
Background
Artistic images carry rich cultural, artistic, scientific and historical value. For various reasons, some painting works suffer more or less damage or large missing regions, which seriously affects appreciation, cultural creativity, cultural transmission and other activities around them. Therefore, digitally repairing damaged artistic images, or those with large missing regions, using the latest scientific and technical means is of great significance. Image restoration based on deep learning is currently the mainstream approach, because a deep convolutional neural network can learn and extract image features and acquire prior knowledge of images.
However, the prior art repairs smooth, texture-free hole regions well, while for large holes in images with complex texture, the repaired image suffers from inconsistent texture, obvious artificial traces, poor diversity of restoration results, and other problems.
How to learn the implicit rules of style, color and so on from artistic images, and then apply these rules to restoration, is a key scientific problem. Generally speaking, compared with natural images, artistic images are extremely difficult to repair digitally because of color degradation, complex noise types and other problems; at the same time, characteristics such as distinctive artistic styles and simple, regular content and colors provide good prior constraints for their restoration. In view of this particularity, conventional image restoration methods are not suitable for direct application to the restoration of artistic images.
Disclosure of Invention
The invention aims to provide an artistic image restoration method and system based on style constraint aiming at the defects of the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
an artistic image restoration method based on style constraint is carried out according to the following steps:
s1, inputting an artistic image data set, and collecting information of each style (including authors, years, genres and the like) in the data set to generate basic information characteristics;
s2, generating basic bottom layer characteristics of the artistic image by extracting the characteristics of the artistic image;
s3, taking the two features obtained in the step S1 and the step S2 as the features of the input image of the convolutional neural network to assist in training the network, optimizing a classification loss function in the training process, learning the style features of the artistic image, and forming the style expression of the artistic image;
s4, inputting the image to be repaired into the generative adversarial network model for repairing, and outputting the repaired image under the constraint of a content loss function and a texture loss function;
and S5, inputting the repaired image and the real image into a pre-trained VGG19 feature extraction network, extracting features of the two images at different layers, calculating the content loss and style loss between the repaired image and the real image, and judging whether the repaired image has content and style features consistent with the original image.
Further, in step S2, the colors of the artistic image are extracted through the HSV color histogram, the textures are extracted through the LBP algorithm, and the brush strokes are extracted through an edge detection algorithm; together these generate the basic bottom layer features of the artistic image.
Further, in step S4, under the constraint of the content loss function and the texture loss function, the image to be repaired is input into a generative adversarial network model, in which a generator consisting of 3 parallel encoder-decoder branches and a shared decoder module produces the result and global and local discriminators are used for adversarial training, and the repaired image is output.
further, the specific steps of generating the countermeasure network in step S4 are as follows:
s4-1, the generator network consists of 3 parallel encoder-decoder branches whose convolution kernels are 7x7, 5x5 and 3x3 respectively; the deep convolutional branches with different kernel sizes extract features of the artistic image to be repaired at different levels, from local to global;
s4-2, the first branch consists of 10 convolutional layers, the second branch of 11 convolutional layers and one deconvolution layer, and the third branch of 11 convolutional layers and two deconvolution layers; the features extracted by the first two branches are brought back to the original resolution by bilinear upsampling, then concatenated and fused with the third branch to obtain a depth feature map;
s4-3, converting the depth feature map into the natural image space through a shared decoder module, where the module has two convolution layers;
s4-4, in the process of converting the depth feature map into a natural image, respectively calculating the L2 (least-squares) loss and the Euclidean distance loss to impose content and texture constraints on the image and capture more detail and texture information, thereby making the generated image more vivid;
s4-5, continuously optimizing the loss function of the generative adversarial network, and learning the parameters of the network by minimizing this loss function.
Further, the loss function of the generative adversarial network in step S4 is obtained as follows:
SS4-1. to impose spatial location-based constraints, a confidence-driven reconstruction loss is designed: the confidence of known pixels is set to 1, and the confidence of unknown pixels close to the boundary is propagated from the known pixels by convolving the mask with a Gaussian filter g, generating the loss weight mask M_w as follows:

M_w^(i) = (g * M̄^(i−1)) ⊙ M,  M̄^(i) = 1 − M + M_w^(i)  (1)

where the size of g is 64x64 with standard deviation 40, M is the binary hole mask, * denotes convolution, ⊙ denotes the element-wise (Hadamard) product, and i indexes the iteration;
SS4-2. repeating equation (1) several times yields the final loss weight mask M_w; the reconstruction loss between the real image Y and the repaired image G([X, M]; θ) generated by the generator, weighted by the mask M_w, is then calculated as follows:

L_rec = ‖(Y − G([X, M]; θ)) ⊙ M_w‖_1  (2)

where θ denotes the learnable parameters of the generator;
SS4-3. the adversarial loss is a catalyst for filling missing regions; the method adopts an improved Wasserstein GAN (WGAN-GP) and uses local and global discriminators, whose discriminator loss is calculated as follows:

L_adv = E_{x̃~P_g}[D(x̃)] − E_{x~P_r}[D(x)] + λ·E_{x̂~P_x̂}[(‖∇_x̂ D(x̂)‖_2 − 1)^2]  (3)

where P_r is the real image distribution, P_g is the generated image distribution, λ is the gradient penalty coefficient, and x̂ = t·x + (1 − t)·x̃, with t sampled uniformly from [0, 1], is an interpolation between real and generated samples.
further, in the step S5, the specific extraction step of the VGG19 feature extraction network is as follows:
s5-1, the VGG19 network structure consists of convolutional layers with 3x3 kernels and stride 1, pooling layers with a 2x2 window and stride 2, ReLU activation layers, and fully connected layers;
s5-2, inputting the repaired image and the real image into a VGG19 feature extraction network, extracting features of the repaired image and the real image at different levels, and calculating content loss and style loss between the repaired image and the real image;
s5-3, for the content loss, the difference between the feature maps extracted by the Conv1_1 and Conv2_1 convolutional layers is calculated; the content loss function is as follows:

L_content = Σ_{i,j} (F^L_{ij}(Y) − F^L_{ij}(Ŷ))^2  (4)

where F^L_{ij}(Y) and F^L_{ij}(Ŷ) respectively denote the pixel value at row i and column j of the feature map extracted at layer L from the real image and the repaired image;
s5-4, the Gram matrix is taken as the style of the style picture; the Gram matrix is defined by the similarity among all feature maps extracted by the same convolution layer of the network, and the style loss function is as follows:

L_style = Σ_L ‖Gram(χ_L(Y)) − Gram(χ_L(Ŷ))‖^2  (5)

where Gram(χ_L) denotes the Gram matrix of the image features at layer L; the Gram matrix is the un-centered covariance matrix between feature maps.
Further, the overall loss function in the steps S4 and S5 is:
L_all = α·L_rec + β·L_adv + γ·L_content + δ·L_style  (6)
where α, β, γ and δ are balance parameters used to balance the various losses.
The invention also discloses an artistic image restoration system based on style constraint, which comprises the following modules:
the basic information characteristic generation module: inputting an artistic image data set, and collecting information of each style in the data set to generate basic information characteristics;
a basic underlying feature generation module: generating basic bottom layer characteristics of the artistic image by extracting the characteristics of the artistic image;
style expression forming module: taking two features obtained in a basic information feature generation module and a basic bottom layer feature generation module as features of an input image of a convolutional neural network to assist in training the network, optimizing a classification loss function in the training process, learning style features of an artistic image, and forming style expression of the artistic image;
a repaired image output module: inputting the image to be repaired into the generative adversarial network model, and outputting the repaired image;
a judging module: inputting the repaired image and the real image into a pre-trained VGG19 feature extraction network, extracting features of the two images at different layers, calculating the content loss and style loss between them, and judging whether the repaired image has content and style features consistent with the original image.
The method collects the author, year and genre of each style; extracts the colors of the artistic images through the HSV color histogram, the textures through the LBP algorithm, and the brush strokes through an edge detection algorithm, these being the main factors affecting the different style expressions of artistic images; learns the style features of the artistic paintings; and forms the style expression of the artistic images. Based on this style constraint, the image to be repaired is input into a generative adversarial network model, in which a generator consisting of 3 parallel encoder-decoder branches and a shared decoder module produces the result and global and local discriminators are used for adversarial training, and the repaired image is output. The repaired image and the real image are then input into a pre-trained VGG19 feature extraction network to judge whether the repaired image has content and style features consistent with the original image.
Compared with the prior art, the generator adopts a multi-column parallel encoder that can extract features of the artistic image to be repaired at different levels, overcoming the limitations of coarse-to-fine architectures. Through the constraint of the artistic image style during restoration, more detail and texture information of the image is captured, making the generated image more vivid. Finally, the VGG19 feature extraction network is used to judge whether the repaired image and the original image have consistent content and style features, so that the repaired image is clearer and more realistic.
Drawings
The invention is described in further detail below with reference to the figures and the detailed description.
FIG. 1 is a flow chart of an art image restoration method based on style constraints according to a preferred embodiment of the present invention;
FIG. 2 is an artistic style expression model;
FIG. 3 is an architecture diagram of an artistic image diversity restoration model based on stylistic constraints;
FIG. 4 is a block diagram of an art image restoration system based on stylistic constraints according to a preferred embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
The invention aims to provide an artistic image restoration method and system based on style constraint aiming at the defects of the prior art.
As shown in fig. 1, the embodiment provides an artistic image restoration method based on style constraint, which specifically includes the following steps:
s1, inputting an artistic image data set, and collecting information of authors, years, genres and the like of each style in the data set to generate basic information characteristics;
s2, extracting colors of the artistic image through the HSV color histogram, extracting textures of the artistic image through the LBP algorithm, and extracting brush strokes and the like of the artistic image through the edge detection algorithm to generate basic bottom layer characteristics of the artistic image;
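As an illustration of the low-level feature extraction in step S2, the sketch below computes a basic 8-neighbour LBP texture histogram and a joint HSV colour histogram in plain NumPy. It is a minimal stand-in, not the patent's actual implementation: the function names, bin counts and the simplified (non-uniform, radius-1) LBP variant are assumptions, a practical system would more likely use OpenCV or scikit-image, and the edge-detection stroke feature is omitted.

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Basic 8-neighbour LBP: threshold each neighbour against the centre
    pixel, pack the results into an 8-bit code, then histogram the codes."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]  # interior pixels (borders skipped for simplicity)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, 256))
    return hist / hist.sum()  # normalised texture descriptor

def hsv_histogram(rgb, bins=(8, 4, 4)):
    """Colour descriptor: convert RGB (floats in [0,1]) to HSV and build a
    joint histogram over the three channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(-1), rgb.min(-1)
    v = mx
    s = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)
    d = np.maximum(mx - mn, 1e-12)
    h = np.where(mx == r, (g - b) / d % 6,
        np.where(mx == g, (b - r) / d + 2, (r - g) / d + 4)) / 6.0
    hsv = np.stack([h, s, v], axis=-1).reshape(-1, 3)
    hist, _ = np.histogramdd(hsv, bins=bins, range=((0, 1),) * 3)
    return (hist / hist.sum()).ravel()
```

Both descriptors are normalised histograms, so they can be concatenated directly with the basic information features before being fed to the classification network.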
s3, taking the two characteristics obtained in S1 and S2 as the characteristics of the input image of the convolutional neural network to assist in training the network, optimizing a classification loss function in the training process, learning the style characteristics of the artistic drawing, and forming the style expression of the artistic image;
s4, under the constraint of the content loss function and the texture loss function, inputting the image to be restored into a generative adversarial network model, in which a generator consisting of 3 parallel encoder-decoder branches and a shared decoder module produces the result and global and local discriminators are used for adversarial training, as shown in FIG. 3; the specific process is as follows:
s4-1, the generator network consists of 3 parallel encoder-decoder branches whose convolution kernels are 7x7, 5x5 and 3x3 respectively; the deep convolutional branches with different kernel sizes extract features of the artistic image to be repaired at different levels, from local to global;
s4-2, the first branch consists of 10 convolutional layers, the second branch of 11 convolutional layers and one deconvolution layer, and the third branch of 11 convolutional layers and two deconvolution layers; the features extracted by the first two branches are brought back to the original resolution by bilinear upsampling, then concatenated and fused with the third branch to obtain a depth feature map;
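The branch-fusion step of s4-2 (upsample the coarser branches to the original resolution, then concatenate with the full-resolution branch along the channel axis) can be sketched as follows. This NumPy sketch shows only the resize-and-concatenate logic; the convolutional branches themselves are not modelled, and the channel counts and spatial sizes used in the test are placeholder assumptions.

```python
import numpy as np

def bilinear_resize(x, out_h, out_w):
    """Bilinearly resize a single 2-D feature map to (out_h, out_w)."""
    h, w = x.shape
    ys = np.linspace(0.0, h - 1.0, out_h)
    xs = np.linspace(0.0, w - 1.0, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = x[np.ix_(y0, x0)] * (1 - wx) + x[np.ix_(y0, x1)] * wx
    bot = x[np.ix_(y1, x0)] * (1 - wx) + x[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def fuse_branches(branches, out_h, out_w):
    """Upsample each branch's (C, H, W) feature stack to the output
    resolution and concatenate all branches along the channel axis."""
    ups = [np.stack([bilinear_resize(ch, out_h, out_w) for ch in b])
           for b in branches]
    return np.concatenate(ups, axis=0)
```

The fused map then goes through the shared two-layer decoder of s4-3; in the real model the upsampling is part of the differentiable graph rather than a NumPy post-process.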
s4-3, converting the depth feature map into the natural image space through a shared decoder module, where the module has two convolution layers;
s4-4, in the process of converting the depth feature map into a natural image, respectively calculating the L2 (least-squares) loss and the Euclidean distance loss to impose content and texture constraints on the image and capture more detail and texture information, making the generated image more vivid;
s4-5, continuously optimizing the loss function of the generative adversarial network, and learning the parameters of the network by minimizing this loss function.
SS4, the loss function of the generative adversarial network in step S4 is obtained as follows:
SS4-1. to impose spatial location-based constraints, a confidence-driven reconstruction loss is designed: the confidence of known pixels is set to 1, and the confidence of unknown pixels close to the boundary is propagated from the known pixels by convolving the mask with a Gaussian filter g, generating the loss weight mask M_w as follows:

M_w^(i) = (g * M̄^(i−1)) ⊙ M,  M̄^(i) = 1 − M + M_w^(i)  (1)

where the size of g is 64x64 with standard deviation 40, M is the binary hole mask, * denotes convolution, ⊙ denotes the element-wise (Hadamard) product, and i indexes the iteration;
SS4-2. repeating equation (1) several times yields the final loss weight mask M_w; the reconstruction loss between the real image Y and the repaired image G([X, M]; θ) generated by the generator, weighted by the mask M_w, is then calculated as follows:

L_rec = ‖(Y − G([X, M]; θ)) ⊙ M_w‖_1  (2)

where θ denotes the learnable parameters of the generator;
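A minimal NumPy sketch of the confidence-driven loss mask of SS4-1 and the mask-weighted reconstruction loss of SS4-2. For readability it uses a small 7x7 Gaussian with standard deviation 2 rather than the patent's 64x64, standard-deviation-40 filter, a fixed iteration count, an L1-weighted reconstruction term, and hypothetical function names; treat it as an illustration of the propagation mechanism only.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g1 = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    k = np.outer(g1, g1)
    return k / k.sum()

def conv2_same(img, k):
    """'Same'-size 2-D convolution by summing shifted, weighted copies."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + H, j:j + W]
    return out

def confidence_mask(M, iters=3, size=7, sigma=2.0):
    """Iterate M_w = (g * Mbar) ⊙ M with Mbar = 1 - M + M_w, so that
    confidence (loss weight) propagates from known pixels into the hole.
    M is the binary mask: 1 = missing pixel, 0 = known pixel."""
    k = gaussian_kernel(size, sigma)
    Mw = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        Mbar = 1.0 - M + Mw            # known pixels keep confidence 1
        Mw = conv2_same(Mbar, k) * M   # update weights inside the hole only
    return Mw

def weighted_rec_loss(Y, G_out, Mw):
    """Mask-weighted reconstruction loss between the real image Y and
    the generator output."""
    return np.abs((Y - G_out) * Mw).sum()
```

The resulting weights are largest for unknown pixels just inside the hole boundary and decay toward the hole centre, which is the spatial prioritisation the confidence-driven loss is designed to give.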
SS4-3. the adversarial loss is a catalyst for filling missing regions; the method adopts an improved Wasserstein GAN (WGAN-GP) and uses local and global discriminators, whose discriminator loss is calculated as follows:

L_adv = E_{x̃~P_g}[D(x̃)] − E_{x~P_r}[D(x)] + λ·E_{x̂~P_x̂}[(‖∇_x̂ D(x̂)‖_2 − 1)^2]  (3)

where P_r is the real image distribution, P_g is the generated image distribution, λ is the gradient penalty coefficient, and x̂ = t·x + (1 − t)·x̃, with t sampled uniformly from [0, 1], is an interpolation between real and generated samples.
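The WGAN-GP discriminator loss described above can be illustrated with a toy critic whose gradient is known analytically. This is purely for demonstration, not the patent's implementation: a real discriminator is a convolutional network and the gradient penalty is computed by automatic differentiation; the function names and the λ default are assumptions.

```python
import numpy as np

def wgan_gp_critic_loss(D, gradD, real, fake, lam=10.0, rng=None):
    """Critic loss of the improved Wasserstein GAN:
    E[D(fake)] - E[D(real)] + lam * E[(||grad D(xhat)||_2 - 1)^2],
    with xhat sampled on straight lines between real and fake samples."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = rng.random((real.shape[0], 1))
    xhat = t * real + (1.0 - t) * fake          # random interpolates
    grads = np.stack([gradD(x) for x in xhat])  # analytic critic gradients
    gp = ((np.linalg.norm(grads, axis=1) - 1.0) ** 2).mean()
    return D(fake).mean() - D(real).mean() + lam * gp

# Toy linear critic D(x) = w . x, whose gradient is w everywhere.
w = np.array([0.6, 0.8])   # ||w||_2 = 1, so the gradient penalty vanishes
D = lambda X: X @ w
gradD = lambda x: w
```

With unit-norm w the penalty term is exactly zero, so the loss reduces to the Wasserstein estimate E[D(fake)] − E[D(real)]; pushing the critic's gradient norm toward 1 everywhere is what enforces the 1-Lipschitz constraint in place of weight clipping.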
s5, inputting the repaired image and the real image into a pre-trained VGG19 feature extraction network, extracting features of the two images at different layers, calculating the content loss and style loss between them, and judging whether the repaired image has content and style features consistent with the original image; the specific steps of the VGG19 feature extraction network are as follows:
s5-1, the VGG19 network structure consists of convolutional layers with 3x3 kernels and stride 1, pooling layers with a 2x2 window and stride 2, ReLU activation layers, and fully connected layers;
s5-2, inputting the repaired image and the real image into a VGG19 feature extraction network, extracting features of the repaired image and the real image at different levels, and calculating content loss and style loss between the repaired image and the real image;
s5-3, for the content loss, the difference between the feature maps extracted by the Conv1_1 and Conv2_1 convolutional layers is calculated; the content loss function is as follows:

L_content = Σ_{i,j} (F^L_{ij}(Y) − F^L_{ij}(Ŷ))^2  (4)

where F^L_{ij}(Y) and F^L_{ij}(Ŷ) respectively denote the pixel value at row i and column j of the feature map extracted at layer L from the real image and the repaired image. The content loss measures the difference between the feature maps extracted from the two images at the same network layer; minimizing it helps the content features of the repaired image come closer to those of the original image.
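The content loss is a direct sum of squared differences between same-layer feature maps. The NumPy transcription below uses a dictionary-of-layers interface and layer names that are illustrative assumptions; in practice the maps would be VGG19 activations.

```python
import numpy as np

def content_loss(feats_real, feats_rep, layers=('conv1_1', 'conv2_1')):
    """Sum of squared differences between the feature maps of the real
    and the repaired image at each chosen layer."""
    return sum(((feats_real[l] - feats_rep[l]) ** 2).sum() for l in layers)
```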
S5-4, the Gram matrix is taken as the style of the style picture; the Gram matrix is defined by the similarity among all feature maps extracted by the same convolution layer of the network, and the style loss function is as follows:

L_style = Σ_L ‖Gram(χ_L(Y)) − Gram(χ_L(Ŷ))‖^2  (5)

where Gram(χ_L) denotes the Gram matrix of the image features at layer L. The style features describe the texture, color and other information of the image; minimizing the style loss helps the style features of the repaired image come closer to those of the original image.
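The Gram-matrix style loss can be sketched in NumPy as below. The 1/(H·W) normalisation and the function names are assumptions (normalisation conventions vary), and real feature maps would come from the VGG19 layers named in the text.

```python
import numpy as np

def gram(F):
    """Gram matrix of feature maps F with shape (C, H, W): the C x C
    un-centred covariance of the vectorised maps, scaled by 1/(H*W)."""
    C, H, W = F.shape
    X = F.reshape(C, H * W)
    return X @ X.T / (H * W)

def style_loss(feats_real, feats_rep):
    """Squared Frobenius distance between Gram matrices, summed over layers."""
    return sum(((gram(feats_real[l]) - gram(feats_rep[l])) ** 2).sum()
               for l in feats_real)
```

Because the Gram matrix discards spatial arrangement and keeps only channel co-activation statistics, matching it constrains texture and colour style without forcing pixel-level agreement.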
Finally, combining the loss functions in steps S4 and S5 to obtain the total loss function of the whole frame as:
L_all = α·L_rec + β·L_adv + γ·L_content + δ·L_style  (6)
where α, β, γ and δ are balance parameters used to balance the various losses.
The artistic image restoration system based on style constraint comprises the following modules:
the basic information characteristic generation module: inputting an artistic image data set, and collecting information of each style in the data set to generate basic information characteristics;
a basic underlying feature generation module: generating basic bottom layer characteristics of the artistic image by extracting the characteristics of the artistic image;
style expression forming module: taking two features obtained in a basic information feature generation module and a basic bottom layer feature generation module as features of an input image of a convolutional neural network to assist in training the network, optimizing a classification loss function in the training process, learning style features of an artistic image, and forming style expression of the artistic image;
a repaired image output module: inputting the image to be repaired into the generative adversarial network model, and outputting the repaired image;
a judging module: inputting the repaired image and the real image into a pre-trained VGG19 feature extraction network, extracting features of the two images at different layers, calculating the content loss and style loss between them, and judging whether the repaired image has content and style features consistent with the original image.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (8)

1. An artistic image restoration method based on style constraint is characterized by comprising the following steps:
s1, inputting an artistic image data set, and collecting information of each style in the data set to generate basic information characteristics;
s2, generating basic bottom layer characteristics of the artistic image by extracting the characteristics of the artistic image;
s3, taking the two features obtained in the step S1 and the step S2 as the features of the input image of the convolutional neural network to assist in training the network, optimizing a classification loss function in the training process, learning the style features of the artistic image, and forming the style expression of the artistic image;
s4, inputting the image to be repaired into the generative adversarial network model for repairing, and outputting the repaired image under the constraint of a content loss function and a texture loss function;
and S5, inputting the repaired image and the real image into a pre-trained VGG19 feature extraction network, extracting features of the two images at different layers, calculating the content loss and style loss between the repaired image and the real image, and judging whether the repaired image has content and style features consistent with the original image.
2. The artistic image restoration method based on style constraints as claimed in claim 1, wherein in step S1, the information includes author, year and genre.
3. The artistic image restoration method based on style constraints as claimed in claim 1, wherein in step S2, the colors of the artistic image are extracted through the HSV color histogram, the textures are extracted through the LBP algorithm, and the brush strokes are extracted through an edge detection algorithm; together these generate the basic bottom layer features of the artistic image.
4. An artistic image restoration method based on style constraints according to any one of claims 1-3, wherein in the step S4, the specific steps of the generative adversarial network model are as follows:
s4-1, the generator network consists of 3 parallel encoder-decoder branches whose convolution kernels are 7x7, 5x5 and 3x3 respectively; the deep convolutional branches with different kernel sizes extract features of the artistic image to be repaired at different levels, from local to global;
s4-2, the first branch consists of 10 convolutional layers, the second branch of 11 convolutional layers and one deconvolution layer, and the third branch of 11 convolutional layers and two deconvolution layers; the features extracted by the first two branches are brought back to the original resolution by bilinear upsampling, then concatenated and fused with the third branch to obtain a depth feature map;
s4-3, converting the depth feature map into a natural image space through a shared decoder module, wherein the module has two convolution layers;
s4-4, in the process of converting the depth characteristic map into a natural image, capturing more detail and texture information of the image through the constraint of artistic image style;
s4-5, learning various parameters in the network by continuously optimizing a loss function that generates a countermeasure network.
5. The artistic image restoration method based on style constraints as claimed in claim 4, wherein the loss function of the generative adversarial network in the step S4 is obtained as follows:
SS4-1. the confidence of known pixels is set to 1, and the confidence of unknown pixels close to the boundary is propagated from the known pixels by convolving the mask with a Gaussian filter g, generating the loss weight mask M_w as follows:

M_w^(i) = (g * M̄^(i−1)) ⊙ M,  M̄^(i) = 1 − M + M_w^(i)  (1)

where the size of g is 64x64 with standard deviation 40, M is the binary hole mask, * denotes convolution, ⊙ denotes the element-wise (Hadamard) product, and i indexes the iteration;
SS4-2, the loss weight mask Mw is obtained by repeating equation (1); the reconstruction loss is then computed from the real image Y, the restored image G([X, M̄]; θ) generated by the generator, and the mask Mw, according to the following formula:

Lrec = ||Mw ⊙ (Y - G([X, M̄]; θ))||_1  (2)

where θ denotes the learnable parameters;
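The confidence-weighted reconstruction loss of SS4-2 reduces to a masked L1 distance. A short numpy sketch (the normalisation by image size is an assumption; the patent's equation image is not recoverable):

```python
import numpy as np

def reconstruction_loss(Y, G_out, Mw):
    # confidence-weighted L1 distance between the real image Y and the
    # generator output; Mw down-weights pixels deep inside the hole
    return np.sum(Mw * np.abs(Y - G_out)) / Y.size

Y = np.ones((8, 8))
Mw = np.ones((8, 8))
```

With a perfect reconstruction the loss is zero, and the weight mask scales each pixel's contribution.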
SS4-3, with the improved Wasserstein GAN and the use of local and global discriminators, the discriminator loss is calculated as:

Ladv = E_{x̃~Pg}[D(x̃)] - E_{x~Pr}[D(x)] + λ E_{x̂~Px̂}[(||∇x̂ D(x̂)||_2 - 1)^2]  (3)

where Pr is the real image distribution, Pg is the generated image distribution, λ is the gradient penalty coefficient, and x̂ is sampled uniformly along straight lines between pairs of points drawn from Pr and Pg.
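The gradient-penalty term of the improved Wasserstein GAN can be sketched in numpy. This is a generic WGAN-GP penalty, not the patent's discriminator: `critic_grad` is a hypothetical callback returning the critic's gradient at the interpolated points, so the example uses a linear critic whose gradient is known in closed form.

```python
import numpy as np

def gradient_penalty(real, fake, critic_grad, lam=10.0, seed=0):
    # WGAN-GP term: interpolate x_hat between real and generated samples
    # and penalise the critic gradient norm for deviating from 1
    rng = np.random.default_rng(seed)
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake
    grads = critic_grad(x_hat)            # gradient of D w.r.t. x_hat, (N, d)
    norms = np.linalg.norm(grads, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

# a linear critic D(x) = w.x has constant gradient w, so the penalty
# is lam * (||w|| - 1)^2 regardless of the interpolation points
w = np.array([3.0, 4.0])                  # ||w|| = 5
real = np.random.default_rng(1).normal(size=(16, 2))
fake = np.random.default_rng(2).normal(size=(16, 2))
gp = gradient_penalty(real, fake, lambda x: np.tile(w, (len(x), 1)))
```

In a real training loop the gradient would come from automatic differentiation rather than a closed form.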
6. The artistic image restoration method based on style constraints as claimed in claim 5, wherein in step S5 the VGG19 feature extraction network is applied as follows:
S51, the VGG19 network structure is composed of convolutional layers with 3x3 kernels, pooling layers with 2x2 windows and stride 2, ReLU activation layers and fully connected layers;
S52, the repaired image and the real image are input into the VGG19 feature extraction network, features of the two images are extracted at different levels, and the content loss and style loss between the repaired image and the real image are calculated;
S53, for the content loss, the difference between the feature maps extracted by the Conv1_1 and Conv2_1 convolutional layers is calculated; the content loss function is:

Lcontent = Σ_L Σ_{i,j} (F^L_{ij} - P^L_{ij})^2  (4)

where F^L_{ij} and P^L_{ij} denote the pixel values in row i, column j of the feature maps extracted at layer L from the real image and the repaired image, respectively;
S54, the Gram matrix is taken as the style representation of the picture; it is computed from feature maps extracted at the same convolutional layer of the network and defines the similarity between all pairs of feature maps; the style loss function is:

Lstyle = Σ_L ||Gram(F^L) - Gram(P^L)||^2  (5)

where Gram(·^L) denotes the Gram matrix of the features at layer L; the Gram matrix is the non-centred covariance matrix between feature maps.
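The content and style losses of steps S53-S54 can be sketched together in numpy. The normalisation of the Gram matrix by C·H·W is an assumption (the patent's equation images are not recoverable); the feature maps here are random stand-ins for VGG19 Conv1_1/Conv2_1 activations.

```python
import numpy as np

def content_loss(feats_real, feats_repaired):
    # squared feature-map differences, summed over the chosen VGG layers
    return sum(np.mean((fr - fp) ** 2)
               for fr, fp in zip(feats_real, feats_repaired))

def gram(F):
    # F: (C, H, W) feature maps at one layer; the Gram matrix is the
    # non-centred covariance between channels, capturing style statistics
    C, H, W = F.shape
    Fm = F.reshape(C, H * W)
    return Fm @ Fm.T / (C * H * W)

def style_loss(feats_real, feats_repaired):
    return sum(np.mean((gram(fr) - gram(fp)) ** 2)
               for fr, fp in zip(feats_real, feats_repaired))

rng = np.random.default_rng(0)
F1 = rng.normal(size=(3, 8, 8))   # stand-in for one layer's feature map
F2 = rng.normal(size=(3, 8, 8))
```

Because the Gram matrix discards spatial positions, the style loss compares texture statistics rather than pixel layout, which is why it is paired with the position-sensitive content loss.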
7. The artistic image restoration method based on style constraints as claimed in claim 6, wherein the total loss function of steps S4 and S5 is:

Lall = αLrec + βLadv + γLcontent + δLstyle  (6)

where α, β, γ and δ are balance parameters used to weight the individual losses.
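Equation (6) is a plain weighted sum; a one-line sketch, with placeholder weights (the patent does not state the values of α, β, γ, δ):

```python
def total_loss(l_rec, l_adv, l_content, l_style,
               alpha=1.0, beta=0.001, gamma=0.05, delta=120.0):
    # weighted combination of the four losses; the default weights are
    # illustrative placeholders, not values specified by the patent
    return alpha * l_rec + beta * l_adv + gamma * l_content + delta * l_style
```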
8. An artistic image restoration system based on style constraints, characterized by comprising the following modules:
a basic information feature generation module: inputs an artistic image data set and gathers the information of each style in the data set to generate basic information features;
a basic low-level feature generation module: generates basic low-level features of the artistic image by extracting features from the artistic image;
a style expression forming module: takes the two features obtained by the basic information feature generation module and the basic low-level feature generation module as input-image features of a convolutional neural network to assist in training the network, optimizes a classification loss function during training, learns the style features of the artistic image, and forms the style expression of the artistic image;
a repaired image output module: inputs the image to be repaired into the generative adversarial network model and outputs the repaired image;
a judging module: inputs the repaired image and the real image into a pre-trained VGG19 feature extraction network, extracts features of the two images at different levels, calculates the content loss and style loss between the repaired image and the real image, and judges whether the repaired image has content and style features consistent with the original image.
CN202111180895.8A 2021-10-11 2021-10-11 Artistic image restoration method and system based on style constraint Pending CN114066744A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111180895.8A CN114066744A (en) 2021-10-11 2021-10-11 Artistic image restoration method and system based on style constraint


Publications (1)

Publication Number Publication Date
CN114066744A true CN114066744A (en) 2022-02-18

Family

ID=80234240


Country Status (1)

Country Link
CN (1) CN114066744A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114943656A (en) * 2022-05-31 2022-08-26 山东财经大学 Face image restoration method and system
CN114943656B (en) * 2022-05-31 2023-02-28 山东财经大学 Face image restoration method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination