CN113012076B - Dunhuang fresco restoration method based on adjacent pixel points and self-encoder - Google Patents

Dunhuang fresco restoration method based on adjacent pixel points and self-encoder

Info

Publication number
CN113012076B
Authority
CN
China
Prior art keywords
image
repaired
determining
current point
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110460228.9A
Other languages
Chinese (zh)
Other versions
CN113012076A (en)
Inventor
何俊霖
张伟文
陆铿宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202110460228.9A
Publication of CN113012076A
Application granted
Publication of CN113012076B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a Dunhuang wall painting restoration method based on adjacent pixel points and a self-encoder, which is used for solving the technical problem of the poor restoration effect of Dunhuang wall paintings. The invention comprises the following steps: acquiring an original image of a preset Dunhuang wall painting; decomposing the original image to obtain a structural layer image and a texture layer image; determining a region to be repaired and a first known region of the texture layer image; acquiring known pixel points of the first known region; repairing the region to be repaired by adopting the known pixel points to obtain a texture layer reconstructed image; and reconstructing an image by adopting the structural layer image and the texture layer reconstructed image, and generating a repair image.

Description

Dunhuang fresco restoration method based on adjacent pixel points and self-encoder
Technical Field
The invention relates to the technical field of image processing, in particular to a Dunhuang mural repairing method based on adjacent pixel points and a self-encoder.
Background
Image restoration is an important content in image restoration research and is also a research hotspot in the current image processing and computer vision fields.
Image restoration is the process of filling in information in regions of an image where information is missing; its aim is to restore the defective image so that an observer cannot perceive that the image was defective or has been restored. The technology has high application value in cultural relic protection, film special effects, old photo restoration, text removal from images, obstruction removal, video error concealment and the like.
Dunhuang art lies at a crossing point where multiple cultures merged and interacted, and the Dunhuang murals, as a main component of Dunhuang art, are listed as world cultural heritage. However, owing to human and natural factors, the damage to the Dunhuang murals is quite varied, with continuing cracking, fading and flaking; besides whole blank areas formed by large-area image loss, the murals also contain a very large number of tiny defects.
Disclosure of Invention
The invention provides a Dunhuang wall painting restoration method based on adjacent pixel points and a self-encoder, which is used for solving the technical problem of poor restoration effect of Dunhuang wall painting.
The invention provides a Dunhuang wall painting restoration method based on adjacent pixel points and a self-encoder, which comprises the following steps:
acquiring an original image of a preset Dunhuang wall painting;
decomposing the original image to obtain a structural layer image and a texture layer image;
determining a region to be repaired and a first known region of the texture layer image;
acquiring known pixel points of the first known region;
repairing the region to be repaired by adopting the known pixel points to obtain a texture layer reconstructed image;
and reconstructing an image by adopting the structural layer image and the texture layer reconstructed image, and generating a repair image.
Optionally, the step of decomposing the original image to obtain a structural layer image and a texture layer image includes:
and decomposing the original image by adopting a preset image decomposition model to obtain a structural layer image and a texture layer image.
Optionally, the step of repairing the to-be-repaired area by using the known pixel points to obtain a texture layer reconstructed image includes:
determining a current point to be repaired in the area to be repaired;
determining the neighborhood of the current point to be repaired in the first known area;
calculating the pixel value of the current point to be repaired by adopting the known pixel points in the neighborhood;
filling the current point to be repaired by adopting the pixel value to obtain a repaired point, and merging the repaired point into the first known region to obtain a second known region;
judging whether all points in the area to be repaired have been repaired;
if not, taking the second known area as a first known area, and returning to the step of determining the current point to be repaired in the area to be repaired;
if yes, determining the image corresponding to the second known area as a texture layer reconstructed image.
Optionally, the step of determining the current point to be repaired in the area to be repaired includes:
determining boundary points of the area to be repaired;
and determining the current point to be repaired in the boundary points.
Optionally, the step of determining the current point to be repaired in the boundary points includes:
calculating the priority of each boundary point;
and determining the boundary point with the highest priority as the current point to be repaired.
Optionally, the step of determining the neighborhood of the current point to be repaired in the first known region includes:
and determining a first known area in a preset radius as a neighborhood of the current point to be repaired by taking the current point to be repaired as a circle center.
Optionally, the step of calculating the pixel value of the current point to be repaired by using the known pixel points in the neighborhood includes:
calculating the weighted value of the pixel value of each known pixel point in the neighborhood;
and calculating the pixel value of the current point to be repaired by adopting the weighted value.
Optionally, the step of reconstructing an image using the structural layer image and the texture layer reconstructed image, and generating a repair image, includes:
reconstructing an image by adopting the structural layer image and the texture layer reconstructed image to generate a reconstructed image;
and removing noise information of the reconstructed image to generate a repair image.
Optionally, the step of removing noise information of the reconstructed image to obtain a repair image includes:
and removing noise information of the reconstructed image by adopting a preset self-encoder to generate a repair image.
Optionally, the self-encoder includes a convolutional layer and a deconvolution layer.
From the above technical scheme, the invention has the following advantages: according to the invention, an original image of a preset Dunhuang wall painting is obtained; decomposing the original image to obtain a structural layer image and a texture layer image; determining a region to be repaired and a first known region of the texture layer image; acquiring known pixel points of the first known region; repairing the region to be repaired by adopting the known pixel points to obtain a texture layer reconstructed image; and reconstructing an image by adopting the structural layer image and the texture layer reconstructed image, and generating a repair image. The restoration effect of the image is enhanced, and the fidelity of the restored image is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained from these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flowchart showing steps of a Dunhuang mural restoration method based on adjacent pixels and a self-encoder according to an embodiment of the present invention;
FIG. 2 is a flowchart showing steps of a Dunhuang mural restoration method based on neighboring pixels and a self-encoder according to another embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating decomposition of a VO image decomposition model according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a repair method based on adjacent pixels according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the overall network architecture of the self-encoder according to the embodiment of the present invention;
FIG. 6 is a schematic diagram of a building block provided by an embodiment of the present invention;
fig. 7 is a block diagram of a Dunhuang mural repair device based on adjacent pixels and a self-encoder according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a Dunhuang wall painting restoration method based on adjacent pixel points and a self-encoder, which is used for solving the technical problem of poor restoration effect of Dunhuang wall painting.
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings, and it is apparent that the embodiments described below are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a Dunhuang mural restoration method based on adjacent pixels and a self-encoder according to an embodiment of the present invention.
The invention provides a Dunhuang fresco restoration method based on adjacent pixel points and a self-encoder, which specifically comprises the following steps:
step 101, obtaining an original image of a preset Dunhuang wall painting;
in the embodiment of the invention, to repair a Dunhuang wall painting, image acquisition of the painting is first required in order to obtain an original image on which data analysis can be carried out.
Step 102, decomposing an original image to obtain a structural layer image and a texture layer image;
in the embodiment of the invention, the original image can be regarded as being formed by overlapping layers of images with different properties, and the original image can be specifically divided into a structural layer image and a texture layer image.
Texture arises from the varied physical properties of object surfaces: different physical properties correspond to different gray levels or color information of a particular surface feature, and different physical surfaces produce different texture images. Texture is therefore an extremely important property of an image and plays an important role in computer vision and image processing. Texture is a macroscopic manifestation of locally repeating patterns of intensity in an image, so the gray levels of a defect area can be predicted from the gray-level variation of the surrounding texture, allowing the defect area to be repaired.
Step 103, determining a region to be repaired and a first known region of the texture layer image;
for an image containing defect information, its texture layer image can be regarded as composed of the region to be repaired and the first known region. In the embodiment of the invention, the purpose of acquiring the first known region is to repair the region to be repaired based on the texture features of the first known region.
Step 104, obtaining known pixel points of a first known area;
step 105, repairing the area to be repaired by using known pixel points to obtain a texture layer reconstructed image;
in the embodiment of the invention, the region to be repaired of the texture layer image can be repaired according to the known pixel points of the first known region, so as to obtain a texture layer reconstructed image. The texture layer reconstructed image does not contain defect information.
And 106, reconstructing an image by adopting the structural layer image and the texture layer reconstructed image, and generating a repair image.
After the texture layer reconstructed image is obtained, the texture layer reconstructed image and the structural layer image can be integrated to obtain a repair image without a defect area.
According to the invention, an original image of a preset Dunhuang wall painting is obtained; decomposing the original image to obtain a structural layer image and a texture layer image; determining a region to be repaired and a first known region of the texture layer image; acquiring known pixel points of the first known region; repairing the region to be repaired by adopting the known pixel points to obtain a texture layer reconstructed image; and reconstructing an image by adopting the structural layer image and the texture layer reconstructed image to generate a repair image. The large number of small-area damage and contamination problems that the frescoes develop over time can thus be well addressed, the restoration effect of the image is enhanced, and the fidelity of the restored image is improved.
Referring to fig. 2, fig. 2 is a flowchart illustrating a Dunhuang mural restoration method based on neighboring pixels and a self-encoder according to another embodiment of the present invention. The method specifically comprises the following steps:
step 201, obtaining an original image of a preset Dunhuang wall painting;
step 202, decomposing an original image by adopting a preset image decomposition model to obtain a structural layer image and a texture layer image;
in the embodiment of the invention, a VO (Vese-Osher) image decomposition model can be adopted to decompose an original image to obtain a structural layer image and a texture layer image, namely:
f=u+v
where f represents the original image, u represents the structural layer image, and v represents the texture layer image.
The energy functional of the VO image decomposition model is expressed as:
$$\inf_{u,\,g_1,\,g_2}\left\{E(u,g_1,g_2)=\int_\Omega|\nabla u|\,dx\,dy+\lambda\int_\Omega\big|f-u-\partial_x g_1-\partial_y g_2\big|^2\,dx\,dy+\mu\left[\int_\Omega\left(\sqrt{g_1^2+g_2^2}\right)^p dx\,dy\right]^{1/p}\right\}$$
where f represents the original image, u represents the structural layer image, and the texture layer image is v = ∂x g1 + ∂y g2 with g1, g2 ∈ L^p(R^2); λ and μ are penalty parameters that adjust the weight of each term in the model and satisfy λ > 0, μ > 0; inf denotes the infimum; g1, g2 represent the components of the vector field; ∂x, ∂y represent the partial derivatives with respect to x and y; L represents the Lebesgue space; R represents the real number domain; x, y represent the coordinate directions; Ω represents the image area; ∇ represents the gradient; and p is an undetermined parameter.
The decomposition results differ little when the parameter p takes values in [1,10], and the model runs fastest when p=1; therefore p is set to 1, and the corresponding Euler-Lagrange equations are obtained by the variational method:
$$u=f-\partial_x g_1-\partial_y g_2+\frac{1}{2\lambda}\,\operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right)$$
$$\mu\,\frac{g_1}{\sqrt{g_1^2+g_2^2}}=2\lambda\left[\frac{\partial}{\partial x}(u-f)+\partial_{xx}^2 g_1+\partial_{xy}^2 g_2\right]$$
$$\mu\,\frac{g_2}{\sqrt{g_1^2+g_2^2}}=2\lambda\left[\frac{\partial}{\partial y}(u-f)+\partial_{xy}^2 g_1+\partial_{yy}^2 g_2\right]$$
since the image region Ω is bounded, functional boundary constraints are:
$$\frac{\nabla u}{|\nabla u|}\cdot\vec n=0,\qquad \big(f-u-\partial_x g_1-\partial_y g_2\big)\,n_x=0,\qquad \big(f-u-\partial_x g_1-\partial_y g_2\big)\,n_y=0$$
where n = (n_x, n_y) is the outward unit normal vector of the boundary ∂Ω.
The partial derivatives are approximated by finite differences to solve these equations iteratively for u, g1 and g2, and the texture layer image is then obtained as v = f − u.
In one example, the decomposition result of decomposing the original image using the VO image decomposition model is shown in fig. 3.
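For illustration only, a minimal Python sketch of structure–texture splitting in the spirit of f = u + v is given below. It uses total-variation denoising as a stand-in for the VO model, so the decomposition rule, the weight value and the input file name are illustrative assumptions rather than the patented procedure, which would instead solve the Euler-Lagrange equations above iteratively.

```python
# Minimal structure-texture split illustrating f = u + v.
# TV denoising is used here as a stand-in for the VO model; the weight value
# and the input file name are illustrative assumptions.
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import denoise_tv_chambolle

def decompose(f: np.ndarray, weight: float = 0.15):
    """Return (u, v): a smooth structure layer u and the texture residual v = f - u."""
    u = denoise_tv_chambolle(f, weight=weight, channel_axis=-1)
    v = f - u
    return u, v

if __name__ == "__main__":
    f = img_as_float(io.imread("mural.png"))  # hypothetical input image
    u, v = decompose(f)
    print(u.shape, v.shape)
```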
Step 203, determining a region to be repaired and a first known region of the texture layer image;
step 204, obtaining known pixel points of a first known region;
step 205, repairing the area to be repaired by using known pixel points to obtain a texture layer reconstructed image;
in the embodiment of the invention, the region to be repaired of the texture layer image can be repaired according to the known pixel points of the first known region, so as to obtain a texture layer reconstructed image.
In one example, step 205 may include the sub-steps of:
s51, determining a current point to be repaired in the area to be repaired;
in the embodiment of the invention, the area to be repaired can be filled with the adjacent pixel points. Firstly, determining a current point to be repaired in a region to be repaired, and particularly determining the current point to be repaired by a method for calculating boundary priority. The method comprises the following specific steps:
s511, determining boundary points of the area to be repaired;
s512, calculating the priority of each boundary point;
and S513, determining the boundary point with the highest priority as the current point to be repaired.
S52, determining the neighborhood of the current point to be repaired in the first known area;
s53, calculating the pixel value of the current point to be repaired by adopting the known pixel points in the neighborhood;
s54, filling the current point to be repaired with a pixel value to obtain a repaired point, and merging the repaired point into the first known region to obtain a second known region;
s55, judging whether all points in the area to be repaired have been repaired;
s56, if not, taking the second known area as the first known area, and returning to the step of determining the current point to be repaired in the area to be repaired;
and S57, if yes, determining the image corresponding to the second known area as a texture layer reconstructed image.
After the current point to be repaired is determined, the pixel value of the point can be calculated by acquiring the neighboring pixel points of the point. Specifically, a neighborhood of the current point to be repaired may be determined in the first known region, where the first known region within the preset radius may be determined as the neighborhood of the current point to be repaired with the current point to be repaired as a center of a circle. The known pixel points in the neighborhood are the adjacent pixel points of the current point to be repaired. The pixel value of the current point to be repaired can be calculated through the adjacent pixel points. In a specific implementation, the pixel value of the current point to be repaired may be calculated by:
obtaining a weighted value of pixel values of all known pixel points in the neighborhood; and calculating the pixel value of the current point to be repaired by adopting the weighted value.
Further, after the current point to be repaired has been repaired, it can serve as a known pixel point to help compute the pixel values of the other points to be repaired in the region. Specifically, the repaired point may be merged into the first known region to obtain a second known region; while the region to be repaired is not empty, the second known region is taken as the first known region and the step of determining the current point to be repaired is executed again, until the region to be repaired is empty. At that point the entire texture layer image belongs to the second known region, which can be used as the texture layer reconstructed image and serves as the texture layer basis for reconstructing the final image.
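A minimal Python sketch of this fill loop (sub-steps S51–S57) is shown below; it picks an arbitrary boundary point and fills it with a plain neighborhood average, whereas the method described next selects points by priority and uses a weighted estimate, so the selection rule, window size and function name here are simplifying assumptions.

```python
# Sketch of the fill loop S51-S57: repeatedly pick a boundary point, fill it
# from its known neighbours, merge it into the known region, stop when no
# unrepaired point remains. Selection and filling are deliberately simplified.
import numpy as np

def repair_texture_layer(texture, known):
    """texture: HxWx3 float array; known: HxW bool mask (True = first known region)."""
    texture = texture.copy()
    known = known.copy()
    while not known.all():                              # S55: unrepaired points remain
        ys, xs = np.nonzero(~known)
        # S51 (simplified): boundary points = unknown points with a known neighbour.
        boundary = [(r, c) for r, c in zip(ys, xs)
                    if known[max(0, r - 1):r + 2, max(0, c - 1):c + 2].any()]
        if not boundary:
            break                                       # isolated hole with no known neighbours
        r, c = boundary[0]
        # S52-S53 (simplified): average the known pixels in a small square neighbourhood.
        patch = texture[max(0, r - 2):r + 3, max(0, c - 2):c + 3]
        mask = known[max(0, r - 2):r + 3, max(0, c - 2):c + 3]
        texture[r, c] = patch[mask].mean(axis=0)
        # S54: fill the point and merge it into the known region (second known region).
        known[r, c] = True
    return texture                                      # S57: texture layer reconstructed image
```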
For ease of understanding, as shown in fig. 4, in the embodiment of the present invention, I is the first known region, Ω is the region to be repaired and ∂Ω is its boundary. For a point p ∈ ∂Ω, let N_ε(p) denote the set of known points in a neighborhood centered on p with radius ε. The color value of the repair point p is determined by the known neighboring pixels of p and is estimated from the weighted points in N_ε(p):
$$I(p)=\frac{\sum_{q\in N_\varepsilon(p)}\omega(p,q)\,I(q)}{\sum_{q\in N_\varepsilon(p)}\omega(p,q)}$$
where I(p) is the pixel value of point p, ω(p, q) is the weight function, and q is a neighboring pixel point of p.
The weight function is ω(p, q) = dirc(p, q)·dist(p, q), where
$$\operatorname{dist}(p,q)=\frac{d_0^2}{\lVert p-q\rVert^2}$$
is the geometric distance factor, indicating that the farther a point q is from p, the smaller its effect on p, with d_0 a constant; dirc(p, q) is the direction factor, indicating that among the neighboring points of p, the smaller the angle between the contour (isophote) direction at q and the contour direction at p, the larger the contribution of q to p. The pixel value of the point to be repaired is obtained by applying this calculation separately to the three color channels R, G and B.
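The following Python sketch estimates the color of a single point p from its known neighbors within radius ε. The dist(p, q) term follows the d_0²/‖p−q‖² form above; the dirc(p, q) term is approximated here by the alignment of local gradient (contour) directions at p and q, which is a simplifying assumption since the exact expression is not reproduced above.

```python
# Sketch: estimate the colour of one point p from known neighbours within radius eps.
# dist() follows the d0^2 / ||p - q||^2 form above; dirc() is a simplified
# contour-alignment term and is an illustrative assumption.
import numpy as np

def fill_point(img, known, p, eps=4, d0=1.0):
    """img: HxWx3 float array; known: HxW bool mask; p: (row, col) of the point to fill."""
    gy, gx = np.gradient(np.where(known, img.mean(axis=2), 0.0))  # rough contour directions
    rows, cols = img.shape[:2]
    r0, c0 = p
    num, den = np.zeros(3), 0.0
    for r in range(max(0, r0 - eps), min(rows, r0 + eps + 1)):
        for c in range(max(0, c0 - eps), min(cols, c0 + eps + 1)):
            if not known[r, c]:
                continue
            d2 = float((r - r0) ** 2 + (c - c0) ** 2)
            if d2 == 0.0 or d2 > eps ** 2:
                continue
            dist = d0 ** 2 / d2                           # geometric distance factor
            gp = np.array([gy[r0, c0], gx[r0, c0]])       # gradient near p
            gq = np.array([gy[r, c], gx[r, c]])           # gradient at q
            dirc = abs(gp @ gq) / (np.linalg.norm(gp) * np.linalg.norm(gq) + 1e-8)
            w = dirc * dist
            num, den = num + w * img[r, c], den + w
    return num / den if den > 0 else img[r0, c0]
```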
Starting from the outer boundary ∂Ω of the region to be repaired, the region Ω is narrowed step by step until Ω is empty. The filling order is particularly critical, since a point filled earlier becomes a known point for other unfilled points, so the filling order directly affects the repair quality. The embodiment of the invention therefore computes a priority for each boundary point and determines the repair order according to that priority. In one example, the priority P(p) may be calculated by the following formula:
P(p)=C(p)D(p)
where C(p) is the confidence (trust) factor and D(p) is the data factor, defined as:
$$C(p)=\frac{\sum_{q\in\Psi_p\cap I}C(q)}{|\Psi_p|},\qquad D(p)=\frac{\big|\nabla I_p^{\perp}\cdot n_p\big|}{\alpha}$$
where the square window Ψ_p centered on p has side length (2ε+1) and |Ψ_p| is the area of Ψ_p; α is a normalization factor (α = 255 for gray-scale images); n_p is the unit normal vector of the boundary ∂Ω at the point p, estimated as the normal at p of the curve fitted to the boundary points adjacent to p; and ∇I_p^⊥ is the contour (isophote) direction of the image at p. Initially,
$$C(p)=\begin{cases}0, & p\in\Omega\\ 1, & p\in I.\end{cases}$$
In the embodiment of the invention, only one point is repaired at a time, and then the priority of the boundary points near the repaired point is updated without recalculating the priorities of all the boundary points, so that the repair time can be saved, and the efficiency is improved.
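A Python sketch of the priority-based selection of the next point to repair is given below, following P(p) = C(p)·D(p) with the confidence and data terms defined above; the gradient-based estimates of the isophote direction and of the boundary normal, and the window size, are simplifying assumptions.

```python
# Sketch: pick the boundary point with the highest priority P(p) = C(p) * D(p).
# C is the confidence term averaged over a square window, D the data term built
# from the isophote direction and the (roughly estimated) boundary normal.
import numpy as np
from scipy import ndimage

def next_point_to_fill(gray, known, C, eps=4, alpha=255.0):
    """gray: HxW float image; known: HxW bool mask; C: HxW confidence map (1 known, 0 unknown)."""
    # Fill front: unknown pixels that touch at least one known pixel.
    boundary = (~known) & ndimage.binary_dilation(known)
    gy, gx = np.gradient(np.where(known, gray, 0.0))
    ny, nx = np.gradient(known.astype(float))          # rough normal of the fill front
    best, best_p = -1.0, None
    for r, c in zip(*np.nonzero(boundary)):
        win = C[max(0, r - eps):r + eps + 1, max(0, c - eps):c + eps + 1]
        conf = win.sum() / ((2 * eps + 1) ** 2)        # C(p), confidence term
        isophote = np.array([-gx[r, c], gy[r, c]])     # direction along the contour at p
        normal = np.array([ny[r, c], nx[r, c]])
        normal /= (np.linalg.norm(normal) + 1e-8)
        data = abs(isophote @ normal) / alpha          # D(p), data term
        priority = conf * data
        if priority > best:
            best, best_p = priority, (r, c)
    return best_p
```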
Step 206, reconstructing an image by adopting the structural layer image and the texture layer reconstructed image to generate a reconstructed image;
step 207, removing noise information of the reconstructed image, and generating a repair image.
In the embodiment of the invention, after the texture layer reconstructed image is obtained, it can be combined with the structural layer image to obtain a reconstructed image, and the repair image is generated after the noise information in the reconstructed image is removed. In one example, a preset self-encoder may be employed to remove the noise information of the reconstructed image and generate the repair image.
A self-encoder (auto-encoder) is a type of neural network whose two core parts are an encoder and a decoder: the encoder compresses the input data into a latent representation space, and the decoder reconstructs the data from this representation space to obtain the final output. On this basis, the embodiment of the invention proposes a self-encoder with a deep fully convolutional encoding-decoding framework for solving the image denoising problem while avoiding the loss of useful image detail caused by pooling operations. The specific structure is shown in fig. 5.
As shown in fig. 5, the network structure of the self-encoder in the embodiment of the present invention consists of multiple symmetric convolutional layers (the encoder) and deconvolution layers (the decoder), and learns an end-to-end mapping from a corrupted image to the original image. The weights θ of the convolution and deconvolution kernels are estimated by minimizing the Euclidean loss between the network output and the clean image. Specifically, given N training sample pairs (X_i, Y_i), where X_i is a noisy image and Y_i is the corresponding clean image, the following mean squared error is minimized:
$$\mathcal{L}(\theta)=\frac{1}{2N}\sum_{i=1}^{N}\big\lVert F(X_i;\theta)-Y_i\big\rVert_F^2$$
where F(X_i; θ) denotes the network output for X_i and i ranges from 1 to N.
The convolution layers act as a feature extractor, encoding the principal components of the image content while removing noise; the deconvolution layers decode the abstracted image content to recover the details of the image.
As shown in fig. 5 and fig. 6, skip-layer connections are added between the feature map of each convolutional layer and the feature map of the deconvolution layer that mirrors it, with each such pair connected once. This has two advantages: first, the response of a convolution layer is passed directly to its mirrored deconvolution layer, and the feature map carried by the skip connection contains many image details, which helps the decoder retain more detail information and recover a better clean image; second, the skip connections allow gradients to be back-propagated to the lower layers, making deeper networks easier to train.
Further, the kernel sizes of the convolutions and deconvolutions may both be set to 3×3, which gives good image recognition performance. In addition, since the network essentially performs pixel-level prediction, the input and output of the network are images of the same size w×h×c, where w, h and c are the width, height and number of channels, respectively, and the feature maps joined by a skip connection are fused by element-wise summation. Still further, the embodiment of the present invention may choose not to learn the mapping from the input image X to the output image Y directly, but to learn the residual F(X) = Y − X, which makes training more efficient.
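A compact PyTorch sketch of such a symmetric convolution–deconvolution self-encoder with element-wise skip connections and residual output is shown below; the layer count, channel width and loss constant are illustrative assumptions rather than the exact network of fig. 5 and fig. 6.

```python
# Sketch of a symmetric conv/deconv denoising self-encoder with skip connections
# and residual output; depth and channel widths are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, channels=3, width=64, depth=4):
        super().__init__()
        self.enc = nn.ModuleList()
        self.dec = nn.ModuleList()
        c_in = channels
        for _ in range(depth):                       # encoder: 3x3 convolutions + ReLU
            self.enc.append(nn.Sequential(
                nn.Conv2d(c_in, width, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            c_in = width
        for i in range(depth):                       # decoder: mirrored 3x3 deconvolutions
            c_out = channels if i == depth - 1 else width
            self.dec.append(nn.ConvTranspose2d(width, c_out, kernel_size=3, padding=1))

    def forward(self, x):
        feats, h = [], x
        for layer in self.enc:
            h = layer(h)
            feats.append(h)
        for i, layer in enumerate(self.dec):
            h = layer(h)
            skip = feats[len(self.dec) - 1 - i]      # mirrored encoder feature map
            if h.shape == skip.shape:
                h = torch.relu(h + skip)             # element-wise skip connection
        return x + h                                 # residual learning: predict Y - X

def mse_loss(model, x_noisy, y_clean):
    # Euclidean (mean squared) loss between network output and the clean target.
    return torch.mean((model(x_noisy) - y_clean) ** 2) / 2
```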
After the self-encoder is trained, the reconstructed image is input into the trained self-encoder, so that noise information in the reconstructed image can be removed, and a repair image is obtained.
According to the invention, an original image of a preset Dunhuang wall painting is obtained; decomposing the original image to obtain a structural layer image and a texture layer image; determining a region to be repaired and a first known region of the texture layer image; acquiring known pixel points of the first known region; repairing the region to be repaired by adopting the known pixel points to obtain a texture layer reconstructed image; and reconstructing an image by adopting the structural layer image and the texture layer reconstructed image to generate a repair image. The large number of small-area damage and contamination problems that the frescoes develop over time can thus be well addressed, the restoration effect of the image is enhanced, and the fidelity of the restored image is improved.
Referring to fig. 7, fig. 7 is a block diagram of a Dunhuang mural repair device based on adjacent pixels and a self-encoder according to an embodiment of the invention.
The embodiment of the invention provides a Dunhuang wall painting restoration device based on adjacent pixel points and a self-encoder, which comprises the following components:
the original image obtaining module 701 is configured to obtain an original image of a preset dunhuang fresco;
the original image decomposition module 702 is configured to decompose an original image to obtain a structural layer image and a texture layer image;
a to-be-repaired area and first known area determining module 703, configured to determine a to-be-repaired area and a first known area of the texture layer image;
a known pixel point obtaining module 704, configured to obtain a known pixel point of the first known region;
the texture layer reconstructed image generating module 705 is configured to repair a region to be repaired by using known pixel points to obtain a texture layer reconstructed image;
the repair image generation module 706 is configured to generate a repair image by using the structural layer image and the texture layer reconstructed image.
In an embodiment of the present invention, the original image decomposition module 702 includes:
the original image decomposition sub-module is used for decomposing the original image by adopting a preset image decomposition model to obtain a structural layer image and a texture layer image.
In an embodiment of the present invention, the texture layer reconstructed image generation module 705 includes:
the current point to be repaired determining submodule is used for determining the current point to be repaired in the area to be repaired;
a neighborhood determination submodule, configured to determine a neighborhood of a current point to be repaired in a first known region;
a pixel value calculating sub-module for calculating the pixel value of the current point to be repaired by adopting the known pixel points in the neighborhood;
the second known region generation submodule is used for filling the current point to be repaired with a pixel value to obtain a repaired point, and merging the repaired point into the first known region to obtain a second known region;
the judging submodule is used for judging whether all points in the area to be repaired have been repaired;
the returning sub-module is used for taking the second known area as the first known area if not, and returning to the step of determining the current point to be repaired in the area to be repaired;
and the texture layer reconstructed image determining submodule is used for determining, if so, the image corresponding to the second known region as the texture layer reconstructed image.
In the embodiment of the invention, the current point to be repaired determining submodule comprises:
a boundary point determining unit for determining a boundary point of the area to be repaired;
and the current point to be repaired determining unit is used for determining the current point to be repaired in the boundary points.
In an embodiment of the present invention, a current point to be repaired determining unit includes:
a priority calculating subunit for calculating a priority of each boundary point;
and the current point to be repaired determining subunit is used for determining the boundary point with the highest priority as the current point to be repaired.
In an embodiment of the present invention, the neighborhood determination submodule includes:
the neighborhood determining unit is used for determining a first known area in a preset radius as a neighborhood of the current point to be repaired by taking the current point to be repaired as a circle center.
In an embodiment of the present invention, a pixel value calculation submodule includes:
a weighted value calculation unit for calculating a weighted value of pixel values of each known pixel point in the neighborhood;
and the pixel value calculating unit is used for calculating the pixel value of the current point to be repaired by adopting the weighted value.
In an embodiment of the present invention, the repair image generation module 706 includes:
the reconstructed image generation sub-module is used for generating a reconstructed image by adopting the structure layer image and the texture layer reconstructed image;
and the restoration image generation sub-module is used for removing noise information of the reconstructed image and generating a restoration image.
In an embodiment of the present invention, a repair image generation sub-module includes:
and the restoration image generation unit is used for removing noise information of the reconstructed image by adopting a preset self-encoder to generate a restoration image.
In an embodiment of the invention, the self-encoder includes a convolutional layer and a deconvolution layer.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the apparatus described above, which is not described herein again.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (3)

1. A method for repairing a Dunhuang wall painting based on adjacent pixels and a self-encoder, comprising:
acquiring an original image of a preset Dunhuang wall painting;
decomposing the original image to obtain a structural layer image and a texture layer image;
determining a region to be repaired and a first known region of the texture layer image;
acquiring known pixel points of the first known region;
repairing the region to be repaired by adopting the known pixel points to obtain a texture layer reconstructed image;
reconstructing an image by adopting the structural layer image and the texture layer reconstructed image to generate a repair image;
the step of decomposing the original image to obtain a structural layer image and a texture layer image comprises the following steps:
decomposing the original image by adopting a preset image decomposition model, wherein the preset image decomposition model is a VO image decomposition model, namely:
f=u+v
wherein f represents an original image, u represents a structural layer image, and v represents a texture layer image;
the energy functional of the VO image decomposition model is expressed as:
$$\inf_{u,\,g_1,\,g_2}\left\{E(u,g_1,g_2)=\int_\Omega|\nabla u|\,dx\,dy+\lambda\int_\Omega\big|f-u-\partial_x g_1-\partial_y g_2\big|^2\,dx\,dy+\mu\left[\int_\Omega\left(\sqrt{g_1^2+g_2^2}\right)^p dx\,dy\right]^{1/p}\right\}$$
where f represents the original image, u represents the structural layer image, and the texture layer image is v = ∂x g1 + ∂y g2 with g1, g2 ∈ L^p(R^2); λ and μ are penalty parameters that adjust the weight of each term in the model and satisfy λ > 0, μ > 0; inf denotes the infimum; g1, g2 represent the components of the vector field; ∂x, ∂y represent the partial derivatives with respect to x and y; L represents the Lebesgue space; R represents the real number domain; x, y represent the coordinate directions; Ω represents the image area; ∇ represents the gradient; and p is an undetermined parameter;
the step of repairing the region to be repaired by adopting the known pixel points to obtain a texture layer reconstructed image comprises the following steps:
determining a current point to be repaired in the area to be repaired;
determining the neighborhood of the current point to be repaired in the first known area;
calculating the pixel value of the current point to be repaired by adopting the known pixel points in the neighborhood;
filling the current point to be repaired by adopting the pixel value to obtain a repaired point, and merging the repaired point into the first known region to obtain a second known region;
judging whether all points in the area to be repaired have been repaired;
if not, taking the second known area as a first known area, and returning to the step of determining the current point to be repaired in the area to be repaired;
if yes, determining the image corresponding to the second known area as a texture layer reconstructed image;
the step of reconstructing an image using the structural layer image and the texture layer reconstructed image, and generating a repair image, comprises:
reconstructing an image by adopting the structural layer image and the texture layer reconstructed image to generate a reconstructed image;
removing noise information of the reconstructed image to generate a repair image;
the step of removing noise information of the reconstructed image to obtain a repair image comprises the following steps:
removing noise information of the reconstructed image by adopting a preset self-encoder to generate a repair image;
the self-encoder includes a convolutional layer and a deconvolution layer;
the step of determining the current point to be repaired in the area to be repaired comprises the following steps:
determining boundary points of the area to be repaired;
determining the current point to be repaired in the boundary points;
the step of determining the current point to be repaired in the boundary points comprises the following steps:
calculating the priority of each boundary point;
and determining the boundary point with the highest priority as the current point to be repaired.
2. The method of claim 1, wherein the step of determining the neighborhood of the current point to be repaired in the first known region comprises:
and determining a first known area in a preset radius as a neighborhood of the current point to be repaired by taking the current point to be repaired as a circle center.
3. The method of claim 1, wherein the step of calculating the pixel value of the current point to be repaired using the known pixel points in the neighborhood comprises:
calculating the weighted value of the pixel value of each known pixel point in the neighborhood; and calculating the pixel value of the current point to be repaired by adopting the weighted value.
CN202110460228.9A 2021-04-27 2021-04-27 Dunhuang fresco restoration method based on adjacent pixel points and self-encoder Active CN113012076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110460228.9A CN113012076B (en) 2021-04-27 2021-04-27 Dunhuang fresco restoration method based on adjacent pixel points and self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110460228.9A CN113012076B (en) 2021-04-27 2021-04-27 Dunhuang fresco restoration method based on adjacent pixel points and self-encoder

Publications (2)

Publication Number Publication Date
CN113012076A CN113012076A (en) 2021-06-22
CN113012076B true CN113012076B (en) 2023-06-23

Family

ID=76380594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110460228.9A Active CN113012076B (en) 2021-04-27 2021-04-27 Dunhuang fresco restoration method based on adjacent pixel points and self-encoder

Country Status (1)

Country Link
CN (1) CN113012076B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897970A (en) * 2015-12-21 2017-06-27 阿里巴巴集团控股有限公司 A kind of image repair method and device
CN108121978A (en) * 2018-01-10 2018-06-05 马上消费金融股份有限公司 A kind of face image processing process, system and equipment and storage medium
WO2021073101A1 (en) * 2019-10-16 2021-04-22 深圳开立生物医疗科技股份有限公司 Image processing method and apparatus, electronic device, and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117481B (en) * 2011-03-17 2012-11-28 西安交通大学 Automatic digital repair method of damaged images
EP2768227A1 (en) * 2013-01-23 2014-08-20 Siemens Aktiengesellschaft autogressive pixel prediction in the neighbourhood of image borders
CN110570382B (en) * 2019-09-19 2022-11-11 北京达佳互联信息技术有限公司 Image restoration method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113012076A (en) 2021-06-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant