CN110874824B - Image restoration method and device - Google Patents

Image restoration method and device

Info

Publication number
CN110874824B
CN110874824B (application CN201910965032.8A)
Authority
CN
China
Prior art keywords
image
repaired
area
region
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910965032.8A
Other languages
Chinese (zh)
Other versions
CN110874824A (en)
Inventor
张渊
林杰兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gaoding Xiamen Technology Co Ltd
Original Assignee
Gaoding Xiamen Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gaoding Xiamen Technology Co Ltd filed Critical Gaoding Xiamen Technology Co Ltd
Priority: CN201910965032.8A
Publication of CN110874824A
Application granted
Publication of CN110874824B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement


Abstract

The invention provides an image restoration method and device, wherein the method comprises the following steps: acquiring an image to be repaired, and distinguishing the image to be repaired to obtain an area to be repaired and a residual area; calculating the autocorrelation of the area to be repaired and the residual area, and taking the area with the highest autocorrelation as a reference area; performing high-frequency filtering on the image in the reference region by adopting a high-frequency filtering algorithm to obtain high-frequency information; inputting the images corresponding to the residual areas into a pre-trained neural network model to output low-frequency information; mixing the low-frequency information and the high-frequency information to obtain information to be filled; filling information to be filled into the area to be repaired so as to repair the image to be repaired; therefore, the invention obtains the low-frequency information through the deep learning network, and obtains the high-frequency information through the intelligent filling method, so that the filling result has the continuity of the peripheral information and the authenticity of the information, thereby greatly improving the image repairing effect.

Description

Image restoration method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image restoration method, an image restoration apparatus, a computer-readable storage medium, and a computer device.
Background
Photographs record moments of everyday life, but because of shooting conditions and skill, captured pictures often need processing, for example erasing extraneous ornaments or decorations in the background. After the unwanted content is erased, the picture must be repaired, that is, new content information must be filled into the erased area in such a way that the new information is both continuous with the surrounding information and authentic in itself.
Existing approaches fall into two categories. The first searches the image for other local information similar to the erased area and copies it in; it achieves a realistic filling effect, but the continuity between the filled information and the surrounding original information is poor. The second trains on large data samples with deep learning so that the deep neural network can automatically generate information that follows the form of the surrounding information; its filling is continuous, but its texture is unnatural. That is, conventional image restoration methods cannot achieve both continuity with the surrounding image and authenticity of the filled information itself, so the restoration effect is poor.
Disclosure of Invention
The present application is based on the recognition and study of the following problems by the inventors:
the existing intelligent filling algorithm fills content by searching the image to be repaired for the other local information most similar to the area to be repaired and using that local information as a reference. Because objective information serves as the reference, the resulting information has high authenticity; however, because other local information is moved in directly for filling, continuity between the new information and the original surrounding information cannot be achieved. Conversely, although filling information generated by existing deep learning methods shows better continuity between new and original information (the fit of the low-frequency information is relatively natural), its high-frequency texture is very unnatural, so its authenticity is poor. Neither approach can therefore deliver both the surrounding continuity of the repair effect and the content authenticity of the filled information itself.
The present invention is directed to solving, at least to some extent, one of the technical problems in the art described above. Therefore, an object of the present invention is to provide an image inpainting method, in which a high-low frequency separation technique is used to separate information to be filled into two parts, namely a low frequency part and a high frequency part, and a deep learning network and an intelligent filling method are combined, so that a filling result has both continuity of peripheral information and authenticity of information, thereby greatly improving an image inpainting effect.
A second object of the invention is to propose a computer-readable storage medium.
A third object of the invention is to propose a computer device.
A fourth object of the present invention is to provide an image restoration apparatus.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides an image repairing method, including: acquiring an image to be repaired, and distinguishing the image to be repaired to obtain an area to be repaired and a residual area; calculating the autocorrelation of the area to be repaired and the autocorrelation of the residual area so as to obtain a reference area with the highest autocorrelation with the area to be repaired from the residual area; performing high-frequency filtering processing on the image in the reference region by adopting a high-frequency filtering algorithm to obtain high-frequency information of the image corresponding to the reference region; inputting the image corresponding to the residual region into a pre-trained neural network model to output low-frequency information; mixing the low-frequency information with the high-frequency information to obtain information to be filled; and filling the information to be filled into the area to be repaired so as to repair the image to be repaired.
According to the image restoration method provided by the embodiment of the invention, firstly, an image to be restored is obtained, and the image to be restored is distinguished to obtain an area to be restored and a residual area; then calculating the autocorrelation of the area to be repaired and the autocorrelation of the residual area so as to obtain a reference area with the highest autocorrelation with the area to be repaired from the residual area; then, carrying out high-frequency filtering processing on the image in the reference area by adopting a high-frequency filtering algorithm to obtain high-frequency information of the image corresponding to the reference area; then inputting the images corresponding to the residual areas into a pre-trained neural network model to output low-frequency information; then mixing the low-frequency information with the high-frequency information to obtain information to be filled; and finally, filling the information to be filled into the area to be repaired to repair the image to be repaired, so that the information to be filled is separated into a low frequency part and a high frequency part by a high-low frequency separation technology, the low frequency information is obtained by a deep learning network, and the high frequency information is obtained by an intelligent filling method, so that the filling result has the continuity of the peripheral information and the authenticity of the information, and the image repairing effect is greatly improved.
In addition, the image restoration method proposed according to the above embodiment of the present invention may further have the following additional technical features:
optionally, the constructing and training of the neural network model includes the following steps: collecting a plurality of sample images; carrying out low-frequency filtering processing on each sample image by adopting a low-frequency filtering algorithm to obtain low-frequency information of each sample image; and inputting the low-frequency information of each sample image as a training sample into the deep learning network for training so as to obtain a trained neural network model.
Optionally, the image to be repaired is divided into a region to be repaired and a remaining region according to a label of a user.
Optionally, the remaining region is screened according to the size of the region to be repaired, so as to screen out a reference region with the highest autocorrelation with the region to be repaired, wherein the size of the reference region is consistent with that of the region to be repaired.
To achieve the above object, a second aspect of the present invention provides a computer-readable storage medium, on which an image restoration program is stored, the image restoration program implementing the image restoration method as described above when executed by a processor.
According to the computer-readable storage medium of the embodiment of the invention, the image restoration program is stored, so that when the image restoration program is executed by the processor, the image restoration method is realized, the filling result has the continuity of the peripheral information and the authenticity of the information, and the image restoration effect is greatly improved.
To achieve the above object, a third embodiment of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the image restoration method as described above is implemented.
According to the computer equipment provided by the embodiment of the invention, the image restoration program is stored in the memory, so that the image restoration method is realized when the image restoration program is executed by the processor, the filling result has the continuity of the peripheral information and the authenticity of the information, and the image restoration effect is greatly improved.
To achieve the above object, a fourth aspect of the present invention provides an image restoration apparatus, including: the device comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring an image to be repaired and distinguishing the image to be repaired to acquire an area to be repaired and a residual area; the calculation module is used for calculating the autocorrelation of the to-be-repaired area and the residual area so as to acquire a reference area with the highest autocorrelation with the to-be-repaired area from the residual area; the filtering module is used for carrying out high-frequency filtering processing on the image in the reference region by adopting a high-frequency filtering algorithm so as to obtain high-frequency information of the image corresponding to the reference region; the superposition module is used for inputting the image corresponding to the residual region into a pre-trained neural network model to output low-frequency information, and mixing the low-frequency information with the high-frequency information to obtain information to be filled; and the filling module is used for filling the information to be filled into the area to be repaired so as to repair the image to be repaired.
According to the image restoration device provided by the embodiment of the invention, the image to be restored is obtained through the obtaining module, and the image to be restored is distinguished to obtain the area to be restored and the residual area; then, calculating the autocorrelation of the area to be repaired and the residual area by using a calculation module so as to obtain a reference area with the highest autocorrelation with the area to be repaired from the residual area; then, the image in the reference area is subjected to high-frequency filtering processing through a filtering module to obtain high-frequency information of the image corresponding to the reference area; inputting the images corresponding to the residual areas into a pre-trained neural network model through a superposition module to output low-frequency information; mixing the low-frequency information and the high-frequency information to obtain information to be filled; and finally, filling the information to be filled into the area to be repaired to repair the image to be repaired, so that the information to be filled is separated into a low frequency part and a high frequency part by a high-low frequency separation technology, the low frequency information is obtained by a deep learning network, and the high frequency information is obtained by an intelligent filling method, so that the filling result has the continuity of peripheral information and the authenticity of the information, thereby greatly improving the image repairing effect.
In addition, the image restoration device proposed according to the above-mentioned embodiment of the present invention may further have the following additional technical features:
optionally, the image restoration device further comprises a model construction and training module, wherein the model construction and training module is used for collecting a plurality of sample images; carrying out low-frequency filtering processing on each sample image by adopting a low-frequency filtering algorithm to obtain low-frequency information of each sample image; and inputting the low-frequency information of each sample image as a training sample into the deep learning network for training so as to obtain a trained neural network model.
Optionally, the image to be repaired is divided into a region to be repaired and a remaining region according to a label of a user.
Optionally, the remaining region is screened according to the size of the region to be repaired, so as to screen out a reference region with the highest autocorrelation with the region to be repaired, where the size of the reference region is consistent with that of the region to be repaired.
Drawings
FIG. 1 is a flowchart illustrating an image restoration method according to an embodiment of the present invention;
fig. 2 is a block diagram of an image restoration apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In order to better understand the above technical solutions, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Fig. 1 is a flowchart illustrating an image restoration method according to an embodiment of the present invention. As shown in fig. 1, the image restoration method according to the embodiment of the present invention includes the steps of:
step 101, obtaining an image to be repaired, and distinguishing the image to be repaired to obtain an area to be repaired and a residual area.
As an embodiment, after the image to be repaired is acquired, the image to be repaired is divided into the area to be repaired and the remaining area according to the user's mark-up.
That is to say, after the image to be repaired is acquired, a user manually marks the image to be repaired in a mode of smearing or drawing an edge to mark the region to be repaired, so that the image to be repaired is divided into the region to be repaired and a residual region; and acquiring the size of the area to be repaired according to the range of the area to be repaired.
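The region split described above can be sketched as follows; the function and argument names are illustrative, not from the patent, and the user's mark-up is represented as a boolean mask (True marks a pixel to repair):

```python
import numpy as np

def split_regions(image, mask):
    """Split the image to be repaired into the (erased) area to be repaired
    and the remaining area, given the user's mark-up as a boolean mask.
    Also return the bounding box of the marked area, which later fixes
    the size of the reference region."""
    ys, xs = np.nonzero(mask)
    top, left = int(ys.min()), int(xs.min())
    height = int(ys.max()) - top + 1
    width = int(xs.max()) - left + 1
    remaining = image.astype(float).copy()
    remaining[mask] = 0.0            # erase the content to be replaced
    return remaining, (top, left, height, width)
```

The bounding box gives the size of the area to be repaired mentioned at the end of the paragraph.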
And 102, calculating the autocorrelation of the area to be repaired and the residual area to obtain a reference area with the highest autocorrelation with the area to be repaired from the residual area.
That is to say, the remaining region is screened according to the size of the region to be repaired, so as to screen out a reference region with the highest autocorrelation with the region to be repaired, and the reference region is used as a reference region of a high-frequency texture, wherein the size of the reference region is consistent with that of the region to be repaired.
As an embodiment, the autocorrelation of the region to be repaired and the remaining region is calculated by the following formula:
R(T) = ∫ f(t + T) f*(t) dt
where the integral expresses the correlation (equivalently, a convolution of f with its conjugated reverse), (·)* denotes taking the conjugate, and f(t + T) and f*(t) are the color values of two pixels at positions separated by a fixed interval of length T on the area to be repaired and the remaining region.
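A direct (unoptimized) sketch of the reference-region search in plain numpy. The patent fixes only the correlation formula, not the implementation, so the details here are assumptions: candidate windows are scored by normalized correlation against a template patch standing in for the context of the hole, and windows overlapping the masked area are skipped.

```python
import numpy as np

def best_reference_patch(image, template, mask):
    """Slide a window the size of the area to be repaired over the
    remaining region and return the top-left corner of the window whose
    normalized correlation with `template` is highest. Windows touching
    masked (to-be-repaired) pixels are skipped."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, None
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            if mask[y:y + th, x:x + tw].any():
                continue                       # window overlaps the hole
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
            if denom == 0:
                continue                       # flat window, score undefined
            score = float((t * w).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

The returned window has the same size as the area to be repaired, matching the screening condition stated above.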
And 103, performing high-frequency filtering processing on the image in the reference area by adopting a high-frequency filtering algorithm to obtain high-frequency information of the image corresponding to the reference area.
That is to say, the reference region with the highest autocorrelation with the region to be repaired is subjected to high-frequency filtering processing, low-frequency information in the reference region is filtered, and only high-frequency information is reserved.
Note that a place where the color changes drastically is a high-frequency region of the image, and a place where the color is stable and smooth is a low-frequency region of the image.
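The patent does not name a specific high-frequency filtering algorithm; a common choice, sketched here in plain numpy as an assumption, is to subtract a Gaussian-blurred (low-pass) copy from the original, so flat areas go to roughly zero and only the rapid color changes (texture) remain:

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian low-pass filter with edge padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, rows)

def high_pass(img, sigma=2.0):
    """High-frequency information of the reference region: the original
    minus its low-pass version."""
    return img - gaussian_blur(img, sigma)
```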
And 104, inputting the images corresponding to the residual areas into a pre-trained neural network model to output low-frequency information.
That is, the image in the remaining region serves as the input to the pre-trained neural network model, and the model outputs the low-frequency information.
As an embodiment, the building and training of the neural network model includes the following steps:
collecting a plurality of sample images; carrying out low-frequency filtering processing on each sample image by adopting a low-frequency filtering algorithm to obtain low-frequency information of each sample image; and inputting the low-frequency information of each sample image as a training sample into the deep learning network for training so as to obtain a trained neural network model.
It should be noted that the trained neural network model has the capability of generating low-frequency information.
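The training-data preparation described above can be sketched as follows. The patent fixes only that the low-frequency information of each sample serves as the training sample; the box filter standing in for the low-pass step and the (input, target) pairing scheme are assumptions for illustration.

```python
import numpy as np

def box_blur(img, k=5):
    """A minimal low-pass filter: k x k box average with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def make_training_pair(sample, mask, k=5):
    """Return (network input, regression target) for one sample image:
    the input is the sample with the masked region erased, the target is
    the low-frequency version of the full sample."""
    net_input = sample.astype(float).copy()
    net_input[mask] = 0.0
    target = box_blur(sample, k)
    return net_input, target
```

Training the network on such pairs is what gives it the capability of generating plausible low-frequency information for an erased region.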
And 105, mixing the low-frequency information with the high-frequency information to obtain the information to be filled.
As an embodiment, the image with high frequency information and the image with low frequency information are subjected to superposition processing, thereby obtaining complete information to be filled.
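The superposition step can be a simple pixel-wise addition with clipping back to the valid range; plain addition is an assumption here, since the text calls this "mixing" without fixing the exact operator:

```python
import numpy as np

def blend_frequencies(low, high, lo=0.0, hi=255.0):
    """Superimpose the network's low-frequency layer and the reference
    region's high-frequency texture, clipping to the valid pixel range."""
    return np.clip(low + high, lo, hi)
```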
And 106, filling the information to be filled into the area to be repaired so as to repair the image to be repaired.
As an embodiment, an image fusion algorithm may be used to fill the information to be filled into the region to be repaired, so as to obtain a repaired image.
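As a minimal sketch of the filling step, a hard paste under the user's mask is shown below; the "image fusion algorithm" the text mentions (for example feathering or Poisson blending at the seam) is not specified in the patent and is left out here:

```python
import numpy as np

def fill_region(image, filler, mask):
    """Write the information to be filled into the area to be repaired,
    leaving every pixel outside the mask untouched."""
    out = image.astype(float).copy()
    out[mask] = filler[mask]
    return out
```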
In summary, according to the image repairing method of the embodiment of the present invention, the image to be repaired is obtained first, and the image to be repaired is distinguished to obtain the area to be repaired and the remaining area; then calculating the autocorrelation of the area to be repaired and the residual area to obtain a reference area with the highest autocorrelation with the area to be repaired from the residual area; then, carrying out high-frequency filtering processing on the image in the reference area by adopting a high-frequency filtering algorithm to obtain high-frequency information of the image corresponding to the reference area; then, inputting images corresponding to the residual areas into a pre-trained neural network model to output low-frequency information; then mixing the low-frequency information with the high-frequency information to obtain information to be filled; and finally, filling the information to be filled into the area to be repaired to repair the image to be repaired, so that the information to be filled is separated into a low frequency part and a high frequency part by a high-low frequency separation technology, the low frequency information is obtained by a deep learning network, and the high frequency information is obtained by an intelligent filling method, so that the filling result has the continuity of peripheral information and the authenticity of the information, thereby greatly improving the image repairing effect.
In addition, the present invention also proposes a computer-readable storage medium having stored thereon an image inpainting program which, when executed by a processor, implements an image inpainting method as described above.
According to the computer-readable storage medium of the embodiment of the invention, the image restoration program is stored, so that when the image restoration program is executed by the processor, the image restoration method is realized, the filling result has the continuity of the peripheral information and the authenticity of the information, and the image restoration effect is greatly improved.
In addition, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the image inpainting method as described above is implemented.
According to the computer equipment provided by the embodiment of the invention, the image restoration program is stored in the memory, so that the image restoration method is realized when the image restoration program is executed by the processor, the filling result has the continuity of the peripheral information and the authenticity of the information, and the image restoration effect is greatly improved.
Fig. 2 is a block diagram of an image restoration device according to an embodiment of the present invention. As shown in fig. 2, the image restoration apparatus includes: the device comprises an acquisition module 201, a calculation module 202, a filtering module 203, a superposition module 204 and a filling module 205;
the obtaining module 201 is configured to obtain an image to be repaired, and distinguish the image to be repaired to obtain a region to be repaired and a remaining region; the calculating module 202 is configured to calculate an autocorrelation of the to-be-repaired area and the remaining area, so as to obtain a reference area with a highest autocorrelation with the to-be-repaired area from the remaining area; the filtering module 203 is configured to perform high-frequency filtering processing on the image in the reference region by using a high-frequency filtering algorithm to obtain high-frequency information of the image corresponding to the reference region; the superposition module 204 is configured to input the image corresponding to the remaining region to a pre-trained neural network model to output low-frequency information, and mix the low-frequency information with the high-frequency information to obtain information to be filled; the filling module 205 is configured to fill information to be filled into the area to be repaired, so as to repair the image to be repaired.
As an embodiment, the image restoration apparatus further includes: the model building and training module is used for collecting a plurality of sample images; carrying out low-frequency filtering processing on each sample image by adopting a low-frequency filtering algorithm to obtain low-frequency information of each sample image; and inputting the low-frequency information of each sample image as a training sample into the deep learning network for training so as to obtain a trained neural network model.
As an embodiment, the image to be repaired is divided into a region to be repaired and a residual region according to the label of the user.
As an embodiment, the residual region is screened according to the size of the region to be repaired, so as to screen out the reference region with the highest autocorrelation with the region to be repaired, wherein the size of the reference region is consistent with that of the region to be repaired.
It should be noted that the foregoing explanation on the embodiment of the image restoration method is also applicable to the image restoration apparatus of this embodiment, and is not repeated here.
According to the image restoration device provided by the embodiment of the invention, the image to be restored is obtained through the obtaining module, and the image to be restored is distinguished to obtain the area to be restored and the residual area; then, calculating the autocorrelation of the area to be repaired and the residual area by using a calculation module so as to obtain a reference area with the highest autocorrelation with the area to be repaired from the residual area; then, the image in the reference area is subjected to high-frequency filtering processing through a filtering module to obtain high-frequency information of the image corresponding to the reference area; inputting the images corresponding to the residual areas into a pre-trained neural network model through a superposition module to output low-frequency information; mixing the low-frequency information and the high-frequency information to obtain information to be filled; and finally, filling the information to be filled into the area to be repaired to repair the image to be repaired, so that the information to be filled is separated into a low frequency part and a high frequency part by a high-low frequency separation technology, the low frequency information is obtained by a deep learning network, and the high frequency information is obtained by an intelligent filling method, so that the filling result has the continuity of peripheral information and the authenticity of the information, thereby greatly improving the image repairing effect.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium. Moreover, a first feature "on," "over," or "above" a second feature may be directly or obliquely above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may be directly or obliquely below the second feature, or may simply indicate that the first feature is at a lower level than the second feature.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, such references do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples, and the features of the different embodiments or examples, described in this specification, provided they do not contradict one another.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. An image restoration method, comprising the steps of:
acquiring an image to be repaired, and distinguishing the image to be repaired to obtain an area to be repaired and a residual area;
calculating the autocorrelation of the region to be repaired and the residual region to obtain a reference region with the highest autocorrelation with the region to be repaired from the residual region;
performing high-frequency filtering processing on the image in the reference region by adopting a high-frequency filtering algorithm to obtain high-frequency information of the image corresponding to the reference region;
inputting the image corresponding to the residual region into a pre-trained neural network model to output low-frequency information;
mixing the low-frequency information with the high-frequency information to obtain information to be filled;
filling the information to be filled into the area to be repaired so as to repair the image to be repaired;
wherein the autocorrelation of the region to be repaired and the residual region is calculated according to the following formula:
R(T) = f(t + T) · f*(t)
wherein "·" is the convolution operator and (·)* denotes conjugation; f(t + T) and f*(t) are the color values of two pixels at different positions, separated by a fixed interval T, in the region to be repaired and the residual region.
2. The image inpainting method of claim 1, wherein the constructing and training of the neural network model comprises the steps of:
collecting a plurality of sample images;
carrying out low-frequency filtering processing on each sample image by adopting a low-frequency filtering algorithm to obtain low-frequency information of each sample image;
and inputting the low-frequency information of each sample image as a training sample into the deep learning network for training so as to obtain a trained neural network model.
3. The image inpainting method of claim 1, wherein the image to be inpainted is divided into an area to be inpainted and a residual area according to a label of a user.
4. The image restoration method according to claim 1, wherein the remaining region is screened according to the size of the region to be restored so as to screen out a reference region having the highest autocorrelation with the region to be restored, wherein the reference region is the same size as the region to be restored.
5. A computer-readable storage medium, having stored thereon an image inpainting program which, when executed by a processor, implements an image inpainting method as recited in any one of claims 1 to 4.
6. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the image inpainting method as claimed in any one of claims 1 to 4.
7. An image restoration device, characterized by comprising:
the device comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring an image to be repaired and distinguishing the image to be repaired to acquire an area to be repaired and a residual area;
the calculation module is used for calculating the autocorrelation of the to-be-repaired area and the residual area so as to acquire a reference area with the highest autocorrelation with the to-be-repaired area from the residual area;
the filtering module is used for carrying out high-frequency filtering processing on the image in the reference region by adopting a high-frequency filtering algorithm so as to obtain high-frequency information of the image corresponding to the reference region;
the superposition module is used for inputting the image corresponding to the residual region into a pre-trained neural network model to output low-frequency information and mixing the low-frequency information with the high-frequency information to obtain information to be filled;
the filling module is used for filling the information to be filled into the area to be repaired so as to repair the image to be repaired;
wherein the autocorrelation of the region to be repaired and the residual region is calculated according to the following formula:
R(T) = f(t + T) · f*(t)
wherein "·" is the convolution operator and (·)* denotes conjugation; f(t + T) and f*(t) are the color values of two pixels at different positions, separated by a fixed interval T, in the region to be repaired and the residual region.
8. The image restoration device according to claim 7, further comprising: a model construction and training module for,
collecting a plurality of sample images;
performing low-frequency filtering processing on each sample image by adopting a low-frequency filtering algorithm to obtain low-frequency information of each sample image;
and inputting the low-frequency information of each sample image as a training sample into the deep learning network for training so as to obtain a trained neural network model.
9. The image restoration apparatus according to claim 7, wherein the image to be restored is divided into an area to be restored and a remaining area according to a user's label.
10. The image restoration device according to claim 7, wherein the remaining region is screened according to the size of the region to be restored so as to screen out a reference region having the highest autocorrelation with the region to be restored, wherein the reference region is identical in size to the region to be restored.
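A minimal numerical reading of the autocorrelation criterion recited in claims 1 and 7 is sketched below, treating the color values as a discrete one-dimensional signal. The function names are invented for illustration, and `best_reference` uses normalized cross-correlation as an assumed scoring rule for picking the reference region; the patent itself does not spell out this discretization.

```python
import numpy as np

def autocorrelation(f, T):
    # Discrete reading of the claimed R(T): the average of
    # f(t + T) * conj(f(t)) over all valid positions t,
    # where T is the fixed pixel interval.
    f = np.asarray(f, dtype=complex)
    if T == 0:
        return float(np.mean(f * np.conj(f)).real)
    return float(np.mean(f[T:] * np.conj(f[:-T])).real)

def best_reference(region, candidates):
    # Score each same-size candidate patch from the remaining area and
    # return the index of the one most correlated with the region to be
    # repaired (normalized cross-correlation, an illustrative stand-in).
    target = region.ravel() - region.mean()
    scores = []
    for cand in candidates:
        v = cand.ravel() - cand.mean()
        denom = np.linalg.norm(target) * np.linalg.norm(v)
        scores.append(float(target @ v) / denom if denom else 0.0)
    return int(np.argmax(scores))
```

Normalizing the score makes the selection insensitive to brightness offsets between patches, which matches the intent of claims 4 and 10: the winning reference region must be the same size as the region to be repaired and merely the most structurally similar, not an exact copy.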
CN201910965032.8A 2019-10-11 2019-10-11 Image restoration method and device Active CN110874824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910965032.8A CN110874824B (en) 2019-10-11 2019-10-11 Image restoration method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910965032.8A CN110874824B (en) 2019-10-11 2019-10-11 Image restoration method and device

Publications (2)

Publication Number Publication Date
CN110874824A CN110874824A (en) 2020-03-10
CN110874824B true CN110874824B (en) 2022-08-23

Family

ID=69717809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910965032.8A Active CN110874824B (en) 2019-10-11 2019-10-11 Image restoration method and device

Country Status (1)

Country Link
CN (1) CN110874824B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024098188A1 (en) * 2022-11-07 2024-05-16 京东方科技集团股份有限公司 Visual analysis method of image restoration model, apparatus and electronic device

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
JP2007148945A (en) * 2005-11-30 2007-06-14 Tokyo Institute Of Technology Image restoration method
JP5482007B2 (en) * 2008-10-08 2014-04-23 株式会社ニコン Image processing method
CN105551006B (en) * 2015-12-01 2018-06-05 深圳大学 A kind of restorative procedure and system of depth image missing pixel
CN106910163B (en) * 2015-12-23 2022-06-21 通用电气公司 Data restoration device and method for original CT projection data and CT imaging system
CN108053377A (en) * 2017-12-11 2018-05-18 北京小米移动软件有限公司 Image processing method and equipment
TWI682359B (en) * 2018-01-29 2020-01-11 國立清華大學 Image completion method
CN108288258B (en) * 2018-04-23 2021-08-10 电子科技大学 Low-quality image enhancement method under severe weather condition
CN110070487B (en) * 2019-04-02 2021-05-11 清华大学 Semantic reconstruction face hyper-segmentation method and device based on deep reinforcement learning
CN110136080B (en) * 2019-05-10 2023-03-21 厦门稿定股份有限公司 Image restoration method and device
CN110246100B (en) * 2019-06-11 2021-06-25 山东师范大学 Image restoration method and system based on angle sensing block matching

Also Published As

Publication number Publication date
CN110874824A (en) 2020-03-10

Similar Documents

Publication Publication Date Title
CN112669429A (en) Image distortion rendering method and device
CN107610140A (en) Near edge detection method, device based on depth integration corrective networks
CN103578085B (en) Image cavity region based on variable-block method for repairing and mending
CN111524100A (en) Defect image sample generation method and device and panel defect detection method
CN110136080B (en) Image restoration method and device
CN107993228B (en) Vulnerable plaque automatic detection method and device based on cardiovascular OCT (optical coherence tomography) image
CN109242807B (en) Rendering parameter adaptive edge softening method, medium, and computer device
CN108052909B (en) Thin fiber cap plaque automatic detection method and device based on cardiovascular OCT image
CN111709966A (en) Fundus image segmentation model training method and device
CN110874824B (en) Image restoration method and device
JP6450287B2 (en) Learning data generation device, learning device, learning data generation method, learning method, and image processing program
CN110136052A (en) A kind of image processing method, device and electronic equipment
CN112699885A (en) Semantic segmentation training data augmentation method and system based on antagonism generation network GAN
CN104952068B (en) A kind of vertebra characteristic point automatic identifying method
CN112785572A (en) Image quality evaluation method, device and computer readable storage medium
CN109558801B (en) Road network extraction method, medium, computer equipment and system
CN112801911B (en) Method and device for removing text noise in natural image and storage medium
CN112365493B (en) Training data generation method and device for fundus image recognition model
CN110706161B (en) Image brightness adjusting method, medium, device and apparatus
CN113792600A (en) Video frame extraction method and system based on deep learning
CN112733864B (en) Model training method, target detection method, device, equipment and storage medium
Kumar et al. Performance evaluation of joint filtering and histogram equalization techniques for retinal fundus image enhancement
CN103679764A (en) Image generation method and device
CN112634266B (en) Semi-automatic labeling method, medium, equipment and device for laryngoscope image
JPH04125779A (en) Picture processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant