CN111445415B - Image restoration method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111445415B
CN111445415B (application CN202010237090.1A)
Authority
CN
China
Prior art keywords
image, repair, image block, blocks, restoration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010237090.1A
Other languages
Chinese (zh)
Other versions
CN111445415A (en)
Inventor
徐瑞
郭明皓
王佳琦
李晓潇
周博磊
吕健勤
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202010237090.1A
Publication of CN111445415A
Application granted
Publication of CN111445415B
Legal status: Active
Anticipated expiration


Classifications

    • G06T 5/77: Retouching; Inpainting; Scratch removal (under G06T 5/00 Image enhancement or restoration)
    • G06N 3/045: Combinations of networks (under G06N 3/04 Architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (under G06N 3/02 Neural networks)
    • G06T 7/40: Analysis of texture (under G06T 7/00 Image analysis)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to an image restoration method and apparatus, an electronic device, and a storage medium. The method includes: performing preliminary restoration on an image to be processed to obtain a first restoration image, where the image to be processed includes a normal region and a region to be restored, and the first restoration image includes a preliminary restoration region corresponding to the region to be restored; determining, according to a plurality of first image blocks of the normal region and a plurality of second image blocks of the preliminary restoration region, the first image blocks that match the texture of each second image block; and restoring the preliminary restoration region according to the first image blocks matched with the texture of each second image block, to obtain a second restoration image of the image to be processed. Embodiments of the disclosure can improve the restoration effect of the image.

Description

Image restoration method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to an image restoration method and device, an electronic device and a storage medium.
Background
Image restoration is an important problem in the field of computer vision and has important applications in many areas, such as image watermark removal and image editing. Image restoration methods in the related art can only restore an image based on the content already present in it; because they lack the ability to construct and generate new content, their restoration effect is poor.
Disclosure of Invention
The present disclosure proposes an image restoration technique.
According to an aspect of the present disclosure, there is provided an image restoration method including: performing preliminary restoration on an image to be processed to obtain a first restoration image, wherein the image to be processed comprises a normal area and an area to be restored, and the first restoration image comprises a preliminary restoration area corresponding to the area to be restored; according to the plurality of first image blocks of the normal area and the plurality of second image blocks of the preliminary repair area, respectively determining first image blocks matched with textures of the second image blocks; and repairing the preliminary repairing area according to the first image blocks matched with the textures of the second image blocks to obtain a second repairing image of the image to be processed.
In one possible implementation manner, the determining, according to the plurality of first image blocks of the normal area and the plurality of second image blocks of the preliminary repair area, the first image block matching the texture of each second image block includes: for any second image block, determining the similarity between the second image block and the plurality of first image blocks respectively; and determining at least one first image block with highest similarity as a first image block matched with the texture of the second image block.
In a possible implementation manner, the repairing the preliminary repairing area according to the first image block matched with the texture of each second image block to obtain a second repairing image of the image to be processed includes: repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block; and splicing the normal area of the image to be processed and the restoration image blocks of each second image block to obtain the second restoration image.
In one possible implementation manner, the repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block includes: for any second image block, extracting features of a third image block corresponding to the second image block in the first repair image to obtain features of the third image block, wherein the size of the third image block is larger than that of the second image block; extracting features of a first image block matched with the texture of the second image block to obtain the features of the first image block; fusing the characteristics of the third image block with the characteristics of the first image block to obtain the fused characteristics of the second image block; and generating a repair image block of the second image block according to the fusion characteristic of the second image block.
In one possible implementation manner, the method is implemented by a neural network, where the neural network includes a first repair network, a texture matching network, and a second repair network, the first repair network is used for performing preliminary repair on an image to be processed, the texture matching network is used for performing texture matching on the second image block and the first image block, and the second repair network is used for repairing the preliminary repair area, and the method further includes: training the neural network according to a preset training set, wherein the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
In one possible implementation manner, the training the neural network according to the preset training set includes: inputting the sample image into the neural network for processing to obtain a preliminary restoration image of the sample image and a plurality of restoration sample image blocks; splicing the normal area of the sample image and the plurality of repair sample image blocks to obtain a repair sample image of the sample image; training the neural network according to the preliminary repair image of the sample image, the repair sample image and the real image.
In one possible implementation, the training the neural network according to the preliminary repair image of the sample image, the repair sample image, and the real image includes: determining the preliminary repair loss of the neural network according to the preliminary repair image and the real image; determining the blocking loss of the neural network according to a plurality of repair sample image blocks of the repair sample image and a plurality of real image blocks of the real image; determining the overall loss of the image of the neural network according to the repair sample image and the real image; and training the neural network according to the preliminary repair loss, the blocking loss and the overall image loss.
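As a concrete but simplified illustration, the three losses above can be combined as weighted L1 distances. The L1 form, the function names, and the weights `w_*` are assumptions for illustration; the disclosure does not fix the exact loss functions or their weighting:

```python
import numpy as np

def l1(a, b):
    # Mean absolute error between two arrays.
    return float(np.mean(np.abs(a - b)))

def total_loss(prelim_img, repaired_blocks, real_blocks, repaired_img, real_img,
               w_prelim=1.0, w_block=1.0, w_image=1.0):
    """Combine the preliminary repair loss, the block loss, and the
    overall image loss into one training objective (illustrative)."""
    loss_prelim = l1(prelim_img, real_img)  # preliminary repair image vs. real image
    loss_block = float(np.mean([l1(r, t)    # repair sample blocks vs. real blocks
                                for r, t in zip(repaired_blocks, real_blocks)]))
    loss_image = l1(repaired_img, real_img)  # stitched repair sample image vs. real image
    return w_prelim * loss_prelim + w_block * loss_block + w_image * loss_image
```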
In one possible implementation manner, the neural network further includes a first discrimination network, and the training the neural network according to a preset training set further includes: respectively inputting a plurality of real image blocks of the real image corresponding to the sample image and the repair sample image blocks at the corresponding positions into the first discrimination network for processing, to obtain a first discrimination result of the real image blocks and a second discrimination result of the repair sample image blocks; and adversarially training the neural network according to the first discrimination result and the second discrimination result.
In one possible implementation manner, the training the neural network according to a preset training set further includes: determining a first data distribution of a plurality of image blocks of the normal region of the sample image and a second data distribution of the plurality of repair sample image blocks; respectively inputting the first data distribution and the second data distribution into the first discrimination network for processing, to obtain a third discrimination result and a fourth discrimination result; and adversarially training the neural network according to the first discrimination result, the second discrimination result, the third discrimination result and the fourth discrimination result.
In one possible implementation manner, the neural network further includes a second discrimination network, and the training the neural network according to a preset training set further includes: respectively inputting the real image corresponding to the sample image and the repair sample image into the second discrimination network for processing, to obtain a fifth discrimination result of the real image and a sixth discrimination result of the repair sample image; and adversarially training the neural network according to the fifth discrimination result and the sixth discrimination result.
According to an aspect of the present disclosure, there is provided an image restoration apparatus including: the first restoration module is used for carrying out preliminary restoration on an image to be processed to obtain a first restoration image, wherein the image to be processed comprises a normal area and an area to be restored, and the first restoration image comprises a preliminary restoration area corresponding to the area to be restored; the texture matching module is used for respectively determining first image blocks matched with textures of the second image blocks according to the first image blocks of the normal area and the second image blocks of the preliminary repair area; and the second restoration module is used for restoring the preliminary restoration area according to the first image blocks matched with the textures of the second image blocks to obtain a second restoration image of the image to be processed.
In one possible implementation, the texture matching module includes: the similarity determination submodule is used for determining the similarity between any second image block and the plurality of first image blocks respectively; and the matching sub-module is used for determining at least one first image block with the highest similarity as a first image block matched with the texture of the second image block.
In one possible implementation, the second repair module includes: the image block restoration sub-module is used for restoring each second image block according to the first image block matched with the texture of each second image block to obtain a restored image block of each second image block; and the first splicing sub-module is used for splicing the normal area of the image to be processed and the repair image blocks of each second image block to obtain the second repair image.
In one possible implementation, the image block repair submodule is configured to: for any second image block, extracting features of a third image block corresponding to the second image block in the first repair image to obtain features of the third image block, wherein the size of the third image block is larger than that of the second image block; extracting features of a first image block matched with the texture of the second image block to obtain the features of the first image block; fusing the characteristics of the third image block with the characteristics of the first image block to obtain the fused characteristics of the second image block; and generating a repair image block of the second image block according to the fusion characteristic of the second image block.
In one possible implementation manner, the apparatus is implemented by a neural network, where the neural network includes a first repair network, a texture matching network, and a second repair network, the first repair network is used for performing preliminary repair on an image to be processed, the texture matching network is used for performing texture matching on the second image block and the first image block, and the second repair network is used for repairing the preliminary repair area, and the apparatus further includes: the training module is used for training the neural network according to a preset training set, the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
In one possible implementation, the training module includes: the restoration submodule is used for inputting the sample image into the neural network for processing to obtain a preliminary restoration image of the sample image and a plurality of restoration sample image blocks; the second splicing sub-module is used for splicing the normal area of the sample image and the plurality of repair sample image blocks to obtain a repair sample image of the sample image; and the training sub-module is used for training the neural network according to the preliminary restoration image of the sample image, the restoration sample image and the real image.
In one possible implementation, the training submodule is configured to: determining the preliminary repair loss of the neural network according to the preliminary repair image and the real image; determining the blocking loss of the neural network according to a plurality of repair sample image blocks of the repair sample image and a plurality of real image blocks of the real image; determining the overall loss of the image of the neural network according to the repair sample image and the real image; and training the neural network according to the preliminary repair loss, the blocking loss and the overall image loss.
In one possible implementation, the neural network further includes a first discrimination network, and the training module further includes: a first discrimination sub-module, configured to respectively input a plurality of real image blocks of the real image corresponding to the sample image and the repair sample image blocks at the corresponding positions into the first discrimination network for processing, to obtain a first discrimination result of the real image blocks and a second discrimination result of the repair sample image blocks; and a first adversarial training sub-module, configured to adversarially train the neural network according to the first discrimination result and the second discrimination result.
In one possible implementation, the training module further includes: a distribution determination sub-module, configured to determine a first data distribution of a plurality of image blocks of the normal region of the sample image and a second data distribution of the plurality of repair sample image blocks; a second discrimination sub-module, configured to respectively input the first data distribution and the second data distribution into the first discrimination network for processing, to obtain a third discrimination result and a fourth discrimination result; and a second adversarial training sub-module, configured to adversarially train the neural network according to the first discrimination result, the second discrimination result, the third discrimination result and the fourth discrimination result.
In one possible implementation, the neural network further includes a second discrimination network, and the training module further includes: a third discrimination sub-module, configured to respectively input the real image corresponding to the sample image and the repair sample image into the second discrimination network for processing, to obtain a fifth discrimination result of the real image and a sixth discrimination result of the repair sample image; and a third adversarial training sub-module, configured to adversarially train the neural network according to the fifth discrimination result and the sixth discrimination result.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, the image to be processed can be subjected to preliminary restoration, and the image block matched with the texture of the image block of the preliminary restoration area is determined from the image blocks of the normal area of the image; and further repairing the image according to the image blocks matched with the textures, so that the repairing effect of the image to be processed is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of an image restoration method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a process of an image restoration method according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of an image restoration device according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Fig. 5 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image restoration method according to an embodiment of the present disclosure, as shown in fig. 1, the method including:
in step S11, performing preliminary repair on an image to be processed to obtain a first repair image, where the image to be processed includes a normal region and a region to be repaired, and the first repair image includes a preliminary repair region corresponding to the region to be repaired;
in step S12, determining a first image block matching the texture of each second image block according to the plurality of first image blocks of the normal region and the plurality of second image blocks of the preliminary repair region;
in step S13, the preliminary repair area is repaired according to the first image block matched with the texture of each second image block, so as to obtain a second repair image of the image to be processed.
In a possible implementation manner, the image restoration method may be performed by an electronic device such as a terminal device or a server. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like, and the method may be implemented by a processor invoking computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server.
For example, the image to be processed may be an image including arbitrary content, such as a person, a landscape, a building, and the like. The image to be processed may include a normal region and a region to be repaired, the normal region is a region in which the image content is normally visible, and the region to be repaired is a region in which the image content is abnormal or invisible, for example, a region in which the content is missing or a watermark exists in the image, which is not limited in the present disclosure.
In one possible implementation, the image to be processed may be preliminarily repaired in step S11; for example, the image to be processed is processed using an encoder-decoder-style convolutional neural network, global structure information of the image is extracted, and the image is repaired according to the global structure information. The convolutional neural network may include, for example, a convolution layer, a dilated convolution layer (which may also be referred to as an atrous convolution layer), a deconvolution layer, a pooling layer, a fully connected layer, and the like; the specific network structure of the convolutional neural network is not limited by the present disclosure.
And obtaining a first repair image of the image to be processed after the preliminary repair. The first repair image includes a preliminary repair region corresponding to a location of the region to be repaired.
In one possible implementation, the normal region of the image to be processed may be cut to obtain a plurality of first image blocks. For example, a k×k window is slid over the normal region (with a stride of, for example, k/2, so that more image blocks are obtained), resulting in a plurality of first image blocks of size k×k. The value of k is, for example, 32, which is not limited by the present disclosure.
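The sliding-window cutting described above can be sketched as follows. This is a minimal NumPy version for illustration; the function `extract_blocks` and its defaults are not part of the disclosure:

```python
import numpy as np

def extract_blocks(region, k=32, stride=None):
    """Slide a k x k window over a 2-D region with the given stride.

    The stride defaults to k // 2, as suggested in the text, so that
    neighbouring blocks overlap and more image blocks are obtained.
    """
    if stride is None:
        stride = k // 2
    h, w = region.shape[:2]
    blocks = []
    for y in range(0, h - k + 1, stride):
        for x in range(0, w - k + 1, stride):
            blocks.append(region[y:y + k, x:x + k])
    return blocks
```

For a 64×64 normal region with k = 32 and stride 16, this yields 9 overlapping first image blocks.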
In one possible implementation, the first image block may include local texture information in the normal region. A texture library of the image to be processed may be constructed from a plurality of first image blocks for subsequent selection of similar textures.
In one possible implementation, the preliminary repair area of the first repair image may be cut to obtain a plurality of second image blocks. The second image block may be the same size as the first image block or different size, which is not limited by the present disclosure.
In one possible implementation, in step S12, a first image block matching the texture of each second image block may be determined from the plurality of first image blocks and the plurality of second image blocks of the preliminary repair area, respectively. That is, for any one of the second image blocks, the similarity between the texture information of the second image block and the texture information of the plurality of first image blocks may be determined; and selecting a first image block matched with the texture of the second image block according to the similarity so as to further restore the second image block according to the texture information.
In one possible implementation manner, in step S13, the preliminary repair area may be repaired according to the first image block matched with the texture of each second image block, so as to obtain a second repair image of the image to be processed.
That is, for any one of the second image blocks, the features of the first image block matched with the texture of the second image block, which include the texture information of the first image block, may be extracted by a convolutional neural network. The second image block is processed by an encoder-decoder-style convolutional neural network to extract the features of the second image block; the features of the second image block are fused with the features of the first image block; and the second image block is repaired according to the fused features to obtain a repaired image block. The plurality of repaired image blocks are stitched with the normal region of the image to be processed to obtain the second repair image.
According to the embodiment of the disclosure, the image to be processed can be subjected to preliminary restoration, and the image block matched with the texture of the image block of the preliminary restoration area is determined from the image blocks of the normal area of the image; and further repairing the image according to the image blocks matched with the textures, so that the repairing effect of the image to be processed is improved.
In one possible implementation, a first repair network may be preset for performing a preliminary repair on the image to be processed. The first repair network is a convolutional neural network in the style of an encoder-decoder, and can extract global structure information of an image and generate a repair image according to the global structure information.
In one possible implementation, the first repair network may include, for example, a convolution layer, a dilated convolution layer (which may also be referred to as an atrous convolution layer), a deconvolution layer, a pooling layer, a fully connected layer, and the like. Using dilated convolution layers can improve the first repair network's perception of global structure information and thereby improve the global repair effect. The present disclosure does not limit the specific network structure of the first repair network.
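Why dilated convolutions improve the perception of global structure can be seen from standard receptive-field arithmetic: with the same kernel size and parameter count, increasing the dilation rapidly enlarges the receptive field. A small helper illustrating this (the layer stack is hypothetical; the patent does not specify the network layout):

```python
def receptive_field(layers):
    """Compute the receptive field of a stack of convolution layers.

    Each layer is a (kernel_size, stride, dilation) tuple. Uses the
    standard recurrence: rf += (k_eff - 1) * jump, jump *= stride,
    where k_eff = dilation * (kernel - 1) + 1.
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1
        rf += (k_eff - 1) * jump
        jump *= s
    return rf
```

Three plain 3×3 layers see a 7×7 window, while the same three layers with dilations 1, 2, 4 see a 15×15 window at identical cost.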
In this way, a preliminary repair of the image to be processed can be achieved.
In one possible implementation, step S12 may include:
for any second image block, determining the similarity between the second image block and the plurality of first image blocks respectively;
and determining at least one first image block with highest similarity as a first image block matched with the texture of the second image block.
For example, a texture matching network may be preset for texture matching the second image block with the first image block. For any one second image block, the second image block and a plurality of first image blocks in a texture library can be input into a texture matching network for processing, and the similarity between the second image block and each first image block is determined.
For example, extracting features of the second image block and the first image block through a texture matching network; constructing a similarity matrix between the second image block and each first image block according to the characteristics; and determining the similarity between the second image block and each first image block according to the similarity matrix. The present disclosure is not limited to a specific process of determining the similarity.
In one possible implementation, the texture matching network may include, for example, a convolutional layer, a softmax layer, etc., and the present disclosure is not limited to the specific structure of the texture matching network.
In one possible implementation, at least one first image block with the highest similarity may be determined as the first image block matching the texture of the second image block, e.g. the 4 first image blocks with the highest similarity are selected as the first image blocks matching the texture.
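A minimal sketch of this matching step, using cosine similarity over flattened pixels in place of the features a texture matching network would extract (the function name and the pixel-level similarity are simplifying assumptions):

```python
import numpy as np

def match_texture(second_block, first_blocks, top_k=4):
    """Rank first image blocks by cosine similarity to a second image
    block and return the indices and scores of the top_k matches."""
    q = second_block.ravel().astype(float)
    q = q / (np.linalg.norm(q) + 1e-8)
    sims = []
    for b in first_blocks:
        v = b.ravel().astype(float)
        v = v / (np.linalg.norm(v) + 1e-8)
        sims.append(float(q @ v))
    # Indices of the most similar first image blocks, best first.
    order = np.argsort(sims)[::-1][:top_k]
    return [int(i) for i in order], [sims[i] for i in order]
```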
In this way, a first image block that matches the texture of a second image block may be determined to further repair the second image block based on the texture information.
In one possible implementation, step S13 may include:
repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block;
And splicing the normal area of the image to be processed and the restoration image blocks of each second image block to obtain the second restoration image.
That is, for any one of the second image blocks, the features of the first image block matching the texture of the second image block, which include the texture information of the first image block, may be extracted by a convolutional neural network. The second image block may be processed by an encoder-decoder style convolutional neural network to extract the features of the second image block; the features of the second image block are fused with the features of the first image block; and the second image block is repaired according to the fused features to obtain a repair image block.
In one possible implementation, a plurality of repair image blocks are stitched to a normal region of the image to be processed, and a second repair image is obtained. In this way, the image to be processed can be further repaired, and the image repairing effect is improved.
In one possible implementation manner, the step of repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block includes:
For any second image block, extracting features of a third image block corresponding to the second image block in the first repair image to obtain features of the third image block, wherein the size of the third image block is larger than that of the second image block;
extracting features of a first image block matched with the texture of the second image block to obtain the features of the first image block;
fusing the characteristics of the third image block with the characteristics of the first image block to obtain the fused characteristics of the second image block;
and generating a repair image block of the second image block according to the fusion characteristic of the second image block.
For example, a second repair network may be pre-configured for repairing the first repair image. The second repair network includes an encoder-decoder style first convolutional network and a conventional second convolutional network. The first convolutional network and the second convolutional network may include, for example, convolution layers, dilated convolution layers (also called hole or atrous convolution layers), deconvolution layers, pooling layers, fully-connected layers, and the like, which are not limited by the present disclosure.
In one possible implementation, for any one second image block, a third image block corresponding to the second image block may be acquired. For example, in the first repair image, the third image block may be obtained by expanding outward with the position of the second image block as the center. Wherein the size of the third image block is larger than the size of the second image block, for example, the size of the second image block is 32×32, and the size of the third image block is 96×96. In this way, more image information of the second image block and its neighboring areas can be preserved, so as to further improve the effect of image restoration.
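The outward expansion from a 32×32 second image block to a 96×96 third image block can be sketched as a centered crop that is shifted back inside the image near the borders; this is a minimal illustration with an assumed clamping strategy, which the disclosure does not specify:

```python
import numpy as np

def centered_context_crop(image, top, left, block=32, context=96):
    """Crop a context x context patch centered on a block x block region.

    (top, left) is the top-left corner of the second image block; near the
    image borders the crop window is shifted inward so the output size is
    always context x context.
    """
    h, w = image.shape[:2]
    pad = (context - block) // 2
    y0 = min(max(top - pad, 0), h - context)
    x0 = min(max(left - pad, 0), w - context)
    return image[y0:y0 + context, x0:x0 + context]

img = np.arange(256 * 256).reshape(256, 256)
patch = centered_context_crop(img, top=0, left=128)  # near the top border
```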
In one possible implementation, the third image block may be input into an encoder of the first convolutional network for feature extraction, resulting in features of the third image block.
In one possible implementation, a first image block that matches the texture of a second image block may be input into a second convolution network for feature extraction. When the number of the first image blocks matched with the texture of the second image block is multiple, the first image blocks can be spliced and then input into the second convolution network. After processing, the features of the first image block are obtained.
In one possible implementation, the features of the third image block are fused with the features of the first image block to obtain fused features of the second image block; and inputting the fusion characteristic into a decoder of the first convolution network to generate a repair image block of the second image block.
In one possible implementation, multiple levels of feature fusion may be performed. That is, the features of the first image block may include features output by multiple levels of convolution layers; feature fusion may be performed at the corresponding levels in the decoder of the first convolutional network, and subsequent processing such as dilated convolution and deconvolution is performed in turn on the multi-level fused features, finally generating the repair image block of the second image block. The present disclosure is not limited in this regard.
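As a hedged sketch of multi-level fusion (the fusion operator is not fixed by the disclosure; averaging over matched patches followed by channel concatenation is one simple assumed choice):

```python
import numpy as np

def fuse_level(context_feat, texture_feats):
    """Fuse one decoder level's context features with texture features.

    context_feat:  (C1, H, W) features of the third (context) image block.
    texture_feats: list of (C2, H, W) features, one per matched first block.
    Returns a (C1 + C2, H, W) fused map: texture features are averaged over
    the matched patches, then concatenated along the channel axis.
    """
    tex = np.mean(np.stack(texture_feats, axis=0), axis=0)
    return np.concatenate([context_feat, tex], axis=0)

# Fuse features at two hypothetical decoder levels (shapes are illustrative).
ctx_levels = [np.ones((64, 24, 24)), np.ones((32, 48, 48))]
tex_levels = [[np.zeros((64, 24, 24))] * 4, [np.zeros((32, 48, 48))] * 4]
fused = [fuse_level(c, t) for c, t in zip(ctx_levels, tex_levels)]
```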
In one possible implementation, a plurality of repair image blocks are spliced with the normal area of the image to be processed, so that a second repair image can be obtained, and the whole process of image repair is completed.
In this way, the first image block with texture matching participates in the repairing process of the second image block, and the blocking repairing effect of the second image block can be improved through the local texture information in the first image block.
In a possible implementation manner, the image restoration method according to the embodiment of the present disclosure may be implemented by a neural network, where the neural network includes a first restoration network, a texture matching network, and a second restoration network, where the first restoration network is used for performing preliminary restoration on an image to be processed, the texture matching network is used for performing texture matching on the second image block and the first image block, and the second restoration network is used for performing restoration on the preliminary restoration area.
Fig. 2 shows a schematic diagram of a process of an image restoration method according to an embodiment of the present disclosure. As shown in fig. 2, the neural network according to the embodiment of the present disclosure includes a first repair network 21, a texture matching network 22, and a second repair network 23.
As shown in fig. 2, the image to be processed I_m includes a blank area to be repaired and a normal area with normal content. The image to be processed I_m may be input into the first repair network 21 for preliminary repair to obtain a first repair image I_s. The first repair image I_s includes a preliminary repair area corresponding to the location of the area to be repaired.
In an example, the preliminary repair area may be cut to obtain a set {p_s} of second image blocks, each of size 32×32.
In an example, the normal region of the image to be processed I_m may be cut to obtain a plurality of first image blocks, each of size 32×32; from the plurality of first image blocks, a texture library 24 of the image to be processed is constructed.
In an example, for any second image block p_s, the second image block and the plurality of first image blocks in texture library 24 may be input into the texture matching network 22, which outputs 4 first image blocks 221 that match the texture of the second image block.
In an example, regions of the first repair image centered on each second image block p_s may be expanded outward to obtain a set of third image blocks, each of size 96×96.
In an example, the third image block corresponding to the second image block p_s and the 4 texture-matched first image blocks 221 may be input into the second restoration network 23 to obtain the repair image block of the second image block p_s. As shown in fig. 2, the multi-level features of the first image blocks 221 are respectively fused with the multi-level features of the third image block.
In an example, the plurality of second image blocks are processed separately to obtain a plurality of repair image blocks 231; the plurality of repair image blocks 231 are spliced with the normal area of the image to be processed to obtain the second repair image 25, thereby completing the entire image repair process.
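The final stitching step can be sketched as pasting each repaired patch back over the to-be-repaired region at its recorded position, leaving the normal area untouched; the helper name and the use of top-left coordinates are assumptions for illustration:

```python
import numpy as np

def stitch_repair_blocks(image, repair_blocks, positions, block=32):
    """Paste repaired blocks back over the to-be-repaired region.

    image: (H, W) image whose normal area is kept unchanged.
    repair_blocks: list of (block, block) repaired patches.
    positions: list of (top, left) corners, one per patch.
    """
    out = image.copy()
    for patch, (top, left) in zip(repair_blocks, positions):
        out[top:top + block, left:left + block] = patch
    return out

img = np.zeros((64, 64))
blocks = [np.ones((32, 32)), 2 * np.ones((32, 32))]
result = stitch_repair_blocks(img, blocks, [(0, 0), (32, 32)])
```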
Before the neural network of the embodiments of the present disclosure is applied, the neural network may be trained.
In one possible implementation, the method further includes: training the neural network according to a preset training set, wherein the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
For example, a training set may be preset, the training set including a plurality of sample images and real images corresponding to the sample images. The actual image in the existing image dataset may be selected, for example, or otherwise acquired; and shielding a partial region of the real image to obtain corresponding sample images, so that each sample image comprises a normal region and a region to be repaired. The present disclosure is not limited in this regard.
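Building a training pair by occluding part of a real image can be sketched as follows; the rectangular mask is one assumed occlusion shape, and the zero-fill convention is illustrative:

```python
import numpy as np

def make_training_pair(real_image, top, left, size):
    """Occlude a rectangular region of a real image to build a sample image.

    Returns (sample_image, mask): the sample has the region zeroed out
    (the area to be repaired); the mask marks that region with 1.
    """
    sample = real_image.copy()
    mask = np.zeros(real_image.shape[:2], dtype=np.uint8)
    sample[top:top + size, left:left + size] = 0
    mask[top:top + size, left:left + size] = 1
    return sample, mask

real = np.full((128, 128), 5.0)
sample, mask = make_training_pair(real, top=32, left=32, size=64)
```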
In one possible implementation, the step of training the neural network according to a preset training set may include:
inputting the sample image into the neural network for processing to obtain a preliminary restoration image of the sample image and a plurality of restoration sample image blocks;
splicing the normal area of the sample image and the plurality of repair sample image blocks to obtain a repair sample image of the sample image;
training the neural network according to the preliminary repair image of the sample image, the repair sample image and the real image.
For example, a sample image in a training set may be input into a first repair network, resulting in a preliminary repair image of the sample image, the preliminary repair image including a preliminary repair area; constructing a texture library according to a plurality of image blocks of a normal region of the sample image; and inputting any sample image block of the preliminary repair area and a plurality of image blocks of the normal area into a texture matching network to obtain at least one image block matched with the texture of the sample image block. In this way, each sample image block of the preliminary repair area is processed separately, and a first image block matching the texture of each sample image block can be obtained.
In one possible implementation, the sample image block of the preliminary repair area is expanded outwards to obtain a third image block; and inputting the third image block and the first image block with the matched texture into a second restoration network to obtain a corresponding restoration sample image block.
In one possible implementation manner, the normal area of the sample image and the plurality of repair sample image blocks are spliced to obtain a repair sample image of the sample image; further, the neural network may be trained from a preliminary repair image of a sample image, the repair sample image, and the real image. In this way, a training process for the neural network may be achieved.
In one possible implementation, the step of training the neural network based on the preliminary repair image of the sample image, the repair sample image, and the real image includes:
determining the preliminary repair loss of the neural network according to the preliminary repair image and the real image;
determining the blocking loss of the neural network according to a plurality of repair sample image blocks of the repair sample image and a plurality of real image blocks of the real image;
Determining the overall loss of the image of the neural network according to the repair sample image and the real image;
and training the neural network according to the preliminary repair loss, the blocking loss and the overall image loss.
For example, the loss of the neural network may be defined from several aspects. On the one hand, from the difference between the preliminary repair image and the real image, the loss of the first repair network, i.e. the preliminary repair loss of the neural network (L_recon), may be determined. The preliminary repair loss may be, for example, an L1 loss; the present disclosure does not limit the choice of loss function.
On another hand, according to the difference between the repair sample image blocks of the repair sample image and the real image blocks at the corresponding positions of the real image, the repair loss of each image block, i.e. the blocking loss of the neural network (L_ps), may be determined. The blocking loss may include an L1 loss, a perceptual loss L_percep, etc.; the present disclosure does not limit the choice of loss function.
On the other hand, the difference between the restored sample image and the real image can be used to determine the repair loss of the whole image, i.e. the overall image loss of the neural network (L_blend). The overall image loss may include a boundary smoothing loss L_tv, used to remove boundary artifacts and maintain consistency between adjacent blocks. The present disclosure does not limit the choice of loss function.
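The boundary smoothing term can be illustrated with a standard total-variation loss, which penalises abrupt changes between neighbouring pixels; this is one common realisation, not necessarily the exact form used in the disclosure:

```python
import numpy as np

def tv_loss(image):
    """Total-variation loss over a 2-D image: the mean absolute difference
    between vertically and horizontally adjacent pixels."""
    dh = np.abs(image[1:, :] - image[:-1, :]).mean()
    dw = np.abs(image[:, 1:] - image[:, :-1]).mean()
    return dh + dw

flat = np.zeros((8, 8))               # perfectly smooth: zero loss
seam = np.zeros((8, 8)); seam[:, 4:] = 1.0   # hard vertical seam: positive loss
```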
In one possible implementation, a weighted sum of the preliminary repair loss, the chunking loss, and the overall image loss may be determined as an overall loss of the neural network; and reversely adjusting the parameters of the neural network according to the total loss. After multiple rounds of adjustment, a trained neural network may be obtained if training conditions (e.g., network convergence) are met. By the method, the network training effect can be improved, and a high-precision neural network can be obtained.
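The total loss described above is a plain weighted sum; the weights below are illustrative hyperparameters, not values given by the disclosure:

```python
def total_loss(l_recon, l_ps, l_blend, w_recon=1.0, w_ps=1.0, w_blend=1.0):
    """Weighted sum of the preliminary repair loss, the blocking loss, and
    the overall image loss, used to back-propagate through the network."""
    return w_recon * l_recon + w_ps * l_ps + w_blend * l_blend
```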
In one possible implementation, the network training effect can be further improved by means of countermeasure training. Wherein the neural network according to an embodiment of the present disclosure may further include a first discrimination network,
the step of training the neural network according to a preset training set may further include:
inputting a plurality of real image blocks of a real image corresponding to the sample image and a repair sample image block at a corresponding position into the first discrimination network for processing respectively to obtain a first discrimination result of the real image block and a second discrimination result of the repair sample image block;
And according to the first discrimination result and the second discrimination result, training the neural network in an antagonism way.
For example, a first discrimination network may be preset as a discriminator for countermeasure training; the first repair network, the texture matching network and the second repair network are used as generators of countermeasure training.
In the training process, a real image block of the real image is input into the first discrimination network to obtain a first discrimination result, and the repair sample image block at the corresponding position is input into the first discrimination network to obtain a second discrimination result.
In one possible implementation, the blocking countermeasure loss of the neural network may be determined according to the first discrimination result and the second discrimination result; according to the block countermeasure loss, parameters of the generator and the discriminator are respectively adjusted, so that countermeasure training of the generator and the discriminator is realized.
In countermeasure training, the discriminator attempts to distinguish real image blocks from repair sample image blocks, while the generator tries to make the repair sample image blocks indistinguishable from the real image blocks; the two promote each other, so that the precision of both the generator and the discriminator is improved.
In one possible implementation manner, the training the neural network according to a preset training set further includes:
Determining a first data distribution of a plurality of image blocks of a normal region of the sample image and a second data distribution of the plurality of repair sample image blocks;
inputting the first data distribution and the second data distribution into the first discrimination network respectively for processing to obtain a third discrimination result and a fourth discrimination result;
and according to the first discrimination result, the second discrimination result, the third discrimination result and the fourth discrimination result, training the neural network in an antagonism manner.
For example, in order to make full use of texture prior information and match the texture distribution of the normal region, the data distribution of the image block may be discriminated by the first discrimination network, so as to further improve the training effect.
In one possible implementation, from a plurality of image blocks in a texture library of a sample image, a data distribution (referred to as a first data distribution) of the plurality of image blocks may be determined; from a plurality of repair sample image blocks of the sample image, a data distribution (referred to as a second data distribution) of the plurality of repair sample image blocks may be determined. The present disclosure is not limited to a specific manner of computing the data distribution.
In one possible implementation manner, a first data distribution and the second data distribution may be respectively input into the first discrimination network for processing, so as to obtain a third discrimination result and a fourth discrimination result; furthermore, the blocking countermeasure loss of the neural network can be determined according to the first, second, third and fourth discrimination results; according to the block countermeasure loss, parameters of the generator and the discriminator are respectively adjusted, so that countermeasure training of the generator and the discriminator is realized.
In countermeasure training, the discriminator attempts to distinguish real image blocks from repair sample image blocks, and the generator tries to make the repair sample image blocks indistinguishable from the real image blocks; meanwhile, the discriminator tries to distinguish the data distribution of the real image blocks from the data distribution of the repair sample image blocks, and the generator tries to make the two data distributions indistinguishable. The two promote each other, so that the precision of both the generator and the discriminator is improved.
After countermeasure training, the repair sample image block can be more similar to the real image block, and the data distribution of the repair sample image block is more similar to the data distribution of the real image block, so that the network training effect is further improved.
In one possible implementation, the countermeasure training may be added to the preceding training process, i.e., the blocking countermeasure loss is added to the blocking loss of the neural network, and the weighted sum of the blocking countermeasure loss, the L1 loss, and the perceptual loss L_percep is determined as the blocking loss of the neural network, thereby improving the network training effect.
In one possible implementation, a neural network according to an embodiment of the present disclosure may further include a second discrimination network,
the step of training the neural network according to a preset training set may further include:
Respectively inputting a real image corresponding to the sample image and the repair sample image into the second discrimination network for processing to obtain a fifth discrimination result of the real image and a sixth discrimination result of the repair sample image;
and according to the fifth discrimination result and the sixth discrimination result, training the neural network in an antagonism way.
For example, a second discrimination network may be preset as a discriminator for countermeasure training; the first repair network, the texture matching network and the second repair network are used as generators of countermeasure training.
In the training process, the real image is input into the second discrimination network to obtain a fifth discrimination result, and the corresponding repair sample image is input into the second discrimination network to obtain a sixth discrimination result.
In one possible implementation, the overall countermeasure loss of the neural network may be determined according to the fifth discrimination result and the sixth discrimination result; according to the overall countermeasure loss, parameters of the generator and the discriminator are respectively adjusted, thereby realizing countermeasure training of the generator and the discriminator.
In one possible implementation, the countermeasure training may be added to the preceding training process, i.e., the overall countermeasure loss is added to the overall image loss of the neural network, and the weighted sum of the overall countermeasure loss and the boundary smoothing loss L_tv is determined as the overall image loss of the neural network, thereby further improving the network training effect.
According to the image restoration method of the embodiments of the present disclosure, on the basis of restoring the image according to global information, region texture information is extracted on smaller block regions, and high-quality image blocks are generated and spliced back into the original image, so that image restoration in both complex and general scenes can be realized with good generalization capability; in addition, the restored image has a better texture effect, improving the accuracy of image restoration.
According to the image restoration method disclosed by the embodiment of the disclosure, the useful texture blocks can be efficiently obtained by a texture library mode; the method adopts a blocking processing mode, and can be executed in parallel, so that the processing efficiency is improved; in training, the network training effect can be improved through countermeasure training, and the quality of the generated textures is further improved.
The image restoration method according to the embodiment of the disclosure can be applied to scenes such as image restoration, image special effect production and the like, for example, background restoration in algorithms such as face beautification and body slimming; repair of image watermark regions, etc.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle logic, which will not be repeated in the present disclosure for brevity. It will be appreciated by those skilled in the art that, in the above methods of the specific embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure further provides an image restoration apparatus, an electronic device, a computer-readable storage medium, and a program, all of which may be used to implement any of the image restoration methods provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method section, which are not repeated here.
Fig. 3 shows a block diagram of an image restoration device according to an embodiment of the present disclosure, as shown in fig. 3, the device including:
the first repair module 31 is configured to perform preliminary repair on an image to be processed, so as to obtain a first repair image, where the image to be processed includes a normal area and an area to be repaired, and the first repair image includes a preliminary repair area corresponding to the area to be repaired; a texture matching module 32, configured to determine, according to the plurality of first image blocks of the normal area and the plurality of second image blocks of the preliminary repair area, first image blocks that match textures of the respective second image blocks; and the second repairing module 33 is configured to repair the preliminary repairing area according to the first image blocks matched with the textures of the second image blocks, so as to obtain a second repairing image of the image to be processed.
In one possible implementation, the texture matching module includes: the similarity determination submodule is used for determining the similarity between any second image block and the plurality of first image blocks respectively; and the matching sub-module is used for determining at least one first image block with the highest similarity as a first image block matched with the texture of the second image block.
In one possible implementation, the second repair module includes: the image block restoration sub-module is used for restoring each second image block according to the first image block matched with the texture of each second image block to obtain a restored image block of each second image block; and the first splicing sub-module is used for splicing the normal area of the image to be processed and the repair image blocks of each second image block to obtain the second repair image.
In one possible implementation, the image block repair submodule is configured to: for any second image block, extracting features of a third image block corresponding to the second image block in the first repair image to obtain features of the third image block, wherein the size of the third image block is larger than that of the second image block; extracting features of a first image block matched with the texture of the second image block to obtain the features of the first image block; fusing the characteristics of the third image block with the characteristics of the first image block to obtain the fused characteristics of the second image block; and generating a repair image block of the second image block according to the fusion characteristic of the second image block.
In one possible implementation manner, the apparatus is implemented by a neural network, where the neural network includes a first repair network, a texture matching network, and a second repair network, the first repair network is used for performing preliminary repair on an image to be processed, the texture matching network is used for performing texture matching on the second image block and the first image block, and the second repair network is used for repairing the preliminary repair area, and the apparatus further includes: the training module is used for training the neural network according to a preset training set, the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
In one possible implementation, the training module includes: the restoration submodule is used for inputting the sample image into the neural network for processing to obtain a preliminary restoration image of the sample image and a plurality of restoration sample image blocks; the second splicing sub-module is used for splicing the normal area of the sample image and the plurality of repair sample image blocks to obtain a repair sample image of the sample image; and the training sub-module is used for training the neural network according to the preliminary restoration image of the sample image, the restoration sample image and the real image.
In one possible implementation, the training submodule is configured to: determining the preliminary repair loss of the neural network according to the preliminary repair image and the real image; determining the blocking loss of the neural network according to a plurality of repair sample image blocks of the repair sample image and a plurality of real image blocks of the real image; determining the overall loss of the image of the neural network according to the repair sample image and the real image; and training the neural network according to the preliminary repair loss, the blocking loss and the overall image loss.
In one possible implementation, the neural network further includes a first discrimination network, and the training module further includes: the first judging sub-module is used for respectively inputting a plurality of real image blocks of the real image corresponding to the sample image and the repairing sample image blocks at corresponding positions into the first judging network for processing to obtain a first judging result of the real image blocks and a second judging result of the repairing sample image blocks; and the first countermeasure training submodule is used for countermeasure training the neural network according to the first discrimination result and the second discrimination result.
In one possible implementation, the training module further includes: a distribution determination sub-module for determining a first data distribution of a plurality of image blocks of a normal region of the sample image and a second data distribution of the plurality of repair sample image blocks; the second judging sub-module is used for respectively inputting the first data distribution and the second data distribution into the first judging network for processing to obtain a third judging result and a fourth judging result; and the second countermeasure training submodule is used for countermeasure training the neural network according to the first discrimination result, the second discrimination result, the third discrimination result and the fourth discrimination result.
In one possible implementation, the neural network further includes a second discrimination network, and the training module further includes: the third judging sub-module is used for respectively inputting the real image corresponding to the sample image and the repair sample image into the second judging network for processing to obtain a fifth judging result of the real image and a sixth judging result of the repair sample image; and the third countermeasure training submodule is used for countermeasure training the neural network according to the fifth discrimination result and the sixth discrimination result.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a non-volatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code which, when run on a device, causes a processor in the device to execute instructions for implementing the image restoration method provided in any of the embodiments above.
The disclosed embodiments also provide another computer program product for storing computer readable instructions that, when executed, cause a computer to perform the operations of the image restoration method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 4 illustrates a block diagram of an electronic device 800, according to an embodiment of the disclosure. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 4, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, an orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
Fig. 5 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server. Referring to FIG. 5, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry being able to execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. An image restoration method, comprising:
performing preliminary restoration on an image to be processed by using a first restoration network to obtain a first restoration image, wherein the image to be processed comprises a normal area and an area to be restored, and the first restoration image comprises a preliminary restoration area corresponding to the area to be restored;
determining, according to a plurality of first image blocks of the normal area and a plurality of second image blocks of the preliminary restoration area, a first image block matched with the texture of each second image block, respectively;
inputting a third image block corresponding to each second image block and a first image block matched with the texture of each second image block into a second restoration network to obtain restoration image blocks corresponding to each second image block, wherein the third image block is an image block larger in size than the second image block, obtained by expanding outward with the second image block as the center;
and splicing the plurality of restoration image blocks with the normal area of the image to be processed to obtain a second restoration image of the image to be processed.
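Outside the claim language, the "expansion taking the second image block as the center" amounts to cropping a larger context patch around each preliminary-repair patch. A minimal sketch follows; the border-clamping behavior and the function name are assumptions, since the claim does not specify how the expansion behaves at image edges:

```python
import numpy as np

def crop_centered(image, cy, cx, size):
    """Crop a size x size patch centered at (cy, cx), clamping the window
    so it stays inside the image (assumed border handling)."""
    h, w = image.shape[:2]
    half = size // 2
    top = min(max(cy - half, 0), h - size)
    left = min(max(cx - half, 0), w - size)
    return image[top:top + size, left:left + size]

# A second image block centered at (cy, cx) would get a third image block
# from the same center, e.g. twice the side length:
# third_block = crop_centered(first_restoration_image, cy, cx, 16)
```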
2. The method of claim 1, wherein the determining a first image block matching a texture of each second image block from the plurality of first image blocks of the normal region and the plurality of second image blocks of the preliminary repair region, respectively, comprises:
for any second image block, determining the similarity between the second image block and the plurality of first image blocks respectively;
and determining at least one first image block with highest similarity as a first image block matched with the texture of the second image block.
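As an illustrative sketch of this matching step, cosine similarity over flattened pixels is one plausible similarity measure (the claim names a similarity but does not fix the metric), with the most similar first image blocks selected:

```python
import numpy as np

def match_texture(second_block, first_blocks, k=1):
    """Return indices of the k first image blocks most similar to
    second_block; cosine similarity of flattened pixels is an
    illustrative choice of metric, not mandated by the claim."""
    q = second_block.ravel().astype(float)
    q /= np.linalg.norm(q) + 1e-8
    sims = []
    for blk in first_blocks:
        v = blk.ravel().astype(float)
        v /= np.linalg.norm(v) + 1e-8
        sims.append(float(q @ v))
    order = np.argsort(sims)[::-1]  # highest similarity first
    return list(order[:k])
```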
3. The method according to claim 1, wherein inputting the third image block corresponding to each second image block and the first image block matched with the texture of each second image block into the second repair network, to obtain the repair image block corresponding to each second image block, includes:
for any second image block, extracting features of a third image block corresponding to the second image block in the first repair image to obtain features of the third image block;
extracting features of a first image block matched with the texture of the second image block to obtain the features of the first image block;
fusing the characteristics of the third image block with the characteristics of the first image block to obtain the fused characteristics of the second image block;
and generating a repair image block of the second image block according to the fusion characteristic of the second image block.
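The claim only requires that the context features and the matched-texture features be fused; channel-wise concatenation, sketched below, is one common and plausible fusion (the (channels, height, width) layout and the concatenation axis are assumptions):

```python
import numpy as np

def fuse_features(context_feat, matched_feat):
    """Fuse two (channels, height, width) feature maps by channel-wise
    concatenation; in a real network a learned convolution would
    typically follow to mix the concatenated channels."""
    assert context_feat.shape[1:] == matched_feat.shape[1:]
    return np.concatenate([context_feat, matched_feat], axis=0)
```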
4. A method according to any of claims 1-3, wherein the method is implemented by a neural network comprising the first repair network, a texture matching network and the second repair network, the texture matching network being used for texture matching the second image block with the first image block;
The method further comprises the steps of: training the neural network according to a preset training set, wherein the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
5. The method of claim 4, wherein training the neural network according to a preset training set comprises:
inputting the sample image into the neural network for processing to obtain a preliminary restoration image of the sample image and a plurality of restoration sample image blocks;
splicing the normal area of the sample image and the plurality of repair sample image blocks to obtain a repair sample image of the sample image;
training the neural network according to the preliminary repair image of the sample image, the repair sample image and the real image.
6. The method of claim 5, wherein training the neural network based on the preliminary repair image of the sample image, the repair sample image, and the real image comprises:
determining the preliminary repair loss of the neural network according to the preliminary repair image and the real image;
determining the blocking loss of the neural network according to a plurality of repair sample image blocks of the repair sample image and a plurality of real image blocks of the real image;
determining the overall loss of the image of the neural network according to the repair sample image and the real image;
and training the neural network according to the preliminary repair loss, the blocking loss and the overall image loss.
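The three loss terms above can be combined as a weighted sum; the L1 distance and unit weights in this sketch are illustrative assumptions, since the claims name the losses but not their mathematical form:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

def total_loss(prelim, repaired, real, repaired_blocks, real_blocks,
               w_prelim=1.0, w_block=1.0, w_whole=1.0):
    """Weighted sum of the preliminary repair loss, the blocking loss,
    and the overall image loss; the weights are assumed hyperparameters."""
    loss_prelim = l1(prelim, real)
    loss_block = float(np.mean([l1(rb, gb)
                                for rb, gb in zip(repaired_blocks, real_blocks)]))
    loss_whole = l1(repaired, real)
    return w_prelim * loss_prelim + w_block * loss_block + w_whole * loss_whole
```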
7. The method of claim 5 or 6, wherein the neural network further comprises a first discrimination network, and the training the neural network according to a preset training set further comprises:
inputting a plurality of real image blocks of a real image corresponding to the sample image and a repair sample image block at a corresponding position into the first discrimination network for processing respectively to obtain a first discrimination result of the real image block and a second discrimination result of the repair sample image block;
and adversarially training the neural network according to the first discrimination result and the second discrimination result.
8. The method of claim 7, wherein training the neural network according to a preset training set further comprises:
determining a first data distribution of a plurality of image blocks of a normal region of the sample image and a second data distribution of the plurality of repair sample image blocks;
inputting the first data distribution and the second data distribution into the first discrimination network respectively for processing to obtain a third discrimination result and a fourth discrimination result;
and adversarially training the neural network according to the first discrimination result, the second discrimination result, the third discrimination result and the fourth discrimination result.
9. The method of claim 5, wherein the neural network further comprises a second discrimination network, the training the neural network according to a preset training set, further comprising:
respectively inputting a real image corresponding to the sample image and the repair sample image into the second discrimination network for processing to obtain a fifth discrimination result of the real image and a sixth discrimination result of the repair sample image;
and adversarially training the neural network according to the fifth discrimination result and the sixth discrimination result.
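For the adversarial steps of claims 7 to 9, the real/fake discrimination results can feed a standard GAN objective; the hinge form below is one common choice and is an assumption, as the claims do not specify the adversarial loss:

```python
def hinge_d_loss(real_score, fake_score):
    """Hinge discriminator loss for one pair of discrimination results
    (e.g., the fifth result on a real image and the sixth on a repair
    sample image)."""
    return max(0.0, 1.0 - real_score) + max(0.0, 1.0 + fake_score)

def hinge_g_loss(fake_score):
    """Generator side of the objective: push the discriminator's score
    on the repaired output upward."""
    return -fake_score
```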
10. An image restoration device, comprising:
the first restoration module is used for carrying out preliminary restoration on an image to be processed by utilizing a first restoration network to obtain a first restoration image, wherein the image to be processed comprises a normal area and an area to be restored, and the first restoration image comprises a preliminary restoration area corresponding to the area to be restored;
the texture matching module is used for respectively determining first image blocks matched with the textures of the second image blocks according to the plurality of first image blocks of the normal area and the plurality of second image blocks of the preliminary restoration area;
the second restoration module is used for inputting third image blocks corresponding to the second image blocks and first image blocks matched with the textures of the second image blocks into a second restoration network to obtain restoration image blocks corresponding to the second image blocks, wherein the third image blocks are image blocks larger in size than the second image blocks, obtained by expanding outward with the second image blocks as the centers; and splicing the plurality of restoration image blocks with the normal area of the image to be processed to obtain a second restoration image of the image to be processed.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any of claims 1 to 9.
12. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 9.
CN202010237090.1A 2020-03-30 2020-03-30 Image restoration method and device, electronic equipment and storage medium Active CN111445415B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010237090.1A CN111445415B (en) 2020-03-30 2020-03-30 Image restoration method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010237090.1A CN111445415B (en) 2020-03-30 2020-03-30 Image restoration method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111445415A CN111445415A (en) 2020-07-24
CN111445415B true CN111445415B (en) 2024-03-08

Family

ID=71649321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010237090.1A Active CN111445415B (en) 2020-03-30 2020-03-30 Image restoration method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111445415B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344832A (en) * 2021-05-28 2021-09-03 杭州睿胜软件有限公司 Image processing method and device, electronic equipment and storage medium
CN113674176B (en) * 2021-08-23 2024-04-16 北京市商汤科技开发有限公司 Image restoration method and device, electronic equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107145839A (en) * 2017-04-17 2017-09-08 努比亚技术有限公司 A kind of fingerprint image completion analogy method and its system
CN107993210A (en) * 2017-11-30 2018-05-04 北京小米移动软件有限公司 Image repair method, device and computer-readable recording medium
CN109584178A (en) * 2018-11-29 2019-04-05 腾讯科技(深圳)有限公司 Image repair method, device and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10740881B2 (en) * 2018-03-26 2020-08-11 Adobe Inc. Deep patch feature prediction for image inpainting


Also Published As

Publication number Publication date
CN111445415A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
CN111310616B (en) Image processing method and device, electronic equipment and storage medium
CN110378976B (en) Image processing method and device, electronic equipment and storage medium
CN110287874B (en) Target tracking method and device, electronic equipment and storage medium
CN110889469B (en) Image processing method and device, electronic equipment and storage medium
CN109816764B (en) Image generation method and device, electronic equipment and storage medium
CN111783756B (en) Text recognition method and device, electronic equipment and storage medium
CN111553864B (en) Image restoration method and device, electronic equipment and storage medium
CN111242303B (en) Network training method and device, and image processing method and device
CN111340731B (en) Image processing method and device, electronic equipment and storage medium
CN110458218B (en) Image classification method and device and classification network training method and device
CN111435432B (en) Network optimization method and device, image processing method and device and storage medium
CN112219224B (en) Image processing method and device, electronic equipment and storage medium
CN111340048B (en) Image processing method and device, electronic equipment and storage medium
CN110188865B (en) Information processing method and device, electronic equipment and storage medium
JP2022533065A (en) Character recognition methods and devices, electronic devices and storage media
CN111445415B (en) Image restoration method and device, electronic equipment and storage medium
CN111369482B (en) Image processing method and device, electronic equipment and storage medium
CN111563138B (en) Positioning method and device, electronic equipment and storage medium
CN112529846A (en) Image processing method and device, electronic equipment and storage medium
CN110415258B (en) Image processing method and device, electronic equipment and storage medium
CN110633715B (en) Image processing method, network training method and device and electronic equipment
CN109840890B (en) Image processing method and device, electronic equipment and storage medium
CN113807498B (en) Model expansion method and device, electronic equipment and storage medium
CN111553865B (en) Image restoration method and device, electronic equipment and storage medium
CN113538310A (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant