CN111445415A - Image restoration method and device, electronic equipment and storage medium - Google Patents


Publication number
CN111445415A
Authority
CN
China
Prior art keywords
image
image block
repaired
repair
block
Prior art date
Legal status
Granted
Application number
CN202010237090.1A
Other languages
Chinese (zh)
Other versions
CN111445415B (en)
Inventor
徐瑞
郭明皓
王佳琦
李晓潇
周博磊
吕健勤
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010237090.1A priority Critical patent/CN111445415B/en
Publication of CN111445415A publication Critical patent/CN111445415A/en
Application granted granted Critical
Publication of CN111445415B publication Critical patent/CN111445415B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/77 - Retouching; Inpainting; Scratch removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image restoration method and apparatus, an electronic device, and a storage medium. The method includes: performing preliminary repair on an image to be processed to obtain a first repaired image, where the image to be processed includes a normal area and an area to be repaired, and the first repaired image includes a preliminary repair area corresponding to the area to be repaired; determining, from a plurality of first image blocks of the normal area and a plurality of second image blocks of the preliminary repair area, a first image block that matches the texture of each second image block; and repairing the preliminary repair area according to the first image blocks matched to the textures of the second image blocks, to obtain a second repaired image of the image to be processed. Embodiments of the present disclosure can improve the image restoration effect.

Description

Image restoration method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image restoration method and apparatus, an electronic device, and a storage medium.
Background
Image restoration is an important problem in the field of computer vision and has important applications in many areas, such as image watermark removal and image inpainting. Image restoration methods in the related art can only restore an image based on the content already present in it; lacking the ability to construct and generate new content, they produce a poor restoration effect.
Disclosure of Invention
The present disclosure provides an image restoration technical solution.
According to an aspect of the present disclosure, there is provided an image inpainting method, including: performing preliminary repair on an image to be processed to obtain a first repaired image, where the image to be processed includes a normal area and an area to be repaired, and the first repaired image includes a preliminary repair area corresponding to the area to be repaired; determining, from a plurality of first image blocks of the normal area and a plurality of second image blocks of the preliminary repair area, a first image block that matches the texture of each second image block; and repairing the preliminary repair area according to the first image blocks matched to the textures of the second image blocks, to obtain a second repaired image of the image to be processed.
In a possible implementation manner, the determining, from a plurality of first image blocks of the normal area and a plurality of second image blocks of the preliminary repair area, a first image block that matches the texture of each second image block includes: for any second image block, determining the similarities between the second image block and the plurality of first image blocks; and determining the at least one first image block with the highest similarity as the first image block matching the texture of the second image block.
In a possible implementation manner, the repairing the preliminary repair area according to the first image block matched with the texture of each second image block to obtain a second repair image of the image to be processed includes: respectively repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block; and splicing the normal area of the image to be processed and the repaired image blocks of the second image blocks to obtain the second repaired image.
In a possible implementation manner, the repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block includes: for any second image block, performing feature extraction on a third image block corresponding to the second image block in the first restored image to obtain features of the third image block, wherein the size of the third image block is larger than that of the second image block; performing feature extraction on a first image block matched with the texture of the second image block to obtain the features of the first image block; fusing the characteristics of the third image block with the characteristics of the first image block to obtain fused characteristics of the second image block; and generating a repair image block of the second image block according to the fusion characteristics of the second image block.
In a possible implementation manner, the method is implemented by a neural network, where the neural network includes a first repair network, a texture matching network, and a second repair network, the first repair network is used to perform preliminary repair on an image to be processed, the texture matching network is used to perform texture matching on the second image block and the first image block, and the second repair network is used to repair the preliminary repair area, and the method further includes: and training the neural network according to a preset training set, wherein the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
In a possible implementation manner, the training the neural network according to a preset training set includes: inputting the sample image into the neural network for processing, to obtain a preliminary repair image of the sample image and a plurality of repaired sample image blocks; splicing the normal area of the sample image and the plurality of repaired sample image blocks to obtain a repaired sample image of the sample image; and training the neural network according to the preliminary repair image of the sample image, the repaired sample image, and the real image.
In one possible implementation, the training the neural network according to the preliminary repair image of the sample image, the repaired sample image, and the real image includes: determining a preliminary repair loss of the neural network according to the preliminary repair image and the real image; determining a block loss of the neural network according to the plurality of repaired sample image blocks of the repaired sample image and a plurality of real image blocks of the real image; determining a whole-image loss of the neural network according to the repaired sample image and the real image; and training the neural network according to the preliminary repair loss, the block loss, and the whole-image loss.
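As a rough illustration of this three-term objective, the sketch below combines a preliminary repair loss, a per-block loss, and a whole-image loss as weighted L1 distances. The use of L1 distances and the equal default weights are assumptions for illustration only; the disclosure does not fix the loss functions or weights.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

def total_loss(prelim_img, repaired_blocks, real_blocks, repaired_img, real_img,
               w_prelim=1.0, w_block=1.0, w_image=1.0):
    """Combine the three losses described above. The L1 distances and
    the weights w_* are illustrative choices, not taken from the patent."""
    preliminary_repair_loss = l1(prelim_img, real_img)
    block_loss = float(np.mean([l1(r, g)
                                for r, g in zip(repaired_blocks, real_blocks)]))
    whole_image_loss = l1(repaired_img, real_img)
    return (w_prelim * preliminary_repair_loss
            + w_block * block_loss
            + w_image * whole_image_loss)

# Example: perfect reconstruction drives all three terms to zero.
real = np.zeros((4, 4))
blocks_gt = [np.zeros((2, 2)), np.zeros((2, 2))]
zero_loss = total_loss(real, blocks_gt, blocks_gt, real, real)
```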
In a possible implementation manner, the neural network further includes a first discrimination network, and the training the neural network according to a preset training set further includes: inputting a plurality of real image blocks of the real image corresponding to the sample image, and the repaired sample image blocks at the corresponding positions, into the first discrimination network for processing, to obtain a first discrimination result for the real image blocks and a second discrimination result for the repaired sample image blocks; and adversarially training the neural network according to the first discrimination result and the second discrimination result.
In a possible implementation manner, the training the neural network according to a preset training set further includes: determining a first data distribution of a plurality of image blocks of the normal area of the sample image and a second data distribution of the plurality of repaired sample image blocks; inputting the first data distribution and the second data distribution into the first discrimination network for processing, to obtain a third discrimination result and a fourth discrimination result; and adversarially training the neural network according to the first, second, third, and fourth discrimination results.
In a possible implementation manner, the neural network further includes a second discrimination network, and the training the neural network according to a preset training set further includes: inputting the real image corresponding to the sample image and the repaired sample image into the second discrimination network for processing, to obtain a fifth discrimination result for the real image and a sixth discrimination result for the repaired sample image; and adversarially training the neural network according to the fifth discrimination result and the sixth discrimination result.
According to an aspect of the present disclosure, there is provided an image restoration apparatus, including: a first repair module, configured to perform preliminary repair on an image to be processed to obtain a first repaired image, where the image to be processed includes a normal area and an area to be repaired, and the first repaired image includes a preliminary repair area corresponding to the area to be repaired; a texture matching module, configured to determine, from a plurality of first image blocks of the normal area and a plurality of second image blocks of the preliminary repair area, a first image block that matches the texture of each second image block; and a second repair module, configured to repair the preliminary repair area according to the first image blocks matched to the textures of the second image blocks, to obtain a second repaired image of the image to be processed.
In one possible implementation, the texture matching module includes: the similarity determining sub-module is used for respectively determining the similarities between the second image block and the plurality of first image blocks aiming at any second image block; and the matching sub-module is used for determining at least one first image block with the highest similarity as the first image block matched with the texture of the second image block.
In one possible implementation, the second repair module includes: the image block repairing sub-module is used for respectively repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block; and the first splicing submodule is used for splicing the normal area of the image to be processed and the repaired image blocks of the second image blocks to obtain the second repaired image.
In one possible implementation, the image block repair sub-module is configured to: for any second image block, performing feature extraction on a third image block corresponding to the second image block in the first restored image to obtain features of the third image block, wherein the size of the third image block is larger than that of the second image block; performing feature extraction on a first image block matched with the texture of the second image block to obtain the features of the first image block; fusing the characteristics of the third image block with the characteristics of the first image block to obtain fused characteristics of the second image block; and generating a repair image block of the second image block according to the fusion characteristics of the second image block.
In a possible implementation manner, the apparatus is implemented by a neural network, the neural network includes a first repair network, a texture matching network, and a second repair network, the first repair network is configured to perform a preliminary repair on an image to be processed, the texture matching network is configured to perform texture matching on the second image block and the first image block, the second repair network is configured to repair the preliminary repair area, and the apparatus further includes: the training module is used for training the neural network according to a preset training set, the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
In one possible implementation, the training module includes: the restoration sub-module is used for inputting the sample image into the neural network for processing to obtain a preliminary restoration image and a plurality of restoration sample image blocks of the sample image; the second splicing sub-module is used for splicing the normal area of the sample image and the plurality of repaired sample image blocks to obtain a repaired sample image of the sample image; and the training submodule is used for training the neural network according to the preliminary repair image of the sample image, the repair sample image and the real image.
In one possible implementation, the training submodule is configured to: determine a preliminary repair loss of the neural network according to the preliminary repair image and the real image; determine a block loss of the neural network according to the plurality of repaired sample image blocks of the repaired sample image and a plurality of real image blocks of the real image; determine a whole-image loss of the neural network according to the repaired sample image and the real image; and train the neural network according to the preliminary repair loss, the block loss, and the whole-image loss.
In one possible implementation, the neural network further includes a first discrimination network, and the training module further includes: a first discrimination submodule, configured to input a plurality of real image blocks of the real image corresponding to the sample image, and the repaired sample image blocks at the corresponding positions, into the first discrimination network for processing, to obtain a first discrimination result for the real image blocks and a second discrimination result for the repaired sample image blocks; and a first adversarial training submodule, configured to adversarially train the neural network according to the first discrimination result and the second discrimination result.
In one possible implementation, the training module further includes: a distribution determining submodule, configured to determine a first data distribution of a plurality of image blocks of the normal area of the sample image and a second data distribution of the plurality of repaired sample image blocks; a second discrimination submodule, configured to input the first data distribution and the second data distribution into the first discrimination network for processing, to obtain a third discrimination result and a fourth discrimination result; and a second adversarial training submodule, configured to adversarially train the neural network according to the first, second, third, and fourth discrimination results.
In one possible implementation, the neural network further includes a second discrimination network, and the training module further includes: a third discrimination submodule, configured to input the real image corresponding to the sample image and the repaired sample image into the second discrimination network for processing, to obtain a fifth discrimination result for the real image and a sixth discrimination result for the repaired sample image; and a third adversarial training submodule, configured to adversarially train the neural network according to the fifth discrimination result and the sixth discrimination result.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, an image to be processed can be preliminarily repaired, image blocks matching the textures of the image blocks in the preliminary repair area can be determined from the image blocks of the normal area of the image, and the image can then be further repaired according to the texture-matched image blocks, thereby improving the restoration effect for the image to be processed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image inpainting method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a process of an image restoration method according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of an image restoration apparatus according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 5 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects, and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image inpainting method according to an embodiment of the present disclosure, as shown in fig. 1, the method including:
in step S11, performing preliminary repair on an image to be processed to obtain a first repaired image, where the image to be processed includes a normal area and an area to be repaired, and the first repaired image includes a preliminary repaired area corresponding to the area to be repaired;
in step S12, determining, according to the plurality of first image blocks of the normal area and the plurality of second image blocks of the preliminary repair area, first image blocks that match textures of the respective second image blocks, respectively;
in step S13, the preliminary repair area is repaired according to the first image block that matches the texture of each second image block, so as to obtain a second repair image of the to-be-processed image.
In one possible implementation, the image restoration method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling a computer readable instruction stored in a memory. Alternatively, the method may be performed by a server.
For example, the image to be processed may be an image including arbitrary contents such as a person, a landscape, a building, and the like. The image to be processed may include a normal area and an area to be repaired, where the normal area is an area where image content is normally visible, and the area to be repaired is an area where image content is abnormal or invisible, for example, an area where content in the image is missing or a watermark exists, which is not limited by the present disclosure.
In one possible implementation, the image to be processed may be subjected to preliminary restoration in step S11, for example, by processing it with an encoder-decoder style convolutional neural network that extracts the global structure information of the image and restores the image according to that information. The convolutional neural network may include, for example, convolutional layers, dilated convolutional layers (also called atrous or "hole" convolutional layers), deconvolution layers, pooling layers, fully connected layers, and the like; the present disclosure does not limit the specific network structure of the convolutional neural network.
A first repaired image of the image to be processed is obtained after the preliminary repair. The first repaired image includes a preliminary repair area corresponding in position to the area to be repaired.
For example, a k × k window may be slid over the normal area (with a step size of, for example, k/2, so as to obtain more image blocks), yielding a plurality of first image blocks of size k × k.
In one possible implementation, the first image block may include local texture information in the normal area. The texture library of the image to be processed can be constructed according to the plurality of first image blocks, so that similar textures can be selected in the following process.
In a possible implementation manner, the preliminary repair area of the first repair image may be cut to obtain a plurality of second image blocks. The size of the second image block may be the same as or different from the first image block, which is not limited by this disclosure.
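The sliding-window construction of the texture library described above can be sketched as follows. The function name and the plain NumPy array stand in for whatever tensor representation an actual implementation uses, and the k/2 stride follows the example given earlier.

```python
import numpy as np

def extract_patches(region, k):
    """Slide a k x k window over a 2-D region with stride k // 2,
    collecting every fully contained patch. Illustrative sketch of
    the texture-library construction, not the patented network."""
    stride = k // 2
    h, w = region.shape[:2]
    patches = []
    for y in range(0, h - k + 1, stride):
        for x in range(0, w - k + 1, stride):
            patches.append(region[y:y + k, x:x + k])
    return patches

# Example: a 16 x 16 "normal area" with k = 8 yields a 3 x 3 grid,
# i.e. 9 overlapping first image blocks.
normal_area = np.arange(16 * 16, dtype=np.float32).reshape(16, 16)
texture_library = extract_patches(normal_area, k=8)
```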
In one possible implementation manner, in step S12, first image blocks that match textures of the respective second image blocks may be determined according to the plurality of first image blocks and the plurality of second image blocks of the preliminary repair area, respectively. That is, for any one second image block, the similarity between the texture information of the second image block and the texture information of the plurality of first image blocks may be determined; and selecting the first image block matched with the texture of the second image block according to the similarity so as to further repair the second image block according to the texture information.
In a possible implementation manner, in step S13, the preliminary repair area may be repaired according to the first image block that matches the texture of each second image block, so as to obtain a second repair image of the to-be-processed image.
That is, for any second image block, the features of the first image block matching the texture of the second image block, which include the texture information of the first image block, may be extracted by a convolutional neural network. The second image block is processed by an encoder-decoder style convolutional neural network to extract its features; the features of the second image block are fused with the features of the first image block; and the second image block is repaired according to the fused features to obtain a repaired image block. The plurality of repaired image blocks are then spliced with the normal area of the image to be processed to obtain the second repaired image.
According to the embodiment of the disclosure, the image to be processed can be primarily repaired, and the image block matched with the texture of the image block in the primary repair area is determined from the image blocks in the normal area of the image; and further repairing the image according to the image block matched with the texture, thereby improving the repairing effect of the image to be processed.
In a possible implementation manner, a first repair network may be preset for performing preliminary repair on the image to be processed. The first repair network is an encoder-decoder style convolutional neural network that can extract the global structure information of the image and generate a restored image according to that information.
In one possible implementation, the first repair network may include, for example, a convolutional layer, a dilated convolutional layer (also called an atrous or "hole" convolutional layer), a deconvolution layer, a pooling layer, a fully connected layer, and so on. Using dilated convolutional layers improves the first repair network's ability to perceive global structure information, and thereby the global restoration effect. The present disclosure does not limit the specific network structure of the first repair network.
In this way, preliminary restoration of the image to be processed can be achieved.
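To see why dilated (atrous) convolutions help with global structure, the 1-D toy below shows that a dilation factor d stretches a kernel of size m over a receptive field of (m - 1) * d + 1 samples without adding parameters. This is only an illustrative aside, not the first repair network itself.

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D cross-correlation with dilated taps. Returns the
    output samples and the receptive-field span of a single output."""
    m = len(w)
    span = (m - 1) * dilation + 1  # receptive field of one output sample
    out = [float(np.dot(x[i:i + span:dilation], w))
           for i in range(len(x) - span + 1)]
    return np.array(out), span

# A 3-tap kernel with dilation 2 covers 5 input samples per output,
# versus 3 samples for an ordinary convolution with the same kernel.
signal = np.arange(10, dtype=np.float64)
kernel = np.ones(3)
y, span = dilated_conv1d(signal, kernel, dilation=2)
```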
In one possible implementation, step S12 may include:
for any second image block, respectively determining the similarity between the second image block and the plurality of first image blocks;
and determining at least one first image block with the highest similarity as the first image block matched with the texture of the second image block.
For example, a texture matching network may be preset for performing texture matching on the second image block and the first image block. For any one second image block, the second image block and a plurality of first image blocks in the texture library may be input into the texture matching network for processing, and the similarity between the second image block and each first image block may be determined.
For example, extracting features of the second image block and the first image block through a texture matching network; constructing a similarity matrix between the second image block and each first image block according to the characteristics; and determining the similarity between the second image block and each first image block according to the similarity matrix. The present disclosure does not limit the specific process of determining similarity.
In one possible implementation, the texture matching network may include, for example, a convolution layer, a softmax layer, etc., and the specific structure of the texture matching network is not limited by the present disclosure.
In one possible implementation, the at least one first image block with the highest similarity may be determined as the first image block that matches the texture of the second image block, for example, the 4 first image blocks with the highest similarity are selected as the first image blocks with texture matching.
In this way, a first image block matching the texture of a second image block may be determined in order to further repair the second image block according to the texture information.
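A crude stand-in for the learned texture matching network is plain cosine similarity over flattened patches, as sketched below. The real network learns its own feature space and similarity matrix, so this is only an assumption-laden illustration of the top-k selection (the choice of 4 matches follows the example above).

```python
import numpy as np

def match_texture(second_block, first_blocks, top_k=4):
    """Return indices of the top_k first (normal-area) blocks whose
    flattened pixels are most cosine-similar to the given second
    (preliminary-repair) block. Illustrative stand-in for the
    learned texture matching network."""
    def unit(v):
        v = v.ravel().astype(np.float64)
        return v / (np.linalg.norm(v) + 1e-8)

    q = unit(second_block)
    sims = np.array([float(q @ unit(b)) for b in first_blocks])
    order = np.argsort(-sims)  # indices sorted by descending similarity
    return order[:top_k].tolist()

# Example: a block identical to library entry 3 should match it first.
rng = np.random.default_rng(0)
library = [rng.random((8, 8)) for _ in range(10)]
best = match_texture(library[3].copy(), library, top_k=4)
```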
In one possible implementation, step S13 may include:
respectively repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block;
and splicing the normal area of the image to be processed and the repaired image blocks of the second image blocks to obtain the second repaired image.
That is, for any one of the second image blocks, the features of the first image block matched with the texture of the second image block, which include the texture information of the first image block, may be extracted through a convolutional neural network. The second image block may be processed through a convolutional neural network of an encoder-decoder style to extract the features of the second image block; the features of the second image block may be fused with the features of the first image block; and the second image block may be repaired according to the fused features to obtain a repair image block.
In a possible implementation manner, a second repair image may be obtained by stitching a plurality of repair image blocks with a normal area of the image to be processed. By the method, the image to be processed can be further repaired, and the image repairing effect is improved.
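The splicing step above can be sketched as follows, assuming each repair image block's top-left position in the image is known and a boolean mask marks the area to be repaired; the helper name and mask representation are hypothetical:

```python
import numpy as np

def splice_repair(image, mask, repaired_blocks, positions, patch=32):
    """Paste repaired patch-sized blocks over the repair area, then
    restore every normal-area pixel from the original image.

    image: (H, W, 3) image to be processed; mask: (H, W) bool, True
    inside the area to be repaired; positions: top-left (y, x) of
    each repaired block.
    """
    out = image.copy()
    for block, (y, x) in zip(repaired_blocks, positions):
        out[y:y + patch, x:x + patch] = block
    out[~mask] = image[~mask]  # the normal area is kept untouched
    return out
```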
In a possible implementation manner, the step of respectively repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block includes:
for any second image block, performing feature extraction on a third image block corresponding to the second image block in the first restored image to obtain features of the third image block, wherein the size of the third image block is larger than that of the second image block;
performing feature extraction on a first image block matched with the texture of the second image block to obtain the features of the first image block;
fusing the characteristics of the third image block with the characteristics of the first image block to obtain fused characteristics of the second image block;
and generating a repair image block of the second image block according to the fusion characteristics of the second image block.
For example, a second repair network may be preset for repairing the first repair image. The second repair network includes a first convolutional network of an encoder-decoder style and a second convolutional network of a conventional type. The first convolutional network and the second convolutional network may include, for example, convolution layers, dilated convolution layers (also called atrous convolution layers), deconvolution layers, pooling layers, fully-connected layers, and the like, which are not limited by the present disclosure.
For example, the size of the third image block is larger than that of the second image block, for example, the size of the second image block is 32 × 32, and the size of the third image block is 96 × 96.
In a possible implementation manner, the third image block may be input to an encoder of the first convolution network for feature extraction, so as to obtain features of the third image block.
In one possible implementation, the first image block matched with the texture of the second image block may be input into the second convolution network for feature extraction. When there are a plurality of first image blocks matched with the texture of the second image block, the plurality of first image blocks may be spliced and then input into the second convolution network. After processing, the features of the first image block are obtained.
In a possible implementation manner, the features of the third image block and the features of the first image block are fused to obtain the fusion features of the second image block; and inputting the fusion features into a decoder of the first convolution network to generate a repair image block of the second image block.
In one possible implementation, multi-level feature fusion may be performed. That is to say, the features of the first image block may include features output by multiple levels of convolution layers; feature fusion may be performed at a plurality of corresponding levels in the decoder of the first convolution network, and subsequent processes such as dilated convolution and deconvolution may be performed on the multi-level fused features in sequence to finally generate the repair image block of the second image block. The present disclosure is not limited thereto.
In a possible implementation manner, a second repair image can be obtained by splicing the plurality of repair image blocks with the normal area of the image to be processed, so that the whole process of image repair is completed.
In this way, the first image block with the texture matching is adopted to participate in the repair process of the second image block, and the block repair effect of the second image block can be improved through the local texture information in the first image block.
In a possible implementation manner, the image inpainting method according to the embodiment of the present disclosure may be implemented by a neural network, where the neural network includes a first inpainting network, a texture matching network, and a second inpainting network, the first inpainting network is used to perform preliminary inpainting on an image to be processed, the texture matching network is used to perform texture matching on the second image block and the first image block, and the second inpainting network is used to inpaint the preliminary inpainting area.
Fig. 2 is a schematic diagram illustrating a process of an image restoration method according to an embodiment of the present disclosure. As shown in fig. 2, the neural network according to an embodiment of the present disclosure includes a first repair network 21, a texture matching network 22, and a second repair network 23.
As shown in FIG. 2, the image to be processed I_m includes a blank area to be repaired and a normal area having normal contents. The image to be processed I_m may be input into the first repair network 21 for preliminary repair, so as to obtain a first repair image I_s. The first repair image I_s includes a preliminary repair area corresponding to the location of the area to be repaired.
In an example, the preliminary repair area may be cut to obtain a set of second image blocks {p_s}, each second image block having a size of 32 × 32.
In an example, the normal area of the image to be processed I_m may be cut to obtain a plurality of first image blocks with a size of 32 × 32, and the texture library 24 of the image to be processed may be constructed according to the plurality of first image blocks.
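The construction of the texture library from the normal area can be sketched as below; keeping only patches that lie entirely inside the normal area, and the non-overlapping stride, are assumptions for illustration:

```python
import numpy as np

def build_texture_library(image, mask, patch=32, stride=32):
    """Cut the normal (unmasked) area into patch-by-patch first image
    blocks; mask is True where the image must be repaired."""
    height, width = mask.shape
    library = []
    for y in range(0, height - patch + 1, stride):
        for x in range(0, width - patch + 1, stride):
            # keep the block only if it contains no pixel to repair
            if not mask[y:y + patch, x:x + patch].any():
                library.append(image[y:y + patch, x:x + patch])
    return library
```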
In an example, for any second image block p_s, the second image block and the plurality of first image blocks in the texture library 24 may be input into the texture matching network 22, and the 4 first image blocks 221 matched with the texture of the second image block may be output.
In an example, each second image block p_s may be expanded outwards with the second image block as the center in the first repair image, so as to obtain a set of third image blocks, each third image block having a size of 96 × 96.
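Cropping a 96 × 96 third image block centered on a 32 × 32 second image block can be sketched as follows; shifting the crop window inwards at the image border is an assumption, since the text does not specify border handling:

```python
import numpy as np

def crop_context(image, center_y, center_x, outer=96):
    """Crop an outer-by-outer third image block centered on the second
    image block at (center_y, center_x), shifted inwards when the
    window would cross the image border (assumed border handling)."""
    height, width = image.shape[:2]
    half = outer // 2
    y0 = min(max(center_y - half, 0), height - outer)
    x0 = min(max(center_x - half, 0), width - outer)
    return image[y0:y0 + outer, x0:x0 + outer]
```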
In an example, the third image block corresponding to the second image block p_s and the 4 texture-matched first image blocks 221 may be input into the second repair network 23, so as to obtain the repair image block of the second image block p_s. As shown in fig. 2, the multi-level features of the first image blocks 221 are respectively fused with the multi-level features of the third image block.
In an example, the plurality of second image blocks are processed respectively, resulting in a plurality of repair image blocks 231; and splicing the plurality of repaired image blocks 231 with the normal area of the image to be processed to obtain a second repaired image 25, thereby completing the whole image repairing process.
The neural network of the disclosed embodiments may be trained prior to application.
In one possible implementation, the method further includes: and training the neural network according to a preset training set, wherein the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
For example, a training set may be preset, and the training set includes a plurality of sample images and real images corresponding to the sample images. Real images in an existing image dataset, or real images obtained in other manners, may be selected and used; a partial area of each real image is occluded to obtain the corresponding sample image, so that each sample image includes a normal area and an area to be repaired. The present disclosure is not limited thereto.
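Constructing a sample image by occluding part of a real image can be sketched as follows; the rectangular mask and zero fill value are illustrative assumptions, as the disclosure does not fix the occlusion shape:

```python
import numpy as np

def make_sample(real_image, y, x, h, w, fill=0.0):
    """Occlude an h-by-w rectangle of a real image to create a sample
    image containing a normal area and an area to be repaired."""
    sample = real_image.copy()
    mask = np.zeros(real_image.shape[:2], dtype=bool)
    mask[y:y + h, x:x + w] = True
    sample[mask] = fill  # blank out the area to be repaired
    return sample, mask
```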
In a possible implementation manner, the step of training the neural network according to a preset training set may include:
inputting the sample image into the neural network for processing to obtain a preliminary repair image and a plurality of repair sample image blocks of the sample image;
splicing the normal area of the sample image and the plurality of repaired sample image blocks to obtain a repaired sample image of the sample image;
and training the neural network according to the preliminary repairing image of the sample image, the repairing sample image and the real image.
For example, sample images in the training set may be input into a first repairing network, so as to obtain a preliminary repairing image of the sample images, where the preliminary repairing image includes a preliminary repairing area; constructing a texture library according to a plurality of image blocks of a normal area of a sample image; and inputting any sample image block of the preliminary repair area and a plurality of image blocks of the normal area into a texture matching network to obtain at least one image block matched with the texture of the sample image block. In this way, each sample image block of the preliminary repair area is processed separately, and a first image block matched with the texture of each sample image block can be obtained.
In a possible implementation manner, the sample image blocks of the preliminary repair area are expanded outwards to obtain third image blocks; and inputting the third image block and the first image block with the matched texture into a second repairing network to obtain a corresponding repairing sample image block.
In a possible implementation manner, the normal area of the sample image and the plurality of repaired sample image blocks are spliced to obtain a repaired sample image of the sample image; further, the neural network may be trained according to a preliminary repair image of a sample image, the repair sample image, and the real image. In this way, a training process of the neural network can be achieved.
In one possible implementation, the step of training the neural network according to the preliminary repair image of the sample image, the repair sample image, and the real image includes:
determining the initial repair loss of the neural network according to the initial repair image and the real image;
determining the blocking loss of the neural network according to the plurality of repaired sample image blocks of the repaired sample image and the plurality of real image blocks of the real image;
determining the overall image loss of the neural network according to the repaired sample image and the real image;
and training the neural network according to the primary repair loss, the block loss and the image overall loss.
In one aspect, according to the difference between the preliminary repair image and the real image, the loss of the first repair network, that is, the preliminary repair loss L_recon of the neural network, may be determined. The preliminary repair loss may be, for example, an L1 loss. The present disclosure does not limit the choice of the loss function.
In another aspect, according to the difference between each repair sample image block of the repair sample image and the real image block at the corresponding position of the real image, the repair loss of each image block, that is, the blocking loss L_ps of the neural network, may be determined. The blocking loss may include an L1 loss, a perceptual loss L_percep, and the like. The present disclosure does not limit the choice of the loss function.
In still another aspect, according to the difference between the repair sample image and the real image, the repair loss of the whole image, that is, the image overall loss L_blend of the neural network, may be determined. The image overall loss may include a boundary smoothing loss L_tv (a total variation loss) for removing boundary artifacts and maintaining consistency between adjacent blocks. The present disclosure does not limit the choice of the loss function.
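The boundary smoothing loss L_tv can be illustrated by a standard total-variation penalty, which sums absolute differences between neighboring pixels and therefore penalizes visible seams between spliced blocks; this is one common formulation, not necessarily the exact one used:

```python
import numpy as np

def tv_loss(img):
    """Total-variation loss: sum of absolute vertical and horizontal
    differences between adjacent pixels of an (H, W[, C]) image."""
    d_vert = np.abs(img[1:, :] - img[:-1, :]).sum()
    d_horiz = np.abs(img[:, 1:] - img[:, :-1]).sum()
    return d_vert + d_horiz
```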
In one possible implementation, a weighted sum of the preliminary repair loss, the block loss, and the overall image loss may be determined as an overall loss of the neural network; parameters of the neural network are inversely adjusted according to the overall loss. After multiple rounds of adjustment, the trained neural network can be obtained under the condition that the training condition (such as network convergence) is met. By the method, the network training effect can be improved, and the high-precision neural network can be obtained.
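The overall loss as a weighted sum can be sketched trivially; the weight values are hyperparameters that the disclosure does not specify:

```python
def overall_loss(l_recon, l_ps, l_blend, w_recon=1.0, w_ps=1.0, w_blend=1.0):
    """Weighted sum of the preliminary repair loss L_recon, the
    blocking loss L_ps, and the image overall loss L_blend; the
    weights are assumed hyperparameters."""
    return w_recon * l_recon + w_ps * l_ps + w_blend * l_blend
```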
In a possible implementation manner, the network training effect can be further improved by means of countermeasure training. The neural network according to the embodiment of the present disclosure may further include a first discriminant network,
the step of training the neural network according to a preset training set may further include:
inputting a plurality of real image blocks of a real image corresponding to the sample image and a repaired sample image block at a corresponding position into the first discrimination network respectively for processing to obtain a first discrimination result of the real image blocks and a second discrimination result of the repaired sample image blocks;
and countertraining the neural network according to the first judgment result and the second judgment result.
For example, a first discriminant network can be preset as a discriminant for the countermeasure training; and taking the first repairing network, the texture matching network and the second repairing network as generators of the countermeasure training.
In the training process, the real image blocks of the real image may be input into the first discrimination network to obtain the first discrimination result; and the repaired sample image blocks at the corresponding positions may be input into the first discrimination network to obtain the second discrimination result.
In one possible implementation, the blocking countermeasure loss of the neural network can be determined according to the first discrimination result and the second discrimination result; and respectively adjusting the parameters of the generator and the discriminator according to the block countermeasure loss, thereby realizing countermeasure training of the generator and the discriminator.
In the countermeasure training, the discriminator tries to distinguish the real image blocks from the repaired sample image blocks, while the generator tries to make the repaired sample image blocks indistinguishable from the real image blocks; the two promote each other, so that the accuracy of the generator and the discriminator is improved simultaneously.
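The blocking countermeasure (adversarial) objective can be sketched with a standard binary cross-entropy GAN loss over discriminator scores; the disclosure does not fix the adversarial loss form, so this is only one common choice, with hypothetical function names:

```python
import numpy as np

def _bce(pred, target):
    """Binary cross-entropy between discriminator scores in (0, 1)
    and a scalar target label."""
    eps = 1e-8
    return -(target * np.log(pred + eps)
             + (1 - target) * np.log(1 - pred + eps)).mean()

def discriminator_loss(scores_real, scores_fake):
    """The discriminator pushes real image blocks toward label 1 and
    repaired sample image blocks toward label 0."""
    return _bce(scores_real, 1.0) + _bce(scores_fake, 0.0)

def generator_loss(scores_fake):
    """The generator pushes repaired blocks to be scored as real."""
    return _bce(scores_fake, 1.0)
```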
In a possible implementation manner, the training the neural network according to a preset training set further includes:
determining a first data distribution of a plurality of image blocks of a normal area of the sample image and a second data distribution of the plurality of restored sample image blocks;
inputting the first data distribution and the second data distribution into the first discrimination network respectively for processing to obtain a third discrimination result and a fourth discrimination result;
and countertraining the neural network according to the first discrimination result, the second discrimination result, the third discrimination result and the fourth discrimination result.
For example, in order to make full use of the texture prior information and match the texture distribution of the normal area, the data distribution of the image blocks may be discriminated by the first discrimination network, so as to further improve the training effect.
In one possible implementation, according to a plurality of image blocks in a texture library of a sample image, a data distribution (referred to as a first data distribution) of the plurality of image blocks can be determined; from a plurality of repaired sample image blocks of the sample image, a data distribution (referred to as a second data distribution) of the plurality of repaired sample image blocks may be determined. The present disclosure does not limit the specific manner of calculation of the data distribution.
In a possible implementation manner, the first data distribution and the second data distribution may be respectively input to the first discrimination network for processing, so as to obtain a third discrimination result and a fourth discrimination result; furthermore, the blocking countermeasure loss of the neural network can be determined according to the first, second, third and fourth discrimination results; and respectively adjusting the parameters of the generator and the discriminator according to the block countermeasure loss, thereby realizing countermeasure training of the generator and the discriminator.
In the countermeasure training, the discriminator tries to distinguish the real image blocks from the repaired sample image blocks, while the generator tries to make the repaired sample image blocks indistinguishable from the real image blocks; meanwhile, the discriminator tries to distinguish the data distribution of the real image blocks from the data distribution of the repaired sample image blocks, and the generator tries to make the two data distributions indistinguishable. The two promote each other, so that the accuracy of the generator and the discriminator is improved simultaneously.
After the confrontation training, the repairing sample image blocks can be closer to the real image blocks, and the data distribution of the repairing sample image blocks is closer to the data distribution of the real image blocks, so that the network training effect is further improved.
In one possible implementation, the countermeasure training may be added to the preceding training process, that is, the blocking countermeasure loss is added to the blocking loss of the neural network, and a weighted sum of the blocking countermeasure loss, the L1 loss, and the perceptual loss L_percep is determined as the blocking loss of the neural network, thereby improving the network training effect.
In one possible implementation, the neural network according to an embodiment of the present disclosure may further include a second decision network,
the step of training the neural network according to a preset training set may further include:
inputting a real image corresponding to the sample image and the repaired sample image into the second judgment network respectively for processing to obtain a fifth judgment result of the real image and a sixth judgment result of the repaired sample image;
and countertraining the neural network according to the fifth judgment result and the sixth judgment result.
For example, a second decision network can be preset as a discriminator for the countermeasure training; and taking the first repairing network, the texture matching network and the second repairing network as generators of the countermeasure training.
In the training process, the real image may be input into the second judgment network to obtain the fifth judgment result; and the corresponding repaired sample image may be input into the second judgment network to obtain the sixth judgment result.
In one possible implementation, based on the fifth and sixth discrimination results, the overall countermeasure loss of the neural network can be determined; according to the overall confrontation loss, parameters of the generator and the discriminator are respectively adjusted, so that the confrontation training of the generator and the discriminator is realized.
In one possible implementation, the countermeasure training may be added to the preceding training process, that is, the overall countermeasure loss is added to the image overall loss of the neural network, and a weighted sum of the overall countermeasure loss and the boundary smoothing loss L_tv is determined as the image overall loss of the neural network, thereby further improving the network training effect.
According to the image restoration method of the embodiments of the present disclosure, on the basis of repairing an image according to global information, region texture information is extracted from smaller block-shaped regions, and high-quality image blocks are generated and spliced back into the original image, so that image restoration in both complex scenes and general scenes can be realized, with good generalization capability; in addition, the repaired image has a better texture effect, and the image restoration precision is improved.
According to the image restoration method disclosed by the embodiment of the disclosure, the useful texture block can be efficiently obtained in a texture library mode; the block processing mode is adopted, parallel execution can be realized, and the processing efficiency is improved; in the training, the network training effect can be improved by the countertraining, and the quality of the generated texture can be further improved.
The image restoration method according to the embodiment of the disclosure can be applied to scenes such as image restoration, image special effect production and the like, for example, background restoration in algorithms such as face beautification and human body slimming; and repairing watermark areas of the image and the like.
It can be understood that the above method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle and logic; due to space limitations, details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an image restoration apparatus, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any one of the image restoration methods provided by the present disclosure, and the corresponding technical solutions and descriptions and corresponding descriptions in the methods section are not repeated.
Fig. 3 shows a block diagram of an image restoration apparatus according to an embodiment of the present disclosure, the apparatus including, as shown in fig. 3:
the first repairing module 31 is configured to perform a preliminary repairing on an image to be processed to obtain a first repaired image, where the image to be processed includes a normal area and an area to be repaired, and the first repaired image includes a preliminary repairing area corresponding to the area to be repaired; a texture matching module 32, configured to determine, according to the plurality of first image blocks of the normal area and the plurality of second image blocks of the preliminary repair area, first image blocks that are matched with textures of the respective second image blocks, respectively; and a second repairing module 33, configured to repair the preliminary repairing area according to the first image block matched with the texture of each second image block, so as to obtain a second repairing image of the to-be-processed image.
In one possible implementation, the texture matching module includes: the similarity determining sub-module is used for respectively determining the similarities between the second image block and the plurality of first image blocks aiming at any second image block; and the matching sub-module is used for determining at least one first image block with the highest similarity as the first image block matched with the texture of the second image block.
In one possible implementation, the second repair module includes: the image block repairing sub-module is used for respectively repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block; and the first splicing submodule is used for splicing the normal area of the image to be processed and the repaired image blocks of the second image blocks to obtain the second repaired image.
In one possible implementation, the image block repair sub-module is configured to: for any second image block, performing feature extraction on a third image block corresponding to the second image block in the first restored image to obtain features of the third image block, wherein the size of the third image block is larger than that of the second image block; performing feature extraction on a first image block matched with the texture of the second image block to obtain the features of the first image block; fusing the characteristics of the third image block with the characteristics of the first image block to obtain fused characteristics of the second image block; and generating a repair image block of the second image block according to the fusion characteristics of the second image block.
In a possible implementation manner, the apparatus is implemented by a neural network, the neural network includes a first repair network, a texture matching network, and a second repair network, the first repair network is configured to perform a preliminary repair on an image to be processed, the texture matching network is configured to perform texture matching on the second image block and the first image block, the second repair network is configured to repair the preliminary repair area, and the apparatus further includes: the training module is used for training the neural network according to a preset training set, the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
In one possible implementation, the training module includes: the restoration sub-module is used for inputting the sample image into the neural network for processing to obtain a preliminary restoration image and a plurality of restoration sample image blocks of the sample image; the second splicing sub-module is used for splicing the normal area of the sample image and the plurality of repaired sample image blocks to obtain a repaired sample image of the sample image; and the training submodule is used for training the neural network according to the preliminary repair image of the sample image, the repair sample image and the real image.
In one possible implementation, the training submodule is configured to: determining the initial repair loss of the neural network according to the initial repair image and the real image; determining the blocking loss of the neural network according to the plurality of repaired sample image blocks of the repaired sample image and the plurality of real image blocks of the real image; determining the overall image loss of the neural network according to the repaired sample image and the real image; and training the neural network according to the primary repair loss, the block loss and the image overall loss.
In one possible implementation, the neural network further includes a first discriminant network, and the training module further includes: the first judgment sub-module is used for respectively inputting a plurality of real image blocks of a real image corresponding to the sample image and a repaired sample image block at a corresponding position into the first judgment network for processing to obtain a first judgment result of the real image blocks and a second judgment result of the repaired sample image blocks; and the first antagonistic training submodule is used for carrying out antagonistic training on the neural network according to the first judgment result and the second judgment result.
In one possible implementation, the training module further includes: the distribution determining sub-module is used for determining first data distribution of a plurality of image blocks of the normal area of the sample image and second data distribution of a plurality of repaired sample image blocks; the second judging submodule is used for respectively inputting the first data distribution and the second data distribution into the first judging network for processing to obtain a third judging result and a fourth judging result; and the second antagonistic training submodule is used for carrying out antagonistic training on the neural network according to the first judgment result, the second judgment result, the third judgment result and the fourth judgment result.
In one possible implementation, the neural network further includes a second decision network, and the training module further includes: a third judging submodule, configured to input the real image and the repaired sample image corresponding to the sample image into the second judging network, respectively, and process the real image and the repaired sample image to obtain a fifth judging result of the real image and a sixth judging result of the repaired sample image; and the third confrontation training submodule is used for confronting and training the neural network according to the fifth judgment result and the sixth judgment result.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product, which includes computer readable code, and when the computer readable code runs on a device, a processor in the device executes instructions for implementing the image inpainting method provided in any one of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the image inpainting method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 4 illustrates a block diagram of an electronic device 800 according to an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 4, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 5 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 5, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. An image restoration method, comprising:
performing a preliminary repair on an image to be processed to obtain a first repaired image, wherein the image to be processed comprises a normal area and an area to be repaired, and the first repaired image comprises a preliminary repair area corresponding to the area to be repaired;
respectively determining a first image block matched with the texture of each second image block according to the plurality of first image blocks of the normal area and the plurality of second image blocks of the preliminary repair area;
and repairing the preliminary repair area according to the first image blocks matched with the textures of the second image blocks, to obtain a second repair image of the image to be processed.
2. The method according to claim 1, wherein the determining, according to the plurality of first image blocks of the normal area and the plurality of second image blocks of the preliminary repair area, the first image block that matches the texture of each of the plurality of second image blocks, respectively, comprises:
for any second image block, respectively determining the similarity between the second image block and the plurality of first image blocks;
and determining at least one first image block with the highest similarity as the first image block matched with the texture of the second image block.
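The matching step of claim 2 can be sketched as a nearest-neighbour search over image blocks. The cosine similarity over flattened pixel vectors used below is an illustrative assumption; the claim leaves the concrete similarity measure (and the features it is computed on) open.

```python
import numpy as np

def match_texture(second_block, first_blocks, k=1):
    """Return the indices of the k first image blocks (normal area) whose
    texture is most similar to the given second image block (preliminary
    repair area)."""
    q = second_block.ravel().astype(float)
    q /= np.linalg.norm(q) + 1e-8
    sims = []
    for block in first_blocks:
        v = block.ravel().astype(float)
        v /= np.linalg.norm(v) + 1e-8
        sims.append(float(q @ v))  # cosine similarity between the two blocks
    # keep the at-least-one first image block(s) with the highest similarity
    order = np.argsort(sims)[::-1]
    return [int(i) for i in order[:k]]
```

With k > 1 the function returns several texture-matched candidates, matching the claim's "at least one first image block with the highest similarity".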
3. The method according to claim 1 or 2, wherein the repairing the preliminary repair area according to the first image block matched with the texture of each second image block to obtain a second repair image of the image to be processed includes:
respectively repairing each second image block according to the first image block matched with the texture of each second image block to obtain a repaired image block of each second image block;
and splicing the normal area of the image to be processed and the repaired image blocks of the second image blocks to obtain the second repaired image.
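The splicing step of claim 3 amounts to pasting each repaired image block at its position and keeping the normal-area pixels from the original image. A minimal sketch, assuming axis-aligned square blocks at known (row, col) positions (the claim does not fix a particular tiling):

```python
import numpy as np

def stitch(image, mask, repaired_blocks, positions, block_size):
    """Compose the second repair image: paste each repaired image block at
    its (row, col) position, then restore normal-area pixels (mask == 0)
    exactly from the image to be processed."""
    out = image.copy()
    for (r, c), block in zip(positions, repaired_blocks):
        out[r:r + block_size, c:c + block_size] = block
    out[mask == 0] = image[mask == 0]  # the normal area is kept unchanged
    return out
```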
4. The method according to claim 3, wherein the respectively repairing each second image block according to the first image block matched with the texture of each second image block to obtain the repaired image block of each second image block comprises:
for any second image block, performing feature extraction on a third image block corresponding to the second image block in the first restored image to obtain features of the third image block, wherein the size of the third image block is larger than that of the second image block;
performing feature extraction on a first image block matched with the texture of the second image block to obtain the features of the first image block;
fusing the features of the third image block with the features of the first image block to obtain fusion features of the second image block;
and generating a repaired image block of the second image block according to the fusion features of the second image block.
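The per-block repair of claim 4 follows a fixed step order: extract context features from the larger third image block, extract texture features from the matched first image block, fuse the two, then generate. The sketch below keeps only that step order and takes the three learned operators as plain callables; in the patent they are realised by sub-networks, so the callables here are placeholders.

```python
import numpy as np

def repair_block(third_block, matched_first_block, extract, fuse, generate):
    """Claim-4 step order with placeholder operators for the sub-networks."""
    context_features = extract(third_block)           # features of the third image block
    texture_features = extract(matched_first_block)   # features of the matched first image block
    fused = fuse(context_features, texture_features)  # fusion features of the second image block
    return generate(fused)                            # repaired image block
```

A trivial instantiation (mean features, additive fusion, constant-fill generation) is enough to exercise the data flow, even though any real network would be far richer.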
5. The method according to any one of claims 1-4, wherein the method is implemented by a neural network, the neural network comprising a first repair network for performing the preliminary repair on the image to be processed, a texture matching network for performing texture matching between the second image blocks and the first image blocks, and a second repair network for repairing the preliminary repair area,
the method further comprises the following steps: and training the neural network according to a preset training set, wherein the training set comprises a plurality of sample images and real images corresponding to the sample images, and each sample image comprises a normal area and an area to be repaired.
6. The method of claim 5, wherein training the neural network according to a preset training set comprises:
inputting the sample image into the neural network for processing, to obtain a preliminary repair image and a plurality of repaired sample image blocks of the sample image;
splicing the normal area of the sample image and the plurality of repaired sample image blocks to obtain a repaired sample image of the sample image;
and training the neural network according to the preliminary repairing image of the sample image, the repairing sample image and the real image.
7. The method of claim 6, wherein the training the neural network from the preliminary repair image of the sample image, the repair sample image, and the real image comprises:
determining the preliminary repair loss of the neural network according to the preliminary repair image and the real image;
determining the blocking loss of the neural network according to the plurality of repaired sample image blocks of the repaired sample image and the plurality of real image blocks of the real image;
determining the overall image loss of the neural network according to the repaired sample image and the real image;
and training the neural network according to the preliminary repair loss, the block loss, and the overall image loss.
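Claim 7 trains with three loss terms: a preliminary repair loss, a block loss over repaired/real image-block pairs, and an overall image loss. A sketch of their combination, realising each term as mean absolute error (L1) with equal weights; both choices are assumptions, since the claim does not fix the individual loss forms.

```python
import numpy as np

def total_loss(prelim_img, repaired_blocks, real_blocks,
               repaired_img, real_img, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three losses of claim 7, each taken as L1 error."""
    preliminary_loss = np.abs(prelim_img - real_img).mean()
    block_loss = float(np.mean([np.abs(r - g).mean()
                                for r, g in zip(repaired_blocks, real_blocks)]))
    overall_loss = np.abs(repaired_img - real_img).mean()
    w1, w2, w3 = weights
    return w1 * preliminary_loss + w2 * block_loss + w3 * overall_loss
```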
8. The method of claim 6 or 7, wherein the neural network further comprises a first discrimination network, and wherein training the neural network according to a preset training set further comprises:
inputting a plurality of real image blocks of a real image corresponding to the sample image and a repaired sample image block at a corresponding position into the first discrimination network respectively for processing to obtain a first discrimination result of the real image blocks and a second discrimination result of the repaired sample image blocks;
and adversarially training the neural network according to the first discrimination result and the second discrimination result.
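The adversarial training of claim 8 plays the first discrimination result (real image blocks) against the second (repaired sample image blocks). A sketch of one common objective, the non-saturating GAN loss over discriminator probabilities in (0, 1); the claim itself does not name a specific adversarial loss, so this form is an assumption.

```python
import numpy as np

def adversarial_losses(d_real, d_fake):
    """Discriminator and generator losses from the two discrimination
    results (probabilities that a block is real)."""
    eps = 1e-8  # numerical guard against log(0)
    d_loss = -(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)).mean()
    g_loss = -np.log(d_fake + eps).mean()
    return float(d_loss), float(g_loss)
```

A confident, correct discriminator (d_real near 1, d_fake near 0) yields a small d_loss; the generator's loss falls as its repaired blocks start fooling the discriminator.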
9. The method of claim 8, wherein the training the neural network according to a preset training set further comprises:
determining a first data distribution of a plurality of image blocks of the normal area of the sample image and a second data distribution of the plurality of repaired sample image blocks;
inputting the first data distribution and the second data distribution into the first discrimination network respectively for processing to obtain a third discrimination result and a fourth discrimination result;
and adversarially training the neural network according to the first discrimination result, the second discrimination result, the third discrimination result, and the fourth discrimination result.
10. The method according to any one of claims 6-9, wherein the neural network further comprises a second discrimination network, and the training of the neural network according to a preset training set further comprises:
inputting the real image corresponding to the sample image and the repaired sample image into the second discrimination network respectively for processing, to obtain a fifth discrimination result of the real image and a sixth discrimination result of the repaired sample image;
and adversarially training the neural network according to the fifth discrimination result and the sixth discrimination result.
11. An image restoration apparatus, comprising:
the device comprises a first repairing module, a second repairing module and a third repairing module, wherein the first repairing module is used for performing primary repairing on an image to be processed to obtain a first repairing image, the image to be processed comprises a normal area and a region to be repaired, and the first repairing image comprises a primary repairing area corresponding to the region to be repaired;
the texture matching module is used for respectively determining a first image block matched with the texture of each second image block according to the plurality of first image blocks of the normal area and the plurality of second image blocks of the primary repair area;
and the second repairing module is used for repairing the preliminary repairing area according to the first image blocks matched with the textures of the second image blocks to obtain a second repairing image of the image to be processed.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 10.
13. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.
CN202010237090.1A 2020-03-30 2020-03-30 Image restoration method and device, electronic equipment and storage medium Active CN111445415B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010237090.1A CN111445415B (en) 2020-03-30 2020-03-30 Image restoration method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111445415A (en) 2020-07-24
CN111445415B CN111445415B (en) 2024-03-08

Family

ID=71649321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010237090.1A Active CN111445415B (en) 2020-03-30 2020-03-30 Image restoration method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111445415B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107145839A (en) * 2017-04-17 2017-09-08 努比亚技术有限公司 A kind of fingerprint image completion analogy method and its system
CN107993210A (en) * 2017-11-30 2018-05-04 北京小米移动软件有限公司 Image repair method, device and computer-readable recording medium
US20190295227A1 (en) * 2018-03-26 2019-09-26 Adobe Inc. Deep patch feature prediction for image inpainting
CN109584178A (en) * 2018-11-29 2019-04-05 腾讯科技(深圳)有限公司 Image repair method, device and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419179A (en) * 2020-11-18 2021-02-26 北京字跳网络技术有限公司 Method, device, equipment and computer readable medium for repairing image
CN112419179B (en) * 2020-11-18 2024-07-05 北京字跳网络技术有限公司 Method, apparatus, device and computer readable medium for repairing image
WO2022247702A1 (en) * 2021-05-28 2022-12-01 杭州睿胜软件有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113674176A (en) * 2021-08-23 2021-11-19 北京市商汤科技开发有限公司 Image restoration method and device, electronic equipment and storage medium
CN113674176B (en) * 2021-08-23 2024-04-16 北京市商汤科技开发有限公司 Image restoration method and device, electronic equipment and storage medium
WO2024179333A1 (en) * 2023-02-28 2024-09-06 北京字跳网络技术有限公司 Image processing method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN111445415B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN110674719B (en) Target object matching method and device, electronic equipment and storage medium
CN110647834B (en) Human face and human hand correlation detection method and device, electronic equipment and storage medium
CN111445415B (en) Image restoration method and device, electronic equipment and storage medium
CN111310616B (en) Image processing method and device, electronic equipment and storage medium
CN110287874B (en) Target tracking method and device, electronic equipment and storage medium
CN111462268B (en) Image reconstruction method and device, electronic equipment and storage medium
CN109816764B (en) Image generation method and device, electronic equipment and storage medium
CN109977847B (en) Image generation method and device, electronic equipment and storage medium
CN111553864B (en) Image restoration method and device, electronic equipment and storage medium
CN110944230B (en) Video special effect adding method and device, electronic equipment and storage medium
CN109711546B (en) Neural network training method and device, electronic equipment and storage medium
CN111242303B (en) Network training method and device, and image processing method and device
CN111435432B (en) Network optimization method and device, image processing method and device and storage medium
CN112219224B (en) Image processing method and device, electronic equipment and storage medium
CN109165738B (en) Neural network model optimization method and device, electronic device and storage medium
CN111310664B (en) Image processing method and device, electronic equipment and storage medium
CN111563138B (en) Positioning method and device, electronic equipment and storage medium
CN111311588B (en) Repositioning method and device, electronic equipment and storage medium
CN112529846A (en) Image processing method and device, electronic equipment and storage medium
CN110415258B (en) Image processing method and device, electronic equipment and storage medium
CN113012052A (en) Image processing method and device, electronic equipment and storage medium
CN109165722B (en) Model expansion method and device, electronic equipment and storage medium
CN113538310A (en) Image processing method and device, electronic equipment and storage medium
CN110929545A (en) Human face image sorting method and device
CN111553865B (en) Image restoration method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant