CN109658346B - Image restoration method and device, computer-readable storage medium and electronic equipment - Google Patents
- Publication number: CN109658346B
- Application number: CN201811347537.XA
- Authority
- CN
- China
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00 — Image enhancement or restoration
- G06T5/77 — Retouching; Inpainting; Scratch removal
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/00 — Indexing scheme for image analysis or image enhancement; G06T2207/20 — Special algorithmic details
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20212 — Image combination; G06T2207/20221 — Image fusion; Image merging
Abstract
The present disclosure relates to an image restoration method and apparatus, a computer-readable storage medium, and an electronic device. The method comprises: determining a first feature map of a damaged image; determining a missing-part feature map of a target region of the damaged image according to the first feature map, and obtaining a target feature map according to the first feature map and the missing-part feature map; and generating a restored image corresponding to the damaged image according to the target feature map. Because the image is repaired by explicitly determining the missing-part feature map of the damaged image, on the one hand the missing part of the damaged image can be located quickly and accurately during restoration, effectively improving the accuracy and precision of the restoration result; on the other hand, restoration can focus on the missing part of the damaged image, effectively improving the robustness and transferability of the restoration method and thereby the user experience.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image restoration method and apparatus, a computer-readable storage medium, and an electronic device.
Background
The main purpose of image restoration is to recover the missing parts of an image, and image restoration is widely used in real life. Existing image restoration approaches perform supervised training by computing the pixel-wise difference between the image restored by the network and the complete image; as a result, the trained restoration model has low robustness.
Disclosure of Invention
The purpose of the present disclosure is to provide a highly accurate image restoration method, image restoration apparatus, computer-readable storage medium, and electronic device.
In order to achieve the above object, according to a first aspect of the present disclosure, there is provided an image inpainting method, the method including:
determining a first feature map of a damaged image;
determining a missing part feature map of a target area of the damaged image according to the first feature map, and obtaining a target feature map according to the first feature map and the missing part feature map;
and generating a repair image corresponding to the damaged image according to the target feature map.
Optionally, the determining, according to the first feature map, a missing part feature map of the target region of the damaged image, and obtaining, according to the first feature map and the missing part feature map, a target feature map includes:
performing convolution processing on the first feature map to determine a second feature map of the damaged image;
based on the first feature map, performing attention feature extraction on a first target region of the damaged image, and determining a missing part feature map of the first target region, wherein the first target region is initially any one of a plurality of target regions;
determining a third feature map based on the second feature map and the missing part feature map of the first target region;
if the damaged image has a target region on which attention feature extraction has not been performed, determining the third feature map as a new first feature map, determining a target region on which attention feature extraction has not been performed as a new first target region, and returning to the step of performing convolution processing on the first feature map to determine a second feature map of the damaged image, until attention feature extraction has been performed on every target region in the damaged image;
and if the attention feature extraction is carried out on each target area, determining the third feature map determined at the last time as the target feature map.
Optionally, the performing attention feature extraction on a first target region of the damaged image based on the first feature map, and determining a feature map of a missing part of the first target region includes:
down-sampling the first feature map through a convolution layer and a pooling layer to obtain a down-sampled feature map;
and upsampling the downsampled feature map through the convolutional layer and the upsampling layer to obtain a missing part feature map of the first target area.
Optionally, the determining a third feature map based on the second feature map and the missing part feature map of the first target region includes:
multiplying each element of the image matrix corresponding to the second feature map by the element at the corresponding position of the image matrix corresponding to the missing-part feature map of the first target region to obtain a first matrix;
and determining the matrix obtained by adding the image matrix corresponding to the second feature map to the first matrix as the matrix corresponding to the third feature map.
According to a second aspect of the present disclosure, there is provided an image repair apparatus, the apparatus comprising:
the first determining module is used for determining a first feature map of the damaged image;
a second determining module, configured to determine a feature map of a missing part of the target region of the damaged image according to the first feature map, and obtain a target feature map according to the first feature map and the feature map of the missing part;
and the generating module is used for generating a repair image corresponding to the damaged image according to the target feature map.
Optionally, there are a plurality of target regions, and the second determining module is configured to:
performing convolution processing on the first feature map to determine a second feature map of the damaged image;
based on the first feature map, performing attention feature extraction on a first target region of the damaged image, and determining a missing part feature map of the first target region, wherein the first target region is initially any one of a plurality of target regions;
determining a third feature map based on the second feature map and the missing part feature map of the first target region;
if the damaged image has a target region on which attention feature extraction has not been performed, determining the third feature map as a new first feature map, determining a target region on which attention feature extraction has not been performed as a new first target region, and returning to the step of performing convolution processing on the first feature map to determine a second feature map of the damaged image, until attention feature extraction has been performed on every target region in the damaged image;
and if the attention feature extraction is carried out on each target area, determining the third feature map determined at the last time as the target feature map.
Optionally, the second determining module is configured to:
down-sampling the first feature map through a convolution layer and a pooling layer to obtain a down-sampled feature map;
and upsampling the downsampled feature map through the convolutional layer and the upsampling layer to obtain a missing part feature map of the first target area.
Optionally, the second determining module is configured to:
multiplying each element of the image matrix corresponding to the second feature map by the element at the corresponding position of the image matrix corresponding to the missing-part feature map of the first target region to obtain a first matrix;
and determining the matrix obtained by adding the image matrix corresponding to the second feature map to the first matrix as the matrix corresponding to the third feature map.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods of the first aspect described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any of the first aspects above.
In the above technical solution, by acquiring the first feature map of the damaged image, determining the missing-part feature map of the damaged image based on the first feature map, and further determining the target feature map, the restored image corresponding to the damaged image can be generated from the target feature map. Because the image is repaired by explicitly determining the missing-part feature map of the damaged image, on the one hand the missing part of the damaged image can be located quickly and accurately, effectively improving the accuracy and precision of the restoration result; on the other hand, restoration can focus on the missing part of the damaged image, effectively improving the robustness and transferability of the restoration method, further ensuring the accuracy of the restoration, and improving the user experience.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow chart of an image inpainting method provided in accordance with one embodiment of the present disclosure;
FIG. 2 is a flow diagram of one exemplary implementation of determining a missing portion feature map of a target region of a damaged image from a first feature map and obtaining a target feature map from the first feature map and the missing portion feature map, provided in accordance with one embodiment of the present disclosure;
FIG. 3 is a block diagram of an image restoration apparatus provided in accordance with one embodiment of the present disclosure;
FIG. 4 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment;
FIG. 5 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a flowchart illustrating an image restoration method according to an embodiment of the present disclosure, where as shown in fig. 1, the method includes:
in S11, a first feature map of the broken image is determined.
The first feature map can be obtained by performing feature extraction on the damaged image through a first full convolution network. For example, the convolution kernel size of a full convolution network may be 3 x 3, step size 1, and padding 1. Alternatively, when the feature extraction is performed on the broken image, the original size of the broken image is not changed.
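As an illustrative sketch only (not part of the disclosure, which specifies the full convolution network only by its kernel size, stride, and padding), a single-channel 3 x 3 convolution with stride 1 and padding 1 preserves the spatial size of its input, since out = (H + 2·1 − 3)/1 + 1 = H:

```python
import numpy as np

def conv2d_same(x, kernel, pad=1, stride=1):
    """Naive single-channel 2D convolution. With a 3x3 kernel, pad=1 and
    stride=1 the output spatial size equals the input spatial size."""
    k = kernel.shape[0]
    xp = np.pad(x, pad)                       # zero-padding on both spatial dims
    h = (x.shape[0] + 2 * pad - k) // stride + 1
    w = (x.shape[1] + 2 * pad - k) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = xp[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.random.rand(8, 8)      # stand-in for one channel of the damaged image
kernel = np.random.rand(3, 3)     # 3x3 kernel, matching the example parameters
feat = conv2d_same(image, kernel)
assert feat.shape == image.shape  # the original size is unchanged
```

In a real implementation this would be a trained multi-channel convolutional network; the sketch only demonstrates the size-preserving property of the stated parameters.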
In S12, the missing part feature map of the target region of the damaged image is determined from the first feature map, and the target feature map is obtained from the first feature map and the missing part feature map.
The number of target regions can be one or more, and the determined target feature map is the feature map corresponding to the restored image. After the first feature map of the damaged image is extracted, the missing-part feature map of the target region is determined from the first feature map; it corresponds to the missing part of the damaged image. Repairing the image by directly determining the missing part of the damaged image effectively improves both the efficiency and the precision of the restoration.
In S13, a restored image corresponding to the damaged image is generated from the target feature map. After the target feature map corresponding to the restored image is determined, the restored image can be generated directly from it. Generating an image from a feature map is a well-known operation and is not described again here.
In the above technical solution, by acquiring the first feature map of the damaged image, determining the missing-part feature map of the damaged image based on the first feature map, and further determining the target feature map, the restored image corresponding to the damaged image can be generated from the target feature map. Because the image is repaired by explicitly determining the missing-part feature map of the damaged image, on the one hand the missing part of the damaged image can be located quickly and accurately, effectively improving the accuracy and precision of the restoration result; on the other hand, restoration can focus on the missing part of the damaged image, effectively improving the robustness and transferability of the restoration method, further ensuring the accuracy of the restoration, and improving the user experience.
To enable those skilled in the art to understand the technical solutions provided by the embodiments of the present disclosure, the above steps are described in detail below.
Optionally, one exemplary implementation of determining the missing-part feature map of the target region of the damaged image from the first feature map, and obtaining the target feature map from the first feature map and the missing-part feature map, is as follows, as shown in fig. 2:
in S21, the first feature map is subjected to convolution processing, and the second feature map of the damaged image is specified. The first feature map can be convolved by a second full convolution network, so that a second feature map is obtained. The first full convolutional network and the second full convolutional network may be the same convolutional neural network or different convolutional neural networks, which is not limited in this disclosure.
In S22, attention feature extraction is performed on a first target region of the damaged image based on the first feature map, and a missing-part feature map of the first target region is determined, where the first target region is initially any one of the plurality of target regions.
In this embodiment, feature extraction may be performed on the first target region of the damaged image by a neural network containing a self-attention mechanism. The neural network may be trained in advance so that, during feature extraction, it focuses its attention on the missing part of the first target region in the damaged image, thereby obtaining the missing-part feature map of the first target region.
Optionally, an exemplary implementation manner of performing attention feature extraction on the first target region of the damaged image based on the first feature map to determine the feature map of the missing part of the first target region is as follows, including:
and downsampling the first feature map through a convolution layer and a pooling layer to obtain a downsampled feature map.
And upsampling the downsampled feature map through the convolutional layer and the upsampling layer to obtain a missing part feature map of the first target area.
The convolutional layer extracts features, and the pooling layer compresses the input feature map: on the one hand, it reduces the size of the feature map and thus the computational complexity of the network; on the other hand, it compresses the features and retains the main ones. Therefore, when the first feature map is downsampled through the convolutional layer and the pooling layer, a downsampled feature map of the first target region in the image is obtained. The downsampled feature map can then be upsampled to a higher resolution through the convolutional layer and the upsampling layer to obtain the missing-part feature map of the first target region.
In the above-described aspect, the missing part of the first target region is captured while the first feature map is downsampled, and upsampling the downsampled feature map then yields the missing-part feature map of the first target region. Thus, the missing parts of the damaged image can be determined during both the feature-extraction and restoration stages, providing accurate data support for the restoration.
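The down-sampling and up-sampling steps above can be sketched as follows (a minimal single-channel illustration using 2 x 2 max pooling and nearest-neighbor up-sampling; the disclosure does not fix these particular operator sizes, so they are assumptions here):

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling: halves each spatial dimension."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbor up-sampling: doubles each spatial dimension."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

feat = np.arange(16, dtype=float).reshape(4, 4)  # stand-in first feature map
down = max_pool2(feat)   # compressed representation, shape (2, 2)
up = upsample2(down)     # restored to the original resolution, shape (4, 4)
assert down.shape == (2, 2)
assert up.shape == feat.shape
```

In the patented method the convolutional layers interleaved with these operators are trained so that the recovered high-resolution map highlights the missing part; the sketch shows only the resolution bookkeeping.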
In S23, a third feature map is determined based on the second feature map and the missing part feature map of the first target region.
Optionally, an exemplary implementation manner of determining the third feature map based on the second feature map and the missing part feature map of the first target region is as follows, including:
multiplying each element of the image matrix corresponding to the second feature map by the element at the corresponding position of the image matrix corresponding to the missing-part feature map of the first target region to obtain a first matrix;
and determining the matrix obtained by adding the image matrix corresponding to the second feature map to the first matrix as the matrix corresponding to the third feature map.
Illustratively, the matrix corresponding to the third feature map is determined by the following formula:
Q(x) = T(x) + T(x) ⊙ M(x)
wherein Q(x) represents the matrix corresponding to the third feature map;
T(x) represents the image matrix corresponding to the second feature map;
M(x) represents the image matrix corresponding to the missing-part feature map of the first target region, and ⊙ denotes element-wise multiplication.
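A minimal numeric illustration of this combination step (the matrix values below are hypothetical, chosen only to show the element-wise product and sum):

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # T(x): image matrix of the second feature map
M = np.array([[0.0, 1.0],
              [0.5, 0.0]])      # M(x): missing-part map, large where content is missing

first = T * M                   # first matrix: element-wise product
Q = T + first                   # third feature map: Q(x) = T(x) + T(x) * M(x)
assert np.allclose(Q, [[1.0, 4.0], [4.5, 4.0]])
```

Positions where M(x) is zero pass T(x) through unchanged, while positions flagged as missing are amplified, which is how the combination lets restoration focus on the missing part.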
If there is a target region in the damaged image for which attention feature extraction has not been performed, then in S24, the third feature map is determined as a new first feature map, a target region for which attention feature extraction has not been performed is determined as a new first target region, and the flow returns to step S21 to perform convolution processing on the first feature map and determine a second feature map of the damaged image, until attention feature extraction has been performed on every target region in the damaged image;
if the attention feature extraction has been performed for each target region, in S25, the third feature map determined last time is determined as the target feature map.
If a target region for which attention features have not been extracted exists in the damaged image, part of the target regions have not yet undergone image restoration, and restoration must continue on those regions. Once attention feature extraction has been performed on every target region in the damaged image, the missing part of every target region has been repaired, and the third feature map determined last can be taken as the target feature map.
Alternatively, when the damaged image has only one target region, the third feature map may be directly determined as the target feature map once it is determined. The way the third feature map is determined is described in detail above and is not repeated here.
In another embodiment, each target region may correspond to a sub-model that performs the steps of S21, S22, and S23; that is, the third feature map of the damaged image may be determined by the sub-model corresponding to the target region. The sub-models corresponding to the target regions can be connected in series, so that the currently determined third feature map is fed directly into the sub-model of the next target region to continue the restoration. The composite model formed by the sub-models of all target regions may be trained in advance: it is trained on pairs of damaged and complete images, and feedback training is performed using a loss value computed from the restored image and the complete image. This loss-based feedback training continuously updates the parameters of each sub-model so that it focuses on the missing part of its corresponding target region. The above embodiments are merely exemplary and do not limit the present disclosure.
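The serial sub-model arrangement can be sketched as follows (the sub-model internals and the per-region masks below are hypothetical stand-ins; in the disclosure each step involves a trained convolution network and attention network):

```python
import numpy as np

def submodel_step(first_feature_map, attention_map):
    """One sub-model: stands in for steps S21-S23."""
    T = first_feature_map            # stand-in for the convolved second feature map
    M = attention_map                # stand-in for the missing-part attention output
    return T + T * M                 # third feature map Q, fed to the next sub-model

feat = np.ones((4, 4))               # initial first feature map
masks = []
for i in range(3):                   # one hypothetical mask per target region
    m = np.zeros((4, 4))
    m[i, i] = 1.0                    # each region "attends" to a different location
    masks.append(m)

for m in masks:                      # sub-models connected in series:
    feat = submodel_step(feat, m)    # each output becomes the next first feature map

assert feat.shape == (4, 4)
assert feat[0, 0] == 2.0             # the region-0 location was boosted exactly once
```

The chain structure is the point: each region's restoration builds on the feature map already repaired for the previous regions, which is why the last third feature map serves as the target feature map.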
In the above technical solution, the missing-part feature map of each target region in the damaged image is determined, so that the damaged image is restored according to those feature maps. Moreover, extracting the missing-part feature map of each target region separately effectively ensures the accuracy of the feature maps, thereby ensuring the accuracy and precision of the restoration.
The present disclosure also provides an image restoration apparatus, as shown in fig. 3, the apparatus 10 including:
a first determining module 100, configured to determine a first feature map of the damaged image;
a second determining module 200, configured to determine a feature map of a missing part of the target area of the damaged image according to the first feature map, and obtain a target feature map according to the first feature map and the feature map of the missing part;
a generating module 300, configured to generate a repaired image corresponding to the damaged image according to the target feature map.
Optionally, there are a plurality of target regions, and the second determining module 200 is configured to:
performing convolution processing on the first feature map to determine a second feature map of the damaged image;
based on the first feature map, performing attention feature extraction on a first target region of the damaged image, and determining a missing part feature map of the first target region, wherein the first target region is initially any one of a plurality of target regions;
determining a third feature map based on the second feature map and the missing part feature map of the first target region;
if the damaged image has a target region on which attention feature extraction has not been performed, determining the third feature map as a new first feature map, determining a target region on which attention feature extraction has not been performed as a new first target region, and returning to the step of performing convolution processing on the first feature map to determine a second feature map of the damaged image, until attention feature extraction has been performed on every target region in the damaged image;
and if the attention feature extraction is carried out on each target area, determining the third feature map determined at the last time as the target feature map.
Optionally, the second determining module 200 is configured to:
down-sampling the first feature map through a convolution layer and a pooling layer to obtain a down-sampled feature map;
and upsampling the downsampled feature map through the convolutional layer and the upsampling layer to obtain a missing part feature map of the first target area.
Optionally, the second determining module 200 is configured to:
multiplying each element of the image matrix corresponding to the second feature map by the element at the corresponding position of the image matrix corresponding to the missing-part feature map of the first target region to obtain a first matrix;
and determining the matrix obtained by adding the image matrix corresponding to the second feature map to the first matrix as the matrix corresponding to the third feature map.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 4 is a block diagram illustrating an electronic device 700 according to an example embodiment. As shown in fig. 4, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 to complete all or part of the steps of the image restoration method. The memory 702 is used to store various types of data to support operation on the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia components 703 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the image restoration method described above.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the image inpainting method described above is also provided. For example, the computer readable storage medium may be the memory 702 described above including program instructions that are executable by the processor 701 of the electronic device 700 to perform the image inpainting method described above.
Fig. 5 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, the electronic device 1900 may be provided as a server. Referring to fig. 5, an electronic device 1900 includes a processor 1922, which may be one or more in number, and a memory 1932 for storing computer programs executable by the processor 1922. The computer program stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processor 1922 may be configured to execute the computer program to perform the image inpainting method described above.
Additionally, the electronic device 1900 may also include a power component 1926 and a communication component 1950. The power component 1926 may be configured to perform power management of the electronic device 1900, and the communication component 1950 may be configured to enable communication, e.g., wired or wireless communication, of the electronic device 1900. In addition, the electronic device 1900 may also include an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, and the like.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when the program instructions are executed by a processor, the steps of the image inpainting method described above are implemented. For example, the computer-readable storage medium may be the memory 1932 described above, including program instructions that are executable by the processor 1922 of the electronic device 1900 to perform the image inpainting method described above.
The preferred embodiments of the present disclosure are described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within the scope of its technical idea, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the specific features described in the above embodiments may be combined in any suitable manner without departing from the scope of the present disclosure. To avoid unnecessary repetition, the various possible combinations are not described separately in this disclosure.
In addition, the various embodiments of the present disclosure may be combined in any manner, and such combinations should likewise be regarded as content disclosed herein, as long as they do not depart from the spirit of the present disclosure.
Claims (6)
1. An image inpainting method, comprising:
determining a first feature map of the damaged image;
determining a missing part feature map of a target area of the damaged image according to the first feature map, and obtaining a target feature map according to the first feature map and the missing part feature map;
generating a repair image corresponding to the damaged image according to the target feature map;
the determining a plurality of target areas, determining a missing part feature map of the target area of the damaged image according to the first feature map, and obtaining a target feature map according to the first feature map and the missing part feature map includes:
performing convolution processing on the first feature map to determine a second feature map of the damaged image;
based on the first feature map, performing attention feature extraction on a first target region of the damaged image, and determining a missing part feature map of the first target region, wherein the first target region is initially any one of a plurality of target regions;
determining a third feature map based on the second feature map and the missing part feature map of the first target region;
if the damaged image has a target area on which attention feature extraction has not been performed, determining the third feature map as a new first feature map, determining a target area on which attention feature extraction has not been performed as a new first target area, and returning to the step of performing convolution processing on the first feature map to determine a second feature map of the damaged image, until attention feature extraction has been performed on all target areas in the damaged image;
if attention feature extraction has been performed on every target area, determining the most recently determined third feature map as the target feature map;
wherein the determining a third feature map based on the second feature map and the missing part feature map of the first target region comprises:
multiplying elements in the image matrix corresponding to the second feature map by elements at the corresponding positions in the image matrix corresponding to the missing part feature map of the first target area to obtain a first matrix;
and determining a matrix obtained by adding the image matrix corresponding to the second feature map and the first matrix as the matrix corresponding to the third feature map.
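The per-region loop and the element-wise combination recited in claim 1 can be sketched as follows. This is an illustrative NumPy sketch, not the patented implementation: `conv` and `attention` stand in for the convolution and attention-extraction networks, whose structure the claim does not specify.

```python
import numpy as np

def combine_feature_maps(second_fm, missing_fm):
    """Third feature map of claim 1: the element-wise product of the
    second feature map and the missing-part feature map (the "first
    matrix") is added back onto the second feature map."""
    first_matrix = second_fm * missing_fm   # element-wise multiplication
    return second_fm + first_matrix         # element-wise addition

def repair_features(first_fm, target_regions, conv, attention):
    """Iterative loop of claim 1: each pass convolves the current
    feature map, extracts a missing-part feature map for one
    not-yet-processed target region, combines the two, and feeds the
    result into the next pass as the new first feature map."""
    fm = first_fm
    for region in target_regions:
        second_fm = conv(fm)                # convolution processing
        missing_fm = attention(fm, region)  # attention feature extraction
        fm = combine_feature_maps(second_fm, missing_fm)
    return fm  # the last third feature map becomes the target feature map
```

Note that `second + second * missing` acts as a residual, attention-gated update: where the missing-part map is zero, the second feature map passes through unchanged.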
2. The method of claim 1, wherein said performing attention feature extraction on a first target region of said damaged image based on said first feature map, and determining a missing part feature map of said first target region comprises:
down-sampling the first feature map through a convolution layer and a pooling layer to obtain a down-sampled feature map; and
up-sampling the down-sampled feature map through a convolution layer and an up-sampling layer to obtain the missing part feature map of the first target area.
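The down-sample/up-sample path of claim 2 has an encoder-decoder shape. A minimal sketch, assuming 2×2 max pooling for the pooling layer and nearest-neighbour repetition for the up-sampling layer; the claim leaves kernel sizes and layer counts open, so the convolution layers are omitted here.

```python
import numpy as np

def max_pool_2x2(fm):
    """Pooling-layer step of the down-sampling path (2x2 window, stride 2)."""
    h, w = fm.shape
    return fm[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample_2x(fm):
    """Up-sampling-layer step: nearest-neighbour expansion to twice the size."""
    return fm.repeat(2, axis=0).repeat(2, axis=1)

def missing_part_feature_map(first_fm):
    """Claim 2: down-sample the first feature map, then up-sample the
    result back to input resolution as the missing-part feature map."""
    down = max_pool_2x2(first_fm)  # convolution layers omitted for brevity
    return upsample_2x(down)
```

Down-sampling enlarges the receptive field so the attention branch can aggregate context around the hole; up-sampling restores the map to the resolution of the first feature map so the element-wise combination of claim 1 is well-defined.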
3. An image restoration apparatus, characterized in that the apparatus comprises:
the first determining module is used for determining a first feature map of the damaged image;
a second determining module, configured to determine a feature map of a missing part of the target region of the damaged image according to the first feature map, and obtain a target feature map according to the first feature map and the feature map of the missing part;
the generating module is used for generating a repair image corresponding to the damaged image according to the target feature map;
wherein the target area is a plurality of target areas, and the second determining module is configured to:
performing convolution processing on the first feature map to determine a second feature map of the damaged image;
based on the first feature map, performing attention feature extraction on a first target region of the damaged image, and determining a missing part feature map of the first target region, wherein the first target region is initially any one of a plurality of target regions;
determining a third feature map based on the second feature map and the missing part feature map of the first target region;
if the damaged image has a target area on which attention feature extraction has not been performed, determining the third feature map as a new first feature map, determining a target area on which attention feature extraction has not been performed as a new first target area, and returning to the step of performing convolution processing on the first feature map to determine a second feature map of the damaged image, until attention feature extraction has been performed on all target areas in the damaged image;
if attention feature extraction has been performed on every target area, determining the most recently determined third feature map as the target feature map;
wherein the second determination module is to:
multiplying elements in the image matrix corresponding to the second feature map by elements at the corresponding positions in the image matrix corresponding to the missing part feature map of the first target area to obtain a first matrix;
and determining a matrix obtained by adding the image matrix corresponding to the second feature map and the first matrix as the matrix corresponding to the third feature map.
4. The apparatus of claim 3, wherein the second determining module is configured to:
down-sampling the first feature map through a convolution layer and a pooling layer to obtain a down-sampled feature map; and
up-sampling the down-sampled feature map through a convolution layer and an up-sampling layer to obtain the missing part feature map of the first target area.
5. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as claimed in claim 1 or 2.
6. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of claim 1 or 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811347537.XA CN109658346B (en) | 2018-11-13 | 2018-11-13 | Image restoration method and device, computer-readable storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109658346A CN109658346A (en) | 2019-04-19 |
CN109658346B true CN109658346B (en) | 2021-07-02 |
Family
ID=66110928
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811347537.XA Active CN109658346B (en) | 2018-11-13 | 2018-11-13 | Image restoration method and device, computer-readable storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109658346B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110288036B (en) * | 2019-06-28 | 2021-11-09 | 北京字节跳动网络技术有限公司 | Image restoration method and device and electronic equipment |
CN111639654B (en) * | 2020-05-12 | 2023-12-26 | 博泰车联网(南京)有限公司 | Image processing method, device and computer storage medium |
CN111738940B (en) * | 2020-06-02 | 2022-04-12 | 大连理工大学 | Eye filling method for face image |
CN111738958B (en) * | 2020-06-28 | 2023-08-22 | 字节跳动有限公司 | Picture restoration method and device, electronic equipment and computer readable medium |
CN114332334A (en) * | 2021-12-31 | 2022-04-12 | 中国电信股份有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971338A (en) * | 2014-05-06 | 2014-08-06 | 清华大学深圳研究生院 | Variable-block image repair method based on saliency map |
CN106204449A (en) * | 2016-07-06 | 2016-12-07 | 安徽工业大学 | A kind of single image super resolution ratio reconstruction method based on symmetrical degree of depth network |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9324138B2 (en) * | 2013-03-15 | 2016-04-26 | Eric Olsen | Global contrast correction |
CN106934397B (en) * | 2017-03-13 | 2020-09-01 | 北京市商汤科技开发有限公司 | Image processing method and device and electronic equipment |
CN107481192B (en) * | 2017-08-11 | 2021-08-24 | 北京市商汤科技开发有限公司 | Image processing method, image processing apparatus, storage medium, computer program, and electronic device |
CN107492082A (en) * | 2017-09-29 | 2017-12-19 | 西南石油大学 | A kind of MRF sample block image repair methods using edge statistics feature |
CN107945140A (en) * | 2017-12-20 | 2018-04-20 | 中国科学院深圳先进技术研究院 | A kind of image repair method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||