WO2021035979A1 - Image filling method and apparatus based on edge learning, terminal, and readable storage medium - Google Patents

Image filling method and apparatus based on edge learning, terminal, and readable storage medium

Info

Publication number
WO2021035979A1
WO2021035979A1 (PCT/CN2019/118150; Chinese application CN2019118150W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
convolution model
preset
edge
background
Prior art date
Application number
PCT/CN2019/118150
Other languages
French (fr)
Chinese (zh)
Inventor
王健宗
王义文
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021035979A1 publication Critical patent/WO2021035979A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/77 — Retouching; Inpainting; Scratch removal
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 — 2D [Two Dimensional] image generation
    • G06T11/40 — Filling a planar surface by adding surface attributes, e.g. colour or texture

Definitions

  • This application relates to the field of image processing, and in particular to an image filling method, apparatus, terminal, and readable storage medium based on edge learning.
  • Image restoration refers to the process of reconstructing missing or damaged parts of images and videos. In museums, for example, this work is often performed by experienced curators or art restoration engineers; in the digital world, image restoration, also called image interpolation or video interpolation, refers to the use of complex algorithms to replace lost or damaged image data, mainly in small areas and defects.
  • The main purpose of this application is to provide an image filling method, apparatus, terminal, and readable storage medium based on edge learning, aiming to solve the technical problem that existing image restoration cannot reconstruct ideal details.
  • To achieve the above objective, the present application provides an image filling method based on edge learning, which includes the following steps: determining a background grayscale image based on the grayscale image corresponding to the original image and the mask image corresponding to the original image; determining a background edge image based on the edge image corresponding to the original image and the mask image; generating a missing edge image based on the background grayscale image, the background edge image, the mask image, and a first preset dilated convolution model; and generating a filled image corresponding to the original image based on the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
  • In addition, the present application also provides a terminal. The terminal includes a processor, a memory, and computer-readable instructions stored on the memory and executable by the processor; when the computer-readable instructions are executed by the processor, the steps of the above image filling method based on edge learning are realized.
  • The present application also provides a readable storage medium having computer-readable instructions stored thereon; when executed, the computer-readable instructions implement the steps of any of the above image filling methods based on edge learning.
  • This application provides an image filling method based on edge learning. It determines a background grayscale image based on the grayscale image corresponding to the original image and the mask image corresponding to the original image, determines a background edge image based on the edge image corresponding to the original image and the mask image, generates a missing edge image based on the background grayscale image, the background edge image, the mask image, and a first preset dilated convolution model, and then generates a filled image corresponding to the original image based on the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
  • The two-stage adversarial model first restores the edges of the image to be reconstructed and then performs color filling, so that the filled area better reproduces the fine details of the image.
  • FIG. 1 is a schematic diagram of the hardware structure of an image filling terminal based on edge learning according to an embodiment of this application;
  • FIG. 2 is a schematic flowchart of a first embodiment of an image filling method based on edge learning according to this application;
  • FIG. 3 is a schematic flowchart of a second embodiment of an image filling method based on edge learning according to this application;
  • FIG. 4 is a schematic diagram of the functional modules of an image filling apparatus based on edge learning according to this application.
  • The image filling method based on edge learning in the embodiments of the present application is mainly applied to a terminal, which may be a device with display and processing functions such as a PC, a portable computer, or a mobile terminal.
  • FIG. 1 is a schematic diagram of the hardware structure of the edge learning image filling terminal involved in the solution of the embodiment of the application.
  • the edge learning image filling terminal may include a processor 1001 (for example, a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to realize the connection and communication between these components;
  • the user interface 1003 may include a display (Display), an input unit such as a keyboard (Keyboard);
  • The network interface 1004 may optionally include a standard wired interface or a wireless interface (such as a Wi-Fi interface);
  • The memory 1005 can be a high-speed RAM memory or a non-volatile memory, such as a disk memory.
  • The memory 1005 can optionally be a storage device independent of the aforementioned processor 1001.
  • Those skilled in the art can understand that the structure shown in FIG. 1 does not constitute a limitation on the terminal, which may include more or fewer components than shown in the figure, combine certain components, or use a different component arrangement.
  • the memory 1005 as a readable storage medium in FIG. 1 may include an operating system, a network communication module, and computer readable instructions.
  • The network communication module is mainly used to connect to the server and perform data communication with the server; the processor 1001 can call the computer-readable instructions stored in the memory 1005 and execute the image filling method based on edge learning provided by the embodiments of the present application.
  • the embodiment of the present application provides an image filling method of edge learning.
  • FIG. 2 is a schematic flowchart of a first embodiment of an image filling method for edge learning according to this application.
  • the image filling method for edge learning includes the following steps:
  • Step S10: Determine a background grayscale image based on the grayscale image corresponding to the original image and the mask image corresponding to the original image;
  • The original image is the image to be repaired, and there are missing pixels in the image;
  • Grayscale images are images with only one sampled color per pixel; this type of image is usually displayed as shades from the darkest black to the brightest white.
  • The mask image is an image derived from the original image: the pixel value of pixels in the missing area of the original image is defined as 1, the area outside the missing area is defined as the background, and the pixel value of pixels in the background is defined as 0. That is, each pixel value in the mask image is either 1 or 0.
  • The background grayscale image Ĩ_gray is obtained from the grayscale image I_gray and the mask image M as follows, where ⊙ represents the Hadamard product: Ĩ_gray = I_gray ⊙ (1 − M).
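The background-grayscale computation above can be sketched with NumPy; the image values and the mask below are illustrative toy data, not from the patent:

```python
import numpy as np

# Hypothetical 4x4 grayscale image and mask (1 = missing region, 0 = background).
I_gray = np.arange(1, 17, dtype=float).reshape(4, 4)
M = np.zeros((4, 4))
M[1:3, 1:3] = 1.0  # a 2x2 missing region in the middle

# Background grayscale image: elementwise (Hadamard) product with (1 - M)
I_gray_bg = I_gray * (1.0 - M)

print(I_gray_bg[0, 0])  # background pixel kept
print(I_gray_bg[1, 1])  # missing-region pixel zeroed
```

The same elementwise pattern applies to the background edge image of step S20.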
  • Step S20: Determine a background edge image based on the edge image corresponding to the original image and the mask image;
  • The background edge image C̃_gt is determined from the edge image C_gt and the mask image M as follows, where ⊙ represents the Hadamard product: C̃_gt = C_gt ⊙ (1 − M).
  • Step S30: Train a first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image;
  • Since the original image is an image with missing pixels, and the missing area also includes edges of the image, the missing edges of the image are repaired first.
  • The first preset dilated convolution model is a convolutional neural network.
  • Convolutional neural networks mimic the biological visual perception mechanism and can perform both supervised and unsupervised learning.
  • The parameter sharing of convolution kernels and the sparsity of connections between layers in the hidden layers enable a convolutional neural network to learn grid-like features, such as pixels and audio, with a small amount of computation, with stable effects and no additional feature engineering requirements on the data.
  • The background grayscale image, the background edge image, and the mask image are used as samples to train the first preset dilated convolution model until it converges; the output of the first preset dilated convolution model is the missing edge image.
  • The first discriminator adopts the PatchGAN architecture and only judges structures within the range of a patch.
  • The first preset dilated convolution model is represented by G_1 and the missing edge image by C_pred. With M denoting the mask image, the calculation formula is as follows: C_pred = G_1(Ĩ_gray, C̃_gt, M).
  • Step S40: Determine whether the first preset dilated convolution model converges based on a first discriminator, where the terminal includes the first discriminator;
  • Whether the missing edge image output during training of the first preset dilated convolution model is accurate needs to be further identified by the first discriminator.
  • When the first preset dilated convolution model converges, the output missing edge image is relatively accurate.
  • Specifically, step S40 includes:
  • Step a: Based on the first input data and the first discriminator, determine a first adversarial loss function corresponding to the first preset dilated convolution model, where the first input data includes the grayscale image, the edge image, and the missing edge image;
  • The loss function of the first preset dilated convolution model includes the adversarial loss function and the feature matching loss function.
  • The first discriminator is denoted by D_1. According to the grayscale image I_gray, the edge image C_gt, the missing edge image C_pred, and the first discriminator D_1, the first adversarial loss function L_adv,1 is calculated as follows: L_adv,1 = E_(C_gt, I_gray)[log D_1(C_gt, I_gray)] + E_(I_gray)[log(1 − D_1(C_pred, I_gray))].
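A minimal sketch of this adversarial loss, assuming scalar discriminator probabilities in place of a real PatchGAN D_1 (the score values are made up for illustration):

```python
import numpy as np

def adversarial_loss(d_real, d_fake, eps=1e-12):
    """L_adv = E[log D(real)] + E[log(1 - D(fake))], averaged over samples."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# A discriminator that scores real samples 1 and fake samples 0 gives a loss near 0.
loss = adversarial_loss([1.0, 1.0], [0.0, 0.0])
print(round(loss, 6))  # → 0.0
```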
  • Step b: Determine the feature matching loss function corresponding to the first preset dilated convolution model based on the second input data and the first discriminator, where the second input data includes the edge image and the missing edge image;
  • The feature matching loss function is represented by L_FM.
  • According to the edge image C_gt, the missing edge image C_pred, and the first discriminator D_1, the feature matching loss function L_FM is calculated as follows, where L is the number of convolutional layers of D_1, N_i is the number of elements in the i-th activation layer, and D_1^(i) denotes the activations of the i-th layer of D_1: L_FM = E[ Σ_{i=1..L} (1/N_i) ‖D_1^(i)(C_gt) − D_1^(i)(C_pred)‖_1 ].
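The feature-matching computation can be sketched as follows; the activation maps are random stand-ins for discriminator features, and the layer shapes are arbitrary:

```python
import numpy as np

def feature_matching_loss(acts_real, acts_fake):
    """Sum over layers of the L1 distance between activation maps, each
    normalized by the number of elements N_i in that layer."""
    total = 0.0
    for a_real, a_fake in zip(acts_real, acts_fake):
        n_i = a_real.size  # number of elements in the i-th activation layer
        total += np.abs(a_real - a_fake).sum() / n_i
    return total

rng = np.random.default_rng(0)
acts_real = [rng.standard_normal((8, 8)), rng.standard_normal((4, 4))]
acts_fake = [a + 0.1 for a in acts_real]  # slightly perturbed "fake" features

loss = feature_matching_loss(acts_real, acts_fake)
print(round(loss, 3))  # → 0.2 (each layer contributes a mean deviation of 0.1)
```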
  • Step c: Determine the loss function of the first preset dilated convolution model based on the first adversarial loss function and the feature matching loss function;
  • The loss function L_G1 of the first preset dilated convolution model is calculated from the first adversarial loss function L_adv,1 and the feature matching loss function L_FM as follows: L_G1 = min_G1 max_D1 (λ_adv,1 · L_adv,1 + λ_FM · L_FM),
  • where λ_adv,1 and λ_FM are regularization parameters.
  • Step d: Determine whether the first preset dilated convolution model converges based on its loss function.
  • After the loss function L_G1 of the first preset dilated convolution model is calculated, whether the model converges is determined from L_G1 according to a preset convergence judgment rule.
  • The preset convergence judgment rule is not limited in this application.
  • For example, the preset convergence judgment rule may be that, during training of the first preset dilated convolution model, when the difference between the loss function of the current training round and that of the previous round is less than a preset value, the model is determined to have converged.
  • When the difference between the loss function of the current round and that of the previous round is greater than the preset value, the first preset dilated convolution model is determined not to have converged; its parameters need to be updated, and the updated model continues to be trained.
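The convergence rule described here can be sketched as a simple loss-difference check; the loss values and the preset threshold below are illustrative:

```python
def has_converged(prev_loss, curr_loss, preset_value=1e-3):
    """Converged when consecutive training rounds change the loss by less
    than the preset value."""
    return abs(curr_loss - prev_loss) < preset_value

losses = [0.90, 0.42, 0.31, 0.3095]  # e.g. L_G1 after successive training rounds
converged = [has_converged(a, b) for a, b in zip(losses, losses[1:])]
print(converged)  # → [False, False, True]
```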
  • Step S50: When it is determined that the first preset dilated convolution model has converged, determine that the first preset dilated convolution model is the first target dilated convolution model;
  • When a model converges during training, training is finished; convergence is judged according to the model's loss function. Specifically, during training of the first preset dilated convolution model, if the difference between the loss function of the current round and that of the previous round is less than the preset value, the model is determined to have converged, and the current first preset dilated convolution model is used as the first target dilated convolution model.
  • Step S60: Generate a missing edge image based on the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model.
  • The first target dilated convolution model is a trained, convergent model.
  • Denoting the first target dilated convolution model by G_F1, the missing edge image C_pred is generated from the background grayscale image Ĩ_gray, the background edge image C̃_gt, and the mask image M as follows: C_pred = G_F1(Ĩ_gray, C̃_gt, M).
  • Step S70: Generate a filled image corresponding to the original image based on the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
  • After the missing edges are repaired, the image is further filled.
  • First, the overall edge is determined from the missing edge image and the background edge image, and then the areas inside and outside the edge are filled. Specifically, the filled image corresponding to the original image is generated from the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
  • Further, after step S40, the method also includes:
  • Step S80: When it is determined that the first preset dilated convolution model does not converge, update the first learning rate of the model according to a first preset rule;
  • During training of the first preset dilated convolution model, when the difference between the loss function of the current round and that of the previous round is greater than the preset value, the model is determined not to have converged. At this time, its parameters need to be updated, and the updated first preset dilated convolution model continues to be trained.
  • Because the learning rate determines how the model parameters change, the learning rate of the first preset dilated convolution model is updated according to the preset rule, and the model parameters are then adjusted.
  • Step S90: Update the first preset dilated convolution model based on the updated first learning rate, use the updated model as the first preset dilated convolution model, and return to the step of training a first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image.
  • That is, after the learning rate of the first preset dilated convolution model is updated according to the preset rule, the model parameters are updated according to the updated first learning rate.
  • Training then continues; that is, the step of training the first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image is performed again.
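Steps S80/S90 can be sketched as a control loop. The decay-factor update rule below is an assumption for illustration only, since the patent does not fix a specific first preset rule:

```python
def train_until_converged(losses, lr=0.1, decay=0.5, preset_value=1e-3):
    """`losses` stands in for the loss observed after each training round."""
    rounds = 0
    for prev, curr in zip(losses, losses[1:]):
        rounds += 1
        if abs(curr - prev) < preset_value:
            return rounds, lr  # converged: current model becomes the target model
        lr *= decay            # assumed preset rule: decay the learning rate
    return rounds, lr

rounds, final_lr = train_until_converged([0.9, 0.5, 0.4005, 0.4001])
print(rounds, final_lr)  # → 3 0.025
```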
  • The image filling method based on edge learning proposed in this embodiment restores the edges of the image to be reconstructed through a two-stage adversarial model and then performs color filling, so that the filled area better reproduces the fine details of the image.
  • Specifically, step S70 includes:
  • Step S71: Determine a composite edge image based on the missing edge image, the background edge image, and the mask image;
  • After the missing edge image is generated, the overall edge is determined from the missing edge image and the background edge image.
  • The composite edge image is represented by C_comp and is calculated as follows, where C_gt represents the edge image corresponding to the original image, M the mask image, C_pred the missing edge image, and ⊙ the Hadamard product: C_comp = C_gt ⊙ (1 − M) + C_pred ⊙ M.
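A toy NumPy sketch of the composite-edge formula, with made-up 3x3 edge maps:

```python
import numpy as np

# Toy edge maps: background edges come from C_gt, missing-region edges from C_pred.
C_gt = np.array([[1, 0, 1],
                 [0, 1, 0],
                 [1, 0, 1]], dtype=float)
C_pred = np.full((3, 3), 0.5)
M = np.zeros((3, 3))
M[1, 1] = 1.0  # a single missing pixel (mask is 1 inside the missing region)

# C_comp = C_gt ⊙ (1 - M) + C_pred ⊙ M
C_comp = C_gt * (1.0 - M) + C_pred * M
print(C_comp[1, 1], C_comp[0, 0])  # → 0.5 1.0
```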
  • Step S72: Train a second preset dilated convolution model based on the background image corresponding to the original image and the composite edge image;
  • The second preset dilated convolution model is also a convolutional neural network; its training samples are the background image corresponding to the original image and the composite edge image.
  • When the model converges, training stops; the output of the second preset dilated convolution model is the filled image corresponding to the original image. Whether the filled image output during training is accurate needs to be further identified by the second discriminator.
  • The second preset dilated convolution model is represented by G_2 and the filled image by I_pred.
  • The background image Ĩ_gt corresponding to the original image is obtained by the Hadamard product of the color image I_gt corresponding to the original image and (1 − M).
  • The filled image I_pred is calculated as follows: I_pred = G_2(Ĩ_gt, C_comp).
  • Step S73: Determine whether the second preset dilated convolution model converges based on the second discriminator and a preset convolutional neural network;
  • Like the first discriminator, the second discriminator adopts the PatchGAN architecture and only judges structures within the range of a patch.
  • Specifically, step S73 includes:
  • Step e: Based on the third input data and the second discriminator, determine a second adversarial loss function corresponding to the second preset dilated convolution model;
  • The loss function of the second preset dilated convolution model includes the L1 loss function, the second adversarial loss function, the perceptual loss function, and the style loss function.
  • The weight of the L1 loss function is a preset value determined by the actual situation.
  • The purpose of the L1 loss function is to ensure correct scaling.
  • The L1 loss function is standardized based on the size of the mask image M.
  • The second discriminator is denoted by D_2; the second adversarial loss function corresponding to the second preset dilated convolution model is determined according to the third input data and the second discriminator,
  • where the third input data includes the color image I_gt corresponding to the original image, the filled image I_pred, and the composite edge image C_comp.
  • The second adversarial loss function L_adv,2 is calculated from the third input data and the second discriminator D_2 as follows: L_adv,2 = E_(I_gt, C_comp)[log D_2(I_gt, C_comp)] + E_(C_comp)[log(1 − D_2(I_pred, C_comp))].
  • Step f: Determine the perceptual loss function corresponding to the second preset dilated convolution model based on the fourth input data and the preset convolutional neural network;
  • The perceptual loss function penalizes results that are not perceptually similar to the label by defining a distance metric between activation maps of a pre-trained network.
  • Let φ_i represent the activation map of the i-th layer of the preset convolutional neural network,
  • and N_i the number of elements in the i-th activation layer.
  • The fourth input data includes the color image I_gt corresponding to the original image and the filled image I_pred.
  • The perceptual loss function L_perc corresponding to the second preset dilated convolution model is calculated as follows: L_perc = E[ Σ_i (1/N_i) ‖φ_i(I_gt) − φ_i(I_pred)‖_1 ].
  • Step g: Determine a style loss function corresponding to the second preset dilated convolution model based on the fifth input data and the preset convolutional neural network;
  • The image Ĩ_pred is obtained by the Hadamard product of the filled image I_pred and (1 − M),
  • where M is the mask image;
  • likewise, the image Ĩ_gt is obtained by the Hadamard product of the color image I_gt corresponding to the original image and (1 − M).
  • The fifth input data includes Ĩ_pred and Ĩ_gt. The style loss function L_style corresponding to the second preset dilated convolution model is then calculated as follows, where G_j^φ is the Gram matrix constructed from the activation layer φ_j: L_style = E_j[ ‖G_j^φ(Ĩ_pred) − G_j^φ(Ĩ_gt)‖_1 ].
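The Gram-matrix construction behind the style loss can be sketched as follows; the activation maps are random stand-ins for the pretrained network's features, and the normalization is one common convention rather than a value fixed by the patent:

```python
import numpy as np

def gram_matrix(act):
    """Gram matrix of an activation map with shape (channels, height, width)."""
    c, h, w = act.shape
    flat = act.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(acts_pred, acts_gt):
    """Sum over layers of the L1 distance between Gram matrices."""
    return sum(np.abs(gram_matrix(p) - gram_matrix(g)).sum()
               for p, g in zip(acts_pred, acts_gt))

rng = np.random.default_rng(1)
acts_gt = [rng.standard_normal((2, 4, 4))]
print(style_loss(acts_gt, acts_gt))  # identical inputs give zero style loss
```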
  • Step h: Determine the loss function of the second preset dilated convolution model based on the second adversarial loss function, the perceptual loss function, and the style loss function;
  • The loss function L_G2 of the second preset dilated convolution model is determined from the L1 loss function L_L1, the second adversarial loss function L_adv,2, the perceptual loss function L_perc, and the style loss function L_style, where λ_L1, λ_adv,2, λ_p, and λ_s are regularization parameters: L_G2 = λ_L1 · L_L1 + λ_adv,2 · L_adv,2 + λ_p · L_perc + λ_s · L_style.
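The weighted combination can be sketched directly; the weight values below are illustrative placeholders, not values fixed by the patent:

```python
def total_loss_g2(l1, l_adv2, l_perc, l_style,
                  lam_l1=1.0, lam_adv2=0.1, lam_p=0.1, lam_s=250.0):
    """L_G2 as the weighted sum of the four component losses."""
    return lam_l1 * l1 + lam_adv2 * l_adv2 + lam_p * l_perc + lam_s * l_style

# Toy component values, purely for illustration.
loss = total_loss_g2(l1=0.2, l_adv2=0.5, l_perc=0.3, l_style=0.001)
print(round(loss, 3))  # → 0.53
```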
  • Step i: Determine whether the second preset dilated convolution model converges based on its loss function.
  • After the loss function L_G2 of the second preset dilated convolution model is calculated from the L1 loss function, the second adversarial loss function L_adv,2, the perceptual loss function L_perc, and the style loss function L_style, whether the model converges is determined from L_G2 according to the preset convergence judgment rule.
  • The preset convergence judgment rule is not limited in this application.
  • For example, during training of the second preset dilated convolution model, when the difference between the loss function of the current round and that of the previous round is less than the preset value, the model is determined to have converged.
  • When the difference between the loss function of the current round and that of the previous round is greater than the preset value, the second preset dilated convolution model is determined not to have converged; its parameters need to be updated, and the updated model is trained further.
  • Step S74: When it is determined that the second preset dilated convolution model has converged, determine that the second preset dilated convolution model is the second target dilated convolution model;
  • As with the first model, convergence marks the end of training and is judged according to the loss function. Specifically, during training of the second preset dilated convolution model, if the difference between the loss function of the current round and that of the previous round is less than the preset value, the model is determined to have converged, and the current second preset dilated convolution model is used as the second target dilated convolution model.
  • Step S75: Generate a filled image corresponding to the original image based on the background image corresponding to the original image, the composite edge image, and the second target dilated convolution model.
  • The second target dilated convolution model is a trained, convergent model.
  • The second target dilated convolution model is represented by G_F2,
  • and the color image corresponding to the original image is represented by I_gt.
  • The background image Ĩ_gt corresponding to the original image is obtained by the Hadamard product of I_gt and (1 − M).
  • The filled image I_pred is generated from the background image Ĩ_gt, the composite edge image C_comp, and the second target dilated convolution model G_F2 as follows: I_pred = G_F2(Ĩ_gt, C_comp).
  • Step S76: When it is determined that the second preset dilated convolution model does not converge, update the second learning rate of the model according to a second preset rule;
  • Because the learning rate determines how the model parameters change, the learning rate of the second preset dilated convolution model is updated according to the preset rule, and the model parameters are then adjusted.
  • Step S77: Update the second preset dilated convolution model based on the updated second learning rate, use the updated model as the second preset dilated convolution model, and return to the corresponding training step.
  • That is, after the learning rate of the second preset dilated convolution model is updated according to the preset rule, the model parameters are updated according to the updated second learning rate.
  • Training then continues; that is, the step of determining a composite edge image based on the missing edge image, the background edge image, and the mask image is performed again.
  • The image filling method based on edge learning proposed in this embodiment performs color filling after restoring the edges of the image to be reconstructed through the adversarial model, so that the filled area better reproduces the fine details of the image.
  • the embodiment of the present application also provides an image filling device for edge learning.
  • FIG. 4 is a schematic diagram of functional modules of an image filling device for edge learning in this application.
  • the image filling device for edge learning includes:
  • the first determining module 10 is configured to determine the background grayscale image based on the grayscale image corresponding to the original image and the mask image corresponding to the original image;
  • the second determining module 20 is configured to determine a background edge image based on the edge image corresponding to the original image and the mask image;
  • The training module 30 is configured to train a first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image;
  • The judging module 40 is configured to determine whether the first preset dilated convolution model converges based on the first discriminator;
  • The third determining module 50 is configured to, when it is determined that the first preset dilated convolution model converges, determine that the first preset dilated convolution model is the first target dilated convolution model;
  • The first generating module 60 is configured to generate a missing edge image based on the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model;
  • The second generating module 70 is configured to generate a filled image corresponding to the original image based on the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and the second preset dilated convolution model.
  • an embodiment of the present application also provides a readable storage medium, and the computer-readable storage medium may be a non-volatile readable storage medium.
  • the readable storage medium of the present application stores computer readable instructions, and when the computer readable instructions are executed by a processor, the steps of the above-mentioned edge learning image filling method are realized.
  • The technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, magnetic disk, or optical disk) and includes several instructions to make a terminal device (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) execute the method described in each embodiment of the present application.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Provided are an image filling method and apparatus based on edge learning, a terminal, and a readable storage medium, belonging to the field of image processing. The method comprises the following steps: determining a background grayscale image on the basis of a grayscale image corresponding to an original image and a mask image corresponding to the original image; then, determining a background edge image on the basis of an edge image corresponding to the original image and the mask image; generating a missing edge image on the basis of the background grayscale image, the background edge image, the mask image and a first preset dilated convolution model; and generating, on the basis of the background image corresponding to the original image, the background edge image, the missing edge image, the mask image and a second preset dilated convolution model, a filling image corresponding to the original image. Edge repairing of an image to be reconstructed is realized through a two-stage adversarial model, and color filling is then performed, so that a filled area better reproduces fine details of the image.

Description

边缘学习的图像填充方法、装置、终端及可读存储介质Edge learning image filling method, device, terminal and readable storage medium
本申请要求于2019年8月23日提交中国专利局、申请号为201910784055.9、发明名称为“边缘学习的图像填充方法、装置、终端及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在申请中。This application claims the priority of a Chinese patent application filed with the Chinese Patent Office on August 23, 2019, the application number is 201910784055.9, and the invention title is "Image filling method, device, terminal and readable storage medium for edge learning", all of which The content is incorporated in the application by reference.
技术领域Technical field
本申请涉及图像处理领域,尤其涉及一种边缘学习的图像填充方法、装置、终端及可读存储介质。This application relates to the field of image processing, and in particular to an image filling method, device, terminal and readable storage medium for edge learning.
背景技术Background technique
图像修复指重建图像和视频中丢失或损坏的部分的过程。例如在博物馆中,这项工作常由经验丰富的博物馆管理员或者艺术品修复师来进行;数码世界中,图像修复又称图像插值或视频插值,指利用复杂的算法来替换已丢失、损坏的图像数据,主要替换一些小区域和瑕疵。Image restoration refers to the process of reconstructing missing or damaged parts of images and videos. For example, in museums, this work is often performed by experienced museum administrators or art restoration engineers; in the digital world, image restoration is also called image interpolation or video interpolation, which refers to the use of complex algorithms to replace lost or damaged items. Image data mainly replaces some small areas and defects.
At present, existing methods repair an image either by pasting in an existing image patch or by diffusing color inward from the background. Both over-smooth the repaired region, leaving it blurred and failing to reconstruct the desired detail.
Summary
The main purpose of this application is to provide an image filling method, apparatus, terminal, and readable storage medium based on edge learning, aiming to solve the technical problem that existing image inpainting cannot reconstruct ideal details.
To achieve the above objective, this application provides an image filling method based on edge learning, which includes the following steps:
determining a background grayscale image based on the grayscale image corresponding to an original image and the mask image corresponding to the original image;
determining a background edge image based on the edge image corresponding to the original image and the mask image;
generating a missing edge image based on the background grayscale image, the background edge image, the mask image, and a first preset dilated convolution model; and
generating a filled image corresponding to the original image based on the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
In addition, to achieve the above objective, this application further provides a terminal, which includes a processor, a memory, and computer-readable instructions stored on the memory and executable by the processor, where the computer-readable instructions, when executed by the processor, implement the steps of the image filling method based on edge learning described above.
In addition, to achieve the above objective, this application further provides a readable storage medium storing computer-readable instructions that, when executed by a processor, implement the steps of any of the image filling methods based on edge learning described above.
This application provides an image filling method based on edge learning: a background grayscale image is determined from the grayscale image corresponding to an original image and the mask image corresponding to the original image; a background edge image is then determined from the edge image corresponding to the original image and the mask image; next, a missing edge image is generated from the background grayscale image, the background edge image, the mask image, and a first preset dilated convolution model; and finally, a filled image corresponding to the original image is generated from the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model. A two-stage adversarial model first repairs the edges of the image to be reconstructed and then performs color filling, so that the filled region better reproduces the fine details of the image.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware structure of the edge-learning image filling terminal involved in the solutions of the embodiments of this application;
FIG. 2 is a schematic flowchart of a first embodiment of the image filling method based on edge learning of this application;
FIG. 3 is a schematic flowchart of a second embodiment of the image filling method based on edge learning of this application;
FIG. 4 is a schematic diagram of the functional modules of the image filling apparatus based on edge learning of this application.
Detailed Description
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
The image filling method based on edge learning in the embodiments of this application is mainly applied to a terminal, which may be a device with display and processing functions, such as a PC, a portable computer, or a mobile terminal.
Referring to FIG. 1, FIG. 1 is a schematic diagram of the hardware structure of the edge-learning image filling terminal involved in the solutions of the embodiments of this application. In the embodiments of this application, the terminal may include a processor 1001 (for example, a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to realize connection and communication among these components; the user interface 1003 may include a display and an input unit such as a keyboard; the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface); the memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a disk memory, and may optionally be a storage device independent of the aforementioned processor 1001.
Those skilled in the art can understand that the hardware structure shown in FIG. 1 does not constitute a limitation on the terminal, which may include more or fewer components than shown, a combination of certain components, or a different arrangement of components.
Continuing to refer to FIG. 1, the memory 1005 in FIG. 1, as a readable storage medium, may include an operating system, a network communication module, and computer-readable instructions.
In FIG. 1, the network communication module is mainly used to connect to a server and exchange data with it, while the processor 1001 can call the computer-readable instructions stored in the memory 1005 and execute the image filling method based on edge learning provided by the embodiments of this application.
The embodiments of this application provide an image filling method based on edge learning.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the image filling method based on edge learning of this application.
In this embodiment, the image filling method based on edge learning includes the following steps:
Step S10: determine a background grayscale image based on the grayscale image corresponding to the original image and the mask image corresponding to the original image.
In this embodiment, the original image is the image to be repaired and contains missing pixels. The grayscale image has a single sampled intensity per pixel, typically displayed as shades from darkest black to brightest white. The mask image is derived from the original image: pixels in the missing region are assigned the value 1, the region outside the missing region is defined as the background, and background pixels are assigned the value 0; that is, every pixel of the mask image takes the value 1 or 0.
Specifically, for convenience of description, denote the original image by I_gt, the grayscale image by I_gray, and the mask image by M, where M_i = 0 denotes the background and M_i = 1 denotes the missing region. The background grayscale image, denoted Ĩ_gray, is determined from the grayscale image I_gray corresponding to the original image and the mask image M corresponding to the original image by the following formula, where ⊙ denotes the Hadamard product:

Ĩ_gray = I_gray ⊙ (1 − M)
The Hadamard product is an elementwise matrix operation: if A = (a_ij) and B = (b_ij) are two matrices of the same order, the matrix C = (c_ij) with c_ij = a_ij × b_ij is called the Hadamard product, or elementwise product, of A and B.
Step S20: determine a background edge image based on the edge image corresponding to the original image and the mask image.
In this embodiment, for convenience of description, denote the edge image corresponding to the original image by C_gt and the mask image by M, where M_i = 0 denotes the background and M_i = 1 denotes the missing region. The background edge image, denoted C̃_gt, is determined from the edge image C_gt corresponding to the original image and the mask image M by the following formula, where ⊙ denotes the Hadamard product:

C̃_gt = C_gt ⊙ (1 − M)
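As an illustrative sketch (not part of the patent text), the two masking operations above are plain elementwise multiplications; the array values below are arbitrary toy data chosen only to show how the mask zeroes out the missing region:

```python
import numpy as np

# Toy data: M == 1 marks the missing region, M == 0 the background.
# The Hadamard product is elementwise multiplication ("*" on numpy arrays).
I_gray = np.array([[0.2, 0.8],
                   [0.5, 0.9]])          # grayscale image I_gray
C_gt = np.array([[1, 0],
                 [0, 1]], dtype=float)   # edge image C_gt
M = np.array([[0, 1],
              [0, 0]], dtype=float)      # mask: top-right pixel is missing

I_gray_bg = I_gray * (1 - M)  # background grayscale image (missing pixels zeroed)
C_gt_bg = C_gt * (1 - M)      # background edge image

print(I_gray_bg)  # top-right entry becomes 0, the rest is unchanged
```

Both derived images keep the known background unchanged and carry zeros wherever the mask marks a hole, which is exactly the input the first model receives.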
Step S30: train a first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image.
In this embodiment, the original image has missing pixels, and the missing region also includes image edges; to repair the image, this application first repairs its missing edges. The first preset dilated convolution model is a convolutional neural network. Convolutional neural networks, modeled on the biological mechanism of visual perception, support both supervised and unsupervised learning; the parameter sharing of convolution kernels in their hidden layers and the sparsity of inter-layer connections allow them to learn grid-structured features such as pixels and audio with a small amount of computation, with stable results and no additional feature-engineering requirements on the data.
Specifically, the background grayscale image, the background edge image, and the mask image are used as samples to train the first preset dilated convolution model until it converges; the output of the model is the missing edge image. Whether the missing edge image output during training is accurate must be further judged by a first discriminator, which adopts the PatchGAN architecture and judges structure only within patch-sized regions. For convenience of description, denote the first preset dilated convolution model by G_1 and the missing edge image by C_pred; the calculation formula is as follows, where Ĩ_gray is the background grayscale image, M is the mask image, and C̃_gt is the background edge image:

C_pred = G_1(Ĩ_gray, C̃_gt, M)
Step S40: determine, based on a first discriminator, whether the first preset dilated convolution model has converged, where the terminal includes the first discriminator.
In this embodiment, whether the missing edge image output during the training of the first preset dilated convolution model is accurate must be further judged by the first discriminator. When the first preset dilated convolution model has converged, the output missing edge image is relatively accurate.
Specifically, step S40 includes:
Step a: determine, based on first input data and the first discriminator, a first adversarial loss function corresponding to the first preset dilated convolution model, where the first input data includes the grayscale image, the edge image, and the missing edge image.
In this embodiment, determining whether the first preset dilated convolution model has converged requires computing its loss function, which comprises an adversarial loss function and a feature-matching loss function.
Specifically, for convenience of description, denote the first discriminator by D_1. From the grayscale image I_gray, the edge image C_gt, the missing edge image C_pred, and the first discriminator D_1, the first adversarial loss function L_adv,1 is computed as follows:

L_adv,1 = E_(C_gt, I_gray)[log D_1(C_gt, I_gray)] + E_(I_gray)[log(1 − D_1(C_pred, I_gray))]
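As an illustrative sketch (an assumption for exposition, not the patent's implementation), the adversarial objective can be evaluated on toy discriminator scores; d_real and d_fake below stand in for D_1(C_gt, I_gray) and D_1(C_pred, I_gray):

```python
import math

# The discriminator outputs a probability in (0, 1) that its input is real.
# It maximizes log d_real + log(1 - d_fake); the generator G_1 works to
# minimize this same objective by making d_fake indistinguishable from real.
def adversarial_objective(d_real, d_fake):
    return math.log(d_real) + math.log(1.0 - d_fake)

strong = adversarial_objective(d_real=0.9, d_fake=0.1)  # discriminator confident
weak = adversarial_objective(d_real=0.5, d_fake=0.5)    # generator fooling it
print(strong > weak)  # True: a fooled discriminator scores lower
```

The expectation over real data in the formula above is approximated in practice by averaging this quantity over mini-batches.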
Step b: determine, based on second input data and the first discriminator, a feature-matching loss function corresponding to the first preset dilated convolution model, where the second input data includes the edge image and the missing edge image.
In this embodiment, for convenience of description, denote the feature-matching loss function by L_FM. From the edge image C_gt, the missing edge image C_pred, and the first discriminator D_1, L_FM is computed as follows, where L is the index of the final convolution layer of D_1, N_i is the number of elements in the i-th activation layer, and D_1^(i) is the activation of the i-th layer of D_1:

L_FM = E[ Σ_{i=1..L} (1/N_i) ‖ D_1^(i)(C_gt) − D_1^(i)(C_pred) ‖_1 ]
Step c: determine the loss function of the first preset dilated convolution model based on the first adversarial loss function and the feature-matching loss function.
In this embodiment, the loss function L_G1 of the first preset dilated convolution model is further computed from the first adversarial loss function L_adv,1 and the feature-matching loss function L_FM as follows:

L_G1 = λ_adv,1 L_adv,1 + λ_FM L_FM

where λ_adv,1 and λ_FM are regularization parameters; optionally, λ_adv,1 = 1 and λ_FM = 10.
Step d: determine, based on the loss function of the first preset dilated convolution model, whether the first preset dilated convolution model has converged.
In this embodiment, the loss function L_G1 of the first preset dilated convolution model is computed from the first adversarial loss function L_adv,1 and the feature-matching loss function L_FM, and whether the model has converged is determined from L_G1 according to a preset convergence criterion. The criterion is not limited in this application; optionally, during training of the first preset dilated convolution model, if the difference between the loss of the current training iteration and the loss of the previous one is smaller than a preset value, the model is judged to have converged; if the difference is larger than the preset value, the model is judged not to have converged, its parameters need to be updated, and training of the updated model continues.
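The convergence criterion above can be sketched as follows (an illustrative assumption: the threshold value and the loss sequence are placeholders, not values fixed by the patent):

```python
# Convergence rule: stop when the loss change between consecutive training
# iterations falls below a preset threshold; otherwise update the learning
# rate / model parameters and continue training.
def has_converged(prev_loss, curr_loss, threshold=1e-3):
    return abs(prev_loss - curr_loss) < threshold

losses = [0.90, 0.40, 0.1001, 0.1000]  # toy per-iteration losses
converged_at = None
for step in range(1, len(losses)):
    if has_converged(losses[step - 1], losses[step]):
        converged_at = step
        break

print(converged_at)  # 3: the last step changed the loss by only 0.0001
```

The same rule is reused for the second model in the second embodiment; only the loss function being monitored differs.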
Step S50: when it is determined that the first preset dilated convolution model has converged, take the first preset dilated convolution model as a first target dilated convolution model.
In this embodiment, during model training, convergence marks the end of training, and convergence is judged from the model's loss function. Specifically, during training of the first preset dilated convolution model, if the difference between the loss of the current iteration and that of the previous one is smaller than the preset value, the model is judged to have converged, and the current first preset dilated convolution model is taken as the first target dilated convolution model.
Step S60: generate a missing edge image based on the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model.
In this embodiment, the first target dilated convolution model is the trained, converged model; for convenience of description, denote it by G_F1. The missing edge image C_pred is then generated from the background grayscale image Ĩ_gray, the background edge image C̃_gt, the mask image M, and the first target dilated convolution model G_F1 by the following formula:

C_pred = G_F1(Ĩ_gray, C̃_gt, M)
Step S70: generate a filled image corresponding to the original image based on the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
In this embodiment, after the missing edge image is generated from the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model, the image is further filled. First, the overall edges are determined from the missing edge image and the background edge image, and then the image inside and outside those edges is filled. Specifically, the filled image corresponding to the original image is generated from the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and the second preset dilated convolution model.
Further, in an embodiment, after step S40, the method further includes:
Step S80: when it is determined that the first preset dilated convolution model has not converged, update the first learning rate of the first preset dilated convolution model according to a first preset rule.
In this embodiment, during training of the first preset dilated convolution model, when the difference between the loss of the current iteration and that of the previous one is larger than the preset value, the model is judged not to have converged; its parameters then need to be updated and training of the updated model continues. The learning rate determines the model parameters, so the learning rate of the first preset dilated convolution model is updated according to the preset rule, and its parameters are adjusted accordingly.
Step S90: update the first preset dilated convolution model based on the updated first learning rate, take the updated model as the first preset dilated convolution model, and return to the step of training the first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image.
In this embodiment, because the learning rate determines the model parameters, after the learning rate of the first preset dilated convolution model is updated according to the preset rule, the model itself, that is, its parameters, is further updated based on the updated first learning rate. Training of the first preset dilated convolution model then continues, that is, execution returns to the step of training the first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image.
The image filling method based on edge learning proposed in this embodiment repairs the edges of the image to be reconstructed through a two-stage adversarial model and then performs color filling, so that the filled region better reproduces the fine details of the image.
Based on the first embodiment, a second embodiment of the image filling method based on edge learning of this application is proposed. Referring to FIG. 3, in this embodiment step S70 includes:
Step S71: determine a composite edge image based on the missing edge image, the background edge image, and the mask image.
In this embodiment, after the missing edge image is generated from the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model, the overall edges are determined from the missing edge image and the background edge image. Specifically, for convenience of description, denote the composite edge image by C_comp; it is computed by the following formula, where C_gt is the edge image corresponding to the original image, M is the mask image, C_pred is the missing edge image, and ⊙ denotes the Hadamard product:

C_comp = C_gt ⊙ (1 − M) + C_pred ⊙ M
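The compositing step C_comp = C_gt ⊙ (1 − M) + C_pred ⊙ M can be sketched as follows (an illustrative example with toy arrays, not part of the patent text):

```python
import numpy as np

# Known edges fill the background (M == 0); predicted edges fill the hole (M == 1).
C_gt = np.array([[1, 0],
                 [0, 1]], dtype=float)    # edge image of the original image
C_pred = np.array([[0, 1],
                   [1, 0]], dtype=float)  # generator's predicted (missing) edges
M = np.array([[0, 1],
              [0, 0]], dtype=float)       # mask: top-right pixel is missing

C_comp = C_gt * (1 - M) + C_pred * M      # composite edge image
print(C_comp)  # background pixels keep C_gt; the hole takes C_pred
```

Only the predicted values inside the hole survive into C_comp; everywhere else the ground-truth edges pass through untouched, so the second model conditions on a complete edge map.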
Step S72: train a second preset dilated convolution model based on the background image corresponding to the original image and the composite edge image.
In this embodiment, the second preset dilated convolution model is also a convolutional neural network. Its training samples are the background image corresponding to the original image and the composite edge image, and training stops when the model converges; the output of the second preset dilated convolution model is the filled image corresponding to the original image. Whether the filled image output during training is accurate must be further judged by a second discriminator. For convenience of description, denote the second preset dilated convolution model by G_2 and the filled image by I_pred; applying the Hadamard product to the color image I_gt corresponding to the original image and (1 − M) yields the background image Ĩ_gt = I_gt ⊙ (1 − M) corresponding to the original image, and the filled image I_pred is computed as follows:

I_pred = G_2(Ĩ_gt, C_comp)
Step S73: determine, based on a second discriminator and a preset convolutional neural network, whether the second preset dilated convolution model has converged.
In this embodiment, whether the filled image output during the training of the second preset dilated convolution model is accurate must be further judged by the second discriminator, which adopts the PatchGAN architecture and judges structure only within patch-sized regions. When the second preset dilated convolution model has converged, the output filled image is relatively accurate.
Specifically, step S73 includes:
Step e: determine, based on third input data and the second discriminator, a second adversarial loss function corresponding to the second preset dilated convolution model.
In this embodiment, determining whether the second preset dilated convolution model has converged requires computing its loss function, which comprises an L1 loss function, the second adversarial loss function, a perceptual loss function, and a style loss function. The L1 loss is a preset quantity determined by the actual situation; its purpose is to ensure correct scaling, and it is normalized by the size of the mask image M.
Further, the second adversarial loss function is computed. For convenience of description, denote the second discriminator by D_2. The second adversarial loss function corresponding to the second preset dilated convolution model is determined from the third input data and the second discriminator, where the third input data includes the color image I_gt corresponding to the original image, the filled image I_pred, and the composite edge image C_comp. From the third input data and D_2, the second adversarial loss function L_adv,2 is computed as follows:

L_adv,2 = E_(I_gt, C_comp)[log D_2(I_gt, C_comp)] + E_(C_comp)[log(1 − D_2(I_pred, C_comp))]
Step f: determine, based on fourth input data and a preset convolutional neural network, the perceptual loss function corresponding to the second preset dilated convolution model.
In this embodiment, the perceptual loss function penalizes results that are dissimilar to the label by defining a distance measure between activation maps of a pre-trained network. For convenience of description, denote by φ_i the activation map of the i-th layer of the preset convolutional neural network and by N_i the number of elements in the i-th activation layer; the fourth input data includes the color image I_gt corresponding to the original image and the filled image I_pred. The perceptual loss function L_perc corresponding to the second preset dilated convolution model is computed as follows:

L_perc = E[ Σ_i (1/N_i) ‖ φ_i(I_gt) − φ_i(I_pred) ‖_1 ]
Step g: determine, based on fifth input data and the preset convolutional neural network, the style loss function corresponding to the second preset dilated convolution model.
In this embodiment, a Hadamard product of the filled image I_pred with (1 − M) yields Ĩ_pred = I_pred ⊙ (1 − M), where M is the mask image, M_i = 0 denotes the background, and M_i = 1 denotes the missing region. Likewise, a Hadamard product of the color image I_gt corresponding to the original image with (1 − M) yields Ĩ_gt = I_gt ⊙ (1 − M). The fifth input data comprises Ĩ_pred and Ĩ_gt. The style loss function L_style corresponding to the second preset dilated convolution model is then computed, where G_i^φ is the Gram matrix constructed from the activation layer φ_i, as follows:
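The Gram-matrix construction behind the style loss can be sketched as follows; the (C, H, W) activation layout and the normalization by the number of elements are common conventions and are assumptions here, not details stated in the application.

```python
import numpy as np

def gram_matrix(act):
    """G^phi_i: a C x C Gram matrix built from an activation map of
    shape (C, H, W); normalising by C*H*W is one common convention."""
    c, h, w = act.shape
    f = act.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feats_pred_masked, feats_gt_masked):
    """L_style: L1 distance between Gram matrices of the activations of
    the masked images I_pred ⊙ (1-M) and I_gt ⊙ (1-M)."""
    return sum(np.abs(gram_matrix(a) - gram_matrix(b)).sum()
               for a, b in zip(feats_pred_masked, feats_gt_masked))
```

Because the Gram matrix captures correlations between feature channels rather than their spatial arrangement, this loss compares texture statistics rather than pixel positions.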
Step h: determine the loss function of the second preset dilated convolution model based on the second adversarial loss function, the perceptual loss function, and the style loss function;
In this embodiment, the loss function L_G2 of the second preset dilated convolution model is further determined from the L1 loss function, the second adversarial loss function L_adv,2, the perceptual loss function L_perc, and the style loss function L_style, where λ_L1, λ_adv,2, λ_p and λ_s are regularization parameters; the formula is as follows:
$$L_{G2} = \lambda_{L1}\,L_{L1} + \lambda_{adv,2}\,L_{adv,2} + \lambda_{p}\,L_{perc} + \lambda_{s}\,L_{style}$$
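The weighted combination above reduces to a one-line helper. The default weights below are illustrative placeholders only; the application does not specify values for the regularization parameters.

```python
def generator_loss(l_l1, l_adv2, l_perc, l_style,
                   lam_l1=1.0, lam_adv=0.1, lam_p=0.1, lam_s=250.0):
    """L_G2 = lambda_L1*L_L1 + lambda_adv,2*L_adv,2
             + lambda_p*L_perc + lambda_s*L_style.
    The lambda defaults here are assumptions, not values from the
    application."""
    return (lam_l1 * l_l1 + lam_adv * l_adv2
            + lam_p * l_perc + lam_s * l_style)
```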
Step i: determine whether the second preset dilated convolution model has converged based on the loss function of the second preset dilated convolution model.
In this embodiment, the loss function L_G2 of the second preset dilated convolution model is computed from the L1 loss function, the second adversarial loss function L_adv,2, the perceptual loss function L_perc, and the style loss function L_style, and convergence of the second preset dilated convolution model is determined from L_G2 according to a preset convergence rule. The preset convergence rule is not limited in this application. Optionally, the rule is as follows: during training of the second preset dilated convolution model, if the difference between the loss of the current training iteration and the loss of the previous iteration is smaller than a preset value, the second preset dilated convolution model is judged to have converged; if the difference is greater than the preset value, the model is judged not to have converged, in which case its parameters need to be updated and training of the updated model continues.
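The optional convergence rule described above can be sketched as a simple predicate; the threshold value is an illustrative assumption, since the application leaves the preset value unspecified.

```python
def has_converged(loss_prev, loss_curr, eps=1e-4):
    """Optional convergence rule from the embodiment: the model is judged
    converged when the change in loss between consecutive training
    iterations falls below a preset threshold (eps is illustrative)."""
    return abs(loss_curr - loss_prev) < eps
```

A training loop would call this after each iteration and, on a False result, update the learning rate and model parameters before continuing.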
Step S74: when it is determined that the second preset dilated convolution model has converged, determine the second preset dilated convolution model to be a second target dilated convolution model;
In this embodiment, convergence of the model during training indicates that training is complete, and convergence is judged from the model's loss function. Specifically, during training of the second preset dilated convolution model, if the difference between the loss of the current training iteration and that of the previous iteration is smaller than the preset value, the second preset dilated convolution model is judged to have converged, and the current second preset dilated convolution model is taken as the second target dilated convolution model.
Step S75: generate the filled image corresponding to the original image based on the background image corresponding to the original image, the composite edge image, and the second target dilated convolution model.
In this embodiment, the second target dilated convolution model is a trained, converged model, denoted G_F2 for convenience of description, and the color image corresponding to the original image is denoted I_gt. A Hadamard product of I_gt with (1 − M) yields the background image Ĩ_gt = I_gt ⊙ (1 − M). The filled image I_pred is then generated from the background image Ĩ_gt, the composite edge image C_comp, and the second target dilated convolution model G_F2; the formula is as follows:

$$I_{pred} = G_{F2}\left(\tilde{I}_{gt},\, C_{comp}\right)$$
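The masking and generation step can be sketched as below; `g_f2` stands in for the trained second target dilated convolution model, and the array shapes and the pass-through generator used in the usage note are illustrative assumptions.

```python
import numpy as np

def background_image(i_gt, mask):
    """I_gt ⊙ (1 - M): Hadamard product that zeroes the missing region
    (M = 1) and keeps the background (M = 0)."""
    return i_gt * (1.0 - mask)

def fill_image(g_f2, i_gt, mask, c_comp):
    """I_pred = G_F2(I_gt ⊙ (1-M), C_comp); g_f2 is any callable
    standing in for the trained second target dilated convolution
    model."""
    return g_f2(background_image(i_gt, mask), c_comp)
```

For example, passing a trivial generator `lambda bg, edges: bg` returns the masked background unchanged, which makes the Hadamard masking easy to inspect.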
Step S76: when it is determined that the second preset dilated convolution model has not converged, update the second learning rate of the second preset dilated convolution model according to a second preset rule;
In this embodiment, during training of the second preset dilated convolution model, if the difference between the loss of the current training iteration and that of the previous iteration is greater than the preset value, the second preset dilated convolution model is judged not to have converged. In that case, its parameters must be updated and training of the updated model continues. Since the learning rate determines how the model parameters change, the learning rate of the second preset dilated convolution model is updated according to the preset rule, and the parameters of the second preset dilated convolution model are adjusted accordingly.
Step S77: update the second preset dilated convolution model based on the updated second learning rate, take the updated model as the second preset dilated convolution model, and continue executing the step of training the second preset dilated convolution model based on the background image corresponding to the original image and the composite edge image.
In this embodiment, since the learning rate determines the model parameters, after the learning rate of the second preset dilated convolution model is updated according to the preset rule, the second preset dilated convolution model is further updated based on the updated second learning rate, that is, the parameters of the second preset dilated convolution model are updated. Training of the second preset dilated convolution model then continues, that is, execution continues with the step of determining the composite edge image based on the missing edge image, the background edge image, and the mask image.
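One possible instance of the "second preset rule" for the learning rate is a fixed multiplicative decay; the decay factor below is purely an assumption, since the application deliberately leaves the rule open.

```python
def update_learning_rate(lr, decay=0.5):
    """A hypothetical 'second preset rule': decay the learning rate by a
    fixed factor when the model has not converged. The factor 0.5 is an
    illustrative assumption, not a value from the application."""
    return lr * decay
```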
The image filling method based on edge learning proposed in this embodiment uses an adversarial model to fill in color after the edges of the image to be reconstructed have been repaired, so that the filled region better reproduces the fine details of the image.
In addition, an embodiment of the present application further provides an image filling apparatus based on edge learning.
Referring to FIG. 4, FIG. 4 is a schematic diagram of the functional modules of the image filling apparatus based on edge learning of the present application.
In this embodiment, the image filling apparatus based on edge learning includes:
a first determining module 10, configured to determine a background grayscale image based on the grayscale image corresponding to the original image and the mask image corresponding to the original image;
a second determining module 20, configured to determine a background edge image based on the edge image corresponding to the original image and the mask image;
a training module 30, configured to train a first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image;
a judging module 40, configured to determine, based on a first discriminator, whether the first preset dilated convolution model has converged;
a third determining module 50, configured to determine, when it is determined that the first preset dilated convolution model has converged, that the first preset dilated convolution model is a first target dilated convolution model;
a first generating module 60, configured to generate a missing edge image based on the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model; and
a second generating module 70, configured to generate a filled image corresponding to the original image based on the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
In addition, an embodiment of the present application further provides a readable storage medium; the computer-readable storage medium may be a non-volatile readable storage medium.
The readable storage medium of the present application stores computer-readable instructions which, when executed by a processor, implement the steps of the image filling method based on edge learning described above.
For the method implemented when the computer-readable instructions are executed, reference may be made to the embodiments of the image filling method based on edge learning of the present application, which will not be repeated here.
It should be noted that, as used herein, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "including a..." does not exclude the existence of other identical elements in the process, method, article, or system that includes the element.
The serial numbers of the foregoing embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
Through the description of the foregoing implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to execute the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and do not thereby limit the patent scope of this application. Any equivalent structural or equivalent process transformation made using the content of the specification and drawings of this application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. An image filling method based on edge learning, applied to a terminal, wherein the image filling method based on edge learning comprises the following steps:
    determining a background grayscale image based on a grayscale image corresponding to an original image and a mask image corresponding to the original image;
    determining a background edge image based on an edge image corresponding to the original image and the mask image;
    training a first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image;
    determining, based on a first discriminator, whether the first preset dilated convolution model has converged, wherein the terminal comprises the first discriminator;
    when it is determined that the first preset dilated convolution model has converged, determining the first preset dilated convolution model to be a first target dilated convolution model;
    generating a missing edge image based on the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model; and
    generating a filled image corresponding to the original image based on a background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
  2. The image filling method based on edge learning according to claim 1, wherein after the step of determining, based on the first discriminator, whether the first preset dilated convolution model has converged, the method further comprises:
    when it is determined that the first preset dilated convolution model has not converged, updating a first learning rate of the first preset dilated convolution model according to a first preset rule; and
    updating the first preset dilated convolution model based on the updated first learning rate, taking the updated first preset dilated convolution model as the first preset dilated convolution model, and continuing to execute the step of training the first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image.
  3. The image filling method based on edge learning according to claim 1, wherein the step of determining, based on the first discriminator, whether the first preset dilated convolution model has converged comprises:
    determining, based on first input data and the first discriminator, a first adversarial loss function corresponding to the first preset dilated convolution model, wherein the first input data comprises the grayscale image, the edge image, and the missing edge image;
    determining, based on second input data and the first discriminator, a feature matching loss function corresponding to the first preset dilated convolution model, wherein the second input data comprises the edge image and the missing edge image;
    determining a loss function of the first preset dilated convolution model based on the first adversarial loss function and the feature matching loss function; and
    determining whether the first preset dilated convolution model has converged based on the loss function of the first preset dilated convolution model.
  4. The image filling method based on edge learning according to claim 1, wherein the terminal comprises a second discriminator, and the step of generating the filled image corresponding to the original image based on the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and the second preset dilated convolution model comprises:
    determining a composite edge image based on the missing edge image, the background edge image, and the mask image;
    training the second preset dilated convolution model based on the background image corresponding to the original image and the composite edge image;
    determining whether the second preset dilated convolution model has converged based on the second discriminator and a preset convolutional neural network;
    when it is determined that the second preset dilated convolution model has converged, determining the second preset dilated convolution model to be a second target dilated convolution model; and
    generating the filled image corresponding to the original image based on the background image corresponding to the original image, the composite edge image, and the second target dilated convolution model.
  5. The image filling method based on edge learning according to claim 4, wherein after the step of determining whether the second preset dilated convolution model has converged based on the second discriminator and the preset convolutional neural network, the method further comprises:
    when it is determined that the second preset dilated convolution model has not converged, updating a second learning rate of the second preset dilated convolution model according to a second preset rule; and
    updating the second preset dilated convolution model based on the updated second learning rate, taking the updated second preset dilated convolution model as the second preset dilated convolution model, and continuing to execute the step of training the second preset dilated convolution model based on the background image corresponding to the original image and the composite edge image.
  6. The image filling method based on edge learning according to claim 4, wherein the step of determining whether the second preset dilated convolution model has converged based on the second discriminator and the preset convolutional neural network comprises:
    determining, based on third input data and the second discriminator, a second adversarial loss function corresponding to the second preset dilated convolution model;
    determining, based on fourth input data and the preset convolutional neural network, a perceptual loss function corresponding to the second preset dilated convolution model;
    determining, based on fifth input data and the preset convolutional neural network, a style loss function corresponding to the second preset dilated convolution model;
    determining a loss function of the second preset dilated convolution model based on the second adversarial loss function, the perceptual loss function, and the style loss function; and
    determining whether the second preset dilated convolution model has converged based on the loss function of the second preset dilated convolution model.
  7. An image filling apparatus based on edge learning, wherein the image filling apparatus based on edge learning comprises:
    a first determining module, configured to determine a background grayscale image based on a grayscale image corresponding to an original image and a mask image corresponding to the original image;
    a second determining module, configured to determine a background edge image based on an edge image corresponding to the original image and the mask image;
    a training module, configured to train a first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image;
    a first judging module, configured to determine, based on a first discriminator, whether the first preset dilated convolution model has converged;
    a third determining module, configured to determine, when it is determined that the first preset dilated convolution model has converged, that the first preset dilated convolution model is a first target dilated convolution model;
    a first generating module, configured to generate a missing edge image based on the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model; and
    a second generating module, configured to generate a filled image corresponding to the original image based on a background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
  8. The image filling apparatus based on edge learning according to claim 7, wherein the image filling apparatus based on edge learning further comprises:
    a first updating module, configured to update a first learning rate of the first preset dilated convolution model according to a first preset rule when it is determined that the first preset dilated convolution model has not converged; and
    a first looping module, configured to update the first preset dilated convolution model based on the updated first learning rate, take the updated first preset dilated convolution model as the first preset dilated convolution model, and continue to execute the step of training the first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image.
  9. The image filling apparatus based on edge learning according to claim 7, wherein the judging module comprises:
    a first determining unit, configured to determine, based on first input data and the first discriminator, a first adversarial loss function corresponding to the first preset dilated convolution model, wherein the first input data comprises the grayscale image, the edge image, and the missing edge image;
    a second determining unit, configured to determine, based on second input data and the first discriminator, a feature matching loss function corresponding to the first preset dilated convolution model, wherein the second input data comprises the edge image and the missing edge image;
    a third determining unit, configured to determine a loss function of the first preset dilated convolution model based on the first adversarial loss function and the feature matching loss function; and
    a first judging unit, configured to determine whether the first preset dilated convolution model has converged based on the loss function of the first preset dilated convolution model.
  10. The image filling apparatus based on edge learning according to claim 7, wherein the second generating module comprises:
    a fourth determining unit, configured to determine a composite edge image based on the missing edge image, the background edge image, and the mask image;
    a training unit, configured to train the second preset dilated convolution model based on the background image corresponding to the original image and the composite edge image;
    a second judging unit, configured to determine whether the second preset dilated convolution model has converged based on a second discriminator and a preset convolutional neural network, wherein the terminal comprises the second discriminator;
    a fifth determining unit, configured to determine, when it is determined that the second preset dilated convolution model has converged, that the second preset dilated convolution model is a second target dilated convolution model; and
    a generating unit, configured to generate the filled image corresponding to the original image based on the background image corresponding to the original image, the composite edge image, and the second target dilated convolution model.
  11. The image filling apparatus based on edge learning according to claim 10, wherein the image filling apparatus based on edge learning further comprises:
    a second updating module, configured to update a second learning rate of the second preset dilated convolution model according to a second preset rule when it is determined that the second preset dilated convolution model has not converged; and
    a second looping module, configured to update the second preset dilated convolution model based on the updated second learning rate, take the updated second preset dilated convolution model as the second preset dilated convolution model, and continue to execute the step of training the second preset dilated convolution model based on the background image corresponding to the original image and the composite edge image.
  12. The image filling apparatus based on edge learning according to claim 10, wherein the image filling apparatus based on edge learning further comprises:
    a first calculating module, configured to determine, based on third input data and the second discriminator, a second adversarial loss function corresponding to the second preset dilated convolution model;
    a second calculating module, configured to determine, based on fourth input data and the preset convolutional neural network, a perceptual loss function corresponding to the second preset dilated convolution model;
    a third calculating module, configured to determine, based on fifth input data and the preset convolutional neural network, a style loss function corresponding to the second preset dilated convolution model;
    a fourth calculating module, configured to determine a loss function of the second preset dilated convolution model based on the second adversarial loss function, the perceptual loss function, and the style loss function; and
    a second judging module, configured to determine whether the second preset dilated convolution model has converged based on the loss function of the second preset dilated convolution model.
  13. A terminal, wherein the terminal comprises a processor, a memory, and computer-readable instructions stored on the memory and executable by the processor, wherein when the computer-readable instructions are executed by the processor, the following steps are implemented:
    determining a background grayscale image based on a grayscale image corresponding to an original image and a mask image corresponding to the original image;
    determining a background edge image based on an edge image corresponding to the original image and the mask image;
    training a first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image;
    determining, based on a first discriminator, whether the first preset dilated convolution model has converged, wherein the terminal comprises the first discriminator;
    when it is determined that the first preset dilated convolution model has converged, determining the first preset dilated convolution model to be a first target dilated convolution model;
    generating a missing edge image based on the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model; and
    generating a filled image corresponding to the original image based on a background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
14. The terminal according to claim 13, wherein after the step of determining, based on the first discriminator, whether the first preset dilated convolution model has converged, the method further comprises:
    when it is determined that the first preset dilated convolution model has not converged, updating a first learning rate of the first preset dilated convolution model according to a first preset rule;
    updating the first preset dilated convolution model based on the updated first learning rate, taking the updated first preset dilated convolution model as the first preset dilated convolution model, and returning to the step of training the first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image.
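The retrain-until-converged loop above can be sketched as follows. The claims do not specify the first preset rule, so a simple multiplicative decay is assumed here, and `train_step` and `converged` are hypothetical stand-ins for the real training and discriminator-based convergence check:

```python
def first_preset_rule(lr, decay=0.5):
    """Hypothetical first preset rule: halve the learning rate."""
    return lr * decay

def train_until_converged(train_step, converged, lr=1e-3, max_rounds=10):
    """Retrain the model, decaying the learning rate after every
    non-converged round, mirroring the loop in claim 14."""
    for _ in range(max_rounds):
        model = train_step(lr)
        if converged(model):
            return model, lr
        lr = first_preset_rule(lr)
    raise RuntimeError("model did not converge")

# toy stand-ins: 'training' just returns the lr; convergence needs lr < 2e-4
model, lr = train_until_converged(lambda lr: lr, lambda m: m < 2e-4)
print(lr)   # 0.000125
```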
15. The terminal according to claim 13, wherein the step of determining, based on the first discriminator, whether the first preset dilated convolution model has converged comprises:
    determining a first adversarial loss function corresponding to the first preset dilated convolution model based on first input data and the first discriminator, wherein the first input data comprises the grayscale image, the edge image, and the missing edge image;
    determining a feature matching loss function corresponding to the first preset dilated convolution model based on second input data and the first discriminator, wherein the second input data comprises the edge image and the missing edge image;
    determining a loss function of the first preset dilated convolution model based on the first adversarial loss function and the feature matching loss function;
    determining, based on the loss function of the first preset dilated convolution model, whether the first preset dilated convolution model has converged.
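The loss composition above can be sketched numerically. The claims do not fix the exact formulations, so a common generator-side GAN loss, an L1 feature matching term, and the weights `w_adv`/`w_fm` are all assumptions of this sketch:

```python
import numpy as np

def adversarial_loss(d_fake):
    """Generator-side adversarial loss (one common GAN choice;
    the claims do not fix the exact formulation)."""
    return float(-np.mean(np.log(d_fake + 1e-12)))

def feature_matching_loss(feats_real, feats_fake):
    """Mean L1 distance between discriminator feature maps of the
    real and generated edge images, averaged over layers."""
    return float(np.mean([np.mean(np.abs(r - f))
                          for r, f in zip(feats_real, feats_fake)]))

def edge_model_loss(d_fake, feats_real, feats_fake,
                    w_adv=1.0, w_fm=10.0):
    """Claim 15's combination: a weighted sum of the adversarial
    loss and the feature matching loss (weights are assumed)."""
    return (w_adv * adversarial_loss(d_fake)
            + w_fm * feature_matching_loss(feats_real, feats_fake))

# toy example: discriminator is unsure (0.5), features differ by 1.0
total = edge_model_loss(np.array([0.5]), [np.zeros(4)], [np.ones(4)])
print(round(total, 3))   # 10.693
```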
16. The terminal according to claim 13, wherein the terminal comprises a second discriminator, and the step of generating the filled image corresponding to the original image based on the background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and the second preset dilated convolution model comprises:
    determining a composite edge image based on the missing edge image, the background edge image, and the mask image;
    training the second preset dilated convolution model based on the background image corresponding to the original image and the composite edge image;
    determining, based on the second discriminator and a preset convolutional neural network, whether the second preset dilated convolution model has converged;
    when it is determined that the second preset dilated convolution model has converged, determining the second preset dilated convolution model to be a second target dilated convolution model;
    generating the filled image corresponding to the original image based on the background image corresponding to the original image, the composite edge image, and the second target dilated convolution model.
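The composite edge image in the first step above combines generated edges inside the hole with the original background edges outside it. A minimal sketch, again assuming a binary mask in which 1 marks the missing region:

```python
import numpy as np

def composite_edge(missing_edge, background_edge, mask):
    """Claim 16's composite edge image: generated edges inside the
    hole (mask == 1), original background edges elsewhere."""
    return missing_edge * mask + background_edge * (1 - mask)

bg   = np.array([[1., 0.], [0., 1.]])
gen  = np.array([[0., 1.], [1., 0.]])
mask = np.array([[0., 1.], [0., 0.]])   # only the top-right pixel is missing
comp = composite_edge(gen, bg, mask)
print(comp)   # [[1. 1.] [0. 1.]]
```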
17. The terminal according to claim 16, wherein after the step of determining, based on the second discriminator and the preset convolutional neural network, whether the second preset dilated convolution model has converged, the method further comprises:
    when it is determined that the second preset dilated convolution model has not converged, updating a second learning rate of the second preset dilated convolution model according to a second preset rule;
    updating the second preset dilated convolution model based on the updated second learning rate, taking the updated second preset dilated convolution model as the second preset dilated convolution model, and returning to the step of training the second preset dilated convolution model based on the background image corresponding to the original image and the composite edge image.
18. A readable storage medium, characterized in that computer-readable instructions are stored on the readable storage medium, wherein the computer-readable instructions, when executed by a processor, implement the following steps:
    determining a background grayscale image based on a grayscale image corresponding to an original image and a mask image corresponding to the original image;
    determining a background edge image based on an edge image corresponding to the original image and the mask image;
    training a first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image;
    determining, based on a first discriminator, whether the first preset dilated convolution model has converged;
    when it is determined that the first preset dilated convolution model has converged, determining the first preset dilated convolution model to be a first target dilated convolution model;
    generating a missing edge image based on the background grayscale image, the background edge image, the mask image, and the first target dilated convolution model;
    generating a filled image corresponding to the original image based on a background image corresponding to the original image, the background edge image, the missing edge image, the mask image, and a second preset dilated convolution model.
19. The computer-readable storage medium according to claim 18, wherein after the step of determining, based on the first discriminator, whether the first preset dilated convolution model has converged, the following steps are further implemented:
    when it is determined that the first preset dilated convolution model has not converged, updating a first learning rate of the first preset dilated convolution model according to a first preset rule;
    updating the first preset dilated convolution model based on the updated first learning rate, taking the updated first preset dilated convolution model as the first preset dilated convolution model, and returning to the step of training the first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image.
20. The computer-readable storage medium according to claim 18, wherein after the step of determining, based on the first discriminator, whether the first preset dilated convolution model has converged, the following steps are further implemented:
    when it is determined that the first preset dilated convolution model has not converged, updating a first learning rate of the first preset dilated convolution model according to a first preset rule;
    updating the first preset dilated convolution model based on the updated first learning rate, taking the updated first preset dilated convolution model as the first preset dilated convolution model, and returning to the step of training the first preset dilated convolution model based on the background grayscale image, the background edge image, and the mask image.
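Every claim above relies on a dilated (atrous) convolution model. A minimal one-dimensional sketch of the dilation idea — kernel taps spaced `dilation` samples apart, enlarging the receptive field without adding parameters — independent of any particular network architecture:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=2):
    """1-D dilated ('atrous') convolution over valid positions:
    kernel taps are spaced `dilation` samples apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field width
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(8, dtype=float)              # [0, 1, ..., 7]
out = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2)
print(out)   # [ 6.  9. 12. 15.] — each output sums x[i], x[i+2], x[i+4]
```

A real implementation would use a 2-D dilated convolution layer in a deep-learning framework; this sketch only illustrates the sampling pattern.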
PCT/CN2019/118150 2019-08-23 2019-11-13 Image filling method and apparatus based on edge learning, terminal, and readable storage medium WO2021035979A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910784055.9 2019-08-23
CN201910784055.9A CN111062877A (en) 2019-08-23 2019-08-23 Image filling method and device for edge learning, terminal and readable storage medium

Publications (1)

Publication Number Publication Date
WO2021035979A1 true WO2021035979A1 (en) 2021-03-04

Family

ID=70297456

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118150 WO2021035979A1 (en) 2019-08-23 2019-11-13 Image filling method and apparatus based on edge learning, terminal, and readable storage medium

Country Status (2)

Country Link
CN (1) CN111062877A (en)
WO (1) WO2021035979A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114266846A (en) * 2021-12-25 2022-04-01 福州大学 Self-learning filling method for target detection model

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN111476213A (en) * 2020-05-19 2020-07-31 武汉大势智慧科技有限公司 Method and device for filling covering area of shelter based on road image

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108460830A (en) * 2018-05-09 2018-08-28 厦门美图之家科技有限公司 Image repair method, device and image processing equipment
CN109978786A (en) * 2019-03-22 2019-07-05 北京工业大学 A kind of Kinect depth map restorative procedure based on convolutional neural networks
CN110009576A (en) * 2019-02-28 2019-07-12 西北大学 A kind of mural painting inpainting model is established and restorative procedure
CN110097110A (en) * 2019-04-26 2019-08-06 华南理工大学 A kind of semantic image restorative procedure based on objective optimization

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN109300128B (en) * 2018-09-29 2022-08-26 聚时科技(上海)有限公司 Transfer learning image processing method based on convolution neural network hidden structure


Non-Patent Citations (1)

Title
KAMYAR NAZERI; ERIC NG; TONY JOSEPH; FAISAL Z QURESHI; MEHRAN EBRAHIMI: "EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning", arXiv.org, 1 January 2019 (2019-01-01), pages 1-17, XP081010575 *


Also Published As

Publication number Publication date
CN111062877A (en) 2020-04-24

Similar Documents

Publication Publication Date Title
US20200258197A1 (en) Method for generating high-resolution picture, computer device, and storage medium
US20200342576A1 (en) Digital Image Completion by Learning Generation and Patch Matching Jointly
US11436775B2 (en) Predicting patch displacement maps using a neural network
CN108921782B (en) Image processing method, device and storage medium
WO2021139448A1 (en) Method and apparatus for correcting new model on basis of multiple source models, and computer device
WO2021035979A1 (en) Image filling method and apparatus based on edge learning, terminal, and readable storage medium
WO2021169740A1 (en) Image restoration method and apparatus, computer device, and storage medium
JP7207846B2 (en) Information processing device, information processing method and program
CN110874575A (en) Face image processing method and related equipment
CN111127309A (en) Portrait style transfer model training method, portrait style transfer method and device
CN110335330A (en) Image simulation generation method and its system, deep learning algorithm training method and electronic equipment
CN115984447A (en) Image rendering method, device, equipment and medium
KR20210116922A (en) Method and Device for Fast Adaptation through Meta-learning of Super Resolution Model
WO2024087946A1 (en) Image editing method and apparatus, computer device, and storage medium
WO2024041108A1 (en) Image correction model training method and apparatus, image correction method and apparatus, and computer device
CN116843566A (en) Tone mapping method, tone mapping device, display device and storage medium
CN113658091A (en) Image evaluation method, storage medium and terminal equipment
CN111798381A (en) Image conversion method, image conversion device, computer equipment and storage medium
CN115953597B (en) Image processing method, device, equipment and medium
CN111784726A (en) Image matting method and device
CN116823869A (en) Background replacement method and electronic equipment
CN114519753A (en) Image generation method, system, electronic device, storage medium and product
CN114663570A (en) Map generation method and device, electronic device and readable storage medium
US20200349744A1 (en) System and a method for providing color vision deficiency assistance
CN113223128A (en) Method and apparatus for generating image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19942646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19942646

Country of ref document: EP

Kind code of ref document: A1