WO2023072072A1 - Blurred image generation method and apparatus, and network model training method and apparatus - Google Patents

Blurred image generation method and apparatus, and network model training method and apparatus Download PDF

Info

Publication number
WO2023072072A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
blur kernel
blurred
model
Prior art date
Application number
PCT/CN2022/127384
Other languages
English (en)
Chinese (zh)
Inventor
董航
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023072072A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/60Image enhancement or restoration using machine learning, e.g. neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Definitions

  • The present application relates to the technical field of image processing, and in particular to a blurred image generation method, a network model training method, and corresponding apparatuses.
  • The video repair task is a key part of video quality enhancement.
  • At present, the most widely used approach to video repair is to obtain a training data set, train a network model on it to obtain a video repair network model, and then perform video repair through that model. Since the training data set directly determines the performance of the resulting video repair network model, how to obtain a training data set that is more consistent with real data has become one of the research hotspots in this field.
  • The method commonly used in the related art is to acquire a high-quality image set including multiple clear images, generate a blurred image for each clear image using degradation methods such as bicubic downsampling, and finally use the clear images in the high-quality image set together with their corresponding blurred images as the training data set.
  • However, blurred images obtained by degradation methods such as bicubic downsampling differ greatly from real blurred images. Because the blurred images in training data sets produced by the related-art generation method differ so much from real blurred images, the performance of the video repair network models trained on them is unsatisfactory.
  • In view of this, the present application provides a blurred image generation method, a network model training method, and corresponding apparatuses, which address the problem that the blurred images in training data sets obtained in the related art differ greatly from real blurred images.
  • In a first aspect, the embodiments of the present application provide a method for generating a blurred image, including: acquiring the blur kernel of each image in a first image set to generate a blur kernel pool, the first image set including a plurality of images with a resolution smaller than a first threshold; selecting, from the blur kernel pool, a blur kernel corresponding to each image in a second image set, the second image set including a plurality of images with a resolution greater than a second threshold; and degrading each image in the second image set through its corresponding blur kernel to acquire a blurred image corresponding to each image in the second image set.
  • In a second aspect, the embodiments of the present application provide a network model training method, including: acquiring a sample image set including a plurality of sample images with a resolution greater than a threshold resolution; obtaining the blurred image corresponding to each sample image in the sample image set through the blurred image generation method of the first aspect; generating a training data set from the sample images and their corresponding blurred images; and training, with the training data set, an image inpainting network model for inpainting blurred images.
  • In a third aspect, an embodiment of the present application provides a device for generating a blurred image, including:
  • an acquisition unit configured to acquire the blur kernel of each image in the first image set to generate a blur kernel pool, the first image set including a plurality of images with a resolution smaller than a first threshold;
  • a selection unit configured to select, from the blur kernel pool, a blur kernel corresponding to each image in a second image set, the second image set including a plurality of images with a resolution greater than a second threshold;
  • a processing unit configured to degrade each image in the second image set through its corresponding blur kernel and acquire a blurred image corresponding to each image in the second image set.
  • In a fourth aspect, an embodiment of the present application provides a network model training device, including:
  • an acquiring unit configured to acquire a sample image set, the sample image set including a plurality of sample images with a resolution greater than a threshold resolution;
  • a processing unit configured to obtain a blurred image corresponding to each sample image in the sample image set through the blurred image generation method described in the first aspect;
  • a generation unit configured to generate a training data set according to each sample image in the sample image set and the blurred image corresponding to each sample image;
  • a training unit configured to train a preset network model with the training data set to obtain an image repair network model for repairing blurred images.
  • In a fifth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is used to store a computer program and the processor is used to enable the electronic device to implement the method described in the first aspect or any implementation manner of the first aspect.
  • In a sixth aspect, the embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a computing device, the computing device implements the method described in the first aspect or any implementation manner of the first aspect.
  • In a seventh aspect, an embodiment of the present application provides a computer program product which, when run on a computer, enables the computer to implement the method described in the first aspect or any implementation manner of the first aspect.
  • The blurred image generation method provided in the embodiments of the present application first obtains the blur kernel of each image in the first image set (a plurality of images with resolutions smaller than the first threshold) to generate a blur kernel pool, then selects from the pool a blur kernel corresponding to each image in the second image set (a plurality of images with resolutions greater than the second threshold), and finally degrades each image in the second image set through its corresponding blur kernel to acquire the blurred image corresponding to each image in the second image set.
  • Because the blur kernels in the pool are extracted from real low-resolution images, the blurred images generated in this way are more consistent with real blurred images, so the embodiments of the present application can solve the problem that the blurred images acquired in the related art differ greatly from real blurred images.
  • FIG. 1 is a flow chart of the steps of the blurred image generation method provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the data flow of the blurred image generation method provided by an embodiment of the present application;
  • FIG. 3 is a model framework diagram of the blurred image generation method provided by an embodiment of the present application;
  • FIG. 4 is a flow chart of the steps of the network model training method provided by an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of a blurred image generation device provided by an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of a network model training device provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • In the embodiments of the present application, words such as “exemplary” or “for example” are used to mean serving as an example, instance, or illustration. Any embodiment or design scheme described as “exemplary” or “for example” in the embodiments of the present application shall not be interpreted as more preferred or more advantageous than other embodiments or design schemes. Rather, the use of words such as “exemplary” or “for example” is intended to present related concepts in a concrete manner.
  • In the embodiments of the present application, “plurality” means two or more.
  • the embodiment of the present application provides a method for generating a blurred image.
  • As shown in FIG. 1, the method for generating a blurred image includes the following steps S11 to S13.
  • S11: acquire the blur kernel of each image in a first image set to generate a blur kernel pool; the first image set includes a plurality of images with resolutions smaller than a first threshold.
  • The implementation process of the above step S11 may include: obtaining a plurality of low-resolution images with resolutions smaller than the first threshold to form the first image set, extracting the blur kernel of each low-resolution image in the first image set, and finally combining all the extracted blur kernels into a blur kernel pool (a sketch of this pool construction follows the notes below).
  • The first threshold in the embodiments of the present application can be determined according to the usage scenario of the video repair network model obtained from the final training: if that model is used to repair videos with lower definition, the first threshold is set smaller; if it is used to repair videos with higher definition, the first threshold can be set larger.
  • The first image set in the embodiments of the present application may consist of multiple mutually independent images, of multiple video frames sampled from the same video, or of all the image frames of the same video; this is not limited in the embodiments of the present application, as long as the resolutions of the images in the first image set are all smaller than the first threshold.
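  • To make step S11 concrete, here is a minimal Python sketch of building the blur kernel pool; `estimate_blur_kernel` is a hypothetical stand-in for the per-image kernel estimator (for example, the DIP-FKP procedure described below) and is not an API named by the patent.

```python
import numpy as np

def build_blur_kernel_pool(first_image_set, estimate_blur_kernel):
    """Step S11 (sketch): extract one blur kernel per low-resolution image
    and combine all extracted kernels into a blur kernel pool."""
    kernel_pool = []
    for image in first_image_set:              # images with resolution < first threshold
        kernel = estimate_blur_kernel(image)   # hypothetical estimator, returns a 2-D array
        kernel = kernel / kernel.sum()         # normalize so the kernel sums to 1
        kernel_pool.append(kernel)
    return kernel_pool
```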
  • S12: select, from the blur kernel pool, a blur kernel corresponding to each image in a second image set; the second image set includes a plurality of images with a resolution greater than a second threshold.
  • In some embodiments, for each image in the second image set, one blur kernel is randomly selected from the blur kernel pool as its corresponding blur kernel.
  • In some embodiments, the above step S13 (degrading each image in the second image set through the blur kernel corresponding to that image, and obtaining the blurred image corresponding to each image in the second image set) includes: degrading each image through the first formula below.
  • The first formula is:
  • I_x = Deg(J_x, K_x) + N
  • where J_x represents an image in the second image set, I_x represents the blurred image corresponding to the image J_x, K_x represents the blur kernel corresponding to the image J_x, Deg(J_x, K_x) represents degrading J_x through K_x, and N represents additional noise added to I_x.
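  • As a concrete reading of the first formula, the sketch below implements Deg as per-channel convolution with the blur kernel and N as zero-mean Gaussian noise; both choices are assumptions, since the patent leaves the exact degradation operator and noise model open.

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(image, kernel, noise_sigma=0.01, rng=None):
    """I_x = Deg(J_x, K_x) + N (sketch): Deg taken as convolution with the
    blur kernel, N as zero-mean Gaussian noise -- both are assumptions."""
    rng = rng if rng is not None else np.random.default_rng()
    img = image.astype(np.float32)
    if img.ndim == 2:                          # grayscale image
        blurred = convolve(img, kernel, mode="reflect")
    else:                                      # convolve each color channel separately
        blurred = np.stack(
            [convolve(img[..., c], kernel, mode="reflect")
             for c in range(img.shape[-1])],
            axis=-1,
        )
    return blurred + rng.normal(0.0, noise_sigma, size=blurred.shape)
```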
  • In summary, the blurred image generation method provided in the embodiments of the present application first obtains the blur kernel of each image in the first image set (images with resolutions smaller than the first threshold) to generate a blur kernel pool, then selects from the pool a blur kernel corresponding to each image in the second image set (images with resolutions greater than the second threshold), and finally degrades each image in the second image set through its corresponding blur kernel to acquire the corresponding blurred image.
  • Because the blur kernels used for degradation are extracted from real images, the embodiments of the present application can solve the problem that the blurred images acquired in the related art differ greatly from real blurred images.
  • In some embodiments, the above step S11 (obtaining the blur kernel of each image in the first image set to generate a blur kernel pool) may include the following steps S21 to S27 for each target image in the first image set.
  • S21: randomly generate first noise and second noise corresponding to the target image, where both the first noise and the second noise satisfy a normal distribution.
  • The DIP (Deep Image Prior) model is a network model that uses the network structure itself to capture a large amount of low-level image statistical prior information; random noise is used as the input of the DIP model, and as the number of iterations increases, the DIP model can output the corresponding image. Therefore, in the above step S22, after the first noise is input into the DIP model, the first image output by the DIP model can be obtained.
  • In the FKP (flow-based kernel prior) model, an invertible network is constructed, and the mapping from random noise to a corresponding blur kernel is learned through pre-training.
  • When noise satisfying a normal distribution is input into the trained FKP model, the model can output a realistic blur kernel. Therefore, in the above step S23, after the second noise is input into the FKP model, the predicted blur kernel output by the FKP model can be obtained.
  • In some embodiments, the above step S24 (degrading the first image through the predicted blur kernel to obtain a second image) uses the formula:
  • I_y = Deg(J_y, k) + N
  • where J_y represents the first image, I_y represents the second image, k represents the predicted blur kernel, Deg(J_y, k) represents the operation of degrading the first image through k, and N represents additional noise added to the second image.
  • Since the image restoration task generally does not require increasing the resolution of the blurred image, it is not necessary to down-sample the first image when degrading it through the predicted blur kernel to obtain the second image.
  • In some embodiments, an L1 loss constraint may be imposed between the second image and the target image to determine whether the convergence condition is satisfied.
  • In step S25, if the second image and the target image do not satisfy the convergence condition, the following step S26 is performed; if the second image and the target image satisfy the convergence condition, the following step S27 is performed.
  • In step S26, the model parameters and/or model input are updated, and the process returns to step S22: as shown in FIG. 2, the DIP model again outputs a first image, the FKP model again outputs a predicted blur kernel, the first image is degraded through the predicted blur kernel to acquire a new second image, and whether the convergence condition is satisfied is judged based on the new second image and the target image.
  • Said updating of the model parameters and/or model input includes: updating the model parameters of the DIP model and/or the second noise (the model input of the FKP model) during training; the model parameters of the FKP model and the model input of the DIP model (the first noise) are not updated.
  • In step S27, the predicted blur kernel output by the FKP model is determined as the blur kernel of the target image. By performing steps S21 to S27 with each image in the first image set as the target image, the blur kernel of each image in the first image set can be obtained, and the blur kernel pool in the above embodiment is then generated.
  • As shown in FIG. 2, the implementation process of acquiring the blur kernel of a target image in the first image set includes: the first image g(z_x, θ_g) output by the DIP model is degraded through the predicted blur kernel k(z_k, θ_k) output by the FKP model to obtain the second image P.
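  • The following is a heavily simplified PyTorch sketch of the S21–S27 loop shown in FIG. 2. It assumes `dip` is an image-generator network (the DIP model) and `fkp` is a pretrained, frozen noise-to-kernel network (the FKP model) that outputs a 1×1×kh×kw kernel; per the update rule above, only the DIP parameters and the second noise z_k are optimized. None of these names come from the patent, and the real FKP is a normalizing flow rather than the opaque module used here.

```python
import torch
import torch.nn.functional as F

def estimate_kernel_dip_fkp(target, dip, fkp, z_x, z_k,
                            max_iters=1000, lr=1e-3, tol=1e-4):
    """Sketch of steps S21-S27 for one target image (1 x C x H x W tensor)."""
    z_k = z_k.clone().requires_grad_(True)        # second noise: updated (S26)
    for p in fkp.parameters():                    # FKP weights stay frozen
        p.requires_grad_(False)
    opt = torch.optim.Adam(list(dip.parameters()) + [z_k], lr=lr)

    for _ in range(max_iters):
        opt.zero_grad()
        first_image = dip(z_x)                    # S22: first image g(z_x, θ_g)
        k = fkp(z_k)                              # S23: predicted kernel k(z_k, θ_k)
        k = k / k.sum()
        kh, kw = k.shape[-2:]
        channels = first_image.shape[1]
        # S24: second image P = Deg(first image, k); Deg assumed to be a
        # depthwise convolution (no down-sampling, per the note above)
        weight = k.expand(channels, 1, kh, kw).contiguous()
        second_image = F.conv2d(first_image, weight,
                                padding=(kh // 2, kw // 2), groups=channels)
        loss = F.l1_loss(second_image, target)    # S25: L1 loss constraint
        if loss.item() < tol:                     # convergence condition met -> S27
            break
        loss.backward()                           # S26: update θ_g and z_k,
        opt.step()                                #      then return to S22

    with torch.no_grad():                         # S27: final predicted kernel
        k = fkp(z_k)
        return (k / k.sum()).squeeze()
```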
  • In other embodiments, an implementation of the above step S12 (selecting the blur kernel corresponding to each image in the second image set from the blur kernel pool) includes the following steps a to d.
  • Step a: divide the images in the first image set into multiple first sub-image sets based on the scenes of the images.
  • That is, images in the first image set that have the same image scene are divided into one first sub-image set, so as to obtain multiple first sub-image sets.
  • the first set of images consists of image frames of a first video.
  • In this case, dividing the images in the first image set into a plurality of first sub-image sets based on the scenes of the images comprises: performing scene transition detection on the first video so as to divide it into a plurality of first video segments, and then dividing the image frames of each first video segment into one first sub-image set, so that the images in the first image set are divided into multiple first sub-image sets according to the scenes of the images.
  • Step b: divide the blur kernels of images belonging to the same first sub-image set into one blur kernel group.
  • That is, if two images belong to the same first sub-image set, their blur kernels belong to the same blur kernel group; if two images belong to different first sub-image sets, their blur kernels belong to different blur kernel groups.
  • Step c: divide the images in the second image set into multiple second sub-image sets based on the scenes of the images.
  • That is, images in the second image set that have the same image scene are divided into one second sub-image set, so as to obtain multiple second sub-image sets.
  • the second set of images consists of image frames of a second video.
  • In this case, dividing the images in the second image set into a plurality of second sub-image sets based on the scenes of the images comprises: performing scene transition detection on the second video so as to divide it into a plurality of second video segments, and then dividing the image frames of each second video segment into one second sub-image set, so that the images in the second image set are divided into multiple second sub-image sets according to the scenes of the images.
  • Step d: for the images belonging to the same second sub-image set, randomly select corresponding blur kernels from the same blur kernel group.
  • By assigning all images of one scene blur kernels from a single kernel group, the above embodiments can further reduce or avoid inconsistency between adjacent video frames, make the acquired blurred images more temporally consistent, and thus make them more consistent with real blurred images, as illustrated in the sketch below.
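  • A small sketch of steps a–d, assuming the scene splits are already available (e.g. from any scene-transition detector). The names `first_segments`, `first_kernels`, and `second_segments`, and the random segment-to-group pairing, are illustrative assumptions; the patent only requires that one second sub-image set draws all its kernels from one blur kernel group.

```python
import random

def assign_kernels_by_scene(first_segments, first_kernels, second_segments):
    """Steps a-d (sketch).

    first_segments:  lists of frame indices, one list per scene of the first video.
    first_kernels:   dict mapping frame index -> estimated blur kernel.
    second_segments: lists of frame indices, one list per scene of the second video.
    Returns a dict mapping each second-set frame index to its blur kernel.
    """
    # Steps a-b: one blur kernel group per first sub-image set (scene)
    kernel_groups = [[first_kernels[i] for i in seg] for seg in first_segments]

    assignment = {}
    for segment in second_segments:               # step c: one sub-image set per scene
        group = random.choice(kernel_groups)      # whole segment uses the same group
        for frame_index in segment:               # step d: random kernel from that group
            assignment[frame_index] = random.choice(group)
    return assignment
```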
  • The embodiments of the present application also provide a network model training method.
  • As shown in FIG. 4, the network model training method includes the following steps S41 to S44.
  • S41: acquire a sample image set; the sample image set includes a plurality of sample images with a resolution greater than a threshold resolution.
  • the sample image set may be an image set composed of image frames in a piece of high-definition video.
  • the implementation manner of acquiring the blurred image corresponding to each sample image in the sample image set is: acquiring the blurred image corresponding to each sample image in the sample image set through the blurred image generation method provided in any one of the above embodiments.
  • the sample image set is used as the second image set, and the method for generating a blurred image provided by the above embodiment is executed to obtain a blurred image corresponding to each sample image in the sample image set.
  • Since each sample image in the sample image set is a high-resolution image with a resolution greater than the second threshold, and the blurred image corresponding to each sample image is a low-resolution image generated by degrading that sample image, a training data set consisting of multiple high-resolution images and their corresponding low-resolution images can be generated according to the sample images in the sample image set and their corresponding blurred images.
  • The training data set generated in the embodiments of the present application is the training data set for the image inpainting network model used for inpainting blurred images; a sketch of assembling and using such training pairs follows.
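  • As an illustration of steps S42–S44, the sketch below pairs each sample image with a blurred version (using the `degrade` sketch above and a random kernel from the pool, the simplest selection strategy) and trains a generic image-to-image network on the pairs. The L1 objective, grayscale inputs, and per-image batches are assumptions; the patent does not fix the preset network or its loss.

```python
import random
import torch
import torch.nn.functional as F

def build_training_set(sample_images, kernel_pool, degrade):
    """S42-S43 (sketch): one (blurred, sharp) pair per high-resolution sample."""
    pairs = []
    for sharp in sample_images:                  # H x W float arrays assumed
        kernel = random.choice(kernel_pool)      # simplest kernel selection
        pairs.append((degrade(sharp, kernel), sharp))
    return pairs

def train_inpainting_model(model, pairs, epochs=10, lr=1e-4):
    """S44 (sketch): fit a preset image-to-image network on the pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for blurred, sharp in pairs:
            # grayscale assumed: lift H x W arrays to 1 x 1 x H x W tensors
            x = torch.as_tensor(blurred, dtype=torch.float32)[None, None]
            y = torch.as_tensor(sharp, dtype=torch.float32)[None, None]
            opt.zero_grad()
            loss = F.l1_loss(model(x), y)        # L1 objective is an assumption
            loss.backward()
            opt.step()
    return model
```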
  • Because the blur kernels used to degrade the images in the sample image set are the blur kernels of real images in the first image set, the blurred images obtained by degrading the sample images through these blur kernels are more consistent with real blurred images. The embodiments of the present application can therefore solve the problem that blurred images obtained in the related art differ greatly from real blurred images, thereby improving the performance of the video inpainting network model.
  • The embodiments of the present application also provide a blurred image generation device and a network model training device. Since these device embodiments correspond to the foregoing method embodiments, for ease of reading they do not repeat the details of the method embodiments one by one; however, it should be clear that the devices in these embodiments can correspondingly implement all the content of the foregoing method embodiments.
  • FIG. 5 is a schematic structural diagram of the blurred image generating device. As shown in FIG. 5, the blurred image generating device 500 includes:
  • an acquisition unit 51 configured to acquire the blur kernel of each image in the first image set to generate a blur kernel pool, the first image set including a plurality of images with a resolution smaller than a first threshold;
  • the selection unit 52 is configured to select a blur kernel corresponding to each image in the second image set from the blur kernel pool; the second image set includes a plurality of images with a resolution greater than a second threshold;
  • a processing unit 53 configured to degrade each image in the second image set through the blur kernel corresponding to each image in the second image set, and obtain a blurred image corresponding to each image in the second image set.
  • In some embodiments, the acquisition unit 51 is specifically configured to: randomly generate first noise and second noise corresponding to a target image in the first image set, where the first noise and the second noise both satisfy a normal distribution; input the first noise into the deep image prior (DIP) model and obtain the first image output by the DIP model; input the second noise into the flow-based kernel prior (FKP) model and obtain the predicted blur kernel output by the FKP model; degrade the first image through the predicted blur kernel to obtain a second image; judge whether the convergence condition is satisfied based on the second image and the target image; if not, update the model parameters and/or model input, and after the update judge whether the reacquired second image and the target image satisfy the convergence condition, until they do; and if so, determine the predicted blur kernel output by the FKP model as the blur kernel of the target image.
  • the acquiring unit 51 is specifically configured to update the model parameters of the DIP model and/or the model input of the FKP model.
  • In some embodiments, the selection unit 52 is specifically configured to, for each image in the second image set, randomly select a corresponding blur kernel from the blur kernel pool.
  • In some embodiments, the selection unit 52 is specifically configured to: divide the images in the first image set into multiple first sub-image sets based on the scenes of the images; divide the blur kernels of images belonging to the same first sub-image set into one blur kernel group; divide the images in the second image set into a plurality of second sub-image sets based on the scenes of the images; and, for the images belonging to the same second sub-image set, randomly select corresponding blur kernels from the same blur kernel group.
  • the first set of images consists of image frames of a first video
  • the second set of images consists of image frames of a second video
  • the selection unit 52 is specifically configured to divide the first video into a plurality of first video clips based on the scenes of the images and divide the image frames of each first video clip into one first sub-image set, and to divide the second video into a plurality of second video clips based on the scenes of the images and divide the image frames of each second video clip into one second sub-image set.
  • the device for generating a blurred image provided in this embodiment can execute the method for generating a blurred image provided in the above method embodiment, and its implementation principle and technical effect are similar, and details will not be repeated here.
  • FIG. 6 is a schematic structural diagram of the network model training device.
  • the network model training device 600 includes:
  • An acquisition unit 61 configured to acquire a set of sample images, the set of sample images including a plurality of sample images with a resolution greater than a threshold resolution;
  • a processing unit 62 configured to obtain a blurred image corresponding to each sample image in the sample image set through the blurred image generation method described in any one of the above embodiments;
  • a generating unit 63 configured to generate a training data set according to each sample image in the sample image set and the blurred image corresponding to each sample image;
  • the training unit 64 is configured to train a preset network model through the training data set, and obtain an image inpainting network model for inpainting blurred images.
  • the network model training device provided in this embodiment can execute the network model training method provided in the above method embodiment, and its implementation principle and technical effect are similar, and will not be repeated here.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • As shown in FIG. 7, the electronic device provided by this embodiment includes a memory 71 and a processor 72, where the memory 71 is used to store a computer program and the processor 72 is configured to execute the methods provided in the foregoing embodiments when invoking the computer program.
  • An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the computing device implements the methods provided in the foregoing embodiments.
  • an embodiment of the present application further provides a computer program product, which enables the computing device to implement the method provided in the foregoing embodiments when the computer program product is run on a computer.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein.
  • The processor can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • Memory may include non-permanent storage in computer readable media, in the form of random access memory (RAM) and/or nonvolatile memory such as read only memory (ROM) or flash RAM.
  • Computer-readable media includes both volatile and non-volatile, removable and non-removable storage media.
  • the storage medium may store information by any method or technology, and the information may be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, A magnetic tape cartridge, disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer readable media excludes transitory computer readable media, such as modulated data signals and carrier waves.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to a blurred image generation method and apparatus, and a network model training method and apparatus. The method comprises: obtaining a blur kernel of each image in a first image set, and generating a blur kernel pool, the first image set comprising a plurality of images each having a resolution lower than a first threshold; selecting, from the blur kernel pool, a blur kernel corresponding to each image in a second image set, the second image set comprising a plurality of images each having a resolution higher than a second threshold; and degrading the images in the second image set by means of the blur kernels corresponding to the images, so as to obtain blurred images corresponding to the images in the second image set.
PCT/CN2022/127384 2021-10-26 2022-10-25 Blurred image generation method and apparatus, and network model training method and apparatus WO2023072072A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111250062.4A CN116029911A (zh) 2021-10-26 2021-10-26 Blurred image generation method, network model training method, and apparatus
CN202111250062.4 2021-10-26

Publications (1)

Publication Number Publication Date
WO2023072072A1 (fr)

Family

ID=86069300

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/127384 WO2023072072A1 (fr) Blurred image generation method and apparatus, and network model training method and apparatus

Country Status (2)

Country Link
CN (1) CN116029911A (fr)
WO (1) WO2023072072A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170357871A1 (en) * 2016-06-10 2017-12-14 Apple Inc. Hierarchical Sharpness Evaluation
CN108364262A (zh) * 2018-01-11 2018-08-03 深圳大学 一种模糊图像的复原方法、装置、设备及存储介质
CN113240581A (zh) * 2021-04-09 2021-08-10 辽宁工程技术大学 一种针对未知模糊核的真实世界图像超分辨率方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170357871A1 (en) * 2016-06-10 2017-12-14 Apple Inc. Hierarchical Sharpness Evaluation
CN108364262A (zh) * 2018-01-11 2018-08-03 深圳大学 一种模糊图像的复原方法、装置、设备及存储介质
CN113240581A (zh) * 2021-04-09 2021-08-10 辽宁工程技术大学 一种针对未知模糊核的真实世界图像超分辨率方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, LAN; KONG, XIANGYI; ZHANG, HAITAO: "Image Super-Resolution Based on Blur Kernel Correction with Unknown Degradation Method", COMPUTER ENGINEERING AND APPLICATIONS, HUABEI JISUAN JISHU YANJIUSUO, CN, vol. 21, no. 58, 6 July 2021 (2021-07-06), CN , pages 232 - 242, XP009545734, ISSN: 1002-8331, DOI: 10.3778/j.issn.1002-8331.2103-0530 *

Also Published As

Publication number Publication date
CN116029911A (zh) 2023-04-28

Similar Documents

Publication Publication Date Title
Reda et al. Film: Frame interpolation for large motion
US11449966B2 (en) Real-time video ultra resolution
CN108122197B (zh) 一种基于深度学习的图像超分辨率重建方法
Zhu et al. GAN‐Based Image Super‐Resolution with a Novel Quality Loss
WO2021233006A1 (fr) Appareil et procédé de formation de modèle de traitement d'image, appareil et procédé de traitement d'image, et dispositif
EP3341911B1 (fr) Mise à l'échelle supérieure d'image
CN102576454B (zh) 利用空间图像先验的图像去模糊法
CN111784570A (zh) 一种视频图像超分辨率重建方法及设备
WO2022077978A1 (fr) Procédé de traitement vidéo et appareil de traitement vidéo
US8391626B2 (en) Learning of coefficients for motion deblurring by pixel classification and constraint condition weight computation
US11741579B2 (en) Methods and systems for deblurring blurry images
Jiang et al. Deep edge map guided depth super resolution
Kokaram et al. Motion‐based frame interpolation for film and television effects
JP2011250013A (ja) 画質評価方法、画質評価装置、及びプログラム
US20230138053A1 (en) Systems and methods for optical flow estimation
WO2021179954A1 (fr) Procédé et appareil de traitement vidéo, dispositif et support de stockage
Hung et al. Image interpolation using convolutional neural networks with deep recursive residual learning
WO2023072072A1 (fr) Procédé et appareil de génération d'image floue, et procédé et appareil de formation de modèle de réseau
US10819983B1 (en) Determining a blurriness score for screen capture videos
Zhu et al. Eednet: enhanced encoder-decoder network for autoisp
US20220327663A1 (en) Video Super-Resolution using Deep Neural Networks
Zhang et al. Video Superresolution Reconstruction Using Iterative Back Projection with Critical‐Point Filters Based Image Matching
TWI417810B (zh) 影像增強方法、影像增強裝置及影像處理電路
CN113610031A (zh) 视频处理方法和视频处理装置
KR102325898B1 (ko) 다중 도메인 영상 복원 시스템 및 방법

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22885949

Country of ref document: EP

Kind code of ref document: A1