WO2019178893A1 - Deblurring method, apparatus, device and storage medium for motion-blurred images - Google Patents

Deblurring method, apparatus, device and storage medium for motion-blurred images Download PDF

Info

Publication number
WO2019178893A1
WO2019178893A1 (PCT/CN2018/081710; CN2018081710W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
network
unit
motion
enhanced
Prior art date
Application number
PCT/CN2018/081710
Other languages
English (en)
French (fr)
Inventor
张勇
马少勇
赵东宁
唐琳琳
梁长垠
黎丽
曾庆好
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 (Shenzhen University)
Publication of WO2019178893A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20201 Motion blur correction

Definitions

  • The invention belongs to the technical field of image processing, and in particular relates to a deblurring method, apparatus, device and storage medium for motion-blurred images.
  • Motion blur is a ubiquitous phenomenon in the imaging process.
  • When images are captured while walking, from an aircraft or from a moving car, an excessive relative speed of the subject or camera shake during shooting causes motion blur, and motion blur in images has a serious impact on applications in astronomy, the military, road traffic, medical imaging and other fields. Removing motion blur from images has therefore long been an important problem in the field of computer vision.
  • The object of the present invention is to provide a deblurring method, apparatus, device and storage medium for motion-blurred images, aiming to solve the problem that, because the prior art cannot provide an effective deblurring method for motion-blurred images, the checkerboard effect is obvious and the user experience is poor after a motion-blurred image is deblurred.
  • In one aspect, the present invention provides a deblurring method for a motion-blurred image, the method comprising the following steps:
  • when a deblurring request for the motion-blurred image is received, inputting the motion-blurred image into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit;
  • performing feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and
  • deblurring the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
  • In another aspect, the present invention provides a deblurring apparatus for a motion-blurred image, the apparatus comprising:
  • an image input unit, configured to, when a deblurring request for the motion-blurred image is received, input the motion-blurred image into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit;
  • a feature extraction unit, configured to perform feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and
  • a deblurring unit, configured to deblur the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
  • In another aspect, the present invention further provides a computing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method described above.
  • The invention further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the method described above.
  • In the present invention, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit; feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
  • FIG. 1 is a flowchart of an implementation of the deblurring method for a motion-blurred image according to Embodiment 1 of the present invention;
  • FIG. 2 is a schematic flowchart of feature extraction by the squeeze-and-excitation residual network unit in the deblurring method for a motion-blurred image according to Embodiment 1 of the present invention;
  • FIG. 3 is a schematic structural diagram of a deblurring apparatus for a motion-blurred image according to Embodiment 2 of the present invention;
  • FIG. 4 is a schematic structural diagram of a deblurring apparatus for a motion-blurred image according to Embodiment 3 of the present invention; and
  • FIG. 5 is a schematic structural diagram of a computing device according to Embodiment 4 of the present invention.
  • Embodiment 1:
  • FIG. 1 shows the implementation flow of the deblurring method for a motion-blurred image according to Embodiment 1 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, detailed as follows:
  • In step S101, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit.
  • Embodiments of the present invention are applicable to computing devices, such as personal computers, smart phones, tablets, and the like.
  • Before the motion-blurred image is input into the generator of the pre-trained enhanced generative adversarial network (EDGAN), preferably, the motion-blurred image is format-preprocessed by a preset formatting pre-processing layer, and image enhancement is then performed on the format-preprocessed motion-blurred image to obtain an enhanced motion-blurred image, thereby improving the subsequent deblurring of the motion-blurred image by the enhanced generative adversarial network.
  • Further preferably, the preset formatting pre-processing layer applies a Block-Matching 3D (BM3D), Expected Patch Log Likelihood (EPLL), or Weighted Nuclear Norm Minimization (WNNM) denoising algorithm to format-preprocess the motion-blurred image, so as to denoise it and thereby make the detail features in the motion-blurred image more visible.
  • In yet another preferred option, the pre-trained enhanced generative adversarial network itself is used as the formatting pre-processing layer to format-preprocess the motion-blurred image, so as to denoise it and thereby improve the noise-filtering effect on the motion-blurred image.
  • Further preferably, when image enhancement is performed on the format-preprocessed motion-blurred image, the image may be scaled by a preset resolution ratio and the scaled image may then be randomly cropped to obtain the enhanced motion-blurred image. For example, a blurred image of 2560×720 resolution is scaled to 640×360 resolution and then randomly cropped to a 256×256 image, which improves the training efficiency of the enhanced generative adversarial network of this embodiment as well as the efficiency of the subsequent deblurring of motion-blurred images.
  • In yet another preferred option, before the motion-blurred image is input into the generator of the pre-trained enhanced generative adversarial network, two generative adversarial networks are built in advance, both of the enhanced type. Each generative adversarial network includes two convolution units, nine squeeze-and-excitation residual network units and two scaling convolution units, which improves the generalization performance of the network. Training these two generative adversarial networks yields the pre-trained enhanced generative adversarial network of the present application. For convenience of description, the two networks are referred to herein as the first generative adversarial network and the second generative adversarial network.
  • After the two generative adversarial networks are initially built, they are trained to obtain the pre-trained enhanced generative adversarial network.
  • Preferably, when the two generative adversarial networks are trained to obtain the pre-trained enhanced generative adversarial network, first training samples are input into the pre-built first generative adversarial network for training, to obtain the trained first generative adversarial network; second training samples are format-preprocessed by the trained first generative adversarial network; image enhancement is performed on the format-preprocessed second training samples to obtain enhanced second training samples; the enhanced second training samples are input into the pre-built second generative adversarial network for training, to obtain the trained second generative adversarial network; and the trained second generative adversarial network is set as the enhanced generative adversarial network (that is, the pre-trained enhanced generative adversarial network).
  • Deblurring motion-blurred images with the enhanced generative adversarial network trained according to the embodiment of the present invention improves the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the restored motion-blurred images.
  • In step S102, feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image.
  • Preferably, the squeeze-and-excitation residual network unit is composed of a residual network and a squeeze-and-excitation network; the residual network contains no Batch Norm or Instance Norm layer, and the squeeze-and-excitation network includes a global average pooling layer, two fully connected layers, a ReLU layer and a Sigmoid layer, which improves the performance of feature extraction.
  • As an example, FIG. 2 is a schematic flowchart of feature extraction by the squeeze-and-excitation residual network unit (SE-ResBlock). The image input into the SE-ResBlock first passes through two 3×3 convolution layers (3×3 Conv) of the residual network, with a ReLU excitation layer connected between them, to produce a preliminary feature image. This feature image is then input into the global average pooling layer (Global Pooling) of the squeeze-and-excitation network for the squeeze operation; the squeezed feature image is fed into a fully connected layer (FC), which reduces its feature dimension to 1/16 of the input; after a ReLU activation, another fully connected layer (FC) raises the feature dimension back to the original dimension, followed by a Sigmoid excitation; finally, a Scale operation weights each channel feature with the normalized weights, and the feature image corresponding to the motion-blurred image is output, thereby improving the performance of feature extraction.
  • In step S103, the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
  • When the feature image is deblurred through the scaling convolution unit, preferably, the size of the feature image is first scaled by a preset ratio using preset nearest-neighbor or bilinear interpolation, and the scaled feature image is then up-sampled by a convolution operation to obtain the sharp image corresponding to the feature image. This reduces the checkerboard effect in the resulting sharp image, so that the sharp image can be displayed clearly on higher-resolution display devices.
  • As an example, Table 1 shows the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) obtained on the GOPRO and Lai datasets with the DeblurGAN network and with the enhanced generative adversarial network of the embodiment of the present invention. It can be seen that the PSNR and SSIM obtained by the enhanced generative adversarial network of this embodiment are significantly higher than those obtained by the DeblurGAN network.
  • In the embodiment of the present invention, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit; feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image. This reduces the checkerboard effect in the deblurring of the motion-blurred image, improves the sharpness of the restored image, and improves the generalization performance of the enhanced generative adversarial network of the embodiment of the present invention.
  • Embodiment 2:
  • FIG. 3 shows the structure of a deblurring apparatus for a motion-blurred image according to Embodiment 2 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, including:
  • the image input unit 31, configured to, when a deblurring request for a motion-blurred image is received, input the motion-blurred image into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit.
  • Embodiments of the present invention are applicable to computing devices, such as personal computers, smart phones, tablets, and the like.
  • Preferably, the enhanced generative adversarial network is built in advance; it includes two convolution units, nine squeeze-and-excitation residual network units and two scaling convolution units, which improves the generalization performance of the network.
  • The feature extraction unit 32 is configured to perform feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image.
  • Preferably, the squeeze-and-excitation residual network unit is composed of a residual network and a squeeze-and-excitation network; the residual network contains no Batch Norm or Instance Norm layer, and the squeeze-and-excitation network includes a global average pooling layer, two fully connected layers, a ReLU layer and a Sigmoid layer, which improves the performance of feature extraction.
  • The deblurring unit 33 is configured to deblur the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
  • When the feature image is deblurred through the scaling convolution unit, preferably, the size of the feature image is first scaled by a preset ratio using preset nearest-neighbor or bilinear interpolation, and the scaled feature image is then up-sampled by a convolution operation to obtain the sharp image corresponding to the feature image. This reduces the checkerboard effect in the resulting sharp image, so that the sharp image can be displayed clearly on higher-resolution display devices.
  • Each unit of the motion-blurred-image deblurring apparatus may be implemented by a corresponding hardware or software unit; the units may be independent software or hardware units, or may be integrated into a single software and hardware unit, which is not intended to limit the invention.
  • Embodiment 3:
  • FIG. 4 shows the structure of a deblurring apparatus for a motion-blurred image according to Embodiment 3 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, including:
  • the first network training unit 41, configured to input first training samples into a pre-built first generative adversarial network for training, to obtain the trained first generative adversarial network;
  • the sample pre-processing unit 42, configured to perform formatting pre-processing on second training samples through the first generative adversarial network;
  • the sample enhancement unit 43, configured to perform image enhancement on the format-preprocessed second training samples to obtain enhanced second training samples;
  • the second network training unit 44, configured to input the enhanced second training samples into a pre-built second generative adversarial network for training, to obtain the trained second generative adversarial network, and to set the trained second generative adversarial network as the enhanced generative adversarial network (that is, the pre-trained enhanced generative adversarial network);
  • the formatting pre-processing unit 45, configured to perform formatting pre-processing on a motion-blurred image through a preset formatting pre-processing layer when a deblurring request for the motion-blurred image is received;
  • the image enhancement unit 46, configured to perform image enhancement on the format-preprocessed motion-blurred image to obtain an enhanced motion-blurred image;
  • the image input unit 47, configured to input the enhanced motion-blurred image into a generator of the pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit;
  • the feature extraction unit 48, configured to perform feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and
  • the deblurring unit 49, configured to deblur the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
  • Each unit of the motion-blurred-image deblurring apparatus may be implemented by a corresponding hardware or software unit; the units may be independent software or hardware units, or may be integrated into a single software and hardware unit, which is not intended to limit the invention. For the specific implementation of each unit, reference may be made to the description of Embodiment 1 and is not repeated here.
  • Embodiment 4:
  • FIG. 5 shows the structure of a computing device according to Embodiment 4 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown.
  • The computing device 5 of the embodiment of the present invention includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50.
  • When the processor 50 executes the computer program 52, the steps in the above embodiment of the deblurring method for a motion-blurred image are implemented, for example, steps S101 to S103 shown in FIG. 1.
  • Alternatively, when the processor 50 executes the computer program 52, the functions of the units in the above apparatus embodiments are implemented, for example, the functions of units 31 to 33 shown in FIG. 3.
  • In the embodiment of the present invention, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit; feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image. This reduces the checkerboard effect in the deblurring of the motion-blurred image, improves the sharpness of the restored image, and improves the generalization performance of the enhanced generative adversarial network of the embodiment of the present invention.
  • The computing device of the embodiment of the present invention may be a personal computer, a smartphone or a tablet. For the steps implemented when the processor 50 in the computing device 5 executes the computer program 52 to carry out the deblurring method for a motion-blurred image, reference may be made to the description of the foregoing method embodiments and is not repeated here.
  • Embodiment 5:
  • In the embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps in the above embodiment of the deblurring method for a motion-blurred image, for example, steps S101 to S103 shown in FIG. 1.
  • Alternatively, when executed by the processor, the computer program implements the functions of the units in the above apparatus embodiments, for example, the functions of units 31 to 33 shown in FIG. 3.
  • In the embodiment of the present invention, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit; feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image. This reduces the checkerboard effect in the deblurring of the motion-blurred image, improves the sharpness of the restored image, and improves the generalization performance of the enhanced generative adversarial network of the embodiment of the present invention.
  • The computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, or a recording medium such as a ROM/RAM, a magnetic disk, an optical disk or a flash memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A deblurring method, apparatus, device and storage medium for motion-blurred images. The method includes: when a deblurring request for a motion-blurred image is received, inputting the motion-blurred image into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit (S101); performing feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image (S102); and deblurring the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image (S103). This reduces the checkerboard effect in the deblurring of motion-blurred images, improves the sharpness of the restored images, and improves the generalization performance of the enhanced generative adversarial network.

Description

Deblurring method, apparatus, device and storage medium for motion-blurred images
Technical Field
The present invention belongs to the technical field of image processing, and in particular relates to a deblurring method, apparatus, device and storage medium for motion-blurred images.
Background Art
Motion blur is a ubiquitous phenomenon in the imaging process. When images are captured while walking, from an aircraft or from a moving car, an excessive relative speed of the subject or camera shake during shooting causes motion blur, and motion blur in images has a serious impact on applications in astronomy, the military, road traffic, medical imaging and other fields. Removing motion blur from images has therefore long been an important problem in the field of computer vision.
With the rise of artificial-intelligence algorithms represented by deep learning, research fields such as image processing, image recognition, speech signal processing and natural language processing have developed rapidly, and research on restoring blurred images with deep-learning algorithms has also produced results. For example, the generative adversarial network (GAN) proposed by Ian Goodfellow and other scholars in 2014 became one of the hottest research directions in deep learning as soon as it was proposed. Subsequently, in a 2017 paper, Orest Kupyn and other scholars proposed DeblurGAN, an end-to-end learning method based on a conditional generative adversarial network (CGAN) and a content loss. The DeblurGAN network model is effective at removing blur caused by object motion in images; however, after removing motion blur, especially in the dark parts of an image, it produces checkerboard-like artifacts (the checkerboard effect), which degrades the restoration of motion-blurred images.
Summary of the Invention
The object of the present invention is to provide a deblurring method, apparatus, device and storage medium for motion-blurred images, aiming to solve the problem that, because the prior art cannot provide an effective deblurring method for motion-blurred images, the checkerboard effect is obvious and the user experience is poor after a motion-blurred image is deblurred.
In one aspect, the present invention provides a deblurring method for a motion-blurred image, the method comprising the following steps:
when a deblurring request for the motion-blurred image is received, inputting the motion-blurred image into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit;
performing feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and
deblurring the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
In another aspect, the present invention provides a deblurring apparatus for a motion-blurred image, the apparatus comprising:
an image input unit, configured to, when a deblurring request for the motion-blurred image is received, input the motion-blurred image into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit;
a feature extraction unit, configured to perform feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and
a deblurring unit, configured to deblur the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
In another aspect, the present invention further provides a computing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method described above.
In another aspect, the present invention further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the method described above.
In the present invention, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit; feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image. This reduces the checkerboard effect in the deblurring of motion-blurred images, improves the sharpness of the restored images, and improves the generalization performance of the enhanced generative adversarial network of the present invention.
Brief Description of the Drawings
FIG. 1 is a flowchart of an implementation of the deblurring method for a motion-blurred image according to Embodiment 1 of the present invention;
FIG. 2 is a schematic flowchart of feature extraction by the squeeze-and-excitation residual network unit in the deblurring method for a motion-blurred image according to Embodiment 1 of the present invention;
FIG. 3 is a schematic structural diagram of a deblurring apparatus for a motion-blurred image according to Embodiment 2 of the present invention;
FIG. 4 is a schematic structural diagram of a deblurring apparatus for a motion-blurred image according to Embodiment 3 of the present invention; and
FIG. 5 is a schematic structural diagram of a computing device according to Embodiment 4 of the present invention.
Detailed Description of the Embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
The specific implementation of the present invention is described in detail below with reference to specific embodiments:
Embodiment 1:
FIG. 1 shows the implementation flow of the deblurring method for a motion-blurred image according to Embodiment 1 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, detailed as follows:
In step S101, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit.
Embodiments of the present invention are applicable to computing devices such as personal computers, smartphones and tablets. Before the motion-blurred image is input into the generator of the pre-trained enhanced generative adversarial network (EDGAN), preferably, the motion-blurred image is format-preprocessed by a preset formatting pre-processing layer, and image enhancement is then performed on the format-preprocessed motion-blurred image to obtain an enhanced motion-blurred image, thereby improving the subsequent deblurring of the motion-blurred image by the enhanced generative adversarial network.
Further preferably, the preset formatting pre-processing layer applies a Block-Matching 3D (BM3D), Expected Patch Log Likelihood (EPLL), or Weighted Nuclear Norm Minimization (WNNM) denoising algorithm to format-preprocess the motion-blurred image, so as to denoise it and thereby make the detail features in the motion-blurred image more visible.
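A minimal sketch of where such a denoising step could sit in the pre-processing pipeline is shown below. It assumes the third-party Python `bm3d` package as the BM3D implementation; the package, the per-channel treatment and the noise level are assumptions for illustration only, and EPLL or WNNM could be substituted.

```python
# Sketch of the formatting pre-processing layer: BM3D denoising applied per
# channel before the image is passed to the enhanced GAN generator.
# Assumes the third-party `bm3d` package (pip install bm3d); sigma is illustrative.
import numpy as np
import bm3d

def format_preprocess(blurred_rgb: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """Denoise a motion-blurred RGB image given as floats in [0, 1]."""
    channels = [bm3d.bm3d(blurred_rgb[..., c], sigma_psd=sigma)
                for c in range(blurred_rgb.shape[-1])]
    return np.stack(channels, axis=-1)
```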
In yet another preferred option, the pre-trained enhanced generative adversarial network itself is used as the formatting pre-processing layer to format-preprocess the motion-blurred image, so as to denoise it and thereby improve the noise-filtering effect on the motion-blurred image.
Further preferably, when image enhancement is performed on the format-preprocessed motion-blurred image, the image may be scaled by a preset resolution ratio and the scaled image may then be randomly cropped to obtain the enhanced motion-blurred image. For example, a blurred image of 2560×720 resolution is scaled to 640×360 resolution and then randomly cropped to a 256×256 image, which improves the training efficiency of the enhanced generative adversarial network of this embodiment as well as the efficiency of the subsequent deblurring of motion-blurred images.
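A minimal sketch of this scale-then-crop enhancement step, using OpenCV for resizing; the resolutions follow the example above and the function name is illustrative.

```python
# Sketch of the image enhancement step: scale the blurred image to a preset
# resolution, then take a random crop (e.g. 2560x720 -> 640x360 -> 256x256).
import random
import numpy as np
import cv2

def enhance(blurred: np.ndarray, size=(640, 360), crop: int = 256) -> np.ndarray:
    resized = cv2.resize(blurred, size, interpolation=cv2.INTER_AREA)  # size is (width, height)
    h, w = resized.shape[:2]
    top = random.randint(0, h - crop)
    left = random.randint(0, w - crop)
    return resized[top:top + crop, left:left + crop]
```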
In yet another preferred option, before the motion-blurred image is input into the generator of the pre-trained enhanced generative adversarial network upon receiving a deblurring request, two generative adversarial networks are built in advance, both of the enhanced type. Each generative adversarial network includes two convolution units, nine squeeze-and-excitation residual network units and two scaling convolution units, which improves the generalization performance of the network. Training the two generative adversarial networks yields the pre-trained enhanced generative adversarial network of the present application. For convenience of description, the two networks are referred to herein as the first generative adversarial network and the second generative adversarial network.
Further preferably, when the generators of the two generative adversarial networks are built, all Batch Norm or Instance Norm layers are removed from the generators, which increases the training speed and the stability of the network and avoids destroying the original contrast information of the image.
After the two generative adversarial networks are initially built, they are trained to obtain the pre-trained enhanced generative adversarial network. Preferably, when the two generative adversarial networks are trained to obtain the pre-trained enhanced generative adversarial network, first training samples are input into the pre-built first generative adversarial network for training, to obtain the trained first generative adversarial network; second training samples are format-preprocessed by the trained first generative adversarial network; image enhancement is performed on the format-preprocessed second training samples to obtain enhanced second training samples; the enhanced second training samples are input into the pre-built second generative adversarial network for training, to obtain the trained second generative adversarial network; and the trained second generative adversarial network is set as the enhanced generative adversarial network (that is, the pre-trained enhanced generative adversarial network). Deblurring motion-blurred images with the enhanced generative adversarial network trained according to the embodiment of the present invention improves the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the restored motion-blurred images.
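The two-stage scheme can be outlined as below. This is only a structural sketch: the callables for building a GAN, training it, denoising a sample with the first GAN, and enhancing a sample are passed in as placeholders and are not defined by the patent.

```python
# Structural sketch of the two-stage training: GAN #1 is trained first, then
# used (together with image enhancement) to prepare the training set for
# GAN #2, and the trained GAN #2 becomes the pre-trained enhanced GAN.
from typing import Callable, Sequence, TypeVar

import numpy as np

Model = TypeVar("Model")

def build_enhanced_gan(
    first_samples: Sequence[np.ndarray],
    second_samples: Sequence[np.ndarray],
    build_gan: Callable[[], Model],
    train_gan: Callable[[Model, Sequence[np.ndarray]], Model],
    denoise_with: Callable[[Model, np.ndarray], np.ndarray],
    enhance: Callable[[np.ndarray], np.ndarray],
) -> Model:
    gan_1 = train_gan(build_gan(), first_samples)                         # first GAN
    prepared = [enhance(denoise_with(gan_1, x)) for x in second_samples]  # pre-process + enhance
    return train_gan(build_gan(), prepared)                               # second GAN = enhanced GAN
```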
In step S102, feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image.
In the embodiment of the present invention, preferably, the squeeze-and-excitation residual network unit is composed of a residual network and a squeeze-and-excitation network; the residual network contains no Batch Norm or Instance Norm layer, and the squeeze-and-excitation network includes a global average pooling layer, two fully connected layers, a ReLU layer and a Sigmoid layer, which improves the performance of feature extraction.
As an example, FIG. 2 is a schematic flowchart of feature extraction by the squeeze-and-excitation residual network unit. The image input into the squeeze-and-excitation residual network unit (SE-ResBlock) first passes through two 3×3 convolution layers (3×3 Conv) of the residual network, with a ReLU excitation layer connected between them, to produce a preliminary feature image. This feature image is then input into the global average pooling layer (Global Pooling) of the squeeze-and-excitation network for the squeeze operation; the squeezed feature image is fed into a fully connected layer (FC), which reduces its feature dimension to 1/16 of the input; after a ReLU activation, another fully connected layer (FC) raises the feature dimension back to the original dimension, followed by a Sigmoid excitation; finally, a Scale operation weights each channel feature with the normalized weights, and the feature image corresponding to the motion-blurred image is output, thereby improving the performance of feature extraction.
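A minimal PyTorch sketch of such an SE-ResBlock is given below, assuming 64 feature channels and implementing the two fully connected layers as 1×1 convolutions; the channel count and class name are illustrative assumptions, not values disclosed in the patent.

```python
# Sketch of the SE-ResBlock: two 3x3 convolutions with a ReLU in between and
# no Batch/Instance Norm, followed by a squeeze-and-excitation branch that
# reduces the channel dimension to 1/16, restores it, and rescales each channel.
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(                          # residual branch, no norm layers
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.se = nn.Sequential(                            # squeeze-and-excitation branch
            nn.AdaptiveAvgPool2d(1),                        # global average pooling ("squeeze")
            nn.Conv2d(channels, channels // reduction, 1),  # FC implemented as a 1x1 convolution
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # restore the original dimension
            nn.Sigmoid(),                                   # normalized per-channel weights
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        return x + feat * self.se(feat)                     # Scale operation, then the skip connection
```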
In step S103, the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
In the embodiment of the present invention, when the feature image is deblurred through the scaling convolution unit, preferably, the size of the feature image is first scaled by a preset ratio using preset nearest-neighbor or bilinear interpolation, and the scaled feature image is then up-sampled by a convolution operation to obtain the sharp image corresponding to the feature image. This reduces the checkerboard effect in the resulting sharp image, so that the sharp image can be displayed clearly on higher-resolution display devices.
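A minimal PyTorch sketch of such a scaling convolution (resize-then-convolve) unit, with illustrative channel counts and scale factor:

```python
# Sketch of the scaling convolution unit: interpolate (nearest-neighbour or
# bilinear) to the target size first, then convolve, which avoids the
# checkerboard effect typical of strided transposed convolutions.
import torch
import torch.nn as nn

class ScalingConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, scale: float = 2.0, mode: str = "nearest"):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode=mode)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(self.up(x))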
As an example, Table 1 shows the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) obtained on the GOPRO and Lai datasets with the DeblurGAN network and with the enhanced generative adversarial network of the embodiment of the present invention. It can be seen that the PSNR and SSIM obtained by the enhanced generative adversarial network of this embodiment are significantly higher than those obtained by the DeblurGAN network.
Table 1 (presented as image PCTCN2018081710-appb-000001 in the original publication; it lists the PSNR and SSIM obtained by DeblurGAN and by the enhanced generative adversarial network of this embodiment on the GOPRO and Lai datasets)
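For reference, PSNR and SSIM values of this kind can be computed with scikit-image's standard implementations; this is only an evaluation sketch, and the `channel_axis` argument assumes scikit-image 0.19 or later.

```python
# Sketch of computing PSNR/SSIM between a restored image and its sharp ground
# truth, as used for a comparison such as Table 1 (8-bit images assumed).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(restored: np.ndarray, ground_truth: np.ndarray):
    psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=255)
    ssim = structural_similarity(ground_truth, restored, channel_axis=-1, data_range=255)
    return psnr, ssim
```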
In the embodiment of the present invention, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit; feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image. This reduces the checkerboard effect in the deblurring of the motion-blurred image, improves the sharpness of the restored image, and improves the generalization performance of the enhanced generative adversarial network of the embodiment of the present invention.
Embodiment 2:
FIG. 3 shows the structure of a deblurring apparatus for a motion-blurred image according to Embodiment 2 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, including:
the image input unit 31, configured to, when a deblurring request for a motion-blurred image is received, input the motion-blurred image into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit.
Embodiments of the present invention are applicable to computing devices such as personal computers, smartphones and tablets. When a deblurring request for a motion-blurred image is received, before the motion-blurred image is input into the generator of the pre-trained enhanced generative adversarial network, preferably, the enhanced generative adversarial network is built in advance; it includes two convolution units, nine squeeze-and-excitation residual network units and two scaling convolution units, which improves the generalization performance of the network.
Further preferably, when the generator of the enhanced generative adversarial network is built, all Batch Norm or Instance Norm layers are removed from the generator, which increases the training speed and the stability of the network and avoids destroying the original contrast information of the image.
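One way such a generator could be assembled is sketched below: two convolution units without normalization layers, nine SE residual blocks, and two scaling convolution units. The block factories are passed in so the sketch stays self-contained, and all layer sizes, strides and channel counts are assumptions rather than values disclosed in the patent.

```python
# Structural sketch of the generator: 2 convolution units (no Batch/Instance
# Norm), 9 squeeze-and-excitation residual blocks, 2 scaling convolution units.
from typing import Callable
import torch.nn as nn

def build_generator(se_block: Callable[[int], nn.Module],
                    scaling_conv: Callable[[int, int], nn.Module],
                    ch: int = 64) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, ch, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),       # convolution unit 1
        nn.Conv2d(ch, 2 * ch, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),  # convolution unit 2
        *[se_block(2 * ch) for _ in range(9)],                                              # nine SE-ResBlocks
        scaling_conv(2 * ch, ch),                                                           # scaling convolution unit 1
        scaling_conv(ch, 3),                                                                # scaling convolution unit 2
    )
```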
The feature extraction unit 32 is configured to perform feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image.
In the embodiment of the present invention, preferably, the squeeze-and-excitation residual network unit is composed of a residual network and a squeeze-and-excitation network; the residual network contains no Batch Norm or Instance Norm layer, and the squeeze-and-excitation network includes a global average pooling layer, two fully connected layers, a ReLU layer and a Sigmoid layer, which improves the performance of feature extraction.
The deblurring unit 33 is configured to deblur the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
In the embodiment of the present invention, when the feature image is deblurred through the scaling convolution unit, preferably, the size of the feature image is first scaled by a preset ratio using preset nearest-neighbor or bilinear interpolation, and the scaled feature image is then up-sampled by a convolution operation to obtain the sharp image corresponding to the feature image. This reduces the checkerboard effect in the resulting sharp image, so that the sharp image can be displayed clearly on higher-resolution display devices.
In the embodiment of the present invention, each unit of the motion-blurred-image deblurring apparatus may be implemented by a corresponding hardware or software unit; the units may be independent software or hardware units, or may be integrated into a single software and hardware unit, which is not intended to limit the invention.
Embodiment 3:
FIG. 4 shows the structure of a deblurring apparatus for a motion-blurred image according to Embodiment 3 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, including:
the first network training unit 41, configured to input first training samples into a pre-built first generative adversarial network for training, to obtain the trained first generative adversarial network;
the sample pre-processing unit 42, configured to perform formatting pre-processing on second training samples through the first generative adversarial network;
the sample enhancement unit 43, configured to perform image enhancement on the format-preprocessed second training samples to obtain enhanced second training samples;
the second network training unit 44, configured to input the enhanced second training samples into a pre-built second generative adversarial network for training, to obtain the trained second generative adversarial network, and to set the trained second generative adversarial network as the enhanced generative adversarial network (that is, the pre-trained enhanced generative adversarial network);
the formatting pre-processing unit 45, configured to perform formatting pre-processing on a motion-blurred image through a preset formatting pre-processing layer when a deblurring request for the motion-blurred image is received;
the image enhancement unit 46, configured to perform image enhancement on the format-preprocessed motion-blurred image to obtain an enhanced motion-blurred image;
the image input unit 47, configured to input the enhanced motion-blurred image into a generator of the pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit;
the feature extraction unit 48, configured to perform feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and
the deblurring unit 49, configured to deblur the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
In the embodiment of the present invention, each unit of the motion-blurred-image deblurring apparatus may be implemented by a corresponding hardware or software unit; the units may be independent software or hardware units, or may be integrated into a single software and hardware unit, which is not intended to limit the invention. For the specific implementation of each unit, reference may be made to the description of Embodiment 1 and is not repeated here.
Embodiment 4:
FIG. 5 shows the structure of a computing device according to Embodiment 4 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown.
The computing device 5 of the embodiment of the present invention includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50. When the processor 50 executes the computer program 52, the steps in the above embodiment of the deblurring method for a motion-blurred image are implemented, for example, steps S101 to S103 shown in FIG. 1. Alternatively, when the processor 50 executes the computer program 52, the functions of the units in the above apparatus embodiments are implemented, for example, the functions of units 31 to 33 shown in FIG. 3.
In the embodiment of the present invention, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit; feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image. This reduces the checkerboard effect in the deblurring of the motion-blurred image, improves the sharpness of the restored image, and improves the generalization performance of the enhanced generative adversarial network of the embodiment of the present invention.
The computing device of the embodiment of the present invention may be a personal computer, a smartphone or a tablet. For the steps implemented when the processor 50 in the computing device 5 executes the computer program 52 to carry out the deblurring method for a motion-blurred image, reference may be made to the description of the foregoing method embodiments and is not repeated here.
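As a usage illustration, steps S101 to S103 could be driven on such a device roughly as follows, assuming a trained generator exported as a TorchScript file; the file path, the tensor layout and the value range are assumptions for illustration only.

```python
# End-to-end inference sketch for steps S101-S103: load the trained generator,
# feed it the (pre-processed) motion-blurred image, and recover the sharp image.
import numpy as np
import torch

def deblur(blurred_rgb: np.ndarray, model_path: str = "generator.pt") -> np.ndarray:
    generator = torch.jit.load(model_path).eval()
    x = torch.from_numpy(blurred_rgb).permute(2, 0, 1).unsqueeze(0).float() / 255.0
    with torch.no_grad():
        y = generator(x)          # SE residual blocks extract features,
                                  # scaling convolutions produce the sharp image
    y = (y.clamp(0, 1)[0].permute(1, 2, 0) * 255.0).byte().cpu().numpy()
    return y
```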
Embodiment 5:
In the embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps in the above embodiment of the deblurring method for a motion-blurred image, for example, steps S101 to S103 shown in FIG. 1. Alternatively, when executed by the processor, the computer program implements the functions of the units in the above apparatus embodiments, for example, the functions of units 31 to 33 shown in FIG. 3.
In the embodiment of the present invention, when a deblurring request for a motion-blurred image is received, the motion-blurred image is input into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit; feature extraction is performed on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and the feature image is deblurred through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image. This reduces the checkerboard effect in the deblurring of the motion-blurred image, improves the sharpness of the restored image, and improves the generalization performance of the enhanced generative adversarial network of the embodiment of the present invention.
The computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, or a recording medium such as a ROM/RAM, a magnetic disk, an optical disk or a flash memory.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

  1. A deblurring method for a motion-blurred image, characterized in that the method comprises the following steps:
    when a deblurring request for the motion-blurred image is received, inputting the motion-blurred image into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit;
    performing feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and
    deblurring the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
  2. The method according to claim 1, characterized in that, before the step of inputting the motion-blurred image into the generator of the pre-trained enhanced generative adversarial network, the method comprises:
    when a deblurring request for the motion-blurred image is received, performing formatting pre-processing on the motion-blurred image through a preset formatting pre-processing layer; and
    performing image enhancement on the format-preprocessed motion-blurred image to obtain the enhanced motion-blurred image.
  3. The method according to claim 1, characterized in that the squeeze-and-excitation residual network unit is composed of a residual network and a squeeze-and-excitation network.
  4. The method according to claim 1, characterized in that, before the step of, when a deblurring request for the motion-blurred image is received, inputting the motion-blurred image into the generator of the pre-trained enhanced generative adversarial network, the method comprises:
    inputting first training samples into a pre-built first generative adversarial network for training, to obtain the trained first generative adversarial network;
    performing formatting pre-processing on second training samples through the first generative adversarial network;
    performing image enhancement on the format-preprocessed second training samples to obtain the enhanced second training samples; and
    inputting the enhanced second training samples into a pre-built second generative adversarial network for training, to obtain the trained second generative adversarial network, and setting the trained second generative adversarial network as the enhanced generative adversarial network.
  5. A deblurring apparatus for a motion-blurred image, characterized in that the apparatus comprises:
    an image input unit, configured to, when a deblurring request for the motion-blurred image is received, input the motion-blurred image into a generator of a pre-trained enhanced generative adversarial network, the generator comprising a squeeze-and-excitation residual network unit and a scaling convolution unit;
    a feature extraction unit, configured to perform feature extraction on the motion-blurred image through the squeeze-and-excitation residual network unit to obtain a feature image corresponding to the motion-blurred image; and
    a deblurring unit, configured to deblur the feature image through the scaling convolution unit to obtain a sharp image corresponding to the motion-blurred image.
  6. The apparatus according to claim 5, characterized in that the apparatus further comprises:
    a formatting pre-processing unit, configured to perform formatting pre-processing on the motion-blurred image through a preset formatting pre-processing layer when a deblurring request for the motion-blurred image is received; and
    an image enhancement unit, configured to perform image enhancement on the format-preprocessed motion-blurred image to obtain the enhanced motion-blurred image.
  7. The apparatus according to claim 5, characterized in that the squeeze-and-excitation residual network unit is composed of a residual network and a squeeze-and-excitation network.
  8. The apparatus according to claim 5, characterized in that the apparatus further comprises:
    a first network training unit, configured to input first training samples into a pre-built first generative adversarial network for training, to obtain the trained first generative adversarial network;
    a sample pre-processing unit, configured to perform formatting pre-processing on second training samples through the first generative adversarial network;
    a sample enhancement unit, configured to perform image enhancement on the format-preprocessed second training samples to obtain the enhanced second training samples; and
    a second network training unit, configured to input the enhanced second training samples into a pre-built second generative adversarial network for training, to obtain the trained second generative adversarial network, and to set the trained second generative adversarial network as the enhanced generative adversarial network.
  9. A computing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 4.
  10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
PCT/CN2018/081710 2018-03-22 2018-04-03 Deblurring method, apparatus, device and storage medium for motion-blurred images WO2019178893A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810240092.9 2018-03-22
CN201810240092.9A CN108550118B (zh) 2018-03-22 2018-03-22 Deblurring method, apparatus, device and storage medium for motion-blurred images

Publications (1)

Publication Number Publication Date
WO2019178893A1 (zh)

Family

ID=63516720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/081710 WO2019178893A1 (zh) 2018-03-22 2018-04-03 Deblurring method, apparatus, device and storage medium for motion-blurred images

Country Status (2)

Country Link
CN (1) CN108550118B (zh)
WO (1) WO2019178893A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275637A (zh) * 2020-01-15 2020-06-12 北京工业大学 Attention-model-based adaptive restoration method for non-uniformly motion-blurred images
CN111369451A (zh) * 2020-02-24 2020-07-03 西华大学 Image restoration model, method and device based on complex-task-decomposition regularization
CN111460939A (zh) * 2020-03-20 2020-07-28 深圳市优必选科技股份有限公司 Deblurred face recognition method and system, and inspection robot
CN112419201A (zh) * 2020-12-04 2021-02-26 珠海亿智电子科技有限公司 Image deblurring method based on a residual network
CN112435179A (zh) * 2020-11-11 2021-03-02 北京工业大学 Method, apparatus and electronic device for processing blurred pollen particle images
CN112904548A (zh) * 2019-12-03 2021-06-04 精微视达医疗科技(武汉)有限公司 Endoscope focusing method and apparatus

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109712092B (zh) * 2018-12-18 2021-01-05 上海信联信息发展股份有限公司 Method, apparatus and electronic device for restoring scanned archive images
CN109859141B (zh) * 2019-02-18 2022-05-27 安徽理工大学 Denoising method for deep vertical shaft wall images
CN110070517B (zh) * 2019-03-14 2021-05-25 安徽艾睿思智能科技有限公司 Blurred-image synthesis method based on a degradation imaging mechanism and a generative adversarial mechanism
CN110012145B (zh) * 2019-04-08 2021-01-05 北京易诚高科科技发展有限公司 Method for evaluating the anti-shake function of a mobile phone based on image blurriness
CN110074813B (zh) * 2019-04-26 2022-03-04 深圳大学 Ultrasound image reconstruction method and system
CN111340716B (zh) * 2019-11-20 2022-12-27 电子科技大学成都学院 Image deblurring method with an improved dual-discriminator adversarial network model
CN111768826B (zh) * 2020-06-30 2023-06-27 深圳平安智慧医健科技有限公司 Electronic health record generation method, apparatus, terminal device and storage medium
CN112598593B (zh) * 2020-12-25 2022-05-27 吉林大学 Seismic noise suppression method based on an unbalanced deep expected patch log-likelihood network
CN112651894A (zh) * 2020-12-29 2021-04-13 湖北工业大学 Image deblurring algorithm using an RRDB-based generative adversarial network
CN112949460B (zh) * 2021-02-26 2024-02-13 陕西理工大学 Video-based human behavior network model and recognition method
CN113313180B (zh) * 2021-06-04 2022-08-16 太原理工大学 Semantic segmentation method for remote sensing images based on deep adversarial learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330954A (zh) * 2017-07-14 2017-11-07 深圳市唯特视科技有限公司 Method for manipulating images via sliding attributes based on an attenuation network
CN107590774A (zh) * 2017-09-18 2018-01-16 北京邮电大学 License-plate sharpening method and apparatus based on a generative adversarial network
CN107730458A (zh) * 2017-09-05 2018-02-23 北京飞搜科技有限公司 Blurred face reconstruction method and system based on a generative adversarial network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091340A (zh) * 2014-07-18 2014-10-08 厦门美图之家科技有限公司 Rapid detection method for blurred images
CN105118031B (zh) * 2015-08-11 2017-11-03 中国科学院计算技术研究所 Image processing method for recovering depth information
CN106204467B (zh) * 2016-06-27 2021-07-09 深圳市未来媒体技术研究院 Image denoising method based on a cascaded residual neural network
CN106952239A (zh) * 2017-03-28 2017-07-14 厦门幻世网络科技有限公司 Image generation method and apparatus
CN107527044B (zh) * 2017-09-18 2021-04-30 北京邮电大学 Search-based method and apparatus for sharpening multiple license-plate images
CN107609560A (zh) * 2017-09-27 2018-01-19 北京小米移动软件有限公司 Character recognition method and apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330954A (zh) * 2017-07-14 2017-11-07 深圳市唯特视科技有限公司 Method for manipulating images via sliding attributes based on an attenuation network
CN107730458A (zh) * 2017-09-05 2018-02-23 北京飞搜科技有限公司 Blurred face reconstruction method and system based on a generative adversarial network
CN107590774A (zh) * 2017-09-18 2018-01-16 北京邮电大学 License-plate sharpening method and apparatus based on a generative adversarial network

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112904548A (zh) * 2019-12-03 2021-06-04 精微视达医疗科技(武汉)有限公司 Endoscope focusing method and apparatus
CN112904548B (zh) * 2019-12-03 2023-06-09 精微视达医疗科技(武汉)有限公司 Endoscope focusing method and apparatus
CN111275637A (zh) * 2020-01-15 2020-06-12 北京工业大学 Attention-model-based adaptive restoration method for non-uniformly motion-blurred images
CN111275637B (zh) * 2020-01-15 2024-01-30 北京工业大学 Attention-model-based adaptive restoration method for non-uniformly motion-blurred images
CN111369451A (zh) * 2020-02-24 2020-07-03 西华大学 Image restoration model, method and device based on complex-task-decomposition regularization
CN111460939A (zh) * 2020-03-20 2020-07-28 深圳市优必选科技股份有限公司 Deblurred face recognition method and system, and inspection robot
CN112435179A (zh) * 2020-11-11 2021-03-02 北京工业大学 Method, apparatus and electronic device for processing blurred pollen particle images
CN112419201A (zh) * 2020-12-04 2021-02-26 珠海亿智电子科技有限公司 Image deblurring method based on a residual network

Also Published As

Publication number Publication date
CN108550118B (zh) 2022-02-22
CN108550118A (zh) 2018-09-18

Similar Documents

Publication Publication Date Title
WO2019178893A1 (zh) Deblurring method, apparatus, device and storage medium for motion-blurred images
CN107392852B (zh) 深度图像的超分辨率重建方法、装置、设备及存储介质
Hussein et al. Image-adaptive GAN based reconstruction
Xu et al. Learning to super-resolve blurry face and text images
WO2022110638A1 (zh) Portrait restoration method and apparatus, electronic device, storage medium and program product
Sun et al. Lightweight image super-resolution via weighted multi-scale residual network
EP3579180A1 (en) Image processing method and apparatus, electronic device and non-transitory computer-readable recording medium for selective image enhancement
Ren et al. Face video deblurring using 3D facial priors
CN111507333B (zh) 一种图像矫正方法、装置、电子设备和存储介质
Dong et al. Learning spatially variant linear representation models for joint filtering
CN114255337A (zh) 文档图像的矫正方法、装置、电子设备及存储介质
Lin et al. Learning to deblur face images via sketch synthesis
CN111932480A (zh) 去模糊视频恢复方法、装置、终端设备以及存储介质
CN114283080A (zh) 一种多模态特征融合的文本指导图像压缩噪声去除方法
CN114529982A (zh) 基于流式注意力的轻量级人体姿态估计方法及系统
JP2023502653A (ja) 人工知能ニューラルネットワークの推論または訓練に対する、故意に歪みを制御する撮像装置の利用
CN117253054B (zh) 一种光场显著性检测方法及其相关设备
Sharif et al. DarkDeblur: Learning single-shot image deblurring in low-light condition
CN111476741B (zh) 图像的去噪方法、装置、电子设备和计算机可读介质
Qi et al. Blind face images deblurring with enhancement
Jiang et al. Image motion deblurring based on deep residual shrinkage and generative adversarial networks
Yang et al. Deblurring and super-resolution using deep gated fusion attention networks for face images
US20230110393A1 (en) System and method for image transformation
Sharma et al. Evaluation of generative adversarial network generated super resolution images for micro expression recognition
CN112634126A (zh) 人像减龄处理方法、训练方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18910593

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.01.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18910593

Country of ref document: EP

Kind code of ref document: A1