CN111275626A - Video deblurring method, device and equipment based on blur degree


Info

Publication number
CN111275626A
Authority
CN
China
Prior art keywords
frame
pixel point
image
video
blurred
Prior art date
2018-12-05
Legal status
Granted
Application number
CN201811477483.9A
Other languages
Chinese (zh)
Other versions
CN111275626B (en)
Inventor
王雪松
Current Assignee
Shenzhen Weibo Technology Co ltd
Original Assignee
Shenzhen Weibo Technology Co ltd
Priority date
2018-12-05
Filing date
2018-12-05
Publication date
2020-06-12
Application filed by Shenzhen Weibo Technology Co ltd filed Critical Shenzhen Weibo Technology Co ltd
Priority to CN201811477483.9A priority Critical patent/CN111275626B/en
Publication of CN111275626A publication Critical patent/CN111275626A/en
Application granted granted Critical
Publication of CN111275626B publication Critical patent/CN111275626B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention is applicable to the technical field of computer vision and image processing, and discloses a video deblurring method, device and equipment based on blur degree, comprising the following steps: calculating the blur degree of the video frames; determining sharp frames and blurred frames according to the blur degree; generating a reference frame from the sharp frames and the blurred frames; extracting image blocks from the blurred frames and the reference frame; performing weighted fusion according to the weights corresponding to the pixel points in the image blocks to obtain fused image blocks; and recombining the fused image blocks to obtain an output image. Because no blur kernel needs to be estimated (sharp and blurred frames are determined by computing the blur degree of each video frame), the computational complexity is effectively reduced and the computation speed is improved. In addition, the weight of the reference frame is taken into account: weighted fusion is performed according to the weights of the pixel points in the extracted image blocks, so the final output image is sharper.

Description

Video deblurring method, device and equipment based on blur degree
Technical Field
The invention belongs to the technical field of computer vision and image processing, and particularly relates to a video deblurring method, device and equipment based on blur degree.
Background
A video sequence can exhibit irregular motion caused by posture changes of the capture device or by motion interference acting on the person holding it, for example device shake, an uneven road while driving, or hand tremor, so the video pictures obtained after imaging are blurred. Blurred video not only gives an extremely poor viewing experience but also hinders observing and extracting useful information from the video, so blurred video needs deblurring processing.
At present, methods for deblurring a video mainly rely on a blur kernel. Depending on whether the kernel is known, they are classified as non-blind deblurring and blind deblurring. Non-blind deblurring must be carried out with the blur kernel known in advance, but the kernel cannot be known beforehand for video captured in different scenes. Blind deblurring has to estimate the blur kernel, which requires a large number of operations and results in high computational complexity.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video deblurring method, apparatus and terminal device based on blur degree, to solve the problem in the prior art that the computational complexity of video deblurring is too high.
A first aspect of the embodiments of the present invention provides a video deblurring method based on blur degree, including:
calculating the blur degree of the video frames;
determining sharp frames and blurred frames according to the blur degree;
generating a reference frame from the sharp frames and the blurred frames;
extracting image blocks from the blurred frames and the reference frame;
performing weighted fusion according to the weights corresponding to the pixel points in the image blocks to obtain fused image blocks;
and recombining the fused image blocks to obtain an output image.
A second aspect of the embodiments of the present invention provides a video deblurring apparatus based on blur degree, including:
a blur degree calculation module for calculating the blur degree of the video frames;
a sharp frame and blurred frame determination module for determining sharp frames and blurred frames according to the blur degree;
a reference frame generation module for generating a reference frame from the sharp frames and the blurred frames;
an image block extraction module for extracting image blocks from the blurred frames and the reference frame;
a weighted fusion module for performing weighted fusion according to the weights corresponding to the pixel points in the image blocks to obtain fused image blocks;
and an image block recombination module for recombining the fused image blocks to obtain an output image.
A third aspect of the embodiments of the present invention provides a video deblurring device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the method of the first aspect.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: because no blur kernel needs to be estimated (sharp and blurred frames are determined by computing the blur degree of each video frame), the computational complexity is effectively reduced and the computation speed is improved; in addition, the weight of the reference frame is taken into account, and weighted fusion is performed according to the weights of the pixel points in the extracted image blocks, so the final output image is sharper.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic flow chart of a video deblurring method based on blur degree according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the specific steps of generating a reference frame from the sharp frames and the blurred frames;
FIG. 3 is a schematic diagram of the image blocks extracted from an image;
FIG. 4 is a schematic diagram of a video deblurring apparatus based on blur degree according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a video deblurring device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example one:
referring to fig. 1, fig. 1 is a schematic flow chart of a video deblurring method based on ambiguity according to an embodiment of the present invention, which is detailed as follows:
step S101: the blurriness of the video frame is calculated.
Preferably, before calculating the blur degree of the video frames, the method includes: acquiring the video to be processed. After the video to be processed is acquired, it is parsed into video frames at a certain frame rate, for example 50 fps.
Specifically, calculating the blur degree of a video frame includes:
performing graying processing on the video frame to obtain a grayscale image;
filtering the grayscale image to obtain a filtered image;
and calculating the variance of the filtered image to obtain the blur degree of the video frame.
In general, the parsed video frames are color images, so to reduce the amount of calculation the video frame is first grayed, that is, the color image is converted into a grayscale image. Of course, if the video frame itself is already a grayscale image, graying is not required.
The grayscale image obtained after graying is then filtered in order to compute the edge gradients of the image. Typically, Laplacian filtering can be selected for this step; the filtered image is obtained after Laplacian filtering.
Next the variance of each filtered image is calculated, according to the following variance function:
D(f) = Σ_y Σ_x |f(x, y) - μ|²
In the above formula, D(f) is the calculated variance, f(x, y) is the gray value (pixel value) of a pixel in the image, μ is the average gray value of the whole image, x is the abscissa of the pixel, and y is its ordinate. The variance is used to represent the blur degree of the filtered image, giving the blur degree of the video frame.
It should be noted that, because a sharp image has larger edge-gradient differences than a blurred one, the variance and the blur degree are negatively correlated: the larger the variance, the smaller the blur degree of the corresponding filtered image, i.e. the higher its sharpness; the smaller the variance, the greater the blur degree, i.e. the lower the sharpness.
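To make the three-step computation concrete, here is a minimal sketch in Python with OpenCV; the function name and the choice of cv2.Laplacian for the filtering step are illustrative assumptions, not the patent's reference implementation:

```python
import cv2

def blur_degree(frame):
    """Blur score of one video frame: variance of the Laplacian-filtered
    grayscale image. The score is a *variance*, so it is negatively
    correlated with blur: a larger value means a sharper frame."""
    if frame.ndim == 3:
        # Color frame: convert to grayscale first to reduce computation.
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    filtered = cv2.Laplacian(frame, cv2.CV_64F)  # edge gradients
    # D(f) above sums |f(x, y) - mu|^2, while var() averages; the two
    # differ only by a constant factor and rank frames identically.
    return filtered.var()
```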
Step S102: determine sharp frames and blurred frames according to the blur degree.
Specifically, determining sharp frames and blurred frames according to the blur degree includes:
sorting the video frames by blur degree to obtain a sorted image sequence;
and selecting from the image sequence several video frames with larger blur degree as blurred frames and several video frames with smaller blur degree as sharp frames.
The video frames are sorted according to the obtained blur degree, either from large to small or from small to large. For illustration, in this embodiment the blur degree is sorted from large to small: the a video frames ranked highest (largest blur degree) are selected as blurred frames, and the b video frames ranked lowest (smallest blur degree) are selected as sharp frames. Here a and b can be set as required; for convenience of explanation, this embodiment takes a = 3 and b = 2, that is, the three video frames with the largest blur degree are selected as blurred frames and the two with the smallest blur degree are selected as sharp frames.
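A sketch of this selection, reusing blur_degree() from the previous snippet (a = 3 and b = 2 follow the example values in the text; sorting by the variance score is an assumption consistent with the negative correlation noted in step S101):

```python
import numpy as np

def split_frames(frames, a=3, b=2):
    """Pick the a most blurred and the b sharpest frames.
    blur_degree() returns Laplacian variance, so ascending variance
    order is descending blur-degree order."""
    order = np.argsort([blur_degree(f) for f in frames])
    blurred = [frames[i] for i in order[:a]]   # smallest variance = most blurred
    sharp = [frames[i] for i in order[-b:]]    # largest variance = sharpest
    return sharp, blurred
```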
Step S103: generate a reference frame from the sharp frames and the blurred frames.
Preferably, before generating the reference frame from the sharp frames and the blurred frames, a downsampling operation (reducing the image resolution) is performed on the sharp and blurred frames to reduce the amount of calculation and increase processing speed. The principle of downsampling is as follows: assuming an image of size M × N (M and N are numbers of pixels), downsampling it at sampling rate s yields an image of resolution (M/s) × (N/s).
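For example, a helper of this kind realizes the (M/s) × (N/s) reduction (the use of cv2.resize with INTER_AREA is an assumption; any standard resize works):

```python
import cv2

def downsample(img, s=2):
    """Shrink an M x N image to (M/s) x (N/s) before optical-flow alignment."""
    h, w = img.shape[:2]
    return cv2.resize(img, (w // s, h // s), interpolation=cv2.INTER_AREA)
```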
Referring to fig. 2, fig. 2 shows the specific steps of generating a reference frame from the sharp frames and the blurred frames, which are as follows:
A1. calculate the forward optical flow and the backward optical flow between two adjacent frames by an optical flow method, where the two adjacent frames comprise a sharp frame and a blurred frame;
A2. take a pixel point of the blurred frame as the current pixel point; the forward optical flow is the forward motion vector of the current pixel point pointing from the blurred frame to the sharp frame;
A3. according to the forward optical flow, compute the pixel point of the sharp frame to which the current pixel point points, i.e. the forward pixel point; the backward optical flow is the backward motion vector of the forward pixel point pointing from the sharp frame back to the blurred frame;
A4. according to the backward optical flow, compute the pixel point of the blurred frame to which the forward pixel point points, i.e. the backward pixel point;
A5. calculate the position error between the current pixel point and the backward pixel point;
A6. generate the reference frame according to the position error.
Further, step A6 specifically includes the following steps:
B1. construct a mask according to the position error, specifically: if the position error is smaller than a first preset value, mark the current pixel point as 1, otherwise mark it as 0;
B2. if the position error is smaller than a second preset value, take the forward pixel point as the reference point of the current pixel point;
B3. generate a reference alignment frame according to the reference points;
B4. generate the reference frame from the mask, the blurred frame and the reference alignment frame.
Here the optical flow method is the TVL1 optical flow method. Various optical flow methods are in use, such as the TVL1, LK, coarse-to-fine LK (ctfLK) and HS methods; since TVL1 achieves a better effect when objects are occluded, this embodiment adopts the TVL1 optical flow method.
The forward and backward optical flow between two adjacent frames, one sharp and one blurred, can be calculated by the TVL1 method, with the blurred frame as the backward reference frame and the sharp frame as the forward reference frame. Alignment of the images can be realized through TVL1 optical flow: feature points are extracted from the blurred frame, and similar feature points are then found at the corresponding positions of the sharp frame.
Select a pixel point of the blurred frame, say point A, as the current pixel point; the forward optical flow is the forward motion vector of the current pixel point pointing from the blurred frame to the sharp frame. The forward optical flow is calculated by the TVL1 method, and the pixel point of the sharp frame to which point A points, i.e. the forward pixel point, is then computed; assume this forward pixel point is point B.
The backward optical flow is the backward motion vector of the forward pixel point (point B) pointing from the sharp frame back to the blurred frame. Calculate the backward optical flow by the TVL1 method, then compute the pixel point of the blurred frame to which point B points, i.e. the backward pixel point; assume this backward pixel point is point C.
It should be noted that after the forward and backward optical flows are obtained, an upsampling operation needs to be performed on the image: as mentioned in step S103, the sharp and blurred frames were first downsampled to reduce computation and increase processing speed, so the reduced images must be restored to their original size by upsampling. Upsampling is usually realized by inserting new elements between the pixel points of the original image and can be done in various ways; for a better result, bicubic spline interpolation can be used, and if faster computation is desired, bilinear interpolation can be used instead.
As described above, points A and C are both located on the blurred frame, while point B is located on the sharp frame. To align the feature points of the blurred and sharp frames, the position error between point A and point C must be calculated. Assume the position of point A on the blurred frame is (x1, y1) and the position of point C on the blurred frame is (x2, y2); the position error between A and C can then be calculated by:
e = √((x1 - x2)² + (y1 - y2)²)
In the above equation, e is the calculated position error.
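Steps A1 to A5 can be sketched as follows; the TVL1 class name comes from the opencv-contrib package and varies between OpenCV versions, so treat the exact API as an assumption:

```python
import cv2
import numpy as np

def position_error(blurred, sharp):
    """Per-pixel position error e between each current pixel A of the
    blurred frame and its round-trip point C (A -> B via forward flow,
    B -> C via backward flow). Also returns the forward flow for reuse."""
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()   # opencv-contrib
    gb = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
    gs = cv2.cvtColor(sharp, cv2.COLOR_BGR2GRAY)
    fwd = tvl1.calc(gb, gs, None)   # forward flow: blurred -> sharp
    bwd = tvl1.calc(gs, gb, None)   # backward flow: sharp -> blurred
    h, w = gb.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    bx, by = xs + fwd[..., 0], ys + fwd[..., 1]       # forward pixel B
    # Sample the backward flow at B, then follow it to C.
    bwd_at_b = cv2.remap(bwd, bx, by, cv2.INTER_LINEAR)
    cx, cy = bx + bwd_at_b[..., 0], by + bwd_at_b[..., 1]
    return np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2), fwd
```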
Then a mask is constructed according to the position error, specifically: if the position error is smaller than a first preset value, the current pixel point is marked as 1, otherwise it is marked as 0.
The first preset value may be set as required, for example to 1e-3 or another value, which is not limited here.
A mask controls which region of an image is processed by blocking the processed image with a selected image or shape.
It should be noted that after the mask is obtained, it needs to be Gaussian-blurred so that the replaced pixel points blend smoothly with the surrounding unreplaced ones instead of appearing fused together too abruptly.
The reference points are then determined as follows: if the position error is smaller than a second preset value, the forward pixel point is taken as the reference point of the current pixel point.
The second preset value can be set as required, for example to 1e-4, i.e. 0.0001.
When the position error is smaller than the second preset value, the current pixel point A and the backward pixel point C can be treated as approximately the same pixel point, and then A and its forward pixel point B are the same pixel point located on different images, so B is taken as the reference point of A. If the calculated position error is greater than or equal to the second preset value, A and C are not the same pixel point, i.e. point A has no reference point.
After the reference points are determined, the reference alignment frame is generated from them as follows: each current pixel point is replaced with its reference point, and a current pixel point that has no reference point is replaced with 0.
Then the reference frame is generated from the mask, the blurred frame and the reference alignment frame, as follows: taking the mask as the weight, the blurred frame and the reference alignment frame are weighted and fused into the reference frame according to the following formula:
Î = cMAP · I_m + (1 - cMAP) · I_M
In the above formula, cMAP is the mask, I_m is the generated reference alignment frame, I_M is the blurred frame, and Î is the generated reference frame.
Step S104: extract image blocks from the blurred frames and the reference frame.
The blurred frames and the reference frame are combined into a set of frames to be processed, and image blocks are extracted from every frame in the set.
Multiple image blocks are extracted from each frame to be processed; for example, if an image has size 512 × 512 and the extracted block size is 128 × 128, 16 image blocks can be extracted from it, as shown in fig. 3.
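A minimal extraction-and-reassembly sketch (non-overlapping blocks, assuming the frame dimensions are multiples of the block size; the reassembly loop is the inverse operation used later in step S106):

```python
import numpy as np

def extract_blocks(img, size=128):
    """Split an image into non-overlapping size x size blocks;
    a 512 x 512 image yields 16 blocks, as in FIG. 3."""
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h, size) for x in range(0, w, size)]

def reassemble(blocks, h, w, size=128):
    """Step S106: splice the fused blocks back into an h x w image."""
    out = np.zeros((h, w) + blocks[0].shape[2:], dtype=blocks[0].dtype)
    it = iter(blocks)
    for y in range(0, h, size):
        for x in range(0, w, size):
            out[y:y + size, x:x + size] = next(it)
    return out
```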
Step S105: perform weighted fusion according to the weights corresponding to the pixel points in the image blocks, to obtain fused image blocks.
Specifically, performing weighted fusion according to the weights corresponding to the pixel points in the image blocks to obtain fused image blocks includes:
performing a Fourier transform (FFT) on the pixel points in the image blocks to obtain the FFT value of each pixel point;
calculating the weight of each pixel point, specifically: Gaussian-blurring the FFT value to obtain a blurred FFT value, and taking the 11th power of the blurred FFT value as the weight of the pixel point;
and then performing weighted fusion on the pixel points at the same position in each image block according to the following formula:
F̂ = Σ_m (W_m · F_m) / (Σ_m W_m + ε)
where W_m is the weight of a pixel point, F_m is the FFT value of the pixel point, m is the index of the pixel point, ε is a value that prevents the denominator from being zero, and F̂ is the resulting weighted fusion value;
performing inverse Fourier transform (IFFT) on the weighted fusion value to obtain a fusion pixel value of the pixel point;
and combining all the pixel points to obtain a fused image block.
Further, performing the Fourier FFT on the pixel points in an image block to obtain the FFT values of the pixel points specifically includes:
calculating the FFT component values corresponding to the red R, green G and blue B channels of each pixel point respectively;
and averaging the FFT component values to obtain the FFT value of the pixel point.
The blur degree of an image is positively correlated with the attenuation of its Fourier coefficients: when the blur degree is large, the Fourier coefficients are strongly attenuated; when it is small, they are only slightly attenuated; and the Fourier coefficients of an unblurred image are not attenuated. Therefore the image can be transformed into the frequency domain by FFT and the weight of each image block's pixel points calculated there, which effectively enlarges the weights of the pixel points from sharp frames and makes the final restored output image sharper.
Since the pixel value of each pixel point is composed of the components of three channels, red R, green G and blue B, the FFT component values of the R, G and B channels of the pixel point are calculated separately. Since the image is two-dimensional, the calculated FFT component values are complex numbers; assume the three FFT component values are a1+jb1, a2+jb2 and a3+jb3. The FFT value of the pixel point can then be calculated as the average of the moduli of the three components:
F = (√(a1² + b1²) + √(a2² + b2²) + √(a3² + b3²)) / 3
where F is the FFT value of the pixel point.
Then the weight of the pixel point is calculated, as follows: Gaussian-blur the FFT value to obtain a blurred FFT value, and take the 11th power of the blurred FFT value as the weight of the pixel point.
Next, the pixel points at the same position in each image block are weighted and fused according to the following formula:
F̂ = Σ_m (W_m · F_m) / (Σ_m W_m + ε)
where W_m is the weight of a pixel point, F_m is its FFT value, m is the index of the pixel point, ε is a value that prevents the denominator from being zero, and F̂ is the resulting weighted fusion value.
Here ε can be set as required, for example to 1e-8; its purpose is to keep the denominator of the above formula from being zero, which would make the subsequent calculation impossible.
After the weighted fusion value is obtained, since it was computed in the frequency domain, an inverse Fourier transform (IFFT) must be applied to it to obtain the corresponding time-domain value, i.e. the fused pixel value of the pixel point. The fused pixel value is the final pixel value of the pixel point after the weighted fusion processing, and combining all the pixel points yields the fused image block.
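The whole frequency-domain fusion of step S105 can be sketched as follows for the blocks at one position across the m frames of the set; the Gaussian kernel size is an assumption, and the channel-averaged FFT magnitude follows the modulus-average formula above:

```python
import cv2
import numpy as np

def fuse_colocated_blocks(blocks, eps=1e-8):
    """Fuse the blocks at one position across all frames to be processed:
    per-channel FFT, channel-averaged magnitude, Gaussian blur, 11th power
    as weight W_m, eps-guarded weighted average, then inverse FFT."""
    ffts, weights = [], []
    for b in blocks:
        F = np.stack([np.fft.fft2(b[..., c]) for c in range(3)], axis=-1)
        mag = np.abs(F).mean(axis=-1)            # average of the R/G/B moduli
        mag = cv2.GaussianBlur(mag, (5, 5), 0)   # blurred FFT value
        ffts.append(F)
        weights.append(mag ** 11)                # weight W_m
    W = np.stack(weights)[..., None]             # shape (m, H, W, 1)
    F = np.stack(ffts)                           # shape (m, H, W, 3)
    fused = (W * F).sum(axis=0) / (W.sum(axis=0) + eps)
    out = np.stack([np.fft.ifft2(fused[..., c]).real for c in range(3)],
                   axis=-1)
    return np.clip(out, 0, 255).astype(np.uint8)
```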
Step S106: recombine the fused image blocks to obtain the output image.
The image blocks obtained after fusion are recombined, i.e. all image blocks are spliced together to obtain the output image, which is the deblurred sharp image. As shown in fig. 3, for the weighted fusion processing the original image was divided into 16 image blocks; after processing, these blocks are merged back together to obtain the restored output image.
In this embodiment, because the blur degree of the video frames is calculated and the sharp and blurred frames are determined from it, no blur kernel needs to be estimated, which effectively reduces the computational complexity and improves the computation speed; and because the weight of the reference frame is taken into account and weighted fusion is performed according to the weights of the pixel points in the extracted image blocks, the final output image is sharper.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example two:
Fig. 4 shows a video deblurring apparatus based on blur degree according to an embodiment of the present invention, the apparatus including: a blur degree calculation module 41, a sharp frame and blurred frame determination module 42, a reference frame generation module 43, an image block extraction module 44, a weighted fusion module 45 and an image block recombination module 46.
The blur degree calculation module 41 is configured to calculate the blur degree of the video frames.
Further, the blur degree calculation module 41 specifically includes:
a graying processing unit 411, configured to perform graying processing on a video frame to obtain a grayscale image;
a filtering unit 412, configured to filter the grayscale image to obtain a filtered image;
and a variance calculation unit 413, configured to calculate the variance of the filtered image to obtain the blur degree of the video frame.
The sharp frame and blurred frame determination module 42 is configured to determine sharp frames and blurred frames according to the blur degree.
Further, the sharp frame and blurred frame determination module 42 specifically includes:
a blur degree sorting unit 421, configured to sort the video frames by blur degree to obtain a sorted image sequence;
and a blurred frame and sharp frame selection unit 422, configured to select from the image sequence several video frames with larger blur degree as blurred frames and several with smaller blur degree as sharp frames.
The reference frame generation module 43 is configured to generate a reference frame from the sharp frames and the blurred frames.
Further, the reference frame generation module 43 further includes:
an optical flow calculation unit 431, configured to calculate the forward and backward optical flow between two adjacent frames by an optical flow method, where the two adjacent frames comprise a sharp frame and a blurred frame;
a current pixel point selection unit 432, configured to select a pixel point of the blurred frame as the current pixel point, the forward optical flow being the forward motion vector of the current pixel point pointing from the blurred frame to the sharp frame;
a forward pixel point calculation unit 433, configured to compute, according to the forward optical flow, the pixel point of the sharp frame to which the current pixel point points, i.e. the forward pixel point, the backward optical flow being the backward motion vector of the forward pixel point pointing from the sharp frame back to the blurred frame;
a backward pixel point calculation unit 434, configured to compute, according to the backward optical flow, the pixel point of the blurred frame to which the forward pixel point points, i.e. the backward pixel point;
a position error calculation unit 435, configured to calculate the position error between the current pixel point and the backward pixel point;
and a reference frame generation unit 436, configured to generate the reference frame according to the position error.
Further, the reference frame generation unit 436 specifically includes:
a mask construction subunit 4361, configured to construct a mask according to the position error, specifically: if the position error is smaller than a first preset value, mark the current pixel point as 1, otherwise mark it as 0;
a reference point determination subunit 4362, configured to take the forward pixel point as the reference point of the current pixel point if the position error is smaller than a second preset value;
a reference alignment frame generation subunit 4363, configured to generate a reference alignment frame according to the reference points;
and a reference frame generation subunit 4364, configured to generate the reference frame from the mask, the blurred frame and the reference alignment frame.
The image block extraction module 44 is configured to extract image blocks from the blurred frames and the reference frame.
The weighted fusion module 45 is configured to perform weighted fusion according to the weights corresponding to the pixel points in the image blocks, to obtain fused image blocks.
Further, the weighted fusion module 45 specifically includes:
an FFT transform unit 451, configured to perform a Fourier FFT on the pixel points in the image blocks to obtain the FFT value of each pixel point;
a weight calculation unit 452, configured to calculate the weight of each pixel point, specifically: Gaussian-blur the FFT value to obtain a blurred FFT value, and take the 11th power of the blurred FFT value as the weight of the pixel point;
a weighted fusion unit 453, configured to perform weighted fusion on the pixel points at the same position in each image block according to the following formula:
F̂ = Σ_m (W_m · F_m) / (Σ_m W_m + ε)
where W_m is the weight of a pixel point, F_m is its FFT value, m is the index of the pixel point, ε is a value that prevents the denominator from being zero, and F̂ is the resulting weighted fusion value;
an IFFT transform unit 454, configured to perform an inverse Fourier transform on the weighted fusion value to obtain the fused pixel value of the pixel point;
and a pixel point combination unit 455, configured to combine all the pixel points to obtain the fused image block.
The image block recombination module 46 is configured to recombine the fused image blocks to obtain the output image.
Example three:
Fig. 5 is a schematic diagram of a video deblurring device according to an embodiment of the present invention. As shown in fig. 5, the video deblurring device 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52, such as a blur-degree-based video deblurring program, stored in the memory 51 and executable on the processor 50. When executing the computer program 52, the processor 50 implements the steps in the various embodiments of the blur-degree-based video deblurring method described above, such as steps S101 to S106 shown in fig. 1; alternatively, the processor 50 implements the functions of the modules/units in the above device embodiments, such as the functions of modules 41 to 46 shown in fig. 4.
Illustratively, the computer program 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 52 in the video deblurring device 5. For example, the computer program 52 may be divided into a blur degree calculation module, a sharp frame and blurred frame determination module, a reference frame generation module, an image block extraction module, a weighted fusion module and an image block recombination module, with the specific functions of each module as follows:
the blur degree calculation module is used for calculating the blur degree of the video frames;
the sharp frame and blurred frame determination module is used for determining sharp frames and blurred frames according to the blur degree;
the reference frame generation module is used for generating a reference frame from the sharp frames and the blurred frames;
the image block extraction module is used for extracting image blocks from the blurred frames and the reference frame;
the weighted fusion module is used for performing weighted fusion according to the weights corresponding to the pixel points in the image blocks to obtain fused image blocks;
and the image block recombination module is used for recombining the fused image blocks to obtain an output image.
The video deblurring device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The video deblurring device may include, but is not limited to, a processor 50, a memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of a video deblurring device 5 and does not constitute a limitation of the video deblurring device 5 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the video deblurring device may also include input output devices, network access devices, buses, etc.
The Processor 50 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the video deblurring device 5, such as a hard disk or a memory of the video deblurring device 5. The memory 51 may also be an external storage device of the video deblurring device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the video deblurring device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the video deblurring device 5. The memory 51 is used for storing the computer program and other programs and data required by the video deblurring apparatus. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A video deblurring method based on blur degree, comprising:
calculating the blur degree of the video frames;
determining sharp frames and blurred frames according to the blur degree;
generating a reference frame from the sharp frames and the blurred frames;
extracting image blocks from the blurred frames and the reference frame;
performing weighted fusion according to the weights corresponding to the pixel points in the image blocks to obtain fused image blocks;
and recombining the fused image blocks to obtain an output image.
2. The method of claim 1, wherein calculating the blur degree of the video frames specifically comprises:
performing graying processing on the video frame to obtain a grayscale image;
filtering the grayscale image to obtain a filtered image;
and calculating the variance of the filtered image to obtain the blur degree of the video frame.
3. The method of claim 2, wherein determining the sharp frames and blurred frames according to the blur degree comprises:
sorting the video frames by blur degree to obtain a sorted image sequence;
and selecting from the image sequence several video frames with larger blur degree as blurred frames and several video frames with smaller blur degree as sharp frames.
4. The method of claim 1, wherein generating the reference frame from the sharp frames and the blurred frames comprises:
calculating the forward optical flow and the backward optical flow between two adjacent frames by an optical flow method, wherein the two adjacent frames comprise a sharp frame and a blurred frame;
taking a pixel point of the blurred frame as the current pixel point, the forward optical flow being the forward motion vector of the current pixel point pointing from the blurred frame to the sharp frame;
computing, according to the forward optical flow, the pixel point of the sharp frame to which the current pixel point points, i.e. the forward pixel point, the backward optical flow being the backward motion vector of the forward pixel point pointing from the sharp frame back to the blurred frame;
computing, according to the backward optical flow, the pixel point of the blurred frame to which the forward pixel point points, i.e. the backward pixel point;
calculating the position error between the current pixel point and the backward pixel point;
and generating the reference frame according to the position error.
5. The method of claim 4, wherein generating the reference frame according to the position error comprises:
constructing a mask according to the position error, specifically: if the position error is smaller than a first preset value, marking the current pixel point as 1, otherwise marking it as 0;
if the position error is smaller than a second preset value, taking the forward pixel point as the reference point of the current pixel point;
generating a reference alignment frame according to the reference points;
and generating the reference frame from the mask, the blurred frame and the reference alignment frame.
6. The method according to claim 1, wherein performing weighted fusion according to the weights corresponding to the pixel points in the image blocks to obtain fused image blocks specifically comprises:
performing a Fourier transform (FFT) on the pixel points in the image blocks to obtain the FFT value of each pixel point;
calculating the weight of each pixel point, specifically: Gaussian-blurring the FFT value to obtain a blurred FFT value, and taking the 11th power of the blurred FFT value as the weight of the pixel point;
and then performing weighted fusion on the pixel points at the same position in each image block according to the following formula:
F̂ = Σ_m (W_m · F_m) / (Σ_m W_m + ε)
where W_m is the weight of a pixel point, F_m is the FFT value of the pixel point, m is the index of the pixel point, ε is a value that prevents the denominator from being zero, and F̂ is the resulting weighted fusion value;
performing an inverse Fourier transform (IFFT) on the weighted fusion value to obtain the fused pixel value of the pixel point;
and combining all the pixel points to obtain the fused image block.
7. The method according to claim 6, wherein performing the Fourier FFT on the pixel points in the image blocks to obtain the FFT values of the pixel points specifically comprises:
calculating the FFT component values corresponding to the red R, green G and blue B channels of each pixel point respectively;
and averaging the FFT component values to obtain the FFT value of the pixel point.
8. A video deblurring apparatus based on blur degree, comprising:
a blur degree calculation module for calculating the blur degree of the video frames;
a sharp frame and blurred frame determination module for determining sharp frames and blurred frames according to the blur degree;
a reference frame generation module for generating a reference frame from the sharp frames and the blurred frames;
an image block extraction module for extracting image blocks from the blurred frames and the reference frame;
a weighted fusion module for performing weighted fusion according to the weights corresponding to the pixel points in the image blocks to obtain fused image blocks;
and an image block recombination module for recombining the fused image blocks to obtain an output image.
9. A video deblurring apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented by the processor when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201811477483.9A 2018-12-05 2018-12-05 Video deblurring method, device and equipment based on blur degree Active CN111275626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811477483.9A CN111275626B (en) 2018-12-05 2018-12-05 Video deblurring method, device and equipment based on blur degree

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811477483.9A CN111275626B (en) 2018-12-05 2018-12-05 Video deblurring method, device and equipment based on blur degree

Publications (2)

Publication Number Publication Date
CN111275626A true CN111275626A (en) 2020-06-12
CN111275626B CN111275626B (en) 2023-06-23

Family

ID=71001439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811477483.9A Active CN111275626B (en) 2018-12-05 2018-12-05 Video deblurring method, device and equipment based on ambiguity

Country Status (1)

Country Link
CN (1) CN111275626B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111416937A (en) * 2020-03-25 2020-07-14 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and mobile equipment
CN112001355A (en) * 2020-09-03 2020-11-27 杭州云栖智慧视通科技有限公司 Training data preprocessing method for fuzzy face recognition under outdoor video
CN112767250A (en) * 2021-01-19 2021-05-07 南京理工大学 Video blind super-resolution reconstruction method and system based on self-supervision learning
CN112801890A (en) * 2021-01-08 2021-05-14 北京奇艺世纪科技有限公司 Video processing method, device and equipment
CN113067979A (en) * 2021-03-04 2021-07-02 北京大学 Imaging method, device, equipment and storage medium based on bionic pulse camera
CN113327206A (en) * 2021-06-03 2021-08-31 江苏电百达智能科技有限公司 Image fuzzy processing method of intelligent power transmission line inspection system based on artificial intelligence
CN113409203A (en) * 2021-06-10 2021-09-17 Oppo广东移动通信有限公司 Image blurring degree determining method, data set constructing method and deblurring method
CN113409209A (en) * 2021-06-17 2021-09-17 Oppo广东移动通信有限公司 Image deblurring method and device, electronic equipment and storage medium
CN113706414A (en) * 2021-08-26 2021-11-26 荣耀终端有限公司 Training method of video optimization model and electronic equipment
CN113781357A (en) * 2021-09-24 2021-12-10 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113781336A (en) * 2021-08-31 2021-12-10 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and storage medium
CN114708166A (en) * 2022-04-08 2022-07-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and terminal
CN115187446A (en) * 2022-05-26 2022-10-14 北京健康之家科技有限公司 Face changing video generation method and device, computer equipment and readable storage medium
CN115311175A (en) * 2022-10-10 2022-11-08 季华实验室 Multi-focus image fusion method based on no-reference focus quality evaluation
CN115546042A (en) * 2022-03-31 2022-12-30 荣耀终端有限公司 Video processing method and related equipment
CN115866295A (en) * 2022-11-22 2023-03-28 东南大学 Video key frame secondary extraction method and system for terminal row of convertor station
CN116128769A (en) * 2023-04-18 2023-05-16 聊城市金邦机械设备有限公司 Track vision recording system of swinging motion mechanism
CN116385302A (en) * 2023-04-07 2023-07-04 北京拙河科技有限公司 Dynamic blur elimination method and device for optical group camera
CN116630220A (en) * 2023-07-25 2023-08-22 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium
CN116934654A (en) * 2022-03-31 2023-10-24 荣耀终端有限公司 Image ambiguity determining method and related equipment thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040001705A1 (en) * 2002-06-28 2004-01-01 Andreas Soupliotis Video processing system and method for automatic enhancement of digital video
US20060280249A1 (en) * 2005-06-13 2006-12-14 Eunice Poon Method and system for estimating motion and compensating for perceived motion blur in digital video
US20090213234A1 (en) * 2008-02-18 2009-08-27 National Taiwan University Method of full frame video stabilization
EP2680568A1 (en) * 2012-06-25 2014-01-01 ST-Ericsson SA Video stabilisation with deblurring
CN104103050A (en) * 2014-08-07 2014-10-15 重庆大学 Real video recovery method based on local strategies
US20150103163A1 (en) * 2013-10-14 2015-04-16 Samsung Electronics Co., Ltd. Apparatus, method, and processor for measuring change in distance between a camera and an object
US9355439B1 (en) * 2014-07-02 2016-05-31 The United States Of America As Represented By The Secretary Of The Navy Joint contrast enhancement and turbulence mitigation method
CN107895349A (en) * 2017-10-23 2018-04-10 电子科技大学 A kind of endoscopic video deblurring method based on synthesis
US20180122052A1 (en) * 2016-10-28 2018-05-03 Thomson Licensing Method for deblurring a video, corresponding device and computer program product

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040001705A1 (en) * 2002-06-28 2004-01-01 Andreas Soupliotis Video processing system and method for automatic enhancement of digital video
US20060280249A1 (en) * 2005-06-13 2006-12-14 Eunice Poon Method and system for estimating motion and compensating for perceived motion blur in digital video
US20090213234A1 (en) * 2008-02-18 2009-08-27 National Taiwan University Method of full frame video stabilization
EP2680568A1 (en) * 2012-06-25 2014-01-01 ST-Ericsson SA Video stabilisation with deblurring
US20150138379A1 (en) * 2012-06-25 2015-05-21 St-Ericsson Sa Video Stabilisation with Deblurring
US20150103163A1 (en) * 2013-10-14 2015-04-16 Samsung Electronics Co., Ltd. Apparatus, method, and processor for measuring change in distance between a camera and an object
US9355439B1 (en) * 2014-07-02 2016-05-31 The United States Of America As Represented By The Secretary Of The Navy Joint contrast enhancement and turbulence mitigation method
CN104103050A (en) * 2014-08-07 2014-10-15 重庆大学 Real video recovery method based on local strategies
US20180122052A1 (en) * 2016-10-28 2018-05-03 Thomson Licensing Method for deblurring a video, corresponding device and computer program product
CN107895349A (en) * 2017-10-23 2018-04-10 电子科技大学 A kind of endoscopic video deblurring method based on synthesis

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111416937A (en) * 2020-03-25 2020-07-14 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and mobile equipment
CN112001355A (en) * 2020-09-03 2020-11-27 杭州云栖智慧视通科技有限公司 Training data preprocessing method for fuzzy face recognition under outdoor video
CN112801890A (en) * 2021-01-08 2021-05-14 北京奇艺世纪科技有限公司 Video processing method, device and equipment
CN112801890B (en) * 2021-01-08 2023-07-25 北京奇艺世纪科技有限公司 Video processing method, device and equipment
CN112767250A (en) * 2021-01-19 2021-05-07 南京理工大学 Video blind super-resolution reconstruction method and system based on self-supervision learning
CN113067979A (en) * 2021-03-04 2021-07-02 北京大学 Imaging method, device, equipment and storage medium based on bionic pulse camera
CN113327206B (en) * 2021-06-03 2022-03-22 江苏电百达智能科技有限公司 Image fuzzy processing method of intelligent power transmission line inspection system based on artificial intelligence
CN113327206A (en) * 2021-06-03 2021-08-31 江苏电百达智能科技有限公司 Image fuzzy processing method of intelligent power transmission line inspection system based on artificial intelligence
CN113409203A (en) * 2021-06-10 2021-09-17 Oppo广东移动通信有限公司 Image blurring degree determining method, data set constructing method and deblurring method
CN113409209A (en) * 2021-06-17 2021-09-17 Oppo广东移动通信有限公司 Image deblurring method and device, electronic equipment and storage medium
CN113706414A (en) * 2021-08-26 2021-11-26 荣耀终端有限公司 Training method of video optimization model and electronic equipment
CN113781336A (en) * 2021-08-31 2021-12-10 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and storage medium
CN113781336B (en) * 2021-08-31 2024-02-02 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN113781357A (en) * 2021-09-24 2021-12-10 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN116934654A (en) * 2022-03-31 2023-10-24 荣耀终端有限公司 Image ambiguity determining method and related equipment thereof
CN115546042B (en) * 2022-03-31 2023-09-29 荣耀终端有限公司 Video processing method and related equipment thereof
CN115546042A (en) * 2022-03-31 2022-12-30 荣耀终端有限公司 Video processing method and related equipment
CN114708166A (en) * 2022-04-08 2022-07-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and terminal
CN115187446A (en) * 2022-05-26 2022-10-14 北京健康之家科技有限公司 Face changing video generation method and device, computer equipment and readable storage medium
CN115311175B (en) * 2022-10-10 2022-12-09 季华实验室 Multi-focus image fusion method based on no-reference focus quality evaluation
CN115311175A (en) * 2022-10-10 2022-11-08 季华实验室 Multi-focus image fusion method based on no-reference focus quality evaluation
CN115866295A (en) * 2022-11-22 2023-03-28 东南大学 Video key frame secondary extraction method and system for terminal row of convertor station
CN116385302A (en) * 2023-04-07 2023-07-04 北京拙河科技有限公司 Dynamic blur elimination method and device for optical group camera
CN116128769A (en) * 2023-04-18 2023-05-16 聊城市金邦机械设备有限公司 Track vision recording system of swinging motion mechanism
CN116128769B (en) * 2023-04-18 2023-06-23 聊城市金邦机械设备有限公司 Track vision recording system of swinging motion mechanism
CN116630220A (en) * 2023-07-25 2023-08-22 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium
CN116630220B (en) * 2023-07-25 2023-11-21 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium

Also Published As

Publication number Publication date
CN111275626B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN111275626B (en) Video deblurring method, device and equipment based on blur degree
Zhang et al. Deep image deblurring: A survey
Wang et al. Real-esrgan: Training real-world blind super-resolution with pure synthetic data
CN108765343B (en) Image processing method, device, terminal and computer readable storage medium
Yu et al. A unified learning framework for single image super-resolution
Sun et al. Gradient profile prior and its applications in image super-resolution and enhancement
CN110163237B (en) Model training and image processing method, device, medium and electronic equipment
EP2294808B1 (en) Method and system for efficient video processing
EP2164040B1 (en) System and method for high quality image and video upscaling
US20130058588A1 (en) Motion Deblurring Using Image Upsampling
Zeng et al. A generalized DAMRF image modeling for superresolution of license plates
CN110136055B (en) Super resolution method and device for image, storage medium and electronic device
Liu et al. A motion deblur method based on multi-scale high frequency residual image learning
CN113592776A (en) Image processing method and device, electronic device and storage medium
Guan et al. Srdgan: learning the noise prior for super resolution with dual generative adversarial networks
Jeong et al. Multi-frame example-based super-resolution using locally directional self-similarity
Dong et al. Multi-scale residual low-pass filter network for image deblurring
CN112070657A (en) Image processing method, device, system, equipment and computer storage medium
CN113012061A (en) Noise reduction processing method and device and electronic equipment
Zhao et al. High resolution local structure-constrained image upsampling
Lim et al. Deep spectral-spatial network for single image deblurring
Wu et al. Two-level wavelet-based convolutional neural network for image deblurring
Choi et al. Sharpness enhancement and super-resolution of around-view monitor images
CN116071279A (en) Image processing method, device, computer equipment and storage medium
Askari Javaran et al. [Retracted] Using a Blur Metric to Estimate Linear Motion Blur Parameters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant