CN115578289A - Defocused image deblurring method based on boundary neighborhood gradient difference - Google Patents


Info

Publication number
CN115578289A
Authority
CN
China
Prior art keywords
fuzzy
image
boundary
deblurring
neighborhood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211345557.XA
Other languages
Chinese (zh)
Inventor
王映辉
陶俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN202211345557.XA priority Critical patent/CN115578289A/en
Publication of CN115578289A publication Critical patent/CN115578289A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a defocused image deblurring method based on boundary neighborhood gradient differences, belonging to the technical field of computer vision. To address the problem that existing defocused image deblurring methods cannot accurately obtain the blur amount at the boundary positions of a defocused image in a multi-depth-layer static scene, the invention makes full use of the relation between the boundary neighborhood gradient difference and the blur amount to accurately estimate the blur amount at boundary positions, thereby eliminating boundary ringing artifacts in the deblurring result. To address the loss of detail information caused by the limited ability of non-blind deconvolution algorithms to preserve image detail, the invention combines a discrete blur amount selection strategy with a sparse prior to design a non-blind deconvolution algorithm with stronger detail preservation, solving the problem of detail loss in the deblurring result. The method can be used to deblur defocused images of multi-depth-layer static scenes.

Description

Defocused image deblurring method based on boundary neighborhood gradient difference
Technical Field
The invention relates to a defocused image deblurring method based on boundary neighborhood gradient differences, and belongs to the field of computer vision.
Background
Defocused images are everywhere in daily life. Defocus means that an image becomes unclear and blurred; a blurred image prevents viewers from clearly seeing the objects in it and loses detail information, which limits further applications of the image in fields such as precision instrument flaw detection and augmented reality.
To make better use of a defocused image, the defocus blur must be removed so that the image becomes sharp. Existing traditional defocused image deblurring methods estimate the degree of blur of each pixel to obtain a blur amount, then compute a blur kernel from the blur amount using a point spread function represented by a Gaussian function. Finally, the blur kernel and the defocused image are fed into a non-blind deconvolution algorithm to perform the deblurring operation.
For a multi-depth-layer static scene, existing defocused image deblurring methods cannot accurately obtain the blur amount at the boundaries of the defocused image (the blur amount describes how blurred a pixel is). Inaccurate estimation of the defocus blur amount at boundaries causes boundary ringing artifacts in the deblurring result, i.e., oscillations at positions of sharp gray-level change. In addition, the prior knowledge used by the non-blind deconvolution algorithms in these methods is not strong enough. Prior knowledge refers to known characteristics of images; it constrains the non-blind deconvolution algorithm so that the deblurring result keeps approaching the true sharp image. Weak priors leave the algorithm unable to recover image detail information, so detail is lost in the final deblurring result.
In short, both boundary ringing artifacts and the lack of strong prior knowledge degrade the deblurring of defocused images, leading to poor deblurring quality and insufficient image sharpness.
Disclosure of Invention
To solve the problems of poor deblurring quality and insufficient image sharpness of traditional defocused image deblurring methods in multi-depth-layer static scenes, the invention provides a defocused image deblurring method comprising the following steps:
Step 1: detect the boundaries of the defocused image and compute the boundary neighborhood gradient difference at each boundary position;
Step 2: obtain the blur amount at each boundary position from the boundary neighborhood gradient differences of step 1, yielding a sparse blur map;
Step 3: interpolate the sparse blur map of step 2 to obtain an interpolated blur map;
Step 4: perform blur detection on the defocused image using the interpolated blur map of step 3 and compute a blur ratio; when the blur ratio exceeds a preset blur ratio threshold, deblur the defocused image;
Step 5: for an image that step 4 marks for deblurring, obtain blur kernels from the interpolated blur map of step 3, then perform the deblurring operation with a non-blind deconvolution algorithm.
Optionally, step 1 includes:
detecting the boundaries of the defocused image with a scale-consistent boundary detection algorithm; after the boundary positions are obtained, computing the gradients within a neighborhood centered on each boundary position in the original image, then subtracting the minimum gradient from the maximum gradient in the neighborhood to obtain the gradient difference at that boundary position.
Optionally, step 2 includes:
Step 2.1: obtaining the relation between the boundary neighborhood gradient difference and the blur amount:
The defocusing process of the image is modeled as shown in formula (1):
I = k ⊗ x + n    (1)
where I represents the defocused image, k represents the blur kernel, x represents the sharp image, n represents noise, and ⊗ represents the convolution operation; the blur kernel is convolved with the sharp image and the noise is added to obtain the defocused image.
The blur kernel is obtained from a point spread function and a blur amount; the point spread function is represented by a Gaussian function, as shown in formula (2):
k(x, y) = (1/(2πσ²)) * exp(−(x² + y²)/(2σ²))    (2)
where σ represents the blur amount, which describes the degree of blur of a pixel, and (x, y) represents pixel coordinates.
The boundary l(x, y) of the sharp image is modeled as shown in formula (3):
l(x, y) = a*u(x, y) + b    (3)
where a is the amplitude, b is the offset, and u(x, y) is a step function.
The boundary neighborhood gradient difference is defined as shown in formula (4):
GD(x, y) = max_{(x,y)'} |∇J(x, y)| − min_{(x,y)'} |∇J(x, y)|    (4)
where GD(x, y) is the boundary neighborhood gradient difference, (x, y)' denotes the neighborhood centered at (x, y), J(x, y) = l(x, y) ⊗ k(x, y) denotes the defocused image boundary, and ∇J(x, y) represents the gradient within the boundary neighborhood, i.e., the derivative of the boundary; since the minimum gradient in the neighborhood is close to zero, it is omitted in the derivation, whose result is shown in formula (5):
∇J(x, y) = (a/(√(2π)·σ)) * exp(−(x² + y²)/(2σ²))    (5)
The boundary position is (x, y) = (0, 0), so the boundary neighborhood gradient difference is as shown in formula (6):
GD(0, 0) = a/(√(2π)·σ)    (6)
From formula (6), the relation between the boundary neighborhood gradient difference and the blur amount is derived, as shown in formula (7):
σ = a/(√(2π)·GD(0, 0))    (7)
Step 2.2: computing the blur amount at each boundary according to this relation and the boundary neighborhood gradient differences obtained in step 1, thereby obtaining the sparse blur map.
Optionally, the process of computing the blur ratio in step 4 includes:
setting a blur amount threshold and judging whether the blur amount of each pixel in the interpolated blur map is greater than the threshold; a pixel whose blur amount exceeds the threshold is marked as blurred, otherwise as non-blurred;
dividing the interpolated blur map into a blurred region and a non-blurred region according to these judgments, and computing the ratio of the number of pixels in the blurred region to the number of pixels in the whole image, i.e., the blur ratio.
Optionally, step 5 includes:
Step 5.1: performing discrete blur amount selection on the interpolated blur map of step 3, obtaining n blur amounts σ1, σ2, …, σn with step size q; obtaining n blur kernels from the n blur amounts using a point spread function, where the point spread function is represented by a Gaussian function whose standard deviation represents the blur amount;
Step 5.2: performing n non-blind deconvolution operations on the defocused image using a sparse-prior-based non-blind deconvolution algorithm and the n blur kernels, obtaining n deblurred images; each blur kernel corresponds to one blur amount; when deblurring the defocused image with the nth blur kernel, first find the positions in the blur map whose blur amount exceeds the nth blur amount, then extract the pixels at those positions from the corresponding deblurring result to obtain the nth deblurred result;
executing these operations n times yields n deblurred images;
Step 5.3: adding the pixel values at corresponding positions of the n deblurred images obtained in step 5.2, and finally dividing each pixel value by 255, to obtain the all-in-focus image, i.e., a completely sharp image.
Optionally, in step 3, a KNN matching interpolation algorithm is used to interpolate the sparse blur map.
Optionally, the size of the neighborhood is 11 × 11.
Optionally, the image to be processed is at least one of: a blurred face image; a blurred person image; a blurred scene image; a blurred vehicle image; a blurred animal image; and a blurred plant image.
A second object of the present invention is to provide an electronic apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the defocused image deblurring method described above.
It is a third object of the present invention to provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the defocused image deblurring method described above.
The beneficial effects of the invention are:
The defocused image deblurring method makes full use of the relation between the boundary neighborhood gradient difference and the blur amount to accurately obtain the blur amount at the boundary positions of the defocused image, thereby eliminating boundary ringing artifacts in the deblurring result. To address the loss of detail caused by the limited detail-preserving ability of non-blind deconvolution, the invention combines a discrete blur amount selection strategy with a sparse prior to design a non-blind deconvolution algorithm with stronger detail preservation, solving the problem of detail loss in the deblurring result.
Compared with existing defocused image deblurring methods, the method effectively removes boundary ringing artifacts, avoids the loss of image detail information, and effectively improves the sharpness of the deblurred image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is the input defocused image of the second embodiment of the present invention.
FIG. 2 is the boundary map obtained by performing boundary detection on FIG. 1 with the scale-consistent boundary detection algorithm in the second embodiment.
FIG. 3 is the boundary neighborhood gradient difference map obtained by evaluating FIG. 1 at the boundary positions of FIG. 2 in the second embodiment.
FIG. 4 shows the blur amounts at the boundary positions, obtained from the boundary neighborhood gradient differences of FIG. 3, in the second embodiment.
FIG. 5 shows the blur map of every pixel position, obtained from the boundary blur amounts of FIG. 4 with the KNN matching interpolation algorithm, in the second embodiment.
FIG. 6 shows the final deblurring result obtained from the estimated blur amounts and the non-blind deconvolution algorithm in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Embodiment one:
This embodiment provides a defocused image deblurring method comprising the following steps:
Step 1: detect the boundaries of the defocused image and compute the boundary neighborhood gradient difference at each boundary position;
Step 2: obtain the blur amount at each boundary position from the boundary neighborhood gradient differences of step 1, yielding a sparse blur map;
Step 3: interpolate the sparse blur map of step 2 to obtain an interpolated blur map;
Step 4: perform blur detection on the defocused image using the interpolated blur map of step 3 and compute a blur ratio; when the blur ratio exceeds a preset blur ratio threshold, deblur the defocused image;
Step 5: for an image that step 4 marks for deblurring, obtain blur kernels from the interpolated blur map of step 3, then perform the deblurring operation with a non-blind deconvolution algorithm.
Embodiment two:
This embodiment provides a defocused image deblurring method based on the boundary neighborhood gradient difference, comprising the following steps:
step 1: and solving the boundary of the defocused image by using a dimension consistent boundary detection algorithm, wherein the dimension consistent boundary detection algorithm is derived from Edge-Based Defocus Blur Estimation With Adaptive Scale Selection, solving gradients in an 11x11 neighborhood taking each boundary position as a center in an original image after the boundary position is obtained, and then performing difference on the maximum gradient and the minimum gradient in the neighborhood to obtain a gradient difference value at the boundary position.
Step 2: obtain the blur amount at each boundary position from the boundary neighborhood gradient differences of step 1, yielding a sparse blur map. The specific steps are as follows:
Step 2.1: derive the relation between the boundary neighborhood gradient difference and the blur amount.
The defocusing process of an image is modeled as shown in formula (1):
I = k ⊗ x + n    (1)
where I denotes the defocused image, k denotes the blur kernel, x denotes the sharp image, n denotes noise, and ⊗ denotes the convolution operation; the blur kernel is convolved with the sharp image and noise is added to obtain the defocused image. The blur kernel is obtained from a point spread function and a blur amount; the point spread function is represented by a Gaussian function, as shown in formula (2):
k(x, y) = (1/(2πσ²)) * exp(−(x² + y²)/(2σ²))    (2)
where σ denotes the blur amount, which describes the degree of blur of a pixel, and (x, y) denotes pixel coordinates.
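Formula (2) can be discretized into a normalized kernel as below; the 3σ truncation radius is an assumption, since the patent does not specify a kernel support:

```python
import numpy as np

def gaussian_blur_kernel(sigma, radius=None):
    # Discrete Gaussian point spread function for blur amount sigma.
    if radius is None:
        radius = int(np.ceil(3 * sigma))  # cover ~3 standard deviations
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()  # normalize so blurring preserves brightness
```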
The boundary l (x, y) of the sharp image is modeled as shown in equation (3):
l(x,y)=a*u(x,y)+b (3)
where a is amplitude, b is offset, and u (x, y) is a step function.
The boundary neighborhood gradient difference is defined as shown in formula (4):
GD(x, y) = max_{(x,y)'} |∇J(x, y)| − min_{(x,y)'} |∇J(x, y)|    (4)
Experiments show that the minimum gradient in the boundary neighborhood is small and close to zero, so the minimum is omitted in the derivation. Here GD(x, y) is the boundary neighborhood gradient difference, (x, y)' denotes the neighborhood centered at (x, y), J(x, y) = l(x, y) ⊗ k(x, y) denotes the defocused image boundary, and ∇J(x, y) is the gradient within the boundary neighborhood, i.e., the derivative of the boundary. The derivation result is shown in formula (5):
∇J(x, y) = (a/(√(2π)·σ)) * exp(−(x² + y²)/(2σ²))    (5)
The boundary position is (x, y) = (0, 0), so the boundary neighborhood gradient difference is as shown in formula (6):
GD(0, 0) = a/(√(2π)·σ)    (6)
From formula (6), the relation between the boundary neighborhood gradient difference and the blur amount can be derived, as shown in formula (7):
σ = a/(√(2π)·GD(0, 0))    (7)
Step 2.2: from this relation and the boundary neighborhood gradient differences obtained in step 1, compute the blur amount at each boundary position, thereby obtaining the sparse blur map.
Step 3: to obtain the blur amounts of the remaining pixels, this embodiment interpolates the sparse blur map obtained in step 2 using a KNN matching interpolation algorithm, yielding an interpolated blur map containing the blur amounts of all pixels.
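The KNN matching interpolation itself is not reproduced here; as a rough stand-in, unknown pixels (value 0) can be filled with an inverse-distance-weighted average of the k nearest known blur amounts:

```python
import numpy as np

def interpolate_blur_map(sparse_map, k=4):
    # Fill zero (unknown) pixels from the k nearest known blur amounts,
    # weighted by inverse distance. O(h*w*n) brute force, for clarity only.
    ys, xs = np.nonzero(sparse_map)
    vals = sparse_map[ys, xs]
    dense = sparse_map.astype(np.float64).copy()
    h, w = sparse_map.shape
    for y in range(h):
        for x in range(w):
            if dense[y, x] != 0:
                continue
            d = np.hypot(ys - y, xs - x)
            idx = np.argsort(d)[:k]
            wgt = 1.0 / (d[idx] + 1e-8)
            dense[y, x] = np.sum(wgt * vals[idx]) / wgt.sum()
    return dense
```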
Step 4: divide the defocused image into a blurred region and a non-blurred region based on the interpolated blur map obtained in step 3 and a preset blur amount threshold. A pixel whose blur amount exceeds the threshold is marked as blurred, otherwise as non-blurred. Performing this test on every pixel divides the image into blurred and non-blurred regions. The ratio of the number of pixels in the blurred region to the number of pixels in the whole image is the blur ratio. When the blur ratio exceeds a preset value, the image is considered blurred and needs deblurring; otherwise it does not.
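Step 4 reduces to a threshold test and a pixel count; a minimal sketch, with both thresholds left as free parameters:

```python
import numpy as np

def blur_ratio(blur_map, sigma_threshold):
    # Fraction of pixels whose blur amount exceeds the threshold.
    return (blur_map > sigma_threshold).sum() / blur_map.size

def needs_deblurring(blur_map, sigma_threshold, ratio_threshold):
    # The image is deblurred only when the blur ratio is large enough.
    return blur_ratio(blur_map, sigma_threshold) > ratio_threshold
```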
Step 5: for an image that step 4 marks for deblurring, obtain blur kernels from the interpolated blur map of step 3, then perform the deblurring operation with a non-blind deconvolution algorithm. The specific steps are as follows:
Step 5.1: perform discrete blur amount selection on the interpolated blur map of step 3 to obtain n blur amounts; the discrete blur amount selection strategy comes from the article "Spatially-Varying Out-Of-Focus Image Deblurring With L1-2 Optimization And A Guided Blur Map". From the n blur amounts, obtain n blur kernels using a point spread function; in this embodiment the point spread function is represented by a Gaussian function whose standard deviation represents the blur amount.
Step 5.2: perform n non-blind deconvolution operations on the defocused image using a sparse-prior-based non-blind deconvolution algorithm and the n blur kernels, obtaining n deblurred images; the non-blind deconvolution algorithm comes from the article "Image and Depth from a Conventional Camera with a Coded Aperture". Each blur kernel corresponds to one blur amount. After deblurring the defocused image with the nth blur kernel, first find the pixel positions in the blur map whose blur amount exceeds the nth blur amount, extract the pixel values at those positions from the nth deconvolution result, and set the remaining pixels to 0 to obtain the nth deblurred image; each deblurred image thus contains values only at a subset of pixel positions. The blur map is a gray-scale map in which the value at each pixel position is a blur amount; it corresponds to the defocused image pixel for pixel, so the blur amount at a position in the blur map is the blur amount of the corresponding pixel of the defocused image. Executing these operations n times yields n deblurred images.
Step 5.3: add the pixel values at corresponding positions of the n deblurred images from step 5.2, then divide each pixel value by 255, to obtain the all-in-focus image, i.e., a completely sharp image.
The system of this embodiment was implemented in the following environment:
Table 1: system hardware configuration table
(The hardware table appears only as an image in the original and is not reproduced here.)
Table 2: system software configuration table
Software            Related information
Operating system    Windows 10, 64-bit
Matlab              Matlab R2021a
The experimental results are shown in the attached figures.
FIG. 1: the input defocused image, i.e., the image to be sharpened. The degree of blur is highest on the left and decreases from left to right.
FIG. 2: the boundary map obtained with the scale-consistent boundary detection algorithm. It is a binary map containing only 0 and 1, where pixel value 1 is white and 0 is black; the white pixels mark the boundary positions.
FIG. 3: the boundary neighborhood gradient difference map, showing the gradient difference at each boundary position. In the method of this embodiment the boundary neighborhood gradient difference is inversely proportional to the blur amount: the blurrier a boundary position, the smaller its gradient difference, and the sharper, the larger. As described for FIG. 1, the input defocused image is blurriest on the left, with blur decreasing from left to right; correspondingly, the gradient differences at boundary positions are small on the left and large on the right, which verifies the method of this embodiment. Brighter areas indicate larger values and darker areas smaller values.
FIG. 4: the sparse blur map, containing blur amounts only at the boundary positions (the blur amount describes the degree of blur of a pixel). The blur amounts are larger at boundary positions on the left and smaller on the right, consistent with the description of FIG. 1. Brighter areas indicate larger values and darker areas smaller values.
FIG. 5: the blur map, a gray-scale map showing the blur amount of every pixel of the defocused image; brighter places indicate a higher degree of blur, darker places a lower degree.
FIG. 6: the deblurred, i.e., sharpened, image. Compared with FIG. 1, the right side is clearly sharpened, no boundary ringing artifacts appear, and the detail information of the image is fully retained.
Some steps in the embodiments of the present invention may be implemented by software, and the corresponding software program may be stored in a readable storage medium, such as an optical disc or a hard disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (10)

1. A method of deblurring a defocused image, the method comprising:
step 1: detecting boundaries of the defocused image, and computing a boundary neighborhood gradient difference at each boundary position;
step 2: obtaining a blur amount at each boundary position using the boundary neighborhood gradient differences obtained in step 1, thereby obtaining a sparse blur map;
step 3: interpolating the sparse blur map obtained in step 2 to obtain an interpolated blur map;
step 4: performing blur detection on the defocused image using the interpolated blur map obtained in step 3 and computing a blur ratio, and deblurring the defocused image when the blur ratio is greater than a preset blur ratio threshold;
step 5: for the image requiring deblurring in step 4, obtaining blur kernels using the interpolated blur map of step 3, and then performing a deblurring operation with a non-blind deconvolution algorithm.
2. The method of deblurring a defocused image as claimed in claim 1, wherein step 1 comprises:
detecting the boundaries of the defocused image with a scale-consistent boundary detection algorithm; after the boundary positions are obtained, computing the gradients within a neighborhood centered on each boundary position in the original image, and then subtracting the minimum gradient from the maximum gradient in the neighborhood to obtain the gradient difference at the boundary position.
3. The method of claim 1, wherein step 2 comprises:
step 2.1: obtaining a relation between the boundary neighborhood gradient difference and the blur amount:
modeling the defocusing process of the image as shown in formula (1):
I = k ⊗ x + n    (1)
wherein I represents the defocused image, k represents a blur kernel, x represents a sharp image, n represents noise, and ⊗ represents a convolution operation; the blur kernel is convolved with the sharp image and the noise is added to obtain the defocused image;
the blur kernel is obtained from a point spread function and a blur amount, the point spread function being represented by a Gaussian function as shown in formula (2):
k(x, y) = (1/(2πσ²)) * exp(−(x² + y²)/(2σ²))    (2)
wherein σ represents the blur amount, which describes the degree of blur of a pixel, and (x, y) represents pixel coordinates;
the boundary l(x, y) of the sharp image is modeled as shown in formula (3):
l(x, y) = a*u(x, y) + b    (3)
wherein a is an amplitude, b is an offset, and u(x, y) is a step function;
the boundary neighborhood gradient difference is defined as shown in formula (4):
GD(x, y) = max_{(x,y)'} |∇J(x, y)| − min_{(x,y)'} |∇J(x, y)|    (4)
wherein GD(x, y) is the boundary neighborhood gradient difference, (x, y)' denotes the neighborhood centered at (x, y), J(x, y) = l(x, y) ⊗ k(x, y) denotes the defocused image boundary, and ∇J(x, y) represents the gradient within the boundary neighborhood, i.e., the derivative of the boundary; the derivation result is shown in formula (5):
∇J(x, y) = (a/(√(2π)·σ)) * exp(−(x² + y²)/(2σ²))    (5)
the boundary position is (x, y) = (0, 0), so the boundary neighborhood gradient difference is as shown in formula (6):
GD(0, 0) = a/(√(2π)·σ)    (6)
deriving the relation between the boundary neighborhood gradient difference and the blur amount according to formula (6), as shown in formula (7):
σ = a/(√(2π)·GD(0, 0))    (7)
step 2.2: computing the blur amount at each boundary according to the relation between the boundary neighborhood gradient difference and the blur amount and the boundary neighborhood gradient differences obtained in step 1, thereby obtaining the sparse blur map.
4. The method of claim 1, wherein the process of calculating the blur ratio in step 4 comprises:
setting a blur amount threshold and judging whether the blur amount of each pixel in the interpolated blur map is greater than the blur amount threshold; when the blur amount of a pixel is greater than the threshold, marking the pixel as blurred, otherwise as non-blurred;
dividing the interpolated blur map into a blurred region and a non-blurred region according to the judgment results, and calculating the ratio of the number of pixels in the blurred region to the number of pixels in the whole image, i.e., the blur ratio.
5. The method of claim 1, wherein step 5 comprises:
step 5.1: performing discrete blur amount selection on the interpolated blur map obtained in step 3, obtaining n blur amounts σ1, σ2, …, σn with step size q; obtaining n blur kernels from the n blur amounts using a point spread function, the point spread function being represented by a Gaussian function whose standard deviation represents the blur amount;
step 5.2: performing n non-blind deconvolution operations on the defocused image using a sparse-prior-based non-blind deconvolution algorithm and the n blur kernels to obtain n deblurred images; each blur kernel corresponds to one blur amount; when deblurring the defocused image with the nth blur kernel, first obtaining the positions in the blur map whose blur amounts are greater than the nth blur amount, and extracting the pixels at those positions from the corresponding deblurring result to obtain the nth deblurred result;
executing the above operations n times to finally obtain n deblurred images;
step 5.3: adding the pixel values at corresponding positions of the n deblurred images obtained in step 5.2, and finally dividing each pixel value by 255 to obtain an all-in-focus image, i.e., a completely sharp image.
6. The method according to claim 1, wherein step 3 interpolates the sparse blur map using a KNN matching interpolation algorithm.
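As a rough illustration of densifying a sparse blur map with nearest neighbours, a brute-force sketch is given below; the patent's exact KNN matching variant is not spelled out here, and zero is assumed to mark unknown pixels.

```python
import numpy as np

def knn_interpolate(sparse_map: np.ndarray, k: int = 4) -> np.ndarray:
    """Fill unknown (zero) entries of a sparse blur map with the mean
    blur amount of the k nearest known (non-zero) pixels."""
    known = np.argwhere(sparse_map > 0)      # coordinates of known pixels
    values = sparse_map[sparse_map > 0]      # their blur amounts
    dense = sparse_map.astype(float).copy()
    for y, x in np.argwhere(sparse_map == 0):
        dist = np.hypot(known[:, 0] - y, known[:, 1] - x)
        nearest = np.argsort(dist)[:k]       # indices of the k closest pixels
        dense[y, x] = values[nearest].mean()
    return dense
```

The function name and the zero-as-unknown convention are assumptions for this sketch, not claim language.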
7. The method of claim 2, wherein the size of the neighborhood is 11×11.
8. The defocused image deblurring method of any one of claims 1-7, wherein the image to be processed is at least one of: a blurred face image; a blurred person image; a blurred scene image; a blurred vehicle image; a blurred animal image; and a blurred plant image.
9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the defocused image deblurring method of any one of claims 1-8.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the defocused image deblurring method of any one of claims 1-8.
CN202211345557.XA 2022-10-31 2022-10-31 Defocused image deblurring method based on boundary neighborhood gradient difference Pending CN115578289A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211345557.XA CN115578289A (en) 2022-10-31 2022-10-31 Defocused image deblurring method based on boundary neighborhood gradient difference

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211345557.XA CN115578289A (en) 2022-10-31 2022-10-31 Defocused image deblurring method based on boundary neighborhood gradient difference

Publications (1)

Publication Number Publication Date
CN115578289A true CN115578289A (en) 2023-01-06

Family

ID=84588287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211345557.XA Pending CN115578289A (en) 2022-10-31 2022-10-31 Defocused image deblurring method based on boundary neighborhood gradient difference

Country Status (1)

Country Link
CN (1) CN115578289A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311262A (en) * 2023-05-23 2023-06-23 济南大陆机电股份有限公司 Instrument information identification method, system, equipment and storage medium
CN116664451A (en) * 2023-07-27 2023-08-29 中铁九局集团第一建设有限公司 Measurement robot measurement optimization method based on multi-image processing
CN116664451B (en) * 2023-07-27 2023-10-10 中铁九局集团第一建设有限公司 Measurement robot measurement optimization method based on multi-image processing

Similar Documents

Publication Publication Date Title
CN103942758B (en) Dark channel prior image dehazing method based on multiscale fusion
CN110706174B (en) Image enhancement method, terminal equipment and storage medium
CN115578289A (en) Defocused image deblurring method based on boundary neighborhood gradient difference
US9142009B2 (en) Patch-based, locally content-adaptive image and video sharpening
CN107133923B (en) Fuzzy image non-blind deblurring method based on adaptive gradient sparse model
CN110827229A (en) Infrared image enhancement method based on texture weighted histogram equalization
Deng et al. A guided edge-aware smoothing-sharpening filter based on patch interpolation model and generalized gamma distribution
CN114359665B (en) Training method and device of full-task face recognition model and face recognition method
WO2017100971A1 (en) Deblurring method and device for out-of-focus blurred image
CN110969046B (en) Face recognition method, face recognition device and computer-readable storage medium
CN113592776A (en) Image processing method and device, electronic device and storage medium
Agrawal et al. Dense haze removal by nonlinear transformation
Wang et al. An efficient method for image dehazing
CN103971345A (en) Image denoising method based on improved bilateral filtering
CN113438386B (en) Dynamic and static judgment method and device applied to video processing
CN112825189B (en) Image defogging method and related equipment
CN111915497B (en) Image black-and-white enhancement method and device, electronic equipment and readable storage medium
CN103618904B (en) Motion estimation method and device based on pixels
CN115330637A (en) Image sharpening method and device, computing device and storage medium
Dasgupta Comparative analysis of non-blind deblurring methods for noisy blurred images
CN110363723B (en) Image processing method and device for improving image boundary effect
CN107194931A (en) It is a kind of that the method and system for obtaining target depth information is matched based on binocular image
CN111626966A (en) Sonar image denoising model training method and device and readable storage medium thereof
Robinson et al. Blind deconvolution of Gaussian blurred images containing additive white Gaussian noise
CN115908184B (en) Automatic removal method and device for mole pattern

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination