CN107945125B - Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network - Google Patents

Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network

Info

Publication number
CN107945125B
Authority
CN
China
Prior art keywords
image
value
neural network
convolutional neural
blurred
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711145578.6A
Other languages
Chinese (zh)
Other versions
CN107945125A (en)
Inventor
柯逍
罗幼春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201711145578.6A priority Critical patent/CN107945125B/en
Publication of CN107945125A publication Critical patent/CN107945125A/en
Application granted granted Critical
Publication of CN107945125B publication Critical patent/CN107945125B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration by non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction

Abstract

The invention provides a blurred-image processing method that integrates a frequency spectrum estimation method with a convolutional neural network. The method first converts the input image to grayscale, performs a Fourier transform, and generates a spectrogram; it then binarizes the spectrogram, generates a horizontal projection, and calculates the blur length and angle; finally, it restores the blurred image with Wiener filtering and further enhances the result with a convolutional neural network. The method is simple and efficient, and has good development prospects.

Description

Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network
Technical Field
The invention relates to the technical field of image processing, in particular to a fuzzy image processing method fusing a frequency spectrum estimation method and a convolutional neural network.
Background
With the development of science and technology, images are used more and more frequently in daily life, both in everyday office work and in online entertainment. Accordingly, the restoration of degraded images is becoming increasingly important. Motion-blurred images are one of the common types of blurred images. When we take a picture with a mobile phone, it often happens that our hands shake at the moment the shutter is pressed, and the resulting picture turns out blurred. An image captured in this way is referred to as a "motion-blurred image". As is well known, image restoration plays an important role in the whole image-processing pipeline; its main purpose is to restore a blurred image to the quality of the original. Within image restoration, the handling of motion-blurred images is an important part with practical significance, so the technique can be widely applied in real life and has broad prospects.
Image restoration, as an important part of image-processing technology, has naturally received wide attention from scholars at home and abroad, and many related studies have been carried out. The main approaches, from the early deconvolution (i.e., inverse filtering) method, to later linear restoration methods, to blind image deconvolution algorithms, are essentially improvements built around these three methods. The main deconvolution restoration algorithms include power-spectrum equalization, geometric mean filtering, and Wiener filtering; these traditional and very classical image restoration methods are best suited to the case where the system is linear and space-invariant and the noise is uncorrelated with the signal. In the mid-1960s, the point spread function (PSF) used in Wiener filtering was already being applied to deconvolve telescope images blurred by atmospheric turbulence. The blind image deconvolution restoration method can estimate the true signal and the degradation function directly from the blurred image. However, the quality of the result obtained with this method depends directly on the choice of initial conditions, and the result may not be unique; the method is also unsuitable when the image signal-to-noise ratio is low. The traditional Wiener filtering approach, moreover, can only carry out the restoration when the angle and length of the motion blur are already known, which greatly limits its practical use.
Disclosure of Invention
In view of the defects of the prior art, the invention provides a blurred-image processing method that fuses a spectrum estimation method and a convolutional neural network. On the basis of traditional image restoration, it combines the super-resolution achieved with a convolutional neural network to improve image quality from a computer-vision standpoint, and, through spectrogram analysis, it turns the traditional use of Wiener filtering into one that can adapt directly to different motion-blurred images by varying the point-spread-function parameters.
In order to achieve the purpose, the technical scheme of the invention is as follows: a fuzzy image processing method fusing a spectrum estimation method and a convolutional neural network comprises the following steps:
step 1: inputting a blurred image;
step 2: carrying out graying processing on the blurred image, carrying out Fourier transform and generating a spectrogram;
step 3: Carrying out binarization processing on the spectrogram, generating a horizontal projection graph, and calculating the blur length and angle;
step 4: Restoring the blurred image by utilizing Wiener filtering, and inputting it into a convolutional neural network to obtain the final image.
Further, step 2 specifically includes:
step 21: firstly converting the image to the YCbCr color space and extracting the Y channel for the graying processing, using the following formula: Gray(x, y) = α·R(x, y) + β·G(x, y) + γ·B(x, y), where Gray(x, y) is the gray value at the image position (x, y), R, G and B are the red, green and blue components at that position, and α, β, γ are parameters;
step 22: performing a one-dimensional Fourier transform on the grayscale image of N rows and N columns, by rows and by columns, using the following formula:
F(u, v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) · e^(−j2π(ux + vy)/N)
first performing the discrete Fourier transform by rows and then by columns, converting the image from the spatial domain f(x, y) into the frequency domain F(u, v), and finally obtaining frequency-domain values containing a real part and an imaginary part, where f(x, y) is the gray value at position (x, y), u is the frequency component after the row transform, v is the frequency component after the column transform, and F(u, v) is the spectrum value at the corresponding u and v;
step 23: moving the origin of the spectrum image from the starting point (0,0) to the central point (N/2, N/2) of the image;
step 24: applying to each complex value of the Fourier transform the operation
|F(u, v)| = √( Re(F(u, v))² + Im(F(u, v))² )
to obtain the corresponding amplitude, where Re is the real part of the complex value and Im is the imaginary part;
step 25: carrying out a normalization operation on the amplitude map.
Further, α is 0.30, β is 0.59, and γ is 0.11.
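As an illustration of steps 21–25, a minimal Python/NumPy sketch of the spectrogram computation is given below. The function name spectrum_of, the assumed R, G, B channel layout, and the logarithm applied before normalization (used only to keep the motion-blur stripes visible) are illustrative choices, not details specified in the description above.

```python
import numpy as np

def spectrum_of(rgb):
    # rgb: H x W x 3 array with channels in R, G, B order (an assumed layout).
    # Step 21: grayscale with the weights alpha=0.30, beta=0.59, gamma=0.11.
    gray = 0.30 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]
    # Step 22: discrete Fourier transform by rows and columns
    # (np.fft.fft2 applies the 1-D transform along each axis in turn).
    F = np.fft.fft2(gray)
    # Step 23: move the zero-frequency origin to the image centre (N/2, N/2).
    F = np.fft.fftshift(F)
    # Step 24: amplitude sqrt(Re^2 + Im^2); the logarithm is an extra choice
    # that keeps the spectral stripes visible after normalization.
    amp = np.log1p(np.sqrt(F.real ** 2 + F.imag ** 2))
    # Step 25: normalize the amplitude map (here to the 8-bit range 0..255).
    amp = (amp - amp.min()) / (amp.max() - amp.min() + 1e-12)
    return np.uint8(255 * amp)
```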
Further, the step 3 specifically includes:
step 31: counting the number of pixels at each gray level in the spectrogram and calculating the proportion of each gray level in the whole image; segmenting the image into foreground and background with a threshold, and respectively calculating the probability w0 that a pixel belongs to the foreground and the corresponding average gray value q0, and the probability w1 that it belongs to the background and the corresponding average gray value q1; using a traversal method with the formula σ = w0·w1·(q0 − q1)², finding the segmentation threshold that maximizes σ, and then thresholding the image to obtain a binary image containing only black and white pixels;
step 32: scanning the binary image pixel by pixel: searching row by row from top to bottom for the first row containing a white pixel, searching column by column from left to right for the first column containing a white pixel, and combining the two search results to obtain the top-left target point A(x1, y1); then obtaining the bottom-right target point B(x2, y2) in the same way, and using the following formula:
θ = arctan( (y2 − y1) / (x2 − x1) )
to calculate the blur angle θ of the motion blur;
step 33: rotating the binary image clockwise by the angle θ, computing the accumulated value column by column to obtain the maximum value and the horizontal distance D of the image; then re-assigning half of the maximum value to every position of the projection that exceeds half of the maximum, and traversing to obtain the minimum-value region Ω; within Ω, computing the distance d from the central bright spot to its first dark stripe, from which the blur length L of the motion-blurred image is obtained.
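A minimal Python sketch of steps 31–33 follows, using OpenCV's Otsu thresholding and a simplified column-projection search. The function name estimate_blur is illustrative, the rotation sign follows OpenCV's convention, and the final relation L = N/d is the usual zero-spacing rule for linear motion blur, assumed here because the exact formula is not reproduced in this text.

```python
import cv2
import numpy as np

def estimate_blur(spectrum_u8):
    # spectrum_u8: square N x N uint8 amplitude spectrogram from step 2.
    N = spectrum_u8.shape[0]

    # Step 31: Otsu's method, i.e. the threshold maximizing the between-class
    # variance sigma = w0*w1*(q0 - q1)^2, yields a black-and-white image.
    _, binary = cv2.threshold(spectrum_u8, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Step 32: the first/last rows and columns containing white pixels give
    # the top-left point A(x1, y1) and bottom-right point B(x2, y2); the blur
    # angle is the arctangent of the slope between them.
    ys, xs = np.nonzero(binary)
    x1, y1, x2, y2 = xs.min(), ys.min(), xs.max(), ys.max()
    theta = np.degrees(np.arctan2(y2 - y1, x2 - x1))

    # Step 33 (simplified): rotate so the stripes become vertical (the sign
    # follows OpenCV's convention and may need flipping for a different angle
    # definition), project by columns, and measure the distance d from the
    # central peak to the first dark stripe.
    M = cv2.getRotationMatrix2D((N / 2.0, N / 2.0), theta, 1.0)
    projection = cv2.warpAffine(binary, M, (N, N)).sum(axis=0).astype(float)
    centre = int(np.argmax(projection))
    d = max(int(np.argmin(projection[centre:centre + N // 2])), 1)

    # L = N / d is the usual zero-spacing relation for linear motion blur;
    # it stands in for the formula that is not reproduced in this text.
    return theta, N / d
```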
Further, the step 4 specifically includes:
step 41: point spread function h of a sharp image f in motion blurL,θUnder the action ofThe blurred image g is obtained by adding the noise n, and the following equation is used: (h)L,θF) (x, y) + n (x, y) ═ g (x, y), deconvoluting the blurred image for image restoration;
step 42: inputting a series of training pictures {Xi, Yi}, where Xi is the input original picture and Yi is the corresponding processed blurred picture, with m groups of picture data in total, and adopting the mean squared error
Loss(Θ) = (1/m) · Σ_{i=1}^{m} ‖F(Yi; Θ) − Xi‖²
as the loss function, where Θ denotes the parameters of the training process and F(Yi; Θ) is the deblurring operation applied to Yi under the parameters Θ; during training the parameters are adjusted to minimize the mean squared error, back-propagation is performed with a stochastic gradient algorithm and the parameters are adjusted until the loss is minimized, and the image processed by Wiener filtering is then input into the trained convolutional neural network.
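For step 41 above, the following sketch builds a linear motion-blur point spread function from the estimated length and angle and applies frequency-domain Wiener filtering. The PSF construction and the constant K (an assumed noise-to-signal power ratio) are illustrative; the text does not fix these details. In practice K would be tuned, or replaced by an estimate of the noise and signal power spectra.

```python
import numpy as np

def motion_psf(length, theta_deg, shape):
    # Sample a 1-pixel-wide line of the given length through the image centre;
    # this is one simple way to realise h_{L,theta}, not the only one.
    psf = np.zeros(shape)
    cy, cx = shape[0] / 2.0, shape[1] / 2.0
    for t in np.linspace(-0.5, 0.5, 4 * max(int(length), 1)):
        y = int(round(cy + t * length * np.sin(np.radians(theta_deg))))
        x = int(round(cx + t * length * np.cos(np.radians(theta_deg))))
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_restore(blurred_gray, length, theta_deg, K=0.01):
    # blurred_gray: 2-D float image g = h*f + n; K: assumed noise-to-signal ratio.
    H = np.fft.fft2(np.fft.ifftshift(motion_psf(length, theta_deg, blurred_gray.shape)))
    G = np.fft.fft2(blurred_gray)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G   # Wiener estimate of f
    return np.real(np.fft.ifft2(F_hat))
```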
Compared with the prior art, the invention has the beneficial effects that:
the method comprises the steps of recovering a motion blurred image by combining a frequency spectrum estimation method and a convolution neural network, estimating the length and the angle of motion blur by utilizing a frequency spectrum image obtained after Fourier transform and combining a horizontal projection image, aiming at the fact that the traditional Wiener filtering processing mode is to carry out inspection operation under the condition that the angle and the length of the motion blur are known, and having great limitation on practical use.
Drawings
FIG. 1 is a flow chart of a fuzzy image processing method combining a spectrum estimation method and a convolutional neural network according to the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Since the traditional processing methods require the blur angle and length to be known in advance, a method combining a point spread function with a convolutional neural network is provided. The convolutional neural network learns features implicitly from the data, so suitable features do not have to be selected manually, and operations such as weight sharing and max pooling increase the training speed of the network and reduce its complexity. On the basis of traditional image restoration, the invention improves image quality from a computer-vision standpoint by combining the super-resolution achieved with a deep convolutional neural network.
As shown in fig. 1, the method for processing a blurred image by fusing a spectrum estimation method and a convolutional neural network provided by the present invention includes:
step 1: inputting a blurred image;
step 2: carrying out graying processing on the blurred image, carrying out Fourier transform and generating a spectrogram;
step 3: Carrying out binarization processing on the spectrogram, generating a horizontal projection graph, and calculating the blur length and angle;
step 4: Restoring the blurred image by utilizing Wiener filtering, and inputting it into a convolutional neural network to obtain the final image.
In this embodiment, step 2 specifically includes:
step 21: firstly converting the image to the YCbCr color space and extracting the Y channel for the graying processing, using the following formula: Gray(x, y) = α·R(x, y) + β·G(x, y) + γ·B(x, y), where Gray(x, y) is the gray value at the image position (x, y), R, G and B are the red, green and blue components at that position, and α, β, γ are parameters;
step 22: performing a one-dimensional Fourier transform on the grayscale image of N rows and N columns, by rows and by columns, using the following formula:
F(u, v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) · e^(−j2π(ux + vy)/N)
first performing the discrete Fourier transform by rows and then by columns, converting the image from the spatial domain f(x, y) into the frequency domain F(u, v), and finally obtaining frequency-domain values containing a real part and an imaginary part, where f(x, y) is the gray value at position (x, y), u is the frequency component after the row transform, v is the frequency component after the column transform, and F(u, v) is the spectrum value at the corresponding u and v;
step 23: moving the origin of the spectrum image from the starting point (0,0) to the central point (N/2, N/2) of the image;
step 24: applying to each complex value of the Fourier transform the operation
|F(u, v)| = √( Re(F(u, v))² + Im(F(u, v))² )
to obtain the corresponding amplitude, where Re is the real part of the complex value and Im is the imaginary part;
step 25: carrying out a normalization operation on the amplitude map.
In this embodiment, α is 0.30, β is 0.59, and γ is 0.11.
In this embodiment, the step 3 specifically includes:
step 31: counting the number of pixels at each gray level in the spectrogram and calculating the proportion of each gray level in the whole image; segmenting the image into foreground and background with a threshold, and respectively calculating the probability w0 that a pixel belongs to the foreground and the corresponding average gray value q0, and the probability w1 that it belongs to the background and the corresponding average gray value q1; using a traversal method with the formula σ = w0·w1·(q0 − q1)², finding the segmentation threshold that maximizes σ, and then thresholding the image to obtain a binary image containing only black and white pixels;
step 32: scanning the binary image pixel by pixel: searching row by row from top to bottom for the first row containing a white pixel, searching column by column from left to right for the first column containing a white pixel, and combining the two search results to obtain the top-left target point A(x1, y1); then obtaining the bottom-right target point B(x2, y2) in the same way, and using the following formula:
θ = arctan( (y2 − y1) / (x2 − x1) )
to calculate the blur angle θ of the motion blur;
step 33: rotating the binary image clockwise by the angle θ, computing the accumulated value column by column to obtain the maximum value and the horizontal distance D of the image; then re-assigning half of the maximum value to every position of the projection that exceeds half of the maximum, and traversing to obtain the minimum-value region Ω; within Ω, computing the distance d from the central bright spot to its first dark stripe, from which the blur length L of the motion-blurred image is obtained.
In this embodiment, the step 4 specifically includes:
step 41: point spread function h of a sharp image f in motion blurL,θWith the addition of noise n, the image becomes a blurred image g, using the following equation: (h)L,θF) (x, y) + n (x, y) ═ g (x, y), deconvoluting the blurred image for image restoration;
step 42: inputting a series of training pictures {Xi, Yi}, where Xi is the input original picture and Yi is the corresponding processed blurred picture, with m groups of picture data in total, and adopting the mean squared error
Loss(Θ) = (1/m) · Σ_{i=1}^{m} ‖F(Yi; Θ) − Xi‖²
as the loss function, where Θ denotes the parameters of the training process and F(Yi; Θ) is the deblurring operation applied to Yi under the parameters Θ; during training the parameters are adjusted to minimize the mean squared error, back-propagation is performed with a stochastic gradient algorithm and the parameters are adjusted until the loss is minimized, and the image processed by Wiener filtering is then input into the trained convolutional neural network.
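A minimal PyTorch sketch of the training objective in step 42 is given below. The three-layer network EnhanceNet is only an illustrative stand-in, since the description does not specify the architecture; the loss is the mean squared error between F(Yi; Θ) and Xi, minimized by back-propagation with stochastic gradient descent.

```python
import torch
import torch.nn as nn

class EnhanceNet(nn.Module):
    # Illustrative stand-in for the convolutional network F(.; Theta).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, y):
        return self.net(y)

def train(model, loader, epochs=10):
    loss_fn = nn.MSELoss()                    # (1/m) * sum ||F(Yi;Theta) - Xi||^2
    opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    for _ in range(epochs):
        for y, x in loader:                   # y: degraded picture, x: original
            opt.zero_grad()
            loss = loss_fn(model(y), x)
            loss.backward()                   # back-propagation
            opt.step()                        # adjust Theta to reduce the loss
    return model
```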
The convolutional neural network can learn automatically from the data without manually selecting suitable features, and operations such as weight sharing and max pooling increase the training speed of the network and reduce its complexity.
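Assuming the sketches above are collected in one module, a hypothetical end-to-end use on a single photograph could look as follows; the file name and the square crop are illustrative only.

```python
import cv2
import numpy as np

image = cv2.imread("motion_blurred.jpg")             # hypothetical file name
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float64)
n = min(rgb.shape[:2])                               # crop to a square N x N image
rgb = rgb[:n, :n]
gray = 0.30 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]

spec = spectrum_of(rgb)                              # step 2: spectrogram
theta, L = estimate_blur(spec)                       # step 3: blur angle and length
restored = wiener_restore(gray, L, theta)            # step 4: Wiener filtering
# The restored image would then be fed to the trained convolutional network
# (e.g. the EnhanceNet sketch above) for the final enhancement.
```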
Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the present invention, and those skilled in the art can make variations and modifications of the present invention without departing from the spirit and scope of the present invention by using the methods and technical contents disclosed above. The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (4)

1. A fuzzy image processing method fusing a spectrum estimation method and a convolutional neural network is characterized by comprising the following steps:
step 1: inputting a blurred image;
step 2: carrying out graying processing on the blurred image, carrying out Fourier transform and generating a spectrogram;
step 3: Carrying out binarization processing on the spectrogram, generating a horizontal projection graph, and calculating the blur length and angle;
step 4: Restoring the blurred image by utilizing Wiener filtering, and inputting it into a convolutional neural network to obtain the final image; wherein step 3 specifically includes:
step 31: counting the number of pixels at each gray level in the spectrogram and calculating the proportion of each gray level in the whole image; segmenting the image into foreground and background with a threshold, and respectively calculating the probability w0 that a pixel belongs to the foreground and the corresponding average gray value q0, and the probability w1 that it belongs to the background and the corresponding average gray value q1; using a traversal method with the formula σ = w0·w1·(q0 − q1)², finding the segmentation threshold that maximizes σ, and then thresholding the image to obtain a binary image containing only black and white pixels;
step 32: scanning the binary image pixel by pixel: searching row by row from top to bottom for the first row containing a white pixel, searching column by column from left to right for the first column containing a white pixel, and combining the two search results to obtain the top-left target point A(x1, y1); then obtaining the bottom-right target point B(x2, y2) in the same way, and using the following formula:
θ = arctan( (y2 − y1) / (x2 − x1) )
to calculate the blur angle θ of the motion blur;
step 33: rotating the binary image clockwise by the angle θ, computing the accumulated value column by column to obtain the maximum value and the horizontal distance D of the image; then re-assigning half of the maximum value to every position of the projection that exceeds half of the maximum, and traversing to obtain the minimum-value region Ω; within Ω, computing the distance d from the central bright spot to its first dark stripe, from which the blur length L of the motion-blurred image is obtained.
2. The blurred image processing method according to claim 1, wherein the step 2 specifically comprises:
step 21: firstly converting the image to the YCbCr color space and extracting the Y channel for the graying processing, using the following formula: Gray(x, y) = α·R(x, y) + β·G(x, y) + γ·B(x, y), where Gray(x, y) is the gray value at the image position (x, y), R, G and B are the red, green and blue components at that position, and α, β, γ are parameters;
step 22: performing a one-dimensional Fourier transform on the grayscale image of N rows and N columns, by rows and by columns, using the following formula:
F(u, v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) · e^(−j2π(ux + vy)/N)
first performing the discrete Fourier transform by rows and then by columns, converting the image from the spatial domain f(x, y) into the frequency domain F(u, v), and finally obtaining frequency-domain values containing a real part and an imaginary part, where f(x, y) is the gray value at position (x, y), u is the frequency component after the row transform, v is the frequency component after the column transform, and F(u, v) is the spectrum value at the corresponding u and v;
step 23: moving the origin of the spectrum image from the starting point (0,0) to the central point (N/2, N/2) of the image;
step 24: applying to each complex value of the Fourier transform the operation
|F(u, v)| = √( Re(F(u, v))² + Im(F(u, v))² )
to obtain the corresponding amplitude, where Re is the real part of the complex value and Im is the imaginary part;
step 25: carrying out a normalization operation on the amplitude map.
3. The blurred image processing method according to claim 2, wherein α = 0.30, β = 0.59, γ = 0.11.
4. the blurred image processing method according to claim 1, wherein the step 4 specifically comprises:
step 41: point spread function h of a sharp image f in motion blurL,θWith the addition of noise n, the image becomes a blurred image g, using the following equation: (h)L,θF) (x, y) + n (x, y) ═ g (x, y), deconvoluting the blurred image for image restoration;
step 42: inputting a series of training pictures {Xi, Yi}, where Xi is the input original picture and Yi is the corresponding processed blurred picture, with m groups of picture data in total, and adopting the mean squared error
Loss(Θ) = (1/m) · Σ_{i=1}^{m} ‖F(Yi; Θ) − Xi‖²
as the loss function, where Θ denotes the parameters of the training process and F(Yi; Θ) is the deblurring operation applied to Yi under the parameters Θ; during training the parameters are adjusted to minimize the mean squared error, back-propagation is performed with a stochastic gradient algorithm and the parameters are adjusted until the loss is minimized, and the image processed by Wiener filtering is then input into the trained convolutional neural network.
CN201711145578.6A 2017-11-17 2017-11-17 Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network Active CN107945125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711145578.6A CN107945125B (en) 2017-11-17 2017-11-17 Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711145578.6A CN107945125B (en) 2017-11-17 2017-11-17 Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network

Publications (2)

Publication Number Publication Date
CN107945125A CN107945125A (en) 2018-04-20
CN107945125B true CN107945125B (en) 2021-06-22

Family

ID=61932816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711145578.6A Active CN107945125B (en) 2017-11-17 2017-11-17 Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network

Country Status (1)

Country Link
CN (1) CN107945125B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10931853B2 (en) * 2018-10-18 2021-02-23 Sony Corporation Enhanced color reproduction for upscaling
CN111105357B (en) * 2018-10-25 2023-05-02 杭州海康威视数字技术股份有限公司 Method and device for removing distortion of distorted image and electronic equipment
CN109284751A (en) * 2018-10-31 2019-01-29 河南科技大学 The non-textual filtering method of text location based on spectrum analysis and SVM
CN109410143B (en) * 2018-10-31 2021-03-09 泰康保险集团股份有限公司 Image enhancement method and device, electronic equipment and computer readable medium
CN110060220A (en) * 2019-04-26 2019-07-26 中国科学院长春光学精密机械与物理研究所 Based on the image de-noising method and system for improving BM3D algorithm
CN110264415B (en) * 2019-05-24 2020-06-12 北京爱诺斯科技有限公司 Image processing method for eliminating jitter blur
CN110443882B (en) * 2019-07-05 2021-06-11 清华大学 Light field microscopic three-dimensional reconstruction method and device based on deep learning algorithm
CN111080524A (en) * 2019-12-19 2020-04-28 吉林农业大学 Plant disease and insect pest identification method based on deep learning
CN111340724B (en) * 2020-02-24 2021-02-19 卡莱特(深圳)云科技有限公司 Image jitter removing method and device in LED screen correction process
CN111415313B (en) * 2020-04-13 2022-08-30 展讯通信(上海)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111986102B (en) * 2020-07-15 2024-02-27 万达信息股份有限公司 Digital pathological image deblurring method
CN112712467B (en) * 2021-01-11 2022-11-11 郑州科技学院 Image processing method based on computer vision and color filter array
CN113807246A (en) * 2021-09-16 2021-12-17 平安普惠企业管理有限公司 Face recognition method, device, equipment and storage medium
CN116188254A (en) * 2021-11-25 2023-05-30 北京字跳网络技术有限公司 Fourier domain-based super-resolution image processing method, device, equipment and medium
CN114723642B (en) * 2022-06-07 2022-08-19 深圳市资福医疗技术有限公司 Image correction method and device and capsule endoscope


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101079149A (en) * 2006-09-08 2007-11-28 浙江师范大学 Noise-possessing movement fuzzy image restoration method based on radial basis neural network
CN104655583A (en) * 2015-02-04 2015-05-27 中国矿业大学 Fourier-infrared-spectrum-based rapid coal quality recognition method
CN105825484A (en) * 2016-03-23 2016-08-03 华南理工大学 Depth image denoising and enhancing method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Blurred image restoration: A fast method of finding the motion length and angle; Michal Dobes et al.; Digital Signal Processing; 2010-03-27; entire document *
Research on key technologies of motion-blurred license plate recognition; 史海玲; China Master's Theses Full-text Database, Engineering Science and Technology II; 2016-06-15; Chapters 1-2 of the thesis *

Also Published As

Publication number Publication date
CN107945125A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN107945125B (en) Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network
Claus et al. ViDeNN: Deep blind video denoising
Li et al. Edge-preserving decomposition-based single image haze removal
CN108921800B (en) Non-local mean denoising method based on shape self-adaptive search window
WO2016206087A1 (en) Low-illumination image processing method and device
US20180122051A1 (en) Method and device for image haze removal
CN110136055B (en) Super resolution method and device for image, storage medium and electronic device
CN108932699B (en) Three-dimensional matching harmonic filtering image denoising method based on transform domain
WO2014070273A1 (en) Recursive conditional means image denoising
Al-Hatmi et al. A review of Image Enhancement Systems and a case study of Salt &pepper noise removing
CN111445424A (en) Image processing method, image processing device, mobile terminal video processing method, mobile terminal video processing device, mobile terminal video processing equipment and mobile terminal video processing medium
CN105719251B (en) A kind of compression degraded image restored method that Linear Fuzzy is moved for big picture
CN107256539B (en) Image sharpening method based on local contrast
CN111353955A (en) Image processing method, device, equipment and storage medium
Das et al. A comparative study of single image fog removal methods
Patil et al. Bilateral filter for image denoising
CN107945119B (en) Method for estimating correlated noise in image based on Bayer pattern
CN111415317B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110852947B (en) Infrared image super-resolution method based on edge sharpening
CN108573478B (en) Median filtering method and device
Kaur et al. An improved adaptive bilateral filter to remove gaussian noise from color images
CN115829967A (en) Industrial metal surface defect image denoising and enhancing method
CN110648291B (en) Unmanned aerial vehicle motion blurred image restoration method based on deep learning
Sophia et al. An efficient method for Blind Image Restoration using GAN
Zhang et al. Uav remote sensing image dehazing based on saliency guided two-scaletransmission correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant