CN107945125A - Blurred image processing method fusing a spectrum estimation method and a convolutional neural network - Google Patents
Blurred image processing method fusing a spectrum estimation method and a convolutional neural network
- Publication number
- CN107945125A (application CN201711145578.6A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/73 — Deblurring; Sharpening
- G06T3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T5/10 — Image enhancement or restoration using non-spatial domain filtering
- G06T5/20 — Image enhancement or restoration using local operators
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; Edge detection involving thresholding
- G06T7/194 — Segmentation; Edge detection involving foreground-background segmentation
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20172 — Image enhancement details
- G06T2207/20201 — Motion blur correction

(all within G06T — Physics; Computing; Image data processing or generation, in general)
Abstract
The present invention provides a blurred image processing method that fuses a spectrum estimation method with a convolutional neural network. The input image is first converted to grayscale and Fourier-transformed to generate a spectrogram; next, the spectrogram is binarized and a horizontal projection is generated, from which the blur length and blur angle are computed; finally, the blurred image is restored by Wiener filtering, and the result is further enhanced by a convolutional neural network. The method of the present invention is simple and efficient and has good development prospects.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a blurred image processing method fusing a spectrum estimation method and a convolutional neural network.
Background technology
With the development of science and technology, images are used more and more frequently in daily life, whether for routine office work or for online entertainment. Accordingly, the restoration of degraded images has become increasingly important, and motion-blurred images are among the most common kinds of degraded image. When taking a photo with a mobile phone, it often happens that the hand shakes at the instant the shutter is pressed, and the resulting photo turns out very blurry; an image captured this way is called a "motion-blurred image". As is well known, image restoration occupies a very important position within the whole of image processing: its main purpose is to return a severely blurred image to its original quality. Within image restoration, motion-blur restoration is in turn a very important part with great practical significance, so it can be widely applied in real life and has broad prospects.
As a highly important part of image processing technology, image restoration has naturally received extensive attention from scholars at home and abroad, and much related research has been carried out. From the initial deconvolution (i.e., inverse filtering) method, through the later linear restoration methods and blind image deconvolution algorithms, the various subsequent image restoration methods are essentially developments and improvements of these three approaches. Deconvolution-based restoration mainly includes power spectrum equalization, geometric mean filtering, Wiener filtering, and so on; these traditional and very classical restoration methods are suited to linear shift-invariant systems, or to cases where the noise is uncorrelated with the signal. By the mid-1960s, the point spread function (PSF) in Wiener filtering was already being used to deconvolve telescope images blurred by atmospheric turbulence. Blind image deconvolution can estimate the true signal of the image and the degradation function directly from the blurred image, but the quality of the resulting target image depends directly on the choice of initial conditions, the result may not be unique, and the method is unsuitable when the image signal-to-noise ratio is low. Traditional Wiener filtering, in turn, operates only when the angle and length of the motion blur are already known, which significantly limits its practical use.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention proposes a blurred image processing method that fuses a spectrum estimation method with a convolutional neural network. On the basis of traditional image restoration, it combines spectrum estimation with the super-resolution realized by a convolutional neural network to improve image quality based on computer vision; through spectrogram analysis, the use of the traditional Wiener filter is converted into one that adapts directly to different motion-blurred images via changes in the point-spread-function parameters.
To achieve the above object, the technical scheme of the present invention is a blurred image processing method fusing a spectrum estimation method and a convolutional neural network, comprising:
Step 1: input the blurred image;
Step 2: convert the blurred image to grayscale and apply a Fourier transform to generate a spectrogram;
Step 3: binarize the spectrogram, generate a horizontal projection, and compute the blur length and angle;
Step 4: restore the blurred image by Wiener filtering, and input the result to a convolutional neural network to obtain the final image.
Further, step 2 specifically comprises:
Step 21: first convert the color space of the image to YCbCr, then extract the Y channel and perform grayscale conversion using the formula Gray(x, y) = αR(x, y) + βG(x, y) + γB(x, y), where Gray(x, y) is the gray value at image position (x, y), R, G, B are respectively the red, green, and blue components at that position, and α, β, γ are parameters;
Step 22: apply a one-dimensional Fourier transform to the N×N grayscale image row by row and then column by column, using the formula F(u, v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux+vy)/N}: a discrete Fourier transform is first performed along the rows and then along the columns, converting the image from the spatial domain f(x, y) to the frequency domain F(u, v) and yielding frequency-domain values with real and imaginary parts, where f(x, y) is the gray value at position (x, y), u is the frequency component after the row transform, v is the frequency component after the column transform, and F(u, v) is the spectrum value at (u, v);
Step 23: move the origin of the spectral image from the starting point (0, 0) to the image center (N/2, N/2);
Step 24: compute the amplitude |F| = √(Re² + Im²) of the complex Fourier values, where Re is the real part and Im the imaginary part of the complex number;
Step 25: normalize the amplitude map.
Further, α=0.30, β=0.59, γ=0.11.
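Steps 21 to 25 above can be sketched in NumPy as follows (an illustrative sketch only, not the patented implementation; the function name and the logarithmic compression applied before normalization are our own additions for display purposes):

```python
import numpy as np

def spectrum_magnitude(rgb, alpha=0.30, beta=0.59, gamma=0.11):
    """Centered, normalized magnitude spectrum of an RGB image (steps 21-25)."""
    # Step 21: grayscale via the luma weights Gray = alpha*R + beta*G + gamma*B.
    gray = alpha * rgb[..., 0] + beta * rgb[..., 1] + gamma * rgb[..., 2]
    # Step 22: 2-D DFT, computed internally as row-wise then column-wise 1-D DFTs.
    F = np.fft.fft2(gray)
    # Step 23: move the spectrum origin from (0, 0) to the center (N/2, N/2).
    F = np.fft.fftshift(F)
    # Step 24: amplitude sqrt(Re^2 + Im^2).
    mag = np.sqrt(F.real ** 2 + F.imag ** 2)
    # Log compression (our addition, for display), then step 25: normalization.
    mag = np.log1p(mag)
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
```

After the shift of step 23, the dominant DC term sits at the image center, so the bright central spot and the stripes around it used in step 3 are directly visible.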
Further, step 3 specifically comprises:
Step 31: count the number of pixels at each gray level in the spectrogram and compute each gray level's proportion of the whole image; split the image into foreground and background by threshold segmentation, computing respectively the probability w0 of being divided into the foreground and its mean gray value q0, and the probability w1 of being divided into the background and its mean gray value q1; by traversing all candidate thresholds with the formula σ = w0*w1*(q0 − q1)², find the segmentation threshold that maximizes σ, then threshold the image into a binary image whose pixels are either black or white;
Step 32: traverse the binary image pixel by pixel: scan the rows from top to bottom to find the first row containing a white pixel, then scan the columns from left to right to find the first column containing a white pixel, and intersect the two search results to obtain the upper-left target point A(x1, y1); obtain the lower-right target point B(x2, y2) in the same way; compute the blur angle θ of the motion blur with the formula θ = arctan((y2 − y1)/(x2 − x1));
Step 33: rotate the binary image clockwise by the angle θ, accumulate values column by column to obtain the maximum and its horizontal distance D within the image, then traverse the whole image assigning half of the maximum to the values exceeding half of the maximum, obtaining the minimum-value region Ω; compute in Ω the distance d of the first stripe from the central bright spot, and obtain the blur length L of the motion-blurred image with the formula L = N/d.
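The threshold search of step 31 and the corner-point angle estimate of step 32 can be sketched as follows (illustrative NumPy only; the exhaustive 256-level search and the arctan form of the angle formula are assumptions, as the text only fixes the criterion σ = w0*w1*(q0 − q1)² and the two target points):

```python
import numpy as np

def otsu_threshold(gray):
    """Step 31: exhaustive search for the threshold maximizing
    sigma = w0 * w1 * (q0 - q1)^2 over 256 gray levels in [0, 1]."""
    hist, _ = np.histogram(gray, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_sigma = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        q0 = (levels[:t] * p[:t]).sum() / w0  # mean gray level of one class
        q1 = (levels[t:] * p[t:]).sum() / w1  # mean gray level of the other
        sigma = w0 * w1 * (q0 - q1) ** 2
        if sigma > best_sigma:
            best_sigma, best_t = sigma, t
    return best_t / 255.0

def blur_angle(binary):
    """Step 32: corner points from the first white row/column scans,
    then the blur angle (in degrees) from their slope."""
    ys, xs = np.nonzero(binary)
    x1, y1 = xs.min(), ys.min()  # upper-left target point A
    x2, y2 = xs.max(), ys.max()  # lower-right target point B
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))
```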
Further, step 4 specifically comprises:
Step 41: a clear image f, acted on by the motion-blur point spread function h_{L,θ} and polluted by noise n, becomes the blurred image g according to the formula (h_{L,θ} * f)(x, y) + n(x, y) = g(x, y); image recovery is realized by deconvolving the blurred image;
Step 42: input a series of training pictures {Xi, Yi}, where Xi is an original input picture and Yi is the corresponding blurred picture, with m picture pairs in total; use the mean squared error L(Θ) = (1/m) Σ_{i=1}^{m} ||F(Yi; Θ) − Xi||² as the loss function, where Θ represents the parameters of the training process and the function F performs the deblurring operation on Yi under the series of parameters Θ; during training, adjust the parameters so that the mean squared error is minimized, using stochastic gradient descent with backpropagation to minimize the loss; the Wiener-filtered image is then input to the trained convolutional neural network.
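The degradation model of step 41 and the Wiener restoration of step 4 can be sketched as follows (a minimal sketch: the line-segment rasterization of the PSF and the constant noise-to-signal ratio k are common choices that the text does not fix):

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Linear-motion point spread function h_{L,theta} on a size x size grid."""
    psf = np.zeros((size, size))
    c = size // 2
    rad = np.deg2rad(angle_deg)
    # Rasterize a centered line segment with the given length and angle.
    for t in np.linspace(-length / 2.0, length / 2.0, max(int(length) * 4, 8)):
        x = int(round(c + t * np.cos(rad)))
        y = int(round(c + t * np.sin(rad)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()  # normalize so the blur preserves mean brightness

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener filter W = H* / (|H|^2 + k) applied to g.
    The psf must already have the same shape as the blurred image."""
    H = np.fft.fft2(np.fft.ifftshift(psf))  # center of psf maps to (0, 0)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(W * G))
```

The constant k stands in for the noise-to-signal power ratio of the classical Wiener filter; larger k tolerates more noise at the cost of residual blur.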
Compared with the prior art, the present invention has the following beneficial effects:
Motion-blurred images are recovered by combining a spectrum estimation method with a convolutional neural network, using the spectrogram obtained after the Fourier transform together with a horizontal projection to estimate the length and angle of the motion blur. Whereas the traditional Wiener filtering procedure operates only when the angle and length of the motion blur are already known, which greatly limits its practical use, the present invention uses spectrogram analysis to convert the traditional use of the Wiener filter into one that adapts to different motion-blurred images, directly computing the angle and length of the motion blur through changes in the point-spread-function parameters. The method is simple and efficient and has good development prospects.
Brief description of the drawings
Fig. 1 is a flow diagram of the blurred image processing method of the present invention, which fuses a spectrum estimation method and a convolutional neural network.
Embodiment
The present invention will be further described with reference to the accompanying drawings and embodiments.
Since traditional processing methods all assume that the blur angle and length are known, a method combining the point spread function with a convolutional neural network is proposed. A convolutional neural network can learn features implicitly and autonomously from data, without the need to manually select suitable features, and operations such as weight sharing and max pooling accelerate network training and reduce network complexity. On the basis of traditional image restoration, the present invention combines the super-resolution realized with a deep convolutional neural network to improve image quality based on computer vision.
As shown in Fig. 1, the blurred image processing method fusing a spectrum estimation method and a convolutional neural network provided by the present invention comprises:
Step 1: input the blurred image;
Step 2: convert the blurred image to grayscale and apply a Fourier transform to generate a spectrogram;
Step 3: binarize the spectrogram, generate a horizontal projection, and compute the blur length and angle;
Step 4: restore the blurred image by Wiener filtering, and input the result to a convolutional neural network to obtain the final image.
In the present embodiment, step 2 specifically comprises:
Step 21: first convert the color space of the image to YCbCr, then extract the Y channel and perform grayscale conversion using the formula Gray(x, y) = αR(x, y) + βG(x, y) + γB(x, y), where Gray(x, y) is the gray value at image position (x, y), R, G, B are respectively the red, green, and blue components at that position, and α, β, γ are parameters;
Step 22: apply a one-dimensional Fourier transform to the N×N grayscale image row by row and then column by column, using the formula F(u, v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux+vy)/N}: a discrete Fourier transform is first performed along the rows and then along the columns, converting the image from the spatial domain f(x, y) to the frequency domain F(u, v) and yielding frequency-domain values with real and imaginary parts, where f(x, y) is the gray value at position (x, y), u is the frequency component after the row transform, v is the frequency component after the column transform, and F(u, v) is the spectrum value at (u, v);
Step 23: move the origin of the spectral image from the starting point (0, 0) to the image center (N/2, N/2);
Step 24: compute the amplitude |F| = √(Re² + Im²) of the complex Fourier values, where Re is the real part and Im the imaginary part of the complex number;
Step 25: normalize the amplitude map.
In the present embodiment, α=0.30, β=0.59, γ=0.11.
In the present embodiment, step 3 specifically comprises:
Step 31: count the number of pixels at each gray level in the spectrogram and compute each gray level's proportion of the whole image; split the image into foreground and background by threshold segmentation, computing respectively the probability w0 of being divided into the foreground and its mean gray value q0, and the probability w1 of being divided into the background and its mean gray value q1; by traversing all candidate thresholds with the formula σ = w0*w1*(q0 − q1)², find the segmentation threshold that maximizes σ, then threshold the image into a binary image whose pixels are either black or white;
Step 32: traverse the binary image pixel by pixel: scan the rows from top to bottom to find the first row containing a white pixel, then scan the columns from left to right to find the first column containing a white pixel, and intersect the two search results to obtain the upper-left target point A(x1, y1); obtain the lower-right target point B(x2, y2) in the same way; compute the blur angle θ of the motion blur with the formula θ = arctan((y2 − y1)/(x2 − x1));
Step 33: rotate the binary image clockwise by the angle θ, accumulate values column by column to obtain the maximum and its horizontal distance D within the image, then traverse the whole image assigning half of the maximum to the values exceeding half of the maximum, obtaining the minimum-value region Ω; compute in Ω the distance d of the first stripe from the central bright spot, and obtain the blur length L of the motion-blurred image with the formula L = N/d.
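Step 33 relies on the spacing of the dark stripes in the spectrum. Assuming the standard spectral relation that these stripes fall roughly every N/L samples from the center, the length estimate can be sketched as follows (the projection search below is purely illustrative; only the relation between d and L is fixed):

```python
import numpy as np

def estimate_blur_length(projection, n):
    """Step 33 (sketch): the distance d of the first spectral minimum from the
    center of a 1-D projection gives the blur length estimate L = n / d."""
    c = len(projection) // 2
    right = np.asarray(projection[c + 1:], dtype=float)
    # Search the near half of the right side for the deepest minimum; because
    # np.argmin returns the first occurrence, repeated zeros pick the nearest.
    d = int(np.argmin(right[: len(right) // 2])) + 1
    return n / d
```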
In the present embodiment, step 4 specifically comprises:
Step 41: a clear image f, acted on by the motion-blur point spread function h_{L,θ} and polluted by noise n, becomes the blurred image g according to the formula (h_{L,θ} * f)(x, y) + n(x, y) = g(x, y); image recovery is realized by deconvolving the blurred image;
Step 42: input a series of training pictures {Xi, Yi}, where Xi is an original input picture and Yi is the corresponding blurred picture, with m picture pairs in total; use the mean squared error L(Θ) = (1/m) Σ_{i=1}^{m} ||F(Yi; Θ) − Xi||² as the loss function, where Θ represents the parameters of the training process and the function F performs the deblurring operation on Yi under the series of parameters Θ; during training, adjust the parameters so that the mean squared error is minimized, using stochastic gradient descent with backpropagation to minimize the loss; the Wiener-filtered image is then input to the trained convolutional neural network.
A convolutional neural network can learn autonomously from data without the need to manually select suitable features; operations such as weight sharing and max pooling accelerate network training and reduce network complexity.
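The objective of step 42 can be written out directly (a sketch of the loss only; the deblurring network F itself and the stochastic-gradient training loop are not reproduced here):

```python
import numpy as np

def mse_loss(outputs, targets):
    """Step 42 objective: L(Theta) = (1/m) * sum_i ||F(Y_i; Theta) - X_i||^2,
    where `outputs` holds the deblurred results F(Y_i; Theta) and `targets`
    holds the clear originals X_i."""
    m = len(targets)
    return sum(np.sum((o - x) ** 2) for o, x in zip(outputs, targets)) / m
```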
Although the present invention is disclosed above with preferred embodiments, they are not intended to limit the invention. Any person skilled in the art may, without departing from the spirit and scope of the present invention, make possible variations and modifications to the technical solution of the invention using the methods and technical content disclosed above. Therefore, any simple modification, equivalent change, or refinement made to the above embodiments according to the technical spirit of the present invention, without departing from the content of the technical solution, belongs to the protection scope of the technical solution of the present invention. The foregoing is merely the preferred embodiments of the present invention; all equivalent changes and modifications made according to the scope of the present patent shall belong to the coverage of the present invention.
Claims (5)
1. A blurred image processing method fusing a spectrum estimation method and a convolutional neural network, characterized by comprising:
Step 1: inputting a blurred image;
Step 2: converting the blurred image to grayscale and applying a Fourier transform to generate a spectrogram;
Step 3: binarizing the spectrogram, generating a horizontal projection, and computing the blur length and angle;
Step 4: restoring the blurred image by Wiener filtering, and inputting the result to a convolutional neural network to obtain the final image.
2. The blurred image processing method according to claim 1, characterized in that step 2 specifically comprises:
Step 21: first converting the color space of the image to YCbCr, then extracting the Y channel and performing grayscale conversion using the formula Gray(x, y) = αR(x, y) + βG(x, y) + γB(x, y), wherein Gray(x, y) is the gray value at image position (x, y), R, G, B are respectively the red, green, and blue components at that position, and α, β, γ are parameters;
Step 22: applying a one-dimensional Fourier transform to the N×N grayscale image row by row and then column by column, using the formula F(u, v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux+vy)/N}: a discrete Fourier transform is first performed along the rows and then along the columns, converting the image from the spatial domain f(x, y) to the frequency domain F(u, v) and yielding frequency-domain values with real and imaginary parts, wherein f(x, y) is the gray value at position (x, y), u is the frequency component after the row transform, v is the frequency component after the column transform, and F(u, v) is the spectrum value at (u, v);
Step 23: moving the origin of the spectral image from the starting point (0, 0) to the image center (N/2, N/2);
Step 24: computing the amplitude |F| = √(Re² + Im²) of the complex Fourier values, wherein Re is the real part and Im is the imaginary part of the complex number;
Step 25: normalizing the amplitude map.
3. The blurred image processing method according to claim 2, characterized in that
α=0.30, β=0.59, γ=0.11.
4. The blurred image processing method according to claim 1, characterized in that step 3 specifically comprises:
Step 31: counting the number of pixels at each gray level in the spectrogram and computing each gray level's proportion of the whole image; splitting the image into foreground and background by threshold segmentation, computing respectively the probability w0 of being divided into the foreground and its mean gray value q0, and the probability w1 of being divided into the background and its mean gray value q1; traversing all candidate thresholds with the formula σ = w0*w1*(q0 − q1)² to find the segmentation threshold that maximizes σ, and then thresholding the image into a binary image whose pixels are either black or white;
Step 32: traversing the binary image pixel by pixel: scanning the rows from top to bottom to find the first row containing a white pixel, then scanning the columns from left to right to find the first column containing a white pixel, and intersecting the two search results to obtain the upper-left target point A(x1, y1); obtaining the lower-right target point B(x2, y2) in the same way; and computing the blur angle θ of the motion blur with the formula θ = arctan((y2 − y1)/(x2 − x1));
Step 33: rotating the binary image clockwise by the angle θ, accumulating values column by column to obtain the maximum and its horizontal distance D within the image, traversing the whole image and assigning half of the maximum to the values exceeding half of the maximum to obtain the minimum-value region Ω, computing in Ω the distance d of the first stripe from the central bright spot, and obtaining the blur length L of the motion-blurred image with the formula L = N/d.
5. The blurred image processing method according to claim 1, characterized in that step 4 specifically comprises:
Step 41: a clear image f, acted on by the motion-blur point spread function h_{L,θ} and polluted by noise n, becomes the blurred image g according to the formula (h_{L,θ} * f)(x, y) + n(x, y) = g(x, y), and image recovery is realized by deconvolving the blurred image;
Step 42: inputting a series of training pictures {Xi, Yi}, wherein Xi is an original input picture and Yi is the corresponding blurred picture, with m picture pairs in total; using the mean squared error L(Θ) = (1/m) Σ_{i=1}^{m} ||F(Yi; Θ) − Xi||² as the loss function, wherein Θ represents the parameters of the training process and the function F performs the deblurring operation on Yi under the series of parameters Θ; during training, adjusting the parameters so that the mean squared error is minimized, using stochastic gradient descent with backpropagation to minimize the loss; and inputting the Wiener-filtered image into the trained convolutional neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711145578.6A (granted as CN107945125B) | 2017-11-17 | 2017-11-17 | Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107945125A true CN107945125A (en) | 2018-04-20 |
CN107945125B CN107945125B (en) | 2021-06-22 |
Family
ID=61932816
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711145578.6A Active CN107945125B (en) | 2017-11-17 | 2017-11-17 | Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107945125B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109284751A | 2018-10-31 | 2019-01-29 | 河南科技大学 | Non-text filtering method for text localization based on spectrum analysis and SVM
CN109410143A | 2018-10-31 | 2019-03-01 | 泰康保险集团股份有限公司 | Image enhancement method and device, electronic device, and computer-readable medium
CN110060220A | 2019-04-26 | 2019-07-26 | 中国科学院长春光学精密机械与物理研究所 | Image denoising method and system based on an improved BM3D algorithm
CN110264415A | 2019-05-24 | 2019-09-20 | 北京爱诺斯科技有限公司 | Blurred image processing method for eliminating jitter
CN110443882A | 2019-07-05 | 2019-11-12 | 清华大学 | Light-field microscopic three-dimensional reconstruction method and device based on a deep learning algorithm
CN111080524A | 2019-12-19 | 2020-04-28 | 吉林农业大学 | Plant disease and insect pest identification method based on deep learning
CN111105357A | 2018-10-25 | 2020-05-05 | 杭州海康威视数字技术股份有限公司 | Distortion removal method and device for distorted images, and electronic device
CN111340724A | 2020-02-24 | 2020-06-26 | 卡莱特(深圳)云科技有限公司 | Image jitter removal method and device in the LED screen correction process
CN111415313A | 2020-04-13 | 2020-07-14 | 展讯通信(上海)有限公司 | Image processing method and device, electronic device, and storage medium
CN111986102A | 2020-07-15 | 2020-11-24 | 万达信息股份有限公司 | Digital pathological image deblurring method
CN112712467A | 2021-01-11 | 2021-04-27 | 郑州科技学院 | Image processing method based on computer vision and a color filter array
CN112868046A | 2018-10-18 | 2021-05-28 | 索尼公司 | Adjusting sharpness and detail in amplified output
CN113807246A | 2021-09-16 | 2021-12-17 | 平安普惠企业管理有限公司 | Face recognition method, device, equipment, and storage medium
CN114723642A | 2022-06-07 | 2022-07-08 | 深圳市资福医疗技术有限公司 | Image correction method and device, and capsule endoscope
WO2023093481A1 | 2021-11-25 | 2023-06-01 | 北京字跳网络技术有限公司 | Fourier-domain-based super-resolution image processing method, apparatus, device, and medium
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101079149A (en) * | 2006-09-08 | 2007-11-28 | 浙江师范大学 | Restoration method for noisy motion-blurred images based on a radial basis function neural network |
CN104655583A (en) * | 2015-02-04 | 2015-05-27 | 中国矿业大学 | Fourier-infrared-spectrum-based rapid coal quality recognition method |
CN105825484A (en) * | 2016-03-23 | 2016-08-03 | 华南理工大学 | Depth image denoising and enhancing method based on deep learning |
Non-Patent Citations (2)
Title |
---|
MICHAL DOBES et al.: "Blurred image restoration: A fast method of finding the motion length and angle", 《DIGITAL SIGNAL PROCESSING》 *
史海玲: "Research on key technologies of motion-blurred license plate recognition", 《中国优秀硕士学位论文全文数据库 工程科技II辑》 (China Master's Theses Full-text Database, Engineering Science and Technology II) *
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112868046A (en) * | 2018-10-18 | 2021-05-28 | 索尼公司 | Adjusting sharpness and detail in amplified output |
CN111105357A (en) * | 2018-10-25 | 2020-05-05 | 杭州海康威视数字技术股份有限公司 | Distortion removing method and device for distorted image and electronic equipment |
CN111105357B (en) * | 2018-10-25 | 2023-05-02 | 杭州海康威视数字技术股份有限公司 | Method and device for removing distortion of distorted image and electronic equipment |
CN109284751A (en) * | 2018-10-31 | 2019-01-29 | 河南科技大学 | Non-text filtering method for text localization based on spectral analysis and SVM |
CN109410143A (en) * | 2018-10-31 | 2019-03-01 | 泰康保险集团股份有限公司 | Image enchancing method, device, electronic equipment and computer-readable medium |
CN110060220A (en) * | 2019-04-26 | 2019-07-26 | 中国科学院长春光学精密机械与物理研究所 | Image denoising method and system based on an improved BM3D algorithm |
CN110264415A (en) * | 2019-05-24 | 2019-09-20 | 北京爱诺斯科技有限公司 | A blurred image processing method for eliminating shake |
CN110443882A (en) * | 2019-07-05 | 2019-11-12 | 清华大学 | Light-field microscopic three-dimensional reconstruction method and device based on a deep learning algorithm |
CN110443882B (en) * | 2019-07-05 | 2021-06-11 | 清华大学 | Light field microscopic three-dimensional reconstruction method and device based on deep learning algorithm |
CN111080524A (en) * | 2019-12-19 | 2020-04-28 | 吉林农业大学 | Plant disease and insect pest identification method based on deep learning |
CN111340724A (en) * | 2020-02-24 | 2020-06-26 | 卡莱特(深圳)云科技有限公司 | Image jitter removing method and device in LED screen correction process |
CN111340724B (en) * | 2020-02-24 | 2021-02-19 | 卡莱特(深圳)云科技有限公司 | Image jitter removing method and device in LED screen correction process |
CN111415313A (en) * | 2020-04-13 | 2020-07-14 | 展讯通信(上海)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111415313B (en) * | 2020-04-13 | 2022-08-30 | 展讯通信(上海)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111986102A (en) * | 2020-07-15 | 2020-11-24 | 万达信息股份有限公司 | Digital pathological image deblurring method |
CN111986102B (en) * | 2020-07-15 | 2024-02-27 | 万达信息股份有限公司 | Digital pathological image deblurring method |
CN112712467A (en) * | 2021-01-11 | 2021-04-27 | 郑州科技学院 | Image processing method based on computer vision and color filter array |
CN112712467B (en) * | 2021-01-11 | 2022-11-11 | 郑州科技学院 | Image processing method based on computer vision and color filter array |
CN113807246A (en) * | 2021-09-16 | 2021-12-17 | 平安普惠企业管理有限公司 | Face recognition method, device, equipment and storage medium |
WO2023093481A1 (en) * | 2021-11-25 | 2023-06-01 | 北京字跳网络技术有限公司 | Fourier domain-based super-resolution image processing method and apparatus, device, and medium |
CN114723642A (en) * | 2022-06-07 | 2022-07-08 | 深圳市资福医疗技术有限公司 | Image correction method and device and capsule endoscope |
Also Published As
Publication number | Publication date |
---|---|
CN107945125B (en) | 2021-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107945125A (en) | A blurred image processing method fusing spectrum estimation and a convolutional neural network | |
CN107527332B (en) | Low-illumination image color retention enhancement method based on improved Retinex | |
US7426312B2 (en) | Contrast enhancement of images | |
CN106780417A (en) | A method and system for enhancing unevenly illuminated images | |
CN107798661B (en) | Self-adaptive image enhancement method | |
CN106780375A (en) | An image enhancement method for low-illumination environments | |
JP2001229377A (en) | Method for adjusting contrast of digital image by adaptive recursive filter | |
CN112785534A (en) | Ghost-removing multi-exposure image fusion method in dynamic scene | |
WO2020099893A1 (en) | Image enhancement system and method | |
CN107609603A (en) | An image matching method fusing differences across multiple color spaces | |
Cao et al. | NUICNet: Non-uniform illumination correction for underwater image using fully convolutional network | |
CN113962898A (en) | Low-illumination image enhancement method based on illumination map optimization and adaptive gamma correction | |
CN112819688A (en) | Conversion method and system for converting SAR (synthetic aperture radar) image into optical image | |
CN107256539B (en) | Image sharpening method based on local contrast | |
CN112927160B (en) | Single low-light image enhancement method based on depth Retinex | |
CN104966271B (en) | Image de-noising method based on biological vision receptive field mechanism | |
CN116563133A (en) | Low-illumination color image enhancement method based on simulated exposure and multi-scale fusion | |
CN111028181A (en) | Image enhancement processing method, device, equipment and storage medium | |
US11625886B2 (en) | Storage medium storing program, training method of machine learning model, and image generating apparatus | |
Zini et al. | Shallow camera pipeline for night photography rendering | |
Prasenan et al. | A Study of Underwater Image Pre-processing and Techniques | |
Kalyan et al. | A New Concatenated Method for Deep Curve Estimation Using Low Weight CNN for Low Light Image Enhancement | |
CN110223246A (en) | A multi-style portrait beautifying and skin-smoothing method and device | |
Sharma et al. | Contrast image enhancement using luminance component based on wavelet transform | |
CN113554565B (en) | Underwater image enhancement method based on lambert beer law |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||