CN106934806B - A no-reference defocus blur region segmentation method based on structural sharpness - Google Patents
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Abstract
The present invention discloses a no-reference defocus blur region segmentation method based on structural sharpness, comprising the following steps: (1) scale the image, so that the scaled image has approximately 1/4 of the area of the original; (2) compute the sharpness difference: evaluate the structural sharpness of image blocks at corresponding positions in the original and scaled images, and take the difference between the two; (3) extract the blurred region: filter the noise out of the difference image, segment the blurred region with an image segmentation algorithm, and upsample the segmentation result. For defocus blur region segmentation of a no-reference image, the present invention constructs a scaled image from the original, computes the sharpness of the scaled and original images separately to obtain a blur-distribution image, and finally segments the defocused blurred region of the image quickly and effectively.
Description
Technical field
The present invention relates to digital image technology, and in particular to a no-reference defocus blur region segmentation method based on structural sharpness.
Background art
Image blur is a common form of image degradation. It usually arises during exposure from camera motion, object motion, or inaccurate focusing; in some cases it is introduced deliberately, while shooting or in post-processing, to pursue an artistic effect. Defocus blur is a common kind of image blur, caused mainly by inaccurate focusing. Blur results in a loss of information in the image and makes further processing difficult. Detecting blurred pixels accurately and efficiently therefore has important practical applications in fields such as image segmentation, object detection, scene classification, and image editing. In no-reference blur region segmentation, only a single input image is available. Current no-reference defocus blur region segmentation methods mainly comprise frequency-domain methods, spatial-domain methods, and methods combined with machine learning algorithms. These algorithms, however, suffer from two main problems: first, they are slow and therefore of limited practical use; second, their segmentation quality is poor.
An ideal image quality index can effectively distinguish blurred from sharp images and can therefore be used for blur region segmentation. The no-reference structural sharpness image quality index (NRSS) is such an index: it achieves no-reference image quality assessment on the basis of the structural similarity (SSIM) index. However, few existing methods use NRSS for blur region segmentation, and those that do segment poorly.
Summary of the invention
Object of the invention: the object of the present invention is to overcome the deficiencies of the prior art and provide a no-reference defocus blur region segmentation method based on structural sharpness.
Technical solution: the no-reference defocus blur region segmentation method based on structural sharpness according to the present invention comprises the following steps:
(1) Scale the original image: scale the original image proportionally to about 0.25 times its original area; that is, if the original image has size M × N, the scaled image has size (M/2) × (N/2). The scaled image is obtained by bilinear interpolation.
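Step (1) only requires a bilinear half-scale resize. The following is a minimal NumPy sketch (in practice a library call such as OpenCV's `resize` would be used); the function names are illustrative, not from the patent.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale array with bilinear interpolation."""
    in_h, in_w = img.shape
    # Map each output pixel centre back into input coordinates.
    ys = np.clip((np.arange(out_h) + 0.5) * in_h / out_h - 0.5, 0, in_h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * in_w / out_w - 0.5, 0, in_w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def half_scale(img):
    """Step (1): shrink an M x N image to (M/2) x (N/2), i.e. ~1/4 the area."""
    m, n = img.shape
    return bilinear_resize(img, m // 2, n // 2)
```

A constant image stays constant under bilinear resampling, which gives a quick sanity check of the weights.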
(2) Compute the image sharpness difference:
(2.1) partition the original image and the scaled image from step (1) into blocks, obtaining the original block set R and the scaled block set S;
(2.2) compute for each image block the no-reference structural sharpness used to measure its sharpness;
(2.3) compute the sharpness difference of each pair of corresponding blocks, obtaining a difference matrix.
(3) Segment the blurred region:
(3.1) denoise the difference matrix obtained in step (2) with a guided filter;
(3.2) segment the denoised image with Otsu's threshold method: first find the grey level that maximises the between-class variance, then binarise at that grey level;
(3.3) upsample the segmented image back to the original image scale.
Further, the detailed process of step (2.1) is as follows:
First, image blocks are chosen from the original image with step (2, 2): starting from the upper-left corner of the image, the window is moved 2 pixels at a time along the X axis, then 2 pixels along the Y axis, and the above is repeated until all image blocks are obtained. The chosen block set is R, and each block is denoted Ri,j, where i is the row index and j the column index. Assuming each block has size 2m × 2n, then 1 ≤ i ≤ (M − 2m)/2 + 1 and 1 ≤ j ≤ (N − 2n)/2 + 1.
Then, image blocks are chosen from the scaled image with step (1, 1): starting from the upper-left corner of the image, the window is moved 1 pixel at a time along the X axis, then 1 pixel along the Y axis, and the above is repeated until all image blocks are obtained. The chosen block set is denoted S, and each block Si,j, where i is the row index and j the column index. Assuming each block has size m × n, then 1 ≤ i ≤ M/2 − m + 1 and 1 ≤ j ≤ N/2 − n + 1.
The numbers of blocks in sets R and S are therefore equal.
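The two block-extraction passes share one sliding-window routine; the sketch below, in NumPy, uses illustrative names and sizes rather than anything taken from the patent. It also confirms the closing remark: with block 2m × 2n at step 2 on the original and block m × n at step 1 on the half-scale image, the block counts match, since (M − 2m)/2 + 1 = M/2 − m + 1.

```python
import numpy as np

def extract_blocks(img, block_h, block_w, step):
    """Slide a block_h x block_w window over img with the given step,
    returning blocks indexed [i][j] as in the patent's sets R and S."""
    h, w = img.shape
    return [[img[y:y + block_h, x:x + block_w]
             for x in range(0, w - block_w + 1, step)]
            for y in range(0, h - block_h + 1, step)]

# Original image: 2m x 2n blocks with step (2, 2);
# scaled image:   m x n blocks with step (1, 1).
```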
Further, the detailed process of step (2.2) is as follows:
Given an image block P, the no-reference structural sharpness is computed as follows: blur P with a Gaussian filter to obtain the blurred block Pb; extract gradients with the Sobel operator in the horizontal and vertical directions, denoting the gradient images of P and Pb by G and Gb respectively; find the N sub-blocks of the gradient image richest in information, where richness of gradient information is measured by variance, i.e. find the N sub-blocks with the largest variance. The no-reference structural sharpness NRSS of block P is then computed as:
NRSS = 1 − (1/N) Σ_{i=1}^{N} SSIM(Gi, Gi^b),
where Gi denotes the i-th chosen sub-block of G and Gi^b the corresponding sub-block of Gb. The SSIM function computes the structural similarity of two image blocks, jointly taking into account the correlation of luminance, contrast, and structural information between them. For two image blocks a and b, the SSIM function can be expressed as:
SSIM(a, b) = [l(a, b)]^α [c(a, b)]^β [s(a, b)]^γ
where
l(a, b) = (2u_a u_b + C1)/(u_a² + u_b² + C1),
c(a, b) = (2σ_a σ_b + C2)/(σ_a² + σ_b² + C2),
s(a, b) = (σ_ab + C3)/(σ_a σ_b + C3),
u_a, u_b denote the grey-level means of blocks a and b, σ_a, σ_b their grey-level standard deviations, and σ_ab their grey-level covariance; α, β, γ are parameters, and C1, C2, C3 are constant terms that prevent numerical instability when a denominator approaches zero.
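The SSIM formula above can be written down directly. This is a minimal sketch assuming the conventional constants C1 = 6.5025, C2 = 58.5225 (from K1 = 0.01, K2 = 0.03, dynamic range 255) and C3 = C2/2, with α = β = γ = 1; those values are common defaults, not given in the patent.

```python
import numpy as np

def ssim(a, b, C1=6.5025, C2=58.5225, C3=29.26125, alpha=1, beta=1, gamma=1):
    """Structural similarity of two equal-size grey-level blocks."""
    a = a.astype(float); b = b.astype(float)
    ua, ub = a.mean(), b.mean()
    sa, sb = a.std(), b.std()
    sab = ((a - ua) * (b - ub)).mean()              # covariance
    l = (2 * ua * ub + C1) / (ua**2 + ub**2 + C1)   # luminance term
    c = (2 * sa * sb + C2) / (sa**2 + sb**2 + C2)   # contrast term
    s = (sab + C3) / (sa * sb + C3)                 # structure term
    return (l ** alpha) * (c ** beta) * (s ** gamma)
```

A block compared with itself yields exactly 1, since each of the three terms reduces to 1.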
In this way NRSS(Si,j) and NRSS(Ri,j) are obtained for every block.
Further, the method for the step (2.3) is as follows:
Calculate the difference M ' of correspondence image block clarity in image block set R and Si,j, thus obtain matrix of differences M '=
{M′i,j}:
M′i,j=NRSS (Si,j)-NRSS(Ri,j);
Matrix M ' is normalized to obtain matrix M={ Mi,j, it is normalized using max min:
Mi,j=(M 'i,j- min (M '))/(max (M ')-min (M ')), wherein in max (M ') representing matrix element maximum
It is worth, the minimum value of element in min (M ') representing matrix.
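Step (2.3)'s difference and normalisation reduce to a few array operations. A sketch assuming the per-block sharpness values have already been collected into matrices `Sc` (scaled) and `Rc` (original); the guard against a flat matrix is an addition of this sketch, not in the patent.

```python
import numpy as np

def sharpness_difference(Sc, Rc):
    """Step (2.3): M' = NRSS(scaled blocks) - NRSS(original blocks),
    then min-max normalised to [0, 1]."""
    M1 = Sc - Rc
    span = M1.max() - M1.min()
    if span == 0:                     # flat matrix: avoid division by zero
        return np.zeros_like(M1, dtype=float)
    return (M1 - M1.min()) / span
```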
Beneficial effects: without loss of generality, an image that was blurred to begin with appears noticeably sharper after being scaled down, so the sharpness difference between the reduced image and the original can serve as a new discriminant criterion. On this basis the blur distribution of the image is obtained and, combined with an image segmentation algorithm, the blurred region is segmented effectively. Because of the scale change, the amount of data to be processed is greatly reduced, so the algorithm of the present invention is fast; at the same time, its segmentation quality is better than existing algorithms. In summary, the present invention has the advantages of fast segmentation, good segmentation quality, and suitability for no-reference images.
Detailed description of the invention
Fig. 1 is the original grayscale image in the embodiment;
Fig. 2 is the manually segmented image in the embodiment;
Fig. 3 is the normalised difference matrix obtained with the present invention in the embodiment;
Fig. 4 is the segmentation result obtained with the present invention in the embodiment;
Fig. 5 is the effect comparison figure in the embodiment.
Specific embodiment
The technical solution of the present invention is described in detail below, but the protection scope of the present invention is not limited to the embodiment.
Embodiment:
Step 1: read the original colour image to obtain the colour image matrix;
Step 2: convert the colour image matrix into a grayscale image matrix, obtaining the grayscale image shown in Fig. 1; its size is 640 × 621;
Step 3: scale the image: the scaled size is 320 × 310, and the image of Fig. 1 is scaled to this size using bilinear interpolation;
Step 4: partition into blocks and compute the sharpness of each block. The original and scaled images are partitioned into blocks; in this process the moving step is (2, 2) for the original image and (1, 1) for the scaled image, the block size is 32 × 32 for the original image and 16 × 16 for the scaled image, yielding the original block set R and the scaled block set S. For each block in both sets, the no-reference structural sharpness (NRSS) is computed to measure the sharpness of the block, yielding the original sharpness matrix Rc and the scaled sharpness matrix Sc;
Step 5: compute the sharpness difference: from the sharpness matrices Rc and Sc of step 4, the difference matrix is M′ = Sc − Rc;
Step 6: normalise M′ to obtain M using min-max normalisation, i.e. Mi,j = (M′i,j − min(M′))/(max(M′) − min(M′)); the normalised image is shown in Fig. 3;
Step 7: apply guided filtering to the matrix M, using M itself as the guide image;
Step 8: segment the matrix M with Otsu's method;
Step 9: upsample the segmentation result back to the original size using bilinear interpolation, obtaining the final segmentation result shown in Fig. 4.
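Step 7 filters M with M itself as the guide image. The original translation says "Steerable filter", but the use of a guide/reference image matches the guided filter of He et al.; the NumPy sketch below is an interpretation under that assumption, with window radius `r` and regulariser `eps` chosen arbitrarily.

```python
import numpy as np

def _box(img, r):
    """Mean filter with window radius r via 2-D cumulative sums."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    d = 2 * r + 1
    return (c[d:, d:] - c[:-d, d:] - c[d:, :-d] + c[:-d, :-d]) / (d * d)

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (He et al.).
    Step 7 uses M as both input and guide: guided_filter(M, M)."""
    mean_I, mean_p = _box(I, r), _box(p, r)
    cov_Ip = _box(I * p, r) - mean_I * mean_p
    var_I = _box(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)      # local linear coefficient
    b = mean_p - a * mean_I
    return _box(a, r) * I + _box(b, r)
```

On a constant input the local variance vanishes, so a = 0, b equals the constant, and the output is unchanged, which is a quick correctness check.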
In addition, the original grayscale image of Fig. 1 is segmented manually to obtain the image shown in Fig. 2, and the results of the two methods are then compared, as shown in Fig. 5: white marks overlapping sharp regions, black marks overlapping blurred regions, and grey marks misclassified regions.
It can be seen that the segmentation quality of the present invention is good and accurate.
Claims (3)
1. A no-reference defocus blur region segmentation method based on structural sharpness, comprising the following steps:
(1) scale the original image: scale the original image proportionally to 0.25 times its original area; that is, if the original image has size M × N, the scaled image has size (M/2) × (N/2), obtained by bilinear interpolation;
(2) compute the image sharpness difference:
(2.1) partition the original image and the scaled image from step (1) into blocks, obtaining the original block set R and the scaled block set S;
(2.2) compute for each image block the no-reference structural sharpness used to measure its sharpness;
(2.3) compute the sharpness difference of each pair of corresponding blocks, obtaining a difference matrix:
compute the sharpness difference M'i,j of corresponding blocks in sets R and S, obtaining the difference matrix M′ = {M'i,j}:
M'i,j = NRSS(Si,j) − NRSS(Ri,j);
normalise M′ to obtain the matrix M = {Mi,j} using min-max normalisation:
Mi,j = (M'i,j − min(M'))/(max(M') − min(M')),
where max(M') and min(M') denote the maximum and minimum elements of the matrix;
(3) segment the blurred region:
(3.1) denoise the difference matrix obtained in step (2) with a guided filter;
(3.2) segment the denoised image with Otsu's threshold method: first find the grey level that maximises the between-class variance, then binarise at that grey level;
(3.3) upsample the segmented image back to the original image scale.
2. The no-reference defocus blur region segmentation method based on structural sharpness according to claim 1, characterised in that the detailed process of step (2.1) is as follows:
first, image blocks are chosen from the original image with step (2, 2): starting from the upper-left corner of the image, the window is moved 2 pixels at a time along the X axis, then 2 pixels along the Y axis, and this is repeated until all image blocks are obtained; the chosen block set is R, each block denoted Ri,j, where i is the row index and j the column index; assuming each block has size 2m × 2n, then 1 ≤ i ≤ (M − 2m)/2 + 1 and 1 ≤ j ≤ (N − 2n)/2 + 1;
then, image blocks are chosen from the scaled image with step (1, 1): starting from the upper-left corner of the image, the window is moved 1 pixel at a time along the X axis, then 1 pixel along the Y axis, and this is repeated until all image blocks are obtained; the chosen block set is denoted S, each block Sp,q, where p is the row index and q the column index; assuming each block has size m × n, then 1 ≤ p ≤ M/2 − m + 1 and 1 ≤ q ≤ N/2 − n + 1;
the numbers of blocks in sets R and S are equal.
3. The no-reference defocus blur region segmentation method based on structural sharpness according to claim 1, characterised in that the detailed process of step (2.2) is as follows:
given an image block P, the no-reference structural sharpness is computed as follows: blur P with a Gaussian filter to obtain the blurred block Pb; extract gradients with the Sobel operator in the horizontal and vertical directions, denoting the gradient images of P and Pb by G and Gb respectively; find the T sub-blocks of the gradient image richest in information, where richness of gradient information is measured by variance, i.e. find the T sub-blocks with the largest variance; the no-reference structural sharpness NRSS of block P is then computed as:
NRSS = 1 − (1/T) Σ_{i=1}^{T} SSIM(Gi, Gi^b),
where Gi denotes the i-th chosen sub-block of G and Gi^b the corresponding sub-block of Gb; the SSIM function computes the structural similarity of two image blocks, jointly taking into account the correlation of luminance, contrast, and structural information between them; for two image blocks a and b, the SSIM function is expressed as:
SSIM(a, b) = [l(a, b)]^α [c(a, b)]^β [s(a, b)]^γ
where
l(a, b) = (2u_a u_b + C1)/(u_a² + u_b² + C1),
c(a, b) = (2σ_a σ_b + C2)/(σ_a² + σ_b² + C2),
s(a, b) = (σ_ab + C3)/(σ_a σ_b + C3),
u_a, u_b denote the grey-level means of blocks a and b, σ_a, σ_b their grey-level standard deviations, and σ_ab their grey-level covariance; α, β, γ are parameters, and C1, C2, C3 are constant terms that prevent numerical instability when a denominator approaches zero;
NRSS(Si,j) and NRSS(Ri,j) are thereby obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710135456.2A CN106934806B (en) | 2017-03-09 | 2017-03-09 | A no-reference defocus blur region segmentation method based on structural sharpness |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106934806A CN106934806A (en) | 2017-07-07 |
CN106934806B true CN106934806B (en) | 2019-09-10 |
Family
ID=59432070
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710135456.2A Active CN106934806B (en) | A no-reference defocus blur region segmentation method based on structural sharpness |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106934806B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292879B (en) * | 2017-07-17 | 2019-08-20 | 电子科技大学 | A kind of sheet metal surface method for detecting abnormality based on image analysis |
CN107492078B (en) * | 2017-08-14 | 2020-04-07 | 厦门美图之家科技有限公司 | Method for removing black noise in image and computing equipment |
CN111417981A (en) * | 2018-03-12 | 2020-07-14 | 华为技术有限公司 | Image definition detection method and device |
CN111626974B (en) * | 2019-02-28 | 2024-03-22 | 苏州润迈德医疗科技有限公司 | Quality scoring method and device for coronary angiography image sequence |
CN112714246A (en) * | 2019-10-25 | 2021-04-27 | Tcl集团股份有限公司 | Continuous shooting photo obtaining method, intelligent terminal and storage medium |
CN111010556B (en) * | 2019-12-27 | 2022-02-11 | 成都极米科技股份有限公司 | Method and device for projection bi-directional defocus compensation and readable storage medium |
CN111179259B (en) * | 2019-12-31 | 2023-09-26 | 北京灵犀微光科技有限公司 | Optical definition testing method and device |
CN112017163A (en) * | 2020-08-17 | 2020-12-01 | 中移(杭州)信息技术有限公司 | Image blur degree detection method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101996406A (en) * | 2010-11-03 | 2011-03-30 | 中国科学院光电技术研究所 | No-reference structural sharpness image quality evaluation method |
CN103955934A (en) * | 2014-05-06 | 2014-07-30 | 北京大学 | Image blurring detecting algorithm combined with image obviousness region segmentation |
CN104200475A (en) * | 2014-09-05 | 2014-12-10 | 中国传媒大学 | Novel no-reference image blur degree estimation method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7006677B2 (en) * | 2002-04-15 | 2006-02-28 | General Electric Company | Semi-automatic segmentation algorithm for pet oncology images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||