CN103426144A - Method and device for deblurring image having perspective distortion - Google Patents

Method and device for deblurring image having perspective distortion

Info

Publication number
CN103426144A
Authority
CN
China
Prior art keywords
image
subregion
segmentation
subimage
blur degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210154166XA
Other languages
Chinese (zh)
Other versions
CN103426144B (en)
Inventor
李建杰
李献
罗兆海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to CN201210154166.XA priority Critical patent/CN103426144B/en
Publication of CN103426144A publication Critical patent/CN103426144A/en
Application granted granted Critical
Publication of CN103426144B publication Critical patent/CN103426144B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for deblurring an image having perspective distortion. The method comprises: a perspective information detection step of detecting perspective information of the image; a subregion segmentation step of segmenting the image into a plurality of subregions based on the perspective information, such that the blur degree within each subregion is substantially uniform; and a deblurring step of deblurring each subregion based on its blur information. Compared with the prior art, the method and device improve the deblurring effect for images having perspective distortion.

Description

Method and apparatus for deblurring an image having perspective distortion
Technical field
The present invention relates to a method and an apparatus for deblurring an image having perspective distortion.
Background art
A digital camera can be used to capture images in order to record and extract text information, and the extracted text information can be recognized by an optical character recognition (OCR) module for further processing such as retrieval. However, if the surface of the target document is out of focus, the captured image will be blurred, and the degraded image quality will reduce the accuracy of text recognition.
Moreover, when the shooting direction is not perpendicular to the surface of the target document, perspective distortion usually appears in the captured image. If an image has perspective distortion, the distances from different parts of the imaged surface to the camera lens differ, and therefore the blur degrees of different parts of the image also differ.
Many methods have been proposed for deblurring (i.e., restoring) images.
In U.S. Patent No. 6,154,574, a blurred image is divided into a plurality of subimages with a predetermined window size, an average step response is calculated based on the edge directions of all the subimages, blur information such as a point spread function (PSF) is estimated from the average step response, and the blurred image is restored based on the PSF.
In U.S. Patent Application Publication No. 2009/0067742, a blurred image is divided into a plurality of overlapping blocks, unit step responses are calculated to estimate the PSF of each overlapping block, the validity of the PSF parameters of a given block is judged from the PSF parameters of its adjacent blocks, and invalid PSF parameters of that block are then replaced based on the PSF parameters of the adjacent blocks.
In U.S. Patent Application Publication No. 2011/0044554, a blurred image is divided into a plurality of overlapping subimages of fixed size, a PSF is estimated for each subimage mainly from its edge information, each subimage is deblurred with its own PSF, and the deblurred subimages are finally stitched together by image mosaicking.
In U.S. Patent Application Publication No. 2011/0032575, the defocus modulus is estimated from two different images that contain the same non-flat portion and are scanned in two different focal planes. The defocus modulus can be estimated from the distance between the two focal planes and the Fourier transform parameters of the two images.
However, for a blurred image with perspective distortion (i.e., one whose different parts have different blur degrees), the prior-art deblurring methods all have shortcomings, because they consider only local information of the blurred image and ignore the distortion information.
More specifically, in U.S. Patent No. 6,154,574 a single PSF is estimated from the average step response of all the subimages and used to restore the whole blurred image. If the blur degrees of the subimages differ, however, the average step response is unsuitable for the whole image.
In U.S. Patent Application Publication No. 2009/0067742, the PSF parameters of an overlapping block are influenced by the PSF parameters of the adjacent blocks. Therefore, if one or more adjacent blocks fail to produce accurate PSF parameters, the PSF parameters of the central block will be corrupted even when they are themselves accurate. Like the method of U.S. Patent No. 6,154,574, the method of U.S. Patent Application Publication No. 2009/0067742 is better suited to blurred images with a substantially uniform blur degree.
In U.S. Patent Application Publication No. 2011/0044554, each subimage is deblurred with its own PSF, which depends mainly on the subimage's own edge information. Where the edge information is unevenly distributed, for example if a subimage contains little edge information (subimages near the page margin often have few edges), the accuracy of its PSF estimate suffers.
In U.S. Patent Application Publication No. 2011/0032575, the distance between the two focal planes must be known in advance. For a digital camera, however, it is difficult to obtain two images at predetermined, different lens distances.
Therefore, there is a need for a method and an apparatus for deblurring an image having perspective distortion.
In addition, there is a need for a method and an apparatus for deblurring an image having perspective distortion with a better deblurring effect than the prior art.
Summary of the invention
The present invention is proposed in view of at least one of the above problems.
One object of the present invention is to provide a method and an apparatus capable of deblurring an image having perspective distortion.
Another object of the present invention is to provide a method and an apparatus for deblurring an image having perspective distortion with a better deblurring effect than the prior art.
According to a first aspect of the present invention, there is provided a method for deblurring an image having perspective distortion, comprising: a perspective information detection step of detecting perspective information of the image; a subregion segmentation step of segmenting the image into a plurality of subregions based on the perspective information, such that the blur degree within each subregion is substantially uniform; and a deblurring step of deblurring each subregion based on its blur information.
According to a second aspect of the present invention, there is provided an apparatus for deblurring an image having perspective distortion, comprising: a perspective information detection unit for detecting perspective information of the image; a subregion segmentation unit for segmenting the image into a plurality of subregions based on the perspective information, such that the blur degree within each subregion is substantially uniform; and a deblurring unit for deblurring each subregion based on its blur information.
By means of the above features, the present invention can provide a method and an apparatus capable of deblurring an image having perspective distortion. Furthermore, the present invention can provide a method and an apparatus for deblurring an image having perspective distortion with a better deblurring effect than the prior art.
Further objects, features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Fig. 1 is a block diagram schematically illustrating the hardware configuration of a computing device capable of implementing the deblurring method according to the present invention.
Fig. 2 is a flowchart schematically illustrating the general flow of the deblurring method according to the present invention.
Fig. 3 and Fig. 4 are two alternative flowcharts schematically illustrating the perspective information detection step of the deblurring method according to the present invention.
Fig. 5 and Fig. 6 are two alternative flowcharts schematically illustrating the subregion segmentation step of the deblurring method according to the present invention.
Fig. 7 is a flowchart schematically illustrating the refining step within the subregion segmentation step of the deblurring method according to the present invention.
Fig. 8 is a flowchart schematically illustrating the splitting step within the subregion segmentation step of the deblurring method according to the present invention.
Fig. 9 is a flowchart schematically illustrating the deblurring step of the deblurring method according to the present invention.
Fig. 10 is a flowchart schematically illustrating the typical subimage selection step within the deblurring step of the deblurring method according to the present invention.
Fig. 11 is a general block diagram schematically illustrating the deblurring apparatus according to the present invention.
Fig. 12 schematically illustrates the subimage splitting in the refining step when the split angle is positive.
Fig. 13 schematically illustrates the subimage splitting in the refining step when the split angle is negative.
Fig. 14 schematically illustrates the subregion splitting when the split angle is greater than 45°.
Fig. 15 schematically illustrates the subregion splitting when the split angle is less than 45°.
Fig. 16 shows part of an exemplary blurred image with perspective distortion.
Fig. 17 shows an exemplary deblurring result according to the prior art.
Fig. 18 shows an exemplary deblurring result according to the present invention.
Embodiments
Exemplary embodiments of the present invention are described in detail below with reference to the accompanying drawings. Note that the following description is merely illustrative and exemplary in nature, and is in no way intended to limit the invention or its applications or uses. The components and relative arrangement of steps, the numerical expressions and the numerical values set forth in the embodiments do not limit the scope of the invention unless otherwise stated. In addition, techniques, methods and devices known to those skilled in the art may not be discussed in detail, but are intended to form part of the specification where appropriate.
Incidentally, in this specification, sizes (such as length, height, width, etc.) are usually measured in units of pixels.
First, an exemplary hardware configuration of a computing device 1000 capable of implementing the deblurring processing according to the present invention is described with reference to Fig. 1. For simplicity, only one computing device is shown, but a plurality of computing devices may also be used as required.
As shown in Fig. 1, the computing device 1000 may comprise a CPU 1110, a chipset 1120, a RAM 1130, a memory controller 1140, a display controller 1150, a hard disk drive 1160, a CD-ROM drive 1170 and a display 1180. The computing device 1000 may also comprise a signal line 1210 connected between the CPU 1110 and the chipset 1120, a signal line 1220 connected between the chipset 1120 and the RAM 1130, a peripheral bus 1230 connected between the chipset 1120 and various peripherals, a signal line 1240 connected between the memory controller 1140 and the hard disk drive 1160, a signal line 1250 connected between the memory controller 1140 and the CD-ROM drive 1170, and a signal line 1260 connected between the display controller 1150 and the display 1180.
A client computer 1300 may be connected to the computing device 1000 directly or via a network 1400. The client computer 1300 may send a deblurring task to the computing device 1000, and the computing device 1000 may return the deblurring result to the client computer 1300.
Next, the method according to the present invention for deblurring an image having perspective distortion (such as a document image) will be described.
Fig. 2 schematically illustrates the general flow of the deblurring method according to the present invention.
As shown in Fig. 2, first, at step 210 (the perspective information detection step), the perspective information of the blurred image having perspective distortion is detected.
Then, at step 220 (the subregion segmentation step), the image is segmented into a plurality of subregions based on the perspective information, such that the blur degree within each subregion is identical (in this specification, this means that the blur degree within each subregion is substantially uniform).
Finally, at step 230 (the deblurring step), each subregion is deblurred based on its blur information, so as to obtain the deblurred image.
As mentioned above, a blurred image with perspective distortion has a non-uniform blur degree, that is, the blur degrees of its different parts differ. In the prior-art deblurring methods, the blurred image is divided into a plurality of subimages and deblurred without taking the non-uniform blur degree into account, so the deblurring effect suffers to some extent. In contrast, in the above deblurring method of the present invention, the blurred image is segmented into a plurality of subregions by taking the perspective information into account, the blur degree within each subregion is substantially uniform, and each subregion with its substantially uniform blur degree is deblurred individually based on its own blur information. A better deblurring effect can thus be obtained compared with the prior art.
The flowchart of Fig. 2 briefly illustrates the basic steps of the deblurring method according to the present invention. Below, exemplary processing of each of the above steps is described in more detail.
The perspective information detection step 210 is implemented first in the present invention, so that the blurred image can later be segmented, based on the detected perspective information, into a plurality of subregions each having a substantially uniform blur degree; Fig. 3 and Fig. 4 schematically illustrate two alternative flowcharts. Incidentally, the horizontal vanishing point and the vertical vanishing point are taken here as an example of perspective information, but the perspective information is obviously not necessarily limited thereto.
As shown in Fig. 3, first, at step 212, the edge image of the image is detected. Any suitable method known in the art may be adopted to obtain the edge image; for example, the Canny method may be used. The parameters of the Canny method include a minimum edge threshold, a maximum edge threshold, σ, etc. In one example of the present invention, for 256 gray levels, the minimum edge threshold is set to 8, the maximum edge threshold is set to 35, and σ is set to 1.3. Note, however, that all these values are merely exemplary, and those skilled in the art may choose any appropriate values as required.
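As an illustration, the following is a minimal sketch of this edge detection step in Python using OpenCV; the function name and the explicit Gaussian pre-smoothing (standing in for the σ parameter) are assumptions of this sketch, not details prescribed by the text.

```python
import cv2

def detect_edge_image(gray, low_thresh=8, high_thresh=35, sigma=1.3):
    """Edge image of a 256-gray-level document image (example values
    from the text: thresholds 8/35, sigma 1.3)."""
    # cv2.Canny has no sigma parameter, so smooth explicitly first.
    smoothed = cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma)
    return cv2.Canny(smoothed, low_thresh, high_thresh)
```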
Next, at step 214, horizontal page boundaries and vertical page boundaries are detected in the edge image. Any suitable method known in the art may be adopted to obtain them. As an example, the strategy for detecting a horizontal page boundary may comprise: selecting an edge pixel near the upper-left corner of the edge image; determining whether the pixels to the right of the current edge pixel include at least one edge pixel; and, if so, setting the detected edge pixel as the current edge pixel and continuing the detection until the end of the row is reached or no further edge pixel can be found. In this way, a horizontal page boundary can be obtained. Needless to say, the detection direction may also be from right to left. Vertical page boundaries are obtained similarly, with a top-to-bottom or bottom-to-top detection direction.
In one example of the present invention, the length threshold for horizontal page boundaries is set to 0.3 × image width, and the length threshold for vertical page boundaries is set to 0.3 × image height, but these values are not necessarily limited thereto.
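A minimal sketch of this tracing strategy follows, assuming an edge image stored as a NumPy array with nonzero edge pixels; the small search window (`reach`) used to find the next edge pixel to the right is an assumed tolerance, not a value from the text.

```python
import numpy as np

def trace_horizontal_boundary(edges, start_y, start_x, reach=2):
    """Follow a chain of edge pixels rightward from a starting edge pixel,
    stopping at the end of the row or when no edge pixel is found."""
    h, w = edges.shape
    y, x = start_y, start_x
    chain = [(y, x)]
    while x + 1 < w:
        ys = slice(max(0, y - reach), min(h, y + reach + 1))
        xs = slice(x + 1, min(w, x + 1 + reach))
        cand = np.argwhere(edges[ys, xs])
        if cand.size == 0:
            break                      # no edge pixel found: boundary ends
        dy, dx = cand[0]
        y, x = max(0, y - reach) + dy, x + 1 + dx
        chain.append((y, x))
    return chain

def find_horizontal_page_boundaries(edges):
    """Keep chains longer than 0.3 x image width (example threshold).
    A practical implementation would avoid re-tracing overlapping chains."""
    boundaries = []
    for y, x in np.argwhere(edges):
        chain = trace_horizontal_boundary(edges, y, x)
        if len(chain) >= 0.3 * edges.shape[1]:
            boundaries.append(chain)
    return boundaries
```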
Then, at step 216, the horizontal vanishing point and the vertical vanishing point of the image are computed based on the horizontal page boundaries and the vertical page boundaries.
If page boundaries comprising horizontal page boundaries and vertical page boundaries can be detected and the blurred image has perspective distortion, the horizontal page boundaries will intersect in the x direction (i.e., the horizontal direction), and the intersection points in the x direction are regarded as candidate horizontal vanishing points. Similarly, the vertical page boundaries will intersect in the y direction (i.e., the vertical direction), and the intersection points in the y direction are regarded as candidate vertical vanishing points.
In the present invention, the final horizontal vanishing point $Pt_{HorVanish}$ is selected from the candidate intersection points $P_{intersect}$ in the x direction by the following formula (1):

$$Pt_{HorVanish} = \arg\min_{P_{intersect}} \left( \sum_{i=1}^{n} \left| slope_{HOR_i} - slope_{CON_i\_intersect} \right| \right) \qquad (1)$$

Here, $n$ denotes the count of horizontal page boundaries; $slope_{HOR_i}$ denotes the slope of the $i$-th horizontal page boundary; $slope_{CON_i\_intersect}$ denotes the slope of the $i$-th connecting line formed by an edge point of the $i$-th horizontal page boundary and the candidate intersection point in the x direction; and the right-hand side of formula (1) denotes the candidate intersection point in the x direction that minimizes the value in the parentheses.
Similarly, the final vertical vanishing point $Pt_{VertVanish}$ is selected from the candidate intersection points $P_{intersect}$ in the y direction by the following formula (2):

$$Pt_{VertVanish} = \arg\min_{P_{intersect}} \left( \sum_{j=1}^{m} \left| slope_{VERT_j} - slope_{CON_j\_intersect} \right| \right) \qquad (2)$$

Here, $m$ denotes the count of vertical page boundaries; $slope_{VERT_j}$ denotes the slope of the $j$-th vertical page boundary; $slope_{CON_j\_intersect}$ denotes the slope of the $j$-th connecting line formed by an edge point of the $j$-th vertical page boundary and the candidate intersection point in the y direction; and the right-hand side of formula (2) denotes the candidate intersection point in the y direction that minimizes the value in the parentheses.
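The selection in formulas (1) and (2) can be sketched as follows; the choice of a single representative edge point per boundary line is an assumption of this sketch, since the text leaves the exact edge point unspecified.

```python
import numpy as np

def select_vanishing_point(candidates, line_points, line_slopes):
    """Pick the candidate intersection minimizing the summed slope
    mismatch of formulas (1)/(2).

    candidates  -- candidate intersection points (x, y)
    line_points -- one representative edge point (x, y) per page boundary
    line_slopes -- fitted slope of each page boundary
    """
    best, best_cost = None, np.inf
    for cx, cy in candidates:
        cost = 0.0
        for (px, py), slope in zip(line_points, line_slopes):
            con_slope = (cy - py) / (cx - px + 1e-12)  # connecting line
            cost += abs(slope - con_slope)
        if cost < best_cost:
            best, best_cost = (cx, cy), cost
    return best
```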
Some images may not contain enough page boundaries; for example, the count of horizontal page boundaries may be less than 2. In this case, the horizontal vanishing point and the vertical vanishing point of the image can alternatively be computed based on text lines and vertical character strokes (see Fig. 4).
As shown in Fig. 4, first, at step 212, the edge image of the image is detected. This step is identical to step 212 in Fig. 3, and its detailed description is omitted.
Next, at step 214', text lines and vertical character strokes are detected in the edge image. Any suitable method known in the art may be adopted to obtain them.
For example, to obtain the text lines, a Run Length Smoothing Algorithm (RLSA) may be applied in the horizontal and vertical directions. In one example of the present invention, the connecting length in the horizontal direction is set to 0.05 × image width, and the connecting length in the vertical direction is set to 0.01 × image height. Then, the center points of the black runs in the y direction can be extracted, and the horizontal curves can be detected as text lines by the same strategy used in the page boundary detection (see step 214).
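A minimal sketch of horizontal run-length smoothing under these assumptions (binary image with foreground = 1; the vertical pass can reuse it on the transposed image):

```python
import numpy as np

def rlsa_horizontal(binary, max_gap):
    """Bridge horizontal background gaps shorter than max_gap
    (e.g. 0.05 x image width, per the example value in the text)."""
    out = binary.copy()
    for row in out:                      # each row is a view into `out`
        fg = np.flatnonzero(row)
        for a, b in zip(fg[:-1], fg[1:]):
            if 0 < b - a <= max_gap:
                row[a:b] = 1             # fill the short gap
    return out

# Vertical pass with the example value 0.01 x image height:
# smoothed_v = rlsa_horizontal(binary.T, max_gap_v).T
```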
On the other hand, to obtain the vertical character strokes, the horizontal edges may first be removed as follows. The gradient slope slopeGrad of an edge pixel is calculated by the following formula (3) to determine its edge direction:

$$slopeGrad = \arctan(Grad_y / Grad_x) \qquad (3)$$

Here, $Grad_y$ and $Grad_x$ denote the y gradient and the x gradient of the edge pixel, respectively. If the angle between slopeGrad and the text line is less than a predetermined angle (e.g., 30°), the edge is regarded as a horizontal edge and removed. After the horizontal edges are removed, connected components (CCs) are detected in the edge image. If the edge pixels in a CC have good linearity, they are regarded as a vertical stroke. In one example of the present invention, a Principal Component Analysis (PCA) method is adopted to calculate the linearity of a CC.
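The PCA linearity test can be sketched as follows; the acceptance threshold on the returned ratio is not specified in the patent and would have to be chosen empirically.

```python
import numpy as np

def cc_linearity(points):
    """Linearity of a connected component: fraction of the total variance
    captured by the dominant principal axis of its pixel coordinates.
    A value close to 1 means the pixels lie nearly on a line, i.e. a
    vertical-stroke candidate."""
    pts = np.asarray(points, dtype=float)      # (N, 2) pixel coordinates
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
    return eigvals[0] / (eigvals.sum() + 1e-12)
```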
Then, at step 216', the horizontal vanishing point and the vertical vanishing point of the image are computed based on the text lines and the vertical character strokes. For this purpose, formulas similar to formulas (1) and (2) can be used, and their detailed description is omitted.
At this point, the perspective information (for example, the horizontal vanishing point and the vertical vanishing point) has been obtained. It will be used in the subsequent subregion segmentation step 220 to segment the blurred image into a plurality of subregions each having a substantially uniform blur degree. Next, the subregion segmentation step 220 of the deblurring method according to the present invention is described in more detail with reference to Figs. 5 to 8 and Figs. 12 to 15.
Fig. 5 and Fig. 6 schematically illustrate two alternative flowcharts of the subregion segmentation step 220.
As shown in Fig. 5, first, at step 222, a split direction for segmenting the image is calculated based on the perspective information.
Then, at step 224, the image is segmented into a plurality of subregions along the split direction. As a result, the blur degree within each subregion is substantially uniform.
As can be seen from a comparison of Fig. 5 and Fig. 6, the only difference between them is the additional refining step 223 in Fig. 6. More specifically, at step 223 between step 222 and step 224, the split direction obtained from the perspective information is further refined based on the blur degree; then, at step 224, the image is segmented along the refined split direction into a plurality of subregions each having a substantially uniform blur degree. That is to say, step 223 is not strictly necessary, and it may be carried out or omitted as desired. Below, Fig. 6 is taken as an example for a more detailed description.
At step 222 of Fig. 6, the split direction can be calculated, for example, from the slope of the straight line through the horizontal vanishing point of the image and the slope of the straight line through the vertical vanishing point of the image. More specifically, in one example of the present invention, the split direction is calculated by the following formula (4):

$$Direction_{split} = \arctan(slope_y / slope_x) \qquad (4)$$

Here, $Direction_{split}$ denotes the split angle of the split direction, $slope_y$ denotes the slope of the straight line connecting the center point of the image and the vertical vanishing point of the image, and $slope_x$ denotes the slope of the straight line connecting the center point of the image and the horizontal vanishing point of the image.
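A direct transcription of formula (4), assuming pixel coordinates for the image center and the two vanishing points:

```python
import numpy as np

def split_direction(center, horiz_vp, vert_vp):
    """Split angle from the slopes of the lines joining the image center
    to the vertical and horizontal vanishing points (formula (4))."""
    cx, cy = center
    slope_y = (vert_vp[1] - cy) / (vert_vp[0] - cx + 1e-12)
    slope_x = (horiz_vp[1] - cy) / (horiz_vp[0] - cx + 1e-12)
    return np.degrees(np.arctan(slope_y / slope_x))  # split angle, degrees
```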
To reduce the influence of severe local defocus on the accuracy of the perspective information, and thus to obtain a more accurate split direction, the split direction obtained at step 222 from the perspective information may optionally be refined at step 223 based on the blur degree. An exemplary refining process is shown in Fig. 7.
As shown in Fig. 7, first, at step 223a, a part of the image is selected, and a plurality of subimages are cut from that part with a predetermined window size, starting from a corner of the part and proceeding along the split direction.
As an example, the central part p'q'r's' of the input blurred image pqrs may be selected (see Figs. 12 and 13). In an example of the present invention, the left, right, top and bottom boundaries of the central part p'q'r's' may be located at 0.2 × image width, 0.8 × image width, 0.2 × image height and 0.8 × image height, respectively, but the present invention is not necessarily limited thereto.
Subsequently, a plurality of subimages are cut from the central part p'q'r's' with a predetermined window size, starting from a corner of p'q'r's' and proceeding along the split direction SS' obtained at step 222. The height and width of the window may be identical, and in an example of the present invention each of them is set to 256 pixels. However, the present invention is not necessarily limited thereto, and those skilled in the art may choose any suitable values as required.
The x and y directions are defined as shown in Figs. 12 and 13. In this case, if the split direction SS' is positive (see Fig. 12), the central part p'q'r's' is cut along the split direction SS' from upper-left to lower-right: the first subimage is located at the upper-left corner of p'q'r's'; the intermediate subimages lie between the first subimage and the last subimage, with their center points located on the connecting line between the center points of the first and last subimages (that is, the center points of all the subimages lie on the same line along the split direction); and the positions of the intermediate subimages are calculated from the position of the first subimage and the split direction (or its split angle θ). On the other hand, if the split direction is negative (see Fig. 13), the central part p'q'r's' is cut along the split direction SS' similarly but from lower-left to upper-right: the first subimage is located at the lower-left corner of p'q'r's'. Figs. 12 and 13 schematically illustrate the positions of the first and last subimages within p'q'r's', and the specific positions of the subimages can be obtained as follows.
If the absolute value of the split angle of the split direction is less than 45° (see Fig. 15, described later) or equal to 45°, the center point coordinates $x_{center}$ and $y_{center}$ of a subimage between the first subimage and the last subimage can be obtained by the following formula (5):

$$x_{center} = x_{first} + radius \cdot i, \qquad y_{center} = y_{first} + radius \cdot i \cdot \tan(\theta) \qquad (5)$$

Here, $radius$ denotes the side length of the window (that is, the side length of the subimage; the height and width of the window or subimage are assumed identical here), and may for example be set to 256 pixels; $x_{first}$ and $y_{first}$ denote the center point coordinates of the first subimage; $i$ denotes the index of the subimage ($i = 0$ corresponds to the first subimage); and $\theta$ denotes the split angle of the split direction.
In addition, the left, right, top and bottom boundary coordinates $x_{left}$, $x_{right}$, $y_{top}$ and $y_{bottom}$ of a subimage between the first subimage and the last subimage can be obtained by the following formulas (6) and (7):

$$x_{left} = x_{center} - 0.5 \cdot radius, \qquad x_{right} = x_{center} + 0.5 \cdot radius \qquad (6)$$

$$y_{top} = y_{center} - 0.5 \cdot radius, \qquad y_{bottom} = y_{center} + 0.5 \cdot radius \qquad (7)$$

On the other hand, if the absolute value of the split angle of the split direction is greater than 45° (see Fig. 14, described later), the center point coordinates $x_{center}$ and $y_{center}$ of a subimage between the first subimage and the last subimage can be obtained by the following formula (8):

$$y_{center} = y_{first} + radius \cdot \mathrm{sign}(\theta) \cdot i, \qquad x_{center} = x_{first} + radius \cdot \left| \cot(\theta) \right| \cdot i \qquad (8)$$

In this case, the left, right, top and bottom boundary coordinates $x_{left}$, $x_{right}$, $y_{top}$ and $y_{bottom}$ of a subimage between the first subimage and the last subimage can likewise be obtained by formulas (6) and (7) above.
By formulas (5) to (8), a plurality of subimages are cut from the central part p'q'r's' with the predetermined window size, starting from a corner of p'q'r's' and proceeding along the split direction SS'.
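Formulas (5) to (8) translate directly into code; the sketch below assumes the angle is given in degrees and the window is square (radius = 256 by default, as in the example).

```python
import numpy as np

def subimage_centers(x_first, y_first, theta_deg, count, radius=256):
    """Center points of `count` subimages laid out along the split
    direction, using formula (5) for |theta| <= 45 degrees and
    formula (8) otherwise."""
    theta = np.radians(theta_deg)
    centers = []
    for i in range(count):
        if abs(theta_deg) <= 45:                        # formula (5)
            x = x_first + radius * i
            y = y_first + radius * i * np.tan(theta)
        else:                                           # formula (8)
            y = y_first + radius * np.sign(theta_deg) * i
            x = x_first + radius * abs(1.0 / np.tan(theta)) * i
        centers.append((x, y))
    return centers

def window_bounds(x_center, y_center, radius=256):
    """Formulas (6) and (7): square window around a subimage center."""
    return (x_center - 0.5 * radius, x_center + 0.5 * radius,  # left, right
            y_center - 0.5 * radius, y_center + 0.5 * radius)  # top, bottom
```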
Next, at step 223b of Fig. 7, the blur degree BD of each of the cut subimages (including the first subimage, the last subimage and the intermediate subimages) is calculated. In an example of the present invention, BD can be calculated for each subimage, for example, by the following formula (9):

$$BD = \frac{1}{n} \sum_{q \in Edge} \frac{\max_{k \in neib(q)} f(k) - \min_{k \in neib(q)} f(k)}{(q_{max} - q_{min}) \cdot 255} \qquad (9)$$

Here, $q$ denotes an edge pixel; $k$ denotes a neighborhood pixel of pixel $q$; $f(k)$ denotes the gray value of pixel $k$; $q_{max}$ denotes the position of the pixel with the maximum gray level around pixel $q$ along the direction of greatest gradient; $q_{min}$ denotes the position of the pixel with the minimum gray level around pixel $q$ along the direction of greatest gradient; and $n$ denotes the total number of edge pixels in the subimage. The neighborhood size may, for example, be set to 13 × 13 pixels. The obtained blur degrees can subsequently be used to calculate the blur degree difference (such as the blur degree variance).
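The following sketch approximates formula (9), assuming a grayscale image and an edge mask; it uses the positions of the brightest and darkest pixels in the full 13 × 13 neighborhood as $q_{max}$ and $q_{min}$, which only approximates the search along the steepest-gradient direction described in the text.

```python
import numpy as np

def blur_degree(gray, edge_mask, half=6):
    """Blur degree BD of a subimage per formula (9); half = 6 gives the
    13 x 13 neighborhood from the example."""
    total, n = 0.0, 0
    for y, x in np.argwhere(edge_mask):
        patch = gray[max(0, y - half):y + half + 1,
                     max(0, x - half):x + half + 1].astype(float)
        q_max = np.unravel_index(patch.argmax(), patch.shape)
        q_min = np.unravel_index(patch.argmin(), patch.shape)
        dist = np.hypot(q_max[0] - q_min[0], q_max[1] - q_min[1])
        if dist > 0:
            total += (patch.max() - patch.min()) / (dist * 255.0)
            n += 1
    return total / max(n, 1)
```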
Then, at step 223c, within a predetermined direction range containing the split direction obtained at step 222, the direction along which the difference of the blur degrees of the cut subimages is minimal is selected as the refined split direction. More specifically, this can be achieved by changing the split direction obtained at step 222 by a δ angle (e.g., 10°) at a time, re-cutting the subimages, and recalculating the blur degree difference to find the minimum. As an example, the difference of the blur degrees of the cut subimages may be the variance of the blur degrees of the cut subimages (but it is not necessarily limited thereto), in which case the refined split direction can be found using the following formula (10):

$$\theta_{split} = \arg\min_{\theta_{init} - 30° \le \theta \le \theta_{init} + 30°} \left( \sum_{i=1}^{n} (BD_i - BD_{aver})^2 / n \right) \qquad (10)$$

Here, $\theta_{split}$ denotes the split angle of the refined split direction; $\theta_{init}$ denotes the split angle of the split direction produced from the perspective information (that is, the split direction before refining, obtained at step 222); $BD_i$ denotes the blur degree of the $i$-th subimage in the central part p'q'r's'; $BD_{aver}$ denotes the average blur degree of all the subimages in p'q'r's'; and $n$ denotes the count of subimages in p'q'r's'.
Incidentally, in the above description the predetermined direction range is set to $\theta_{init} - 30° \le \theta \le \theta_{init} + 30°$ and the δ angle is set to 10°. All these values are merely exemplary, however, and those skilled in the art may choose any suitable values as required.
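The refining search of formula (10) is a small brute-force loop; the sketch below assumes helper functions for cutting the subimages along a direction and for the blur degree of formula (9).

```python
import numpy as np

def refine_split_angle(core, theta_init, cut_subimages, bd_of,
                       span=30.0, delta=10.0):
    """Search theta in [theta_init - span, theta_init + span] (degrees)
    in steps of delta for the angle minimizing the variance of the
    subimage blur degrees (formula (10)). cut_subimages(core, theta)
    is assumed to return the subimages cut along that direction;
    bd_of(subimage) computes formula (9)."""
    best_theta, best_var = theta_init, np.inf
    for theta in np.arange(theta_init - span, theta_init + span + delta,
                           delta):
        bds = [bd_of(sub) for sub in cut_subimages(core, theta)]
        var = float(np.var(bds))
        if var < best_var:
            best_theta, best_var = theta, var
    return best_theta
```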
By the above processing of Fig. 7, a refined split direction is obtained based on the perspective information and the blur degree. This more accurate split direction further benefits the subsequent subregion splitting, so that each of the subregions to be split has a more uniform blur degree, which makes the deblurring effect of the present invention even better than the prior art.
Returning now to Fig. 6: at step 224, the image is segmented into a plurality of subregions along the split direction (in this case, the refined split direction obtained at step 223). As a result, the blur degree within each subregion is substantially uniform. Fig. 8 illustrates exemplary subregion splitting, and Fig. 14 and Fig. 15 illustrate the details when the split angle is greater than 45° and less than 45°, respectively.
As shown in Fig. 8, first, at step 224a, a plurality of basic subimages are cut from the image. More specifically, as shown in Fig. 14, if the absolute value of the split angle of the split direction is greater than 45°, a plurality of basic subimages a, b, c, d and e are cut from the image pqrs with the predetermined window size, starting from a corner of pqrs and proceeding in the horizontal direction. On the other hand, as shown in Fig. 15, if the absolute value of the split angle is less than 45°, the basic subimages a, b, c, d and e are cut from pqrs starting from a corner and proceeding in the vertical direction. In addition, if the absolute value of the split angle equals 45°, the basic subimages can be cut from the image starting from a corner and proceeding in either the horizontal or the vertical direction.
Incidentally, the height and width of the window may be identical, and they are set to 256 pixels in an example of the present invention. However, the present invention is not necessarily limited thereto, and those skilled in the art may choose any suitable values as required. Moreover, the window size in step 224a and the window size in step 223a may be identical or different.
Next, at step 224b, starting from each of the basic subimages a, b, c, d and e, a plurality of subimages belonging to one subregion are cut from the image pqrs along the split direction with the predetermined window size. That is to say, each subregion consists of a plurality of subimages of the same size, distributed as shown in Figs. 14 and 15. The height and width of each subimage equal the height and width of the window, respectively; hence the height and width of each subimage may be identical and, in an example of the present invention, may be set to 256 pixels, although the present invention is not necessarily limited thereto. Incidentally, the cutting in step 224b is similar to that in step 223a, and reference may be made to the description of step 223a above.
Then, at step 224c, if the obtained subregions do not cover the whole image, the remainder of the image continues to be segmented into subregions along the split direction in a similar fashion.
Incidentally, as shown in Figs. 14 and 15, the subregion splitting of Fig. 8 is equivalent to the following processing. First, a plurality of subimages belonging to one subregion are cut from the image along the split direction with the predetermined window size, starting from a corner of the image. Then, the whole image is segmented into a plurality of subregions as follows: if the absolute value of the split angle of the split direction is greater than 45°, the cut subregion is translated in the horizontal direction; if it is less than 45°, the cut subregion is translated in the vertical direction; and if it equals 45°, the cut subregion is translated in either the horizontal or the vertical direction.
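Under simplifying assumptions (a non-negative split angle and a seam started at the top-left corner; the negative-angle case of Fig. 13 is omitted), this equivalent tiling can be sketched as follows; only window origins are computed, and clipping windows at the image border is left out for brevity.

```python
import numpy as np

def segment_subregions(width, height, theta_deg, radius=256):
    """Lay one seam of windows along the split direction, then translate
    it vertically (|theta| <= 45 degrees) or horizontally (|theta| > 45)
    to cover the whole image. Returns one list of (x, y) window origins
    per subregion."""
    theta = np.radians(theta_deg)
    seam, x, y = [], 0.0, 0.0
    while 0 <= x < width and 0 <= y < height:
        seam.append((x, y))
        if abs(theta_deg) <= 45:
            x, y = x + radius, y + radius * np.tan(theta)
        else:
            x, y = x + radius * abs(1.0 / np.tan(theta)), y + radius
    dx, dy = (0, radius) if abs(theta_deg) <= 45 else (radius, 0)
    n = int(np.ceil((height if dy else width) / radius))
    subregions = []
    for k in range(-n, n + 1):           # translate the seam both ways
        windows = [(sx + k * dx, sy + k * dy) for sx, sy in seam
                   if 0 <= sx + k * dx < width and 0 <= sy + k * dy < height]
        if windows:
            subregions.append(windows)
    return subregions
```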
Up to this point, the input blurred image with perspective distortion has been segmented into a plurality of subregions, and the blur degree within each subregion is substantially uniform.
Next, the deblurring step 230 of the present invention is described in more detail with reference to Fig. 9 and Fig. 10. As mentioned above, in the present invention each subregion with its substantially uniform blur degree is deblurred individually based on its own blur information, whereby a better deblurring effect can be obtained compared with the prior art.
Fig. 9 schematically illustrates a flowchart of the deblurring step of the deblurring method according to the present invention.
As shown in Fig. 9, first, at step 232, a typical subimage is selected in each subregion based on the edge densities and blur degrees of its subimages. In this specification, the typical subimage is the subimage best suited for calculating the blur information of the corresponding subregion.
Fig. 10 illustrates an exemplary typical subimage selection process.
At step 232a of Fig. 10, for each subregion, the edge density and the blur degree of each of its subimages are calculated. For example, the edge pixel count and the blur degree can be calculated for each subimage.
Any suitable method may be adopted to obtain the edge pixel count and the blur degree. In an example of the present invention, for each subimage, the Otsu method may be adopted to obtain a binary image. Edge pixels can then be determined as follows to obtain the edge pixel count: if all of the 3 × 3 neighborhood pixels of a foreground pixel are foreground pixels, the foreground pixel is regarded as a non-edge pixel; otherwise (i.e., at least one neighbor is a background pixel), it is regarded as an edge pixel. The blur degree of each subimage can be obtained, for example, using formula (9).
Next, at step 232b of Fig. 10, for each subregion, the score of each of its subimages is calculated. The score indicates the closeness between the edge density of a subimage and the maximum edge density among all subimages in the corresponding subregion, together with the closeness between the blur degree of the subimage and the mean blur degree of all subimages in the corresponding subregion.
As an example, can pass through following formula (11) to (13) and calculate mark:
Score i=Score i_edge+Score i_BD (11)
Score i_BD=MIN(BD i,BD aver)/MAX(BD i,BD aver)(12)
Score i_edge=EdgeCount i/EdgeCount max (13)
Here, Score iThe mark that means the i number of sub images; Score Ie_dgeThe mark that means the marginal density of i number of sub images; Score I_BDThe mark that means the blur level of i number of sub images; EdgeCount iThe counting that means the edge pixel of i number of sub images; EdgeCount maxThe maximal value of counting that means the edge pixel of all subimages in corresponding subregion; BD iIt is the blur level of i number of sub images; BD AverThe mean value of the blur level of all subimages in corresponding subregion.
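Formulas (11) to (13) amount to a few lines of code; the sketch assumes the edge pixel counts and blur degrees of one subregion's subimages have already been computed as described above.

```python
def subimage_scores(edge_counts, blur_degrees):
    """Scores of the subimages of one subregion (formulas (11)-(13));
    the typical subimage is the argmax of the returned list (step 232c)."""
    max_edges = max(edge_counts)
    bd_aver = sum(blur_degrees) / len(blur_degrees)
    scores = []
    for count, bd in zip(edge_counts, blur_degrees):
        score_edge = count / max_edges                     # formula (13)
        score_bd = min(bd, bd_aver) / max(bd, bd_aver)     # formula (12)
        scores.append(score_edge + score_bd)               # formula (11)
    return scores
```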
Note that formulas (11) to (13) are merely exemplary, and those skilled in the art may use any suitable formula for the score, as long as it can be used to find the subimage best suited for calculating the blur information of the corresponding subregion.
Then, at step 232c of Fig. 10, for each subregion, the subimage with the largest score is selected as the typical subimage.
By the above processing of Fig. 10, one typical subimage has been selected for each subregion, namely the subimage best suited for calculating the blur information of the corresponding subregion.
Returning now to Fig. 9: at step 234, the blur information of each typical subimage is calculated. Any suitable method may be adopted to obtain the blur information used for deblurring. In an example of the present invention, the blur information comprises a point spread function. Therefore, a step response can be calculated from the gray levels of pixels near horizontal and vertical edges, and the point spread function can then be calculated from the step response. A brief description follows.
First, horizontal and vertical edges are detected from the gradients. More specifically, if the gradient in the x direction is larger than the gradient in the y direction, the edge pixel is marked as a horizontal edge; otherwise it is marked as a vertical edge. In this way, two edge images can be produced: a horizontal edge image and a vertical edge image.
Next, connected component (CC) analysis is applied to the two edge images, and gray values are collected along the edge direction based on the edge pixels in the detected connected components, as described below.
More specifically, an edge pixel is selected from a connected component. The selected edge pixel is set as the center, and gray values are collected on the blurred image along the edge direction until the difference between the gray values of neighboring pixels is less than a threshold. The threshold may, for example, be set to 1, but is not limited thereto.
For each edge pixel, one gray value group is produced. Any group whose length is greater than 7 is rejected. All the gray value groups collected from the two edge images are combined in order to smooth out noise. The gray value groups are aligned according to gray value, identical gray values indicating identical positions. The final length of the combined gray value group does not exceed 7; any excess is clipped.
Then, the difference $diff_i$ of the combined gray values is calculated by the following formula (14):

$$diff_i = grayValue_{i+1} - grayValue_{i-1} \qquad (14)$$

Here, $grayValue_{i+1}$ denotes the combined gray value of the $(i+1)$-th element of the combined gray value group, and $grayValue_{i-1}$ denotes the combined gray value of the $(i-1)$-th element.
Based on the above, the maximum of the difference results is found, and the corresponding element is set as the center. The step response can then be calculated as follows: the central element serves as the first element of the step response, and the other elements of the step response are calculated by the following formula (15):

$$Elem_{SR_i} = (Elem_{diffCenter-i} + Elem_{diffCenter+i}) \cdot 0.5 \qquad (15)$$

Here, $Elem_{SR_i}$ denotes the $i$-th element of the step response; the subscript $diffCenter$ denotes the index of the central element in the difference results of the combined gray values; and $Elem_{diffCenter-i}$ and $Elem_{diffCenter+i}$ denote the $(diffCenter-i)$-th and $(diffCenter+i)$-th elements of the difference results, respectively.
Finally, the point spread function serving as the blur information is calculated from the step response.
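The sketch below follows formulas (14) and (15) under two explicitly flagged assumptions: the gray value groups are aligned simply by padding to a common length (the text's alignment by gray value is more involved), and the final 1-D PSF is taken as the normalized derivative of the step response, a step the text leaves unspecified.

```python
import numpy as np

def step_response_and_psf(groups, max_len=7):
    """Combine gray-value groups, difference them (formula (14)), fold
    both sides around the largest difference into a step response
    (formula (15)), and derive an assumed 1-D PSF."""
    kept = [np.asarray(g, float) for g in groups if len(g) <= max_len]
    combined = np.mean([np.pad(g, (0, max_len - len(g)), mode='edge')
                        for g in kept], axis=0)
    diff = combined[2:] - combined[:-2]       # diff_i, formula (14)
    c = int(np.argmax(np.abs(diff)))          # center element
    k = min(c, len(diff) - 1 - c)
    sr = [diff[c]]                            # first element: the center
    for i in range(1, k + 1):
        sr.append(0.5 * (diff[c - i] + diff[c + i]))   # formula (15)
    psf = np.abs(np.diff(sr))                 # assumed PSF derivation
    return np.asarray(sr), psf / (psf.sum() + 1e-12)
```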
After the blur information (e.g., the point spread function) of each typical subimage has been obtained, at step 236 of Fig. 9 each subregion is deblurred based on the blur information of its typical subimage. In an example of the present invention, each subregion is deblurred by a Wiener filter based on the point spread function.
More specifically, the point spread function is transformed into an optical transfer function (OTF), and the Wiener filter is adopted to deblur each subregion in the frequency domain. The Wiener filter is expressed by the following formula (16):

$$F(u,v) = \frac{H^*(u,v) \, Y(u,v)}{|H(u,v)|^2 + \gamma} \qquad (16)$$

Here, $F(u,v)$ denotes the function of the deblurred image in the frequency domain; $H(u,v)$ and $H^*(u,v)$ denote the optical transfer function in the frequency domain and its complex conjugate, respectively; $Y(u,v)$ denotes the function of the blurred image in the frequency domain; and $\gamma$ denotes a noise parameter, which may for example be set to 0.004.
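Formula (16) maps directly onto a frequency-domain implementation:

```python
import numpy as np

def wiener_deblur(blurred, psf, gamma=0.004):
    """Deblur one subregion with the Wiener filter of formula (16);
    the PSF is zero-padded to the subregion size, and gamma = 0.004 is
    the example noise parameter from the text."""
    H = np.fft.fft2(psf, s=blurred.shape)     # optical transfer function
    Y = np.fft.fft2(blurred)
    F = np.conj(H) * Y / (np.abs(H) ** 2 + gamma)
    return np.real(np.fft.ifft2(F))
```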
By the above processing, the input blurred image is restored by the deblurring method according to the present invention.
As mentioned above, for a blurred image with perspective distortion, the prior-art deblurring methods all have shortcomings. More specifically, they do not consider the different blur degrees of the different parts of the image, and are therefore unsuitable for blurred images with perspective distortion. Moreover, some of them rely mainly on the edge information of the current subimage for deblurring; in that case, if the edge information is insufficient or affected by severe noise, it is difficult to obtain accurate blur information for deblurring.
In contrast, in the present invention, for a blurred image with perspective distortion, the perspective information is first obtained from the edge information of the whole image, so it is not affected by local noise or by a local lack of edge information. Then, a split direction is obtained based on the perspective information, and the image is segmented into a plurality of subregions, the blur degree within each subregion being substantially uniform. Finally, each subregion with its substantially uniform blur degree is deblurred individually based on its own blur information, whereby a better deblurring effect can be obtained compared with the prior art.
Furthermore, in some examples of the present invention, the split direction obtained from the perspective information can be further refined based on the blur degree, so as to reduce the influence of severe local defocus on the accuracy of the perspective information and thereby obtain a more accurate split direction. Accordingly, an even better deblurring effect can be obtained compared with the prior art.
Furthermore, in some examples of the present invention, a typical subimage is selected in each subregion based on the edge densities and blur degrees of its subimages. The typical subimage is the subimage best suited for calculating the blur information of the corresponding subregion; for example, it always has enough edge information for estimating the blur information. Each subregion is then deblurred based on the blur information of its typical subimage. By means of these features, the deblurring effect of the present invention can be further improved compared with the prior art.
Up to this point, the deblurring method according to the present invention for an image having perspective distortion has been schematically described. Next, the deblurring effect of the method is evaluated with reference to Figs. 16 to 18.
Fig. 16 shows part of an exemplary blurred image with perspective distortion. In Fig. 16, the blur on the left side is more severe than on the right.
Fig. 17 shows an exemplary deblurring result for the image of Fig. 16 obtained by a prior-art deblurring method (which estimates the blur information from the edge information of each subimage and deblurs each subimage with its own blur information). As can be seen from Fig. 17, the ringing in the region inside the bounding box is severe; that is, the deblurring effect is poor.
In contrast, Fig. 18 shows an exemplary deblurring result for the image of Fig. 16 obtained by the deblurring method of the present invention. As can be seen from Fig. 18, the ringing in the region inside the corresponding bounding box is slight; that is, the deblurring effect is greatly improved.
Tables 1 and 2 below give the OCR accuracy for the above exemplary image part and for the whole image (not shown), respectively.
Table 1

Exemplary image part (142 characters in total) | Error character count
Prior art | 33
The present invention | 18

Table 2

Whole image (679 characters in total) | Error character count
Prior art | 127
The present invention | 99
As is evident from Tables 1 and 2, the present invention reduces the error character count for a blurred image with perspective distortion.
Therefore, the present invention can provide a method capable of deblurring an image having perspective distortion. In addition, the present invention can provide a method for deblurring an image having perspective distortion with a better deblurring effect than the prior art.
Incidentally, the deblurring method of the present invention has many applications, such as extracting and recognizing text information from images or videos captured by a camera.
Below, the apparatus 100 according to the present invention for deblurring an image having perspective distortion is briefly described with reference to Fig. 11.
As shown in Fig. 11, the deblurring apparatus 100 may comprise: a perspective information detection unit 110 for detecting perspective information of the image; a subregion segmentation unit 120 for segmenting the image into a plurality of subregions based on the perspective information, such that the blur degree within each subregion is substantially uniform; and a deblurring unit 130 for deblurring each subregion based on its blur information.
Optionally, the subregion segmentation unit 120 further comprises: a unit for calculating, based on the perspective information, a split direction for segmenting the image; and a unit for segmenting the image into the plurality of subregions along the split direction.
Optionally, the subregion segmentation unit 120 further comprises: a unit for refining the split direction based on the blur degree, after the split direction for segmenting the image has been calculated based on the perspective information and before the image is segmented into the plurality of subregions along the split direction.
Optionally, the unit for calculating the split direction calculates it from the slope of the straight line through the horizontal vanishing point of the image and the slope of the straight line through the vertical vanishing point of the image.
Optionally, the unit for calculating the split direction calculates it by the following formula:

$$Direction_{split} = \arctan(slope_y / slope_x)$$

Here, $Direction_{split}$ denotes the split angle of the split direction, $slope_y$ denotes the slope of the straight line connecting the center point of the image and the vertical vanishing point of the image, and $slope_x$ denotes the slope of the straight line connecting the center point of the image and the horizontal vanishing point of the image.
Optionally, the unit for refining the split direction based on the blur degree further comprises: a unit for selecting a part of the image and cutting a plurality of subimages from that part with a predetermined window size, starting from a corner of the part and proceeding along the split direction; a unit for calculating the blur degrees of the plurality of subimages; and a unit for selecting, within a predetermined direction range containing the split direction, the direction along which the difference of the blur degrees of the cut subimages is minimal as the refined split direction.
Optionally, the difference of the blur degrees of the cut subimages is the variance of the blur degrees of the cut subimages.
Optionally, the height and width of the window are identical.
Optionally, the unit for segmenting the image into the plurality of subregions along the split direction further comprises: a unit for cutting a plurality of basic subimages from the image with the predetermined window size, starting from a corner of the image and proceeding in the horizontal direction if the absolute value of the split angle of the split direction is greater than 45°, in the vertical direction if it is less than 45°, and in either the horizontal or the vertical direction if it equals 45°; a unit for cutting, starting from each of the plurality of basic subimages, a plurality of subimages belonging to one subregion from the image along the split direction with the predetermined window size; and a unit for continuing to segment the remainder of the image into subregions along the split direction if the obtained subregions do not cover the whole image.
Optionally, the unit for segmenting the image into the plurality of subregions along the split direction further comprises: a unit for cutting a plurality of subimages belonging to one subregion from the image along the split direction with the predetermined window size, starting from a corner of the image; and a unit for segmenting the whole image into the plurality of subregions by translating the cut subregion in the horizontal direction if the absolute value of the split angle of the split direction is greater than 45°, in the vertical direction if it is less than 45°, and in either the horizontal or the vertical direction if it equals 45°.
Optionally, the height and width of the window are identical.
Optionally, the blur information comprises a point spread function, and the deblurring unit 130 deblurs each subregion by a Wiener filter based on the point spread function.
Optionally, the deblurring unit 130 further comprises: a unit for selecting a typical subimage in each subregion based on the edge densities and blur degrees of its subimages, the typical subimage being the subimage best suited for calculating the blur information of the corresponding subregion; a unit for calculating the blur information of each typical subimage; and a unit for deblurring each subregion based on the blur information of its typical subimage.
Optionally, the unit for selecting a typical subimage in each subregion further comprises: a unit for calculating, for each subregion, the edge density and blur degree of each of its subimages; a unit for calculating, for each subregion, the score of each of its subimages, the score indicating the closeness between the edge density of the subimage and the maximum edge density among all subimages in the corresponding subregion and the closeness between the blur degree of the subimage and the mean blur degree of all subimages in the corresponding subregion; and a unit for selecting, for each subregion, the subimage with the largest score as the typical subimage.
Alternatively, calculate mark by following formula:
Score i=EdgeCount i/EdgeCount max+MIN(BD i,BD aver)/MAX(BD i,BD aver)
Here, Score iRepresent the mark of i number of sub images, EdgeCount iRepresent the counting of the edge pixel of i number of sub images, EdgeCount maxRepresent the maximal value of counting of the edge pixel of all subimages in corresponding subregion, BD iThe blur level of i number of sub images, and BD AverThe mean value of the blur level of all subimages in corresponding subregion.
Alternatively, the perspective information comprises the horizontal vanishing point and the vertical vanishing point of the image.
Alternatively, the perspective information detecting component 110 further comprises: a component for detecting an edge image of the image; a component for detecting horizontal page boundaries and vertical page boundaries in the edge image; and a component for calculating the horizontal vanishing point and the vertical vanishing point of the image based on the horizontal page boundaries and the vertical page boundaries. Or, the perspective information detecting component 110 further comprises: a component for detecting an edge image of the image; a component for detecting text lines and vertical character strokes in the edge image; and a component for calculating the horizontal vanishing point and the vertical vanishing point of the image based on the text lines and the vertical character strokes.
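For illustration only, a minimal Python/NumPy sketch of the final step, computing a vanishing point as the intersection of two detected lines in homogeneous coordinates. Taking exactly two lines per direction is a simplifying assumption; the text above does not state how several boundaries or text lines would be combined.

```python
import numpy as np

def vanishing_point(line_a, line_b):
    """Intersection of two image lines, used as a vanishing point.

    Each line is given by two (x, y) points on it; the intersection is
    computed in homogeneous coordinates via cross products. Returns None
    for (near-)parallel lines, whose vanishing point lies at infinity.
    """
    def homog_line(p, q):
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

    v = np.cross(homog_line(*line_a), homog_line(*line_b))
    if abs(v[2]) < 1e-9:
        return None
    return v[0] / v[2], v[1] / v[2]

# e.g., the horizontal vanishing point from the top and bottom page boundaries:
# vp_x = vanishing_point(((x1, y1), (x2, y2)), ((x3, y3), (x4, y4)))
```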
Thus far, the device for deblurring an image having perspective distortion according to the present invention has been described schematically. It should be noted that all of the components above are exemplary preferred modules for implementing the deblurring method of the present invention. However, the modules for implementing the various steps have not been described exhaustively above. In general, where there is a step of performing a certain process, there is a corresponding functional module or component for implementing that same process.
In addition, it should be noted that two or more components may be combined into one component as long as their functions can be realized; conversely, any one component may be divided into a plurality of components, as long as similar functions can be realized.
The method and device of the present invention can be implemented in many ways, for example by software, hardware, firmware, or any combination thereof. In addition, in some embodiments, the present invention may also be implemented as a program recorded in a recording medium, comprising machine-readable instructions for implementing the method according to the present invention. Thus, the present invention also covers a recording medium storing a program for implementing the method according to the present invention.
Although the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. It will be apparent to those skilled in the art that the above exemplary embodiments may be modified without departing from the scope and spirit of the invention. The scope of the appended claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (32)

1. A method for deblurring an image having perspective distortion, comprising:
a perspective information detecting step of detecting perspective information of the image;
a subregion segmentation step of segmenting the image into a plurality of subregions based on the perspective information, such that the blur level within each subregion is uniform; and
a deblurring step of deblurring each subregion based on its blur information.
2. The method according to claim 1, wherein the subregion segmentation step further comprises the steps of:
calculating, based on the perspective information, a segmentation direction for segmenting the image; and
segmenting the image into said plurality of subregions along the segmentation direction.
3. The method according to claim 2, wherein the subregion segmentation step further comprises the step of:
fine-tuning the segmentation direction based on blur level, after the step of calculating the segmentation direction for segmenting the image based on the perspective information and before the step of segmenting the image into said plurality of subregions along the segmentation direction.
4. The method according to claim 2, wherein, in the step of calculating the segmentation direction for segmenting the image based on the perspective information, the segmentation direction is calculated from the slope of a straight line based on the horizontal vanishing point of the image and the slope of a straight line based on the vertical vanishing point of the image.
5. The method according to claim 4, wherein, in the step of calculating the segmentation direction for segmenting the image based on the perspective information, the segmentation direction is calculated by the following formula:
Direction_split = arctan(slope_y / slope_x)
where Direction_split denotes the segmentation angle of the segmentation direction, slope_y denotes the slope of the straight line connecting the center point of the image and the vertical vanishing point of the image, and slope_x denotes the slope of the straight line connecting the center point of the image and the horizontal vanishing point of the image.
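By way of illustration only, and not as part of the claims: a minimal Python/NumPy sketch of the calculation in claim 5. The coordinate convention (x to the right, y downward) and the function name are assumptions; degenerate cases, such as a vanishing point directly above the image center, are not handled here.

```python
import numpy as np

def segmentation_angle(center, vp_x, vp_y):
    """Segmentation angle Direction_split = arctan(slope_y / slope_x) in degrees.

    center: image center point (x, y); vp_x / vp_y: horizontal / vertical
    vanishing points. Slopes are those of the lines joining the center to them.
    """
    slope_x = (vp_x[1] - center[1]) / (vp_x[0] - center[0])
    slope_y = (vp_y[1] - center[1]) / (vp_y[0] - center[0])
    return float(np.degrees(np.arctan(slope_y / slope_x)))
```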
6. The method according to claim 3, wherein the step of fine-tuning the segmentation direction based on blur level further comprises the steps of:
selecting a part of the image, and segmenting a plurality of subimages from said part along the segmentation direction with a predetermined window size, starting from a corner of said part;
calculating the blur levels of said plurality of subimages; and
selecting, from among a predetermined range of directions including the segmentation direction, the direction along which the difference in blur level of the segmented subimages is minimal, as the fine-tuned segmentation direction.
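By way of illustration only, and not as part of the claims: a minimal Python/NumPy sketch of this fine-tuning, using the variance of claim 7 below as the difference measure. The ±5° search range, the 0.5° step, and the helpers cut_subimages and blur_level are hypothetical stand-ins for the windowed segmentation and blur measurement; the claims do not fix these details.

```python
import numpy as np

def fine_tune_direction(part, angle0, win, cut_subimages, blur_level,
                        span=5.0, step=0.5):
    """Return the angle near angle0 that minimizes blur-level variance.

    cut_subimages(part, angle, win) -> subimages cut along that angle;
    blur_level(subimage) -> scalar blur measure. Both are user-supplied.
    """
    best_angle, best_var = angle0, np.inf
    for angle in np.arange(angle0 - span, angle0 + span + 1e-9, step):
        blurs = [blur_level(s) for s in cut_subimages(part, angle, win)]
        if len(blurs) < 2:
            continue
        var = float(np.var(blurs))  # variance as the difference measure
        if var < best_var:
            best_angle, best_var = angle, var
    return best_angle
```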
7. The method according to claim 6, wherein the difference in blur level of the segmented subimages is the variance of the blur levels of the segmented subimages.
8. The method according to claim 6, wherein the height of the window is the same as its width.
9. The method according to claim 2, wherein the step of segmenting the image into said plurality of subregions along the segmentation direction further comprises the steps of:
segmenting, starting from a corner of the image, a plurality of subimages belonging to one subregion from the image along the segmentation direction with a predetermined window size; and
segmenting the whole image into said plurality of subregions by the following process: if the absolute value of the segmentation angle of the segmentation direction is greater than 45°, translating the segmented subregion horizontally; if the absolute value of the segmentation angle is less than 45°, translating the segmented subregion vertically; and if the absolute value of the segmentation angle equals 45°, translating the segmented subregion horizontally or vertically.
10. The method according to claim 9, wherein the height of the window is the same as its width.
11. The method according to claim 1, wherein the blur information comprises a point spread function, and the deblurring step deblurs each subregion based on the point spread function by means of a Wiener filter.
12. The method according to any one of claims 1 to 11, wherein the deblurring step further comprises the steps of:
selecting, in each subregion, one typical subimage based on the edge density and blur level of its subimages, the typical subimage being the subimage best suited for calculating the blur information of the corresponding subregion;
calculating the blur information of each typical subimage; and
deblurring each subregion based on the blur information of its typical subimage.
13. The method according to claim 12, wherein the step of selecting one typical subimage in each subregion based on the edge density and blur level of the subimages further comprises the steps of:
calculating, for each subregion, the edge density and blur level of each of its subimages;
calculating, for each subregion, a score for each of its subimages, the score indicating the closeness between the edge density of the subimage and the maximum edge density among all subimages in the corresponding subregion, and the closeness between the blur level of the subimage and the average blur level of all subimages in the corresponding subregion; and
selecting, for each subregion, the subimage with the highest score as the typical subimage.
14. The method according to claim 13, wherein the score is calculated by the following formula:
Score_i = EdgeCount_i / EdgeCount_max + MIN(BD_i, BD_aver) / MAX(BD_i, BD_aver)
where Score_i denotes the score of the i-th subimage, EdgeCount_i denotes the edge-pixel count of the i-th subimage, EdgeCount_max denotes the maximum edge-pixel count among all subimages in the corresponding subregion, BD_i is the blur level of the i-th subimage, and BD_aver is the average blur level of all subimages in the corresponding subregion.
15. The method according to any one of claims 1 to 11, wherein the perspective information comprises the horizontal vanishing point and the vertical vanishing point of the image.
16. The method according to claim 15, wherein the perspective information detecting step further comprises the steps of:
detecting an edge image of the image;
detecting horizontal page boundaries and vertical page boundaries in the edge image; and
calculating the horizontal vanishing point and the vertical vanishing point of the image based on the horizontal page boundaries and the vertical page boundaries;
or, the perspective information detecting step further comprises the steps of:
detecting an edge image of the image;
detecting text lines and vertical character strokes in the edge image; and
calculating the horizontal vanishing point and the vertical vanishing point of the image based on the text lines and the vertical character strokes.
17. A device for deblurring an image having perspective distortion, comprising:
a perspective information detecting component for detecting perspective information of the image;
a subregion segmentation component for segmenting the image into a plurality of subregions based on the perspective information, such that the blur level within each subregion is uniform; and
a deblurring component for deblurring each subregion based on its blur information.
18. The device according to claim 17, wherein the subregion segmentation component further comprises:
a component for calculating, based on the perspective information, a segmentation direction for segmenting the image; and
a component for segmenting the image into said plurality of subregions along the segmentation direction.
19. The device according to claim 18, wherein the subregion segmentation component further comprises:
a component for fine-tuning the segmentation direction based on blur level, after the segmentation direction for segmenting the image is calculated based on the perspective information and before the image is segmented into said plurality of subregions along the segmentation direction.
20. The device according to claim 18, wherein, in the component for calculating the segmentation direction for segmenting the image based on the perspective information, the segmentation direction is calculated from the slope of a straight line based on the horizontal vanishing point of the image and the slope of a straight line based on the vertical vanishing point of the image.
21. The device according to claim 20, wherein, in the component for calculating the segmentation direction for segmenting the image based on the perspective information, the segmentation direction is calculated by the following formula:
Direction_split = arctan(slope_y / slope_x)
where Direction_split denotes the segmentation angle of the segmentation direction, slope_y denotes the slope of the straight line connecting the center point of the image and the vertical vanishing point of the image, and slope_x denotes the slope of the straight line connecting the center point of the image and the horizontal vanishing point of the image.
22. The device according to claim 19, wherein the component for fine-tuning the segmentation direction based on blur level further comprises:
a component for selecting a part of the image and segmenting a plurality of subimages from said part along the segmentation direction with a predetermined window size, starting from a corner of said part;
a component for calculating the blur levels of said plurality of subimages; and
a component for selecting, from among a predetermined range of directions including the segmentation direction, the direction along which the difference in blur level of the segmented subimages is minimal, as the fine-tuned segmentation direction.
23. The device according to claim 22, wherein the difference in blur level of the segmented subimages is the variance of the blur levels of the segmented subimages.
24. The device according to claim 22, wherein the height of the window is the same as its width.
25. The device according to claim 18, wherein the component for segmenting the image into said plurality of subregions along the segmentation direction further comprises:
a component for segmenting, starting from a corner of the image, a plurality of subimages belonging to one subregion from the image along the segmentation direction with a predetermined window size; and
a component for segmenting the whole image into said plurality of subregions by the following process: if the absolute value of the segmentation angle of the segmentation direction is greater than 45°, translating the segmented subregion horizontally; if the absolute value of the segmentation angle is less than 45°, translating the segmented subregion vertically; and if the absolute value of the segmentation angle equals 45°, translating the segmented subregion horizontally or vertically.
26. The device according to claim 25, wherein the height of the window is the same as its width.
27. The device according to claim 17, wherein the blur information comprises a point spread function, and the deblurring component deblurs each subregion based on the point spread function by means of a Wiener filter.
28. The device according to any one of claims 17 to 27, wherein the deblurring component further comprises:
a component for selecting, in each subregion, one typical subimage based on the edge density and blur level of its subimages, the typical subimage being the subimage best suited for calculating the blur information of the corresponding subregion;
a component for calculating the blur information of each typical subimage; and
a component for deblurring each subregion based on the blur information of its typical subimage.
29. The device according to claim 28, wherein the component for selecting one typical subimage in each subregion based on the edge density and blur level of the subimages further comprises:
a component for calculating, for each subregion, the edge density and blur level of each of its subimages;
a component for calculating, for each subregion, a score for each of its subimages, the score indicating the closeness between the edge density of the subimage and the maximum edge density among all subimages in the corresponding subregion, and the closeness between the blur level of the subimage and the average blur level of all subimages in the corresponding subregion; and
a component for selecting, for each subregion, the subimage with the highest score as the typical subimage.
30. The device according to claim 29, wherein the score is calculated by the following formula:
Score_i = EdgeCount_i / EdgeCount_max + MIN(BD_i, BD_aver) / MAX(BD_i, BD_aver)
where Score_i denotes the score of the i-th subimage, EdgeCount_i denotes the edge-pixel count of the i-th subimage, EdgeCount_max denotes the maximum edge-pixel count among all subimages in the corresponding subregion, BD_i is the blur level of the i-th subimage, and BD_aver is the average blur level of all subimages in the corresponding subregion.
31. The device according to any one of claims 17 to 27, wherein the perspective information comprises the horizontal vanishing point and the vertical vanishing point of the image.
32. The device according to claim 31, wherein the perspective information detecting component further comprises:
a component for detecting an edge image of the image;
a component for detecting horizontal page boundaries and vertical page boundaries in the edge image; and
a component for calculating the horizontal vanishing point and the vertical vanishing point of the image based on the horizontal page boundaries and the vertical page boundaries;
or, the perspective information detecting component further comprises:
a component for detecting an edge image of the image;
a component for detecting text lines and vertical character strokes in the edge image; and
a component for calculating the horizontal vanishing point and the vertical vanishing point of the image based on the text lines and the vertical character strokes.
CN201210154166.XA 2012-05-17 2012-05-17 Method and device for deblurring image having perspective distortion Active CN103426144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210154166.XA CN103426144B (en) 2012-05-17 2012-05-17 Method and device for deblurring image having perspective distortion

Publications (2)

Publication Number Publication Date
CN103426144A true CN103426144A (en) 2013-12-04
CN103426144B CN103426144B (en) 2016-05-11

Family

ID=49650841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210154166.XA Active CN103426144B (en) Method and device for deblurring image having perspective distortion

Country Status (1)

Country Link
CN (1) CN103426144B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008165006A (en) * 2006-12-28 2008-07-17 Brother Ind Ltd Projection device and image distortion correcting method
CN101739679A (en) * 2008-11-27 2010-06-16 奇景光电股份有限公司 System and method for generating three-dimensional depth message
CN101789122A (en) * 2009-01-22 2010-07-28 佳能株式会社 Method and system for correcting distorted document image
US20110044554A1 (en) * 2009-08-21 2011-02-24 Konica Minolta Systems Laboratory, Inc. Adaptive deblurring for camera-based document image processing
CN102073993A (en) * 2010-12-29 2011-05-25 清华大学 Camera self-calibration-based jittering video deblurring method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DAVID DOERMANN ET AL.: "Progress in Camera-Based Document Image Analysis", Proceedings of the Seventh International Conference on Document Analysis and Recognition *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107784631A (en) * 2016-08-24 2018-03-09 中安消物联传感(深圳)有限公司 Image deblurring method and device
CN107784631B (en) * 2016-08-24 2020-05-05 深圳市豪恩安全科技有限公司 Image deblurring method and device
CN107292892A (en) * 2017-06-23 2017-10-24 北京奇艺世纪科技有限公司 The dividing method and device of video frame images
CN107292892B (en) * 2017-06-23 2020-04-21 北京奇艺世纪科技有限公司 Video frame image segmentation method and device
CN108979624A (en) * 2018-08-07 2018-12-11 东北大学 A kind of rod pumping system friction factor discrimination method based on indicator card moment characteristics
CN112017163A (en) * 2020-08-17 2020-12-01 中移(杭州)信息技术有限公司 Image blur degree detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN103426144B (en) 2016-05-11

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant