CN103136758A - Rapid edge detecting method based on orthogonal polynomial fitting - Google Patents

Rapid edge detecting method based on orthogonal polynomial fitting

Info

Publication number
CN103136758A
Authority
CN
China
Prior art keywords
formula
edge
window
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013100970732A
Other languages
Chinese (zh)
Other versions
CN103136758B (en)
Inventor
Sun Qiucheng (孙秋成)
Li Chunjing (李纯净)
Cao Lei (曹蕾)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Normal University
Original Assignee
Changchun University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Technology filed Critical Changchun University of Technology
Priority to CN201310097073.2A priority Critical patent/CN103136758B/en
Publication of CN103136758A publication Critical patent/CN103136758A/en
Application granted granted Critical
Publication of CN103136758B publication Critical patent/CN103136758B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

A rapid edge detection method based on orthogonal polynomial fitting belongs to the field of sub-pixel image edge detection. Each time a window image is digitized, the two-dimensional detection method uses spatial moments to compute the projection direction of the edge and, through a projection formula, converts the two-dimensional edge detection into a one-dimensional one, avoiding the complex computation required by gray-surface fitting. Orthogonal polynomial fitting yields an analytic formula for the one-dimensional edge point position, effectively overcoming the difficulty that fitting methods generally admit no analytical solution. The analytic formula is simple in form and involves relatively few inner-product computations; the edge point position and edge normal direction finally obtained determine the position of the edge line in the window image. The method greatly improves computation speed, is suitable for two-dimensional detection with strict real-time requirements, and combines the high speed of moment-based and interpolation methods with the noise insensitivity of fitting methods, so both the accuracy and the speed of edge detection are substantially improved.

Description

Rapid edge detection method based on orthogonal polynomial fitting
Technical field
The invention belongs to the field of sub-pixel image edge detection, and specifically relates to a rapid edge detection method based on orthogonal polynomial fitting.
Background technology
Edges are an essential feature of an image: an edge is the set of pixels around which the gray level exhibits a step or roof-like change. Early edge detection methods were mostly pixel-level; their positioning accuracy is one whole pixel, that is, they can determine which pixel contains the edge but cannot subdivide the position within that pixel. As the precision requirements of industrial inspection keep rising, pixel-level accuracy can no longer satisfy practical detection, and sub-pixel methods have emerged.
Existing sub-pixel edge detection methods fall into three categories: moment-based methods, least-squares fitting methods, and interpolation-based methods. Least-squares fitting methods are more robust and insensitive to noise, so the detected edge is comparatively accurate; however, because nonlinear fitting is required, they are relatively slow and cannot meet real-time processing and monitoring requirements. Moment-based and interpolation-based methods are both fast, but both are more sensitive to image noise, so their detection accuracy degrades noticeably when the image contains much noise.
Among the prior art closest to the present invention, Li Qingli et al., "An improved sub-pixel subdivision algorithm based on polynomial interpolation" [J], Journal of University of Science and Technology Beijing, 2003, 25(3): 280-283, apply directional templates constructed on the basis of the classical Sobel operator to the gray image to obtain a gradient image, and then perform polynomial-interpolation sub-pixel subdivision along the gradient direction of the object edge on the gradient image, so as to locate the object edge accurately at sub-pixel level. However, the method inherits the noise sensitivity of interpolation, and when the image is noisy its edge detection accuracy is severely disturbed.
Wang Jianmin et al., "Research on spatial-moment-based edge detection operators" [J], Optical Technique, 1999, No. 4: 3-6, propose a sub-pixel edge detection method based on spatial moments that achieves fast sub-pixel edge detection. Its basic model is the step edge; yet in real imaging, affected by noise, edges are often blurred rather than strictly step-like, so the method is easily disturbed by noise, which degrades the edge positioning accuracy.
As another example, Chen et al., "Research on fast sub-pixel edge detection by polynomial fitting" [J], Journal of Applied Optics, 2011, 32(1): 91-95, DOI: 10.3969/j.issn.1002-2082.2011.01.020, introduce a fast sub-pixel edge detection algorithm based on polynomial fitting: according to the gray distribution of the image, a cubic polynomial is fitted to the image edge to achieve sub-pixel positioning with high accuracy. Although that method also uses polynomial fitting to suppress noise, the formula it provides contains many complex inner-product operations, so the solution process is slow and cannot meet real-time edge detection with strict timeliness requirements. Moreover, two-dimensional edge detection is the main means of realizing sub-pixel edge localization, while that work is limited to the one-dimensional case and does not describe how to apply its one-dimensional detector to two-dimensional edge detection.
Sun Qiucheng et al., "An edge detection method with sub-pixel accuracy", Journal of Beijing University of Technology, 2009, 35(10): 1332-1337, propose a sub-pixel edge detection method based on a Bezier edge-fitting model: by fitting a modified Bezier edge model, the sub-pixel position of the image edge is obtained. This fitting method is robust to noise and gives high edge accuracy. However, because there is no analytic formula for computing the edge, a fitting computation must be carried out in every detection, which makes the method slow; in particular, when it is applied to two-dimensional edge detection a three-dimensional gray surface must be fitted, and the overly complex fitting function makes detection even slower.
Summary of the invention
In existing sub-pixel edge detection methods, moment-based and interpolation-based approaches are relatively easily disturbed by image noise, while existing fitting methods, lacking an analytic formula for the edge point, must carry out a fitting computation in every one-dimensional edge detection and, for two-dimensional edge detection, a surface-fitting computation; the solution process is therefore complicated and slow and cannot meet two-dimensional edge detection with high real-time requirements. To solve these technical problems, the invention provides a rapid edge detection method based on orthogonal polynomial fitting.
The technical scheme adopted by the invention to solve the above problems is as follows:
The rapid edge detection method based on orthogonal polynomial fitting comprises the following steps:
Step 1: whole-pixel edge detection and localization: apply the Canny edge detection operator to the image to obtain the coordinates and gray levels of all whole-pixel cells on the image edges;
Step 2: extract a window image: centered on a whole-pixel cell containing an edge found in step 1, extract an m × m square gray matrix A covering part of the image edge region, i.e. the window image; the window image contains a short segment of the edge line to be detected;
Step 3: digitization of the window image, which specifically comprises the following steps:
Step 3.1: taking the center of the window image of step 2 as origin, establish an xoy rectangular coordinate system with horizontal axis x and vertical axis y; the unit length of the coordinate system is 1 pixel;
Step 3.2: let the coordinates of the center of each pixel cell of the window image in the coordinate system of step 3.1 be (x, y), take the gray value of the whole-pixel cell as the gray value of that point, and add a Z axis perpendicular to the plane coordinate system to form a spatial three-dimensional coordinate system, the Z axis representing the gray value of each pixel; this yields a discrete distribution function I(x, y) of the gray value over the coordinate points;
Step 3.3: in the rectangular coordinate system there exists a straight line through the origin perpendicular to the edge line to be detected; this line is the edge normal, and the angle between the edge normal and the horizontal axis is defined as θ;
Step 4: determine the direction of the edge normal, the specific steps being:
Step 4.1: using the spatial moment formula, compute the spatial moments M_pq of each order corresponding to the inscribed circle of the square window image in the coordinate system of step 3.1:
M_pq = ∫∫_{x²+y²≤r²} x^p y^q f(x, y) dx dy,  p, q = 0, 1, …, N  ……(1)
In formula (1), f(x, y) is the continuous gray distribution function of the image pixels, p and q are the non-negative integer orders of the spatial moment, and r is the radius of the inscribed circle;
Step 4.2: rotate the continuous gray distribution function f(x, y) of step 4.1 about the Z axis of step 3.2 by the angle θ of step 3.3; a new continuous gray distribution function is obtained in the window image, and the values of the spatial moments of each order corresponding to the inscribed circle of the window image change accordingly;
Step 4.3: according to the rotation-invariance theory of spatial moments, the relation between the spatial moments M′_pq of the inscribed circle after rotation and the spatial moments M_pq before rotation is:
M′_pq = Σ_{i=0}^{p} Σ_{s=0}^{q} C(p, i) C(q, s) (−1)^{q−s} (cos θ)^{p−i+s} (sin θ)^{q+i−s} M_{p+q−i−s, i+s}  ……(2)
In formula (2), C(p, i) and C(q, s) are binomial coefficients, i runs over the integers from 0 to p, and s runs over the integers from 0 to q. After the continuous gray distribution function f(x, y) has been rotated by the angle θ, the edge line to be detected in the window image is perpendicular to the horizontal axis x of the coordinate system of step 3.1, i.e. the gray function corresponding to the inscribed circle of the rotated window image is symmetric about the x axis; by the spatial moment formula, the spatial moment of the inscribed circle of the rotated window image satisfies M′_01 = 0, and with p = 0, q = 1, formula (2) gives:
M′_01 = −sin θ · M_10 + cos θ · M_01 = 0  ……(3)
where M_10 and M_01 are the spatial moments of the inscribed circle before rotation;
Step 4.4: use a spatial moment calculation template in place of formula (1), and substitute the discrete gray function I(x, y) of the pixels inside the inscribed circle of the window image into the discrete template formula to compute the values of the spatial moments M_01 and M_10 in formula (3);
Step 4.5: obtain formula (4) from formula (3), substitute the values of the spatial moments M_01 and M_10 obtained in step 4.4 into formula (4), and solve for the angle θ:
θ = arctan(M_01 / M_10)  ……(4)
Step 4.6: take the edge normal as a new horizontal axis X; using the angle θ of step 4.5 as the projection angle, the pixel coordinates (x, y) of all points of step 3.2 in the pixel plane and the coordinate X of the corresponding point on the axis along the edge normal satisfy the projection formula:
X = x cos θ + y sin θ  ……(5)
In formula (5), X is the coordinate reading, on the new axis X of step 4.6, of the projected point corresponding to a point of the window image after projection at the angle θ;
Step 5: one-dimensional edge point detection based on orthogonal polynomial fitting, to determine the intersection of the edge normal and the edge line to be detected; it comprises the following steps:
Step 5.1: the Z axis of step 3.2 and the horizontal axis X of step 4.6 form the plane rectangular coordinate system XOZ required for one-dimensional edge detection, i.e. a projection plane;
According to projection formula (5), project the center of each pixel cell of the window image onto the horizontal axis X of step 4.6; in the coordinate system XOZ, obtain the discrete distribution (X_i, Z_i), i = 1, …, 2n+1, of the gray values of the projected points on the edge normal;
Step 5.2: normalization of the projected-point coordinates: if the number of projected points formed on the axis X of step 5.1 is 2n+1, let L = (X_{2n+1} − X_1)/2 and x_i = X_i / L, i = 1, …, 2n+1; then x_i ∈ [−1, 1];
Step 5.3: compute the system of orthogonal polynomials: using Schmidt orthogonalization, compute the system of orthogonal polynomials {φ_k(x)} on the interval [−1, 1] with respect to the weight function ρ(x) = 1:
φ_0(x) = 1,  φ_k(x) = x^k − Σ_{j=0}^{k−1} ((x^k, φ_j)/(φ_j, φ_j)) φ_j(x),  k = 1, 2, …  ……(6)
Step 5.4: take the first four orthogonal basis functions of step 5.3,
φ_0(x) = 1,  φ_1(x) = x,  φ_2(x) = x² − 1/3,  φ_3(x) = x³ − (3/5)x  ……(7)
as the fitting basis and carry out a cubic fit; the fitting polynomial is
p(x) = a_1 φ_0(x) + a_2 φ_1(x) + a_3 φ_2(x) + a_4 φ_3(x)  ……(8)
where the coefficients {a_1, a_2, a_3, a_4} of the fitting polynomial in formula (8) are the fitting parameters to be determined;
Step 5.5: from formula (8), the normal equations corresponding to the fitting polynomial can be derived as
(φ_{k−1}, φ_{k−1}) a_k = (Z, φ_{k−1}),  k = 1, 2, 3, 4  ……(9)
In formula (9), (φ_{k−1}, φ_{k−1}) = Σ_{i=1}^{2n+1} φ_{k−1}(x_i)² denotes the inner product of the discrete vector φ_{k−1} = [φ_{k−1}(x_1), …, φ_{k−1}(x_{2n+1})]^T with itself, Z = [Z_1, Z_2, …, Z_{2n+1}]^T is the discrete gray-value vector, and (Z, φ_{k−1}) = Σ_{i=1}^{2n+1} Z_i φ_{k−1}(x_i) denotes the inner product of Z with φ_{k−1};
Step 5.6: from the system of equations (9), the coefficients {a_1, a_2, a_3, a_4} of the fitting polynomial are given by
a_k = (Z, φ_{k−1}) / (φ_{k−1}, φ_{k−1}),  k = 1, 2, 3, 4  ……(10)
Substituting the known quantities into formula (10) gives a_1, a_2, a_3 and a_4;
Step 5.7: substituting the coefficients obtained in step 5.6 into formula (8) as known quantities, the fitting polynomial becomes
p(x) = a_1 + a_2 x + a_3 (x² − 1/3) + a_4 (x³ − (3/5)x)  ……(11)
Step 5.8: compute the zero of the second derivative of the cubic polynomial (11), that is
p″(x) = 2a_3 + 6a_4 x = 0  ……(12)
and solve for the coordinate at which the second derivative in formula (12) vanishes;
Step 5.9: solve formula (12) and de-normalize the result to obtain the position coordinate R of the one-dimensional edge point on the axis X, that is
R = −L a_3 / (3 a_4)  ……(13)
Formula (13) is the analytic formula for the one-dimensional edge point position, and this edge point is the intersection of the edge normal and the edge line to be detected;
Step 6: use formula (13) to determine the intersection, inside the window image, of the edge normal through the origin with the edge line to be detected; with the direction angle θ of the edge normal obtained in step 4.5, determine the equation of the edge line to be detected in the xoy coordinate system, thereby achieving sub-pixel positioning of the edge line to be detected in the window image;
Step 7: let the operator select another whole-pixel cell of step 1 and repeat steps 2 to 6 to obtain the sub-pixel positioning of the edge line to be detected in the newly obtained window image;
Step 8: when the operator judges that sub-pixel positioning of all required edge positions in the entire image has been completed, the detection method ends.
The beneficial effects of the invention are as follows: the rapid edge detection method of the invention can be applied directly to two-dimensional edge detection. Each time a window image is digitized, the method uses spatial moments to compute the projection direction of the edge and, via the projection formula, converts the two-dimensional edge detection into a one-dimensional one, avoiding the complex computation of repeatedly fitting a gray surface in every one-dimensional edge detection. Orthogonal polynomial fitting yields an analytic formula for the one-dimensional edge point position, effectively overcoming the difficulty that fitting methods usually admit no analytical solution; the analytic formula is simple in form and involves few inner-product computations, and the detected edge point position combined with the edge normal direction determines the position of the edge line in the window image. The method greatly improves computation speed, can meet two-dimensional edge detection with high real-time requirements, and combines the high speed of moment-based or interpolation methods with the noise insensitivity of fitting methods, substantially improving both the accuracy and the speed of edge detection.
Description of drawings
Fig. 1 is a flow chart of the rapid edge detection method based on orthogonal polynomial fitting of the present invention;
Fig. 2 is a flow chart of step 3 of the present invention;
Fig. 3 is a flow chart of step 4 of the present invention;
Fig. 4 is a flow chart of step 5 of the present invention;
Fig. 5 is a schematic diagram of the projection of the pixels of a window image of the present invention onto the axis X along the edge normal at the projection angle θ.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the rapid edge detection method based on orthogonal polynomial fitting of the present invention comprises the following steps:
Step 1: whole-pixel edge detection and localization: apply the Canny edge detection operator to the image to obtain the coordinates and gray levels of all whole-pixel cells on the image edges, thereby locating the approximate orientation and extent of the edge to be detected in the pixel coordinate plane of the image.
Step 2: extract a window image: centered on a whole-pixel cell containing an edge found in step 1, extract an m × m square gray matrix A covering part of the image edge region, i.e. window image 2. At the microscopic level, the edge to be detected inside window image 2 can be approximated by a short straight segment. As shown in Fig. 5, in this embodiment a square image region formed by 5 × 5 whole-pixel cells and containing a short segment of edge line 1 to be detected is extracted from the image as window image 2.
Step 3: digitization of window image 2; as shown in Fig. 2, it comprises the following substeps:
Step 3.1: as shown in Fig. 5, taking the center of window image 2 of step 2 as origin, establish an xoy rectangular coordinate system with horizontal axis x and vertical axis y. The unit length of the coordinate system is 1 pixel, so by step 1 the edge to be detected lies within this coordinate system.
Step 3.2: as shown in Fig. 5, let the coordinates of the center of each pixel cell of window image 2 in the coordinate system of step 3.1 be (x, y), take the gray value of the whole-pixel cell as the gray value of that point, and add a Z axis to the original xoy plane coordinate system to represent the gray value of each pixel; this yields a discrete distribution function I(x, y) of the gray value over the coordinate points.
Step 3.3: as shown in Fig. 5, in the xoy rectangular coordinate system there exists a straight line through the origin o perpendicular to the edge to be detected; this line is the edge normal, and the angle between edge normal 3 and the horizontal axis is defined as θ.
Step 4: determine the direction of edge normal 3; as shown in Fig. 3, it comprises the following substeps:
Step 4.1: using the spatial moment formula, compute the spatial moments M_pq of each order corresponding to the inscribed circle of square window image 2 in the coordinate system of step 3.1:
M_pq = ∫∫_{x²+y²≤r²} x^p y^q f(x, y) dx dy,  p, q = 0, 1, …, N  ……(1)
In formula (1), f(x, y) is the continuous distribution function of the image pixel gray values, p and q are the non-negative integer orders of the spatial moment, and r is the radius of the inscribed circle.
Step 4.2: rotate the continuous gray distribution function f(x, y) of step 4.1 about the Z axis of step 3.2 by the angle θ of step 3.3; a new continuous gray distribution function is obtained in window image 2, and the values of the spatial moments of each order corresponding to the inscribed circle of window image 2 change accordingly.
Step 4.3: according to the rotation-invariance theory of spatial moments, the relation between the spatial moments M′_pq of the inscribed circle after rotation and the moments M_pq before rotation is:
M′_pq = Σ_{i=0}^{p} Σ_{s=0}^{q} C(p, i) C(q, s) (−1)^{q−s} (cos θ)^{p−i+s} (sin θ)^{q+i−s} M_{p+q−i−s, i+s}  ……(2)
In formula (2), C(p, i) and C(q, s) are binomial coefficients, i runs over the integers from 0 to p, and s runs over the integers from 0 to q. As shown in Fig. 5, after the continuous gray distribution function f(x, y) has been rotated by the angle θ, edge line 1 to be detected in window image 2 is perpendicular to the horizontal axis x of the coordinate system of step 3.1, i.e. the gray function corresponding to the inscribed circle of rotated window image 2 is symmetric about the x axis; by the spatial moment formula, the spatial moment of the inscribed circle of rotated window image 2 satisfies M′_01 = 0, and with p = 0, q = 1, formula (2) gives:
M′_01 = −sin θ · M_10 + cos θ · M_01 = 0  ……(3)
where M_10 (p = 1, q = 0) and M_01 (p = 0, q = 1) are the spatial moments of the inscribed circle before rotation.
Step 4.4: because the pixel gray values I(x, y) in the window image are discrete, i.e. no continuous gray function f(x, y) is available, a spatial moment calculation template is used in place of formula (1); the discrete gray values I(x, y) of the pixels inside the inscribed circle of the window image are substituted into the template formula to compute the values of the spatial moments M_01 and M_10 in formula (3). The spatial moment calculation template is the one given by Lyvers E P and Mitchell O R in "Precision edge contrast and orientation estimation" [J], IEEE Trans. Pattern Anal. Mach. Intell., 1988, 10(6): 927-937.
Step 4.5: obtain formula (4) from formula (3), substitute the values of the spatial moments M_01 and M_10 obtained in step 4.4 into formula (4), and solve for the angle θ:
θ = arctan(M_01 / M_10)  ……(4)
Once the angle θ has been found, the position of edge normal 3 of step 3.3 in the xoy coordinate system is determined and becomes a known quantity.
Step 4.6: as shown in Fig. 5, take edge normal 3 as a new horizontal axis X; using the angle θ of step 4.5 as the projection angle, the pixel coordinates (x, y) of all points of step 3.2 in the pixel plane and the coordinate X of the corresponding point on the axis along edge normal 3 satisfy the projection formula:
X = x cos θ + y sin θ  ……(5)
In formula (5), X is the coordinate reading, on the new axis X of step 4.6, of the corresponding projected point 4 after a point of window image 2 has been projected at the angle θ. Because the continuous gray distribution function f(x, y) does not change along this projection direction, the gray value of a point on the new axis equals the gray value 5 of the corresponding projected point.
Through step 4, all pixels of the two-dimensional sampling window are projected onto the edge normal through the origin of the window image matrix, and the two-dimensional edge detection problem is thereby converted into a one-dimensional one; the following steps then locate the position of the edge line (a minimal numerical sketch of this step is given below).
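As an illustration only, the following Python sketch mirrors step 4 under stated simplifications: the moments M_10 and M_01 are approximated by direct discrete sums over the inscribed circle rather than by the Lyvers-Mitchell templates cited in step 4.4, arctan2 is used as a quadrant-safe variant of formula (4), and the helper names edge_normal_angle and project_onto_normal are hypothetical.

import numpy as np

def edge_normal_angle(window):
    """Estimate the edge-normal angle theta of a square window image (formula (4))."""
    m = window.shape[0]                       # window is m x m, e.g. 5 x 5
    c = (m - 1) / 2.0                         # index of the centre pixel
    rows, cols = np.mgrid[0:m, 0:m]
    x = cols - c                              # xoy system of step 3.1, origin at the centre
    y = c - rows                              # image rows grow downwards, y grows upwards
    inside = x**2 + y**2 <= (m / 2.0) ** 2    # keep only the inscribed circle
    M10 = float(np.sum(x[inside] * window[inside]))
    M01 = float(np.sum(y[inside] * window[inside]))
    return float(np.arctan2(M01, M10))        # quadrant-safe form of arctan(M01 / M10)

def project_onto_normal(window, theta):
    """Project every pixel centre onto the edge normal (formula (5)).

    Returns the projected abscissas X_i and gray values Z_i sorted along the
    normal; unlike the idealised 2n+1 merged points of step 5.1, all pixel
    centres are kept, which the least-squares fit of step 5 tolerates.
    """
    m = window.shape[0]
    c = (m - 1) / 2.0
    rows, cols = np.mgrid[0:m, 0:m]
    x, y = cols - c, c - rows
    X = x * np.cos(theta) + y * np.sin(theta)
    order = np.argsort(X.ravel())
    return X.ravel()[order], window.astype(float).ravel()[order]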
Step 5: perform one-dimensional edge point detection based on orthogonal polynomial fitting to determine the intersection of edge normal 3 and edge line 1 to be detected; as shown in Fig. 4, it comprises the following substeps:
Step 5.1: as shown in Fig. 5, the Z axis of step 3.2 and the horizontal axis X of step 4.6 form the plane rectangular coordinate system XOZ required for one-dimensional edge detection, i.e. projection plane 6. The new XOZ coordinate system is the plane coordinate system of the one-dimensional profile, and its origin coincides with the origin of the former xoy coordinate system, so the coordinates of projected point 4 in the new system are (X, Z).
According to projection formula (5), the center of each pixel cell of window image 2 is projected onto the horizontal axis X of step 4.6. Since pixels of the window image lying along the projection direction θ have identical gray values, the gray value of a projected point equals that of the pixels projected onto it; in the coordinate system XOZ one therefore obtains the discrete distribution (X_i, Z_i), i = 1, …, 2n+1, of the gray values 5 of the projected pixel points on edge normal 3.
Step 5.2: normalization of the projected-point coordinates: if the number of projected points 4 on the axis X of step 5.1 is 2n+1, then by the symmetry of the projection of the square matrix the projected points 4 are distributed symmetrically about the origin of X. Let L = (X_{2n+1} − X_1)/2 and x_i = X_i / L, i = 1, …, 2n+1; then x_i ∈ [−1, 1], i.e. the projected points are confined to the range [−1, 1] of the X axis.
Step 5.3: compute the system of orthogonal polynomials: using Schmidt orthogonalization, compute the system of orthogonal polynomials {φ_k(x)} on the interval [−1, 1] with respect to the weight function ρ(x) = 1; the system is as follows:
φ_0(x) = 1,  φ_k(x) = x^k − Σ_{j=0}^{k−1} ((x^k, φ_j)/(φ_j, φ_j)) φ_j(x),  k = 1, 2, …  ……(6)
Step 5.4: take the first four orthogonal basis functions of step 5.3,
φ_0(x) = 1,  φ_1(x) = x,  φ_2(x) = x² − 1/3,  φ_3(x) = x³ − (3/5)x  ……(7)
as the fitting basis and carry out a cubic fit; the fitting polynomial is
p(x) = a_1 φ_0(x) + a_2 φ_1(x) + a_3 φ_2(x) + a_4 φ_3(x)  ……(8)
where the coefficients {a_1, a_2, a_3, a_4} of the fitting polynomial in formula (8) are the fitting parameters to be determined.
Step 5.5: according to the least-squares fitting principle and the orthogonality of the orthogonal polynomials, the normal equations corresponding to the fitting polynomial (8) can be derived as
(φ_{k−1}, φ_{k−1}) a_k = (Z, φ_{k−1}),  k = 1, 2, 3, 4  ……(9)
In formula (9), (φ_{k−1}, φ_{k−1}) = Σ_{i=1}^{2n+1} φ_{k−1}(x_i)² denotes the inner product of the discrete vector φ_{k−1} = [φ_{k−1}(x_1), …, φ_{k−1}(x_{2n+1})]^T with itself, Z = [Z_1, Z_2, …, Z_{2n+1}]^T is the discrete gray-value vector, and (Z, φ_{k−1}) = Σ_{i=1}^{2n+1} Z_i φ_{k−1}(x_i) denotes the inner product of Z with φ_{k−1}.
Step 5.6: from the system of equations (9), the coefficients {a_1, a_2, a_3, a_4} of the fitting polynomial are given by
a_k = (Z, φ_{k−1}) / (φ_{k−1}, φ_{k−1}),  k = 1, 2, 3, 4  ……(10)
Substituting the known quantities into formula (10) gives a_1, a_2, a_3 and a_4.
Step 5.7: substituting the coefficients obtained in step 5.6 into formula (8) as known quantities, the fitting polynomial becomes
p(x) = a_1 + a_2 x + a_3 (x² − 1/3) + a_4 (x³ − (3/5)x)  ……(11)
Step 5.8: because formula (11) is a cubic polynomial, its second derivative has a unique zero; computing the second derivative of the cubic polynomial (11) and setting it to zero gives
p″(x) = 2a_3 + 6a_4 x = 0  ……(12)
which is solved for the coordinate at which the second derivative vanishes.
Step 5.9: solving formula (12) and de-normalizing the result gives the position coordinate R of the one-dimensional edge point on the axis X, that is
R = −L a_3 / (3 a_4)  ……(13)
Formula (13) is the analytic formula for the one-dimensional edge point position, and this edge point is the intersection of edge normal 3 and edge line 1 to be detected.
By the method of step 5, the position of the edge point on the X axis, i.e. the intersection of the edge line and the normal, is computed directly from this formula (a code sketch of the one-dimensional fit follows below).
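The one-dimensional fit of step 5 can likewise be sketched in a few lines of Python. This is a minimal sketch under two assumptions: it uses the monic Schmidt basis written out in formula (7), and it applies the decoupled coefficient formula (10), whose orthogonality holds only approximately on the discrete sample points; edge_position_1d is a hypothetical helper name.

import numpy as np

def edge_position_1d(X, Z):
    """Return the edge-point coordinate R on the projection axis X (formula (13))."""
    X = np.asarray(X, dtype=float)
    Z = np.asarray(Z, dtype=float)
    centre = (X.max() + X.min()) / 2.0        # ~0 by the symmetry noted in step 5.2
    L = (X.max() - X.min()) / 2.0             # half-width used for normalisation
    x = (X - centre) / L                      # normalised abscissas in [-1, 1]

    # first four orthogonal basis functions of formula (7)
    phi = np.stack([np.ones_like(x), x, x**2 - 1.0 / 3.0, x**3 - 0.6 * x])

    # a_k = <Z, phi_{k-1}> / <phi_{k-1}, phi_{k-1}>, cf. formula (10)
    a = (phi @ Z) / np.einsum('ij,ij->i', phi, phi)

    # fitted cubic p(x) = a1*phi0 + ... + a4*phi3, hence p''(x) = 2*a3 + 6*a4*x
    if abs(a[3]) < 1e-12:
        raise ValueError("no inflection point: the window holds no clear step edge")
    x0 = -a[2] / (3.0 * a[3])                 # zero of the second derivative, formula (12)
    return float(centre + L * x0)             # de-normalised edge position, formula (13)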
Step 6: use formula (13) to determine the intersection, inside window image 2, of the edge normal 3 through the origin with edge line 1 to be detected; with the direction angle θ of edge normal 3 obtained in step 4.5, determine the equation of edge line 1 to be detected in the xoy coordinate system, thereby achieving sub-pixel positioning of edge line 1 to be detected in window image 2;
Step 7: let the operator select another whole-pixel cell of step 1 and repeat steps 2 to 6 to obtain the sub-pixel positioning of edge line 1 to be detected in the newly obtained window image 2;
Step 8: when the operator judges that sub-pixel positioning of all required edge positions in the entire image has been completed, the detection method ends.
The rapid edge detection method based on orthogonal polynomial fitting of the present invention can be applied directly to two-dimensional edge detection. It uses the rotation invariance of spatial moments to obtain the projection direction θ of the edge and derives a projection formula by which the pixels of the window image and their gray values are all projected into a plane whose horizontal axis is the edge normal and whose vertical axis is the gray value; the two-dimensional edge detection problem is thereby converted into a one-dimensional one. The one-dimensional edge detection method based on orthogonal polynomial fitting then gives the coordinate of the intersection of the edge normal and the edge in the window image, which, combined with the detected normal direction θ, determines the position of the edge line in the window image.
In the digitization of each window image, the detection method of the present invention first computes the projection direction θ of the edge using spatial moment theory and, combined with the projection formula, converts the two-dimensional edge detection problem into a one-dimensional one, avoiding the complex computation of repeatedly fitting a gray surface in every one-dimensional edge detection. From the theory of orthogonal polynomial fitting, an analytic formula for the one-dimensional edge point position is obtained, which effectively overcomes the difficulty that fitting methods usually admit no analytical solution; the analytic solution of the edge point position is obtained simply by substituting the gray values at the edge into this formula. Because the analytic formula is simple in form and involves few inner-product computations, the computation speed is greatly improved, and the method can meet real-time edge detection under working conditions with strict timeliness requirements.
In addition, the detection method of the present invention combines the moment-based and fitting-based sub-pixel detection approaches well: it inherits the high computation speed of moment-based or interpolation methods while also enjoying the insensitivity of fitting methods to image noise, so the interference and influence of image noise on the result are reduced and both the accuracy and the speed of edge detection are significantly improved. (A short end-to-end sketch combining the helpers above follows.)
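To show how the pieces fit together, a hypothetical end-to-end driver for one Canny edge pixel (steps 2 to 6) is sketched below using the helpers above; subpixel_edge_line and the default 5 × 5 window follow the embodiment, and the returned pair (θ, R) fixes the edge line as x·cos θ + y·sin θ = R in the window's xoy coordinate system.

def subpixel_edge_line(image, row, col, m=5):
    """Sub-pixel edge line through one whole-pixel edge point (sketch of steps 2 to 6)."""
    h = m // 2
    window = image[row - h:row + h + 1, col - h:col + h + 1].astype(float)  # step 2
    theta = edge_normal_angle(window)           # step 4: direction of the edge normal
    X, Z = project_onto_normal(window, theta)   # steps 4.6 and 5.1: 1-D gray profile
    R = edge_position_1d(X, Z)                  # step 5: edge point on the normal
    # step 6: in window coordinates the edge line satisfies x*cos(theta) + y*sin(theta) = R
    return theta, R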

Claims (1)

1. A rapid edge detection method based on orthogonal polynomial fitting, characterized in that the method comprises the following steps:
Step 1: whole-pixel edge detection and localization: apply the Canny edge detection operator to the image to obtain the coordinates and gray levels of all whole-pixel cells on the image edges;
Step 2: extract a window image: centered on a whole-pixel cell containing an edge found in step 1, extract an m × m square gray matrix A covering part of the image edge region, i.e. window image (2); window image (2) contains a short segment of edge line (1) to be detected;
Step 3: digitization of window image (2), which specifically comprises the following steps:
Step 3.1: taking the center of window image (2) of step 2 as origin, establish an xoy rectangular coordinate system with horizontal axis x and vertical axis y; the unit length of the coordinate system is 1 pixel;
Step 3.2: let the coordinates of the center of each pixel cell of window image (2) in the coordinate system of step 3.1 be (x, y), take the gray value of the whole-pixel cell as the gray value of that point, and add a Z axis perpendicular to the plane coordinate system to form a spatial three-dimensional coordinate system, the Z axis representing the gray value of each pixel; this yields a discrete distribution function I(x, y) of the gray value over the coordinate points;
Step 3.3: in the rectangular coordinate system there exists a straight line through the origin perpendicular to edge line (1) to be detected; this line is the edge normal, and the angle between edge normal (3) and the horizontal axis is defined as θ;
Step 4: determine the direction of edge normal (3), the specific steps being:
Step 4.1: using the spatial moment formula, compute the spatial moments M_pq of each order corresponding to the inscribed circle of square window image (2) in the coordinate system of step 3.1:
M_pq = ∫∫_{x²+y²≤r²} x^p y^q f(x, y) dx dy,  p, q = 0, 1, …, N  ……(1)
In formula (1), f(x, y) is the continuous gray distribution function of the image pixels, p and q are the non-negative integer orders of the spatial moment, and r is the radius of the inscribed circle;
Step 4.2: rotate the continuous gray distribution function f(x, y) of step 4.1 about the Z axis of step 3.2 by the angle θ of step 3.3; a new continuous gray distribution function is obtained in window image (2), and the values of the spatial moments of each order corresponding to the inscribed circle of window image (2) change accordingly;
Step 4.3: the relation between the spatial moments M′_pq of the inscribed circle after rotation and the spatial moments M_pq before rotation is:
M′_pq = Σ_{i=0}^{p} Σ_{s=0}^{q} C(p, i) C(q, s) (−1)^{q−s} (cos θ)^{p−i+s} (sin θ)^{q+i−s} M_{p+q−i−s, i+s}  ……(2)
In formula (2), C(p, i) and C(q, s) are binomial coefficients, i runs over the integers from 0 to p, and s runs over the integers from 0 to q. After the continuous gray distribution function f(x, y) has been rotated by the angle θ, edge line (1) to be detected in window image (2) is perpendicular to the horizontal axis x of the coordinate system of step 3.1, i.e. the gray function corresponding to the inscribed circle of rotated window image (2) is symmetric about the x axis; by the spatial moment formula, the spatial moment of the inscribed circle of rotated window image (2) satisfies M′_01 = 0, and with p = 0, q = 1, formula (2) gives:
M′_01 = −sin θ · M_10 + cos θ · M_01 = 0  ……(3)
where M_10 and M_01 are the spatial moments of the inscribed circle before rotation;
Step 4.4: use a spatial moment calculation template in place of formula (1), and substitute the discrete gray function I(x, y) of the pixels inside the inscribed circle of window image (2) into the spatial moment calculation template formula to compute the values of the spatial moments M_01 and M_10 in formula (3);
Step 4.5: obtain formula (4) from formula (3), substitute the values of the spatial moments M_01 and M_10 obtained in step 4.4 into formula (4), and solve for the angle θ:
θ = arctan(M_01 / M_10)  ……(4)
Step 4.6: take edge normal (3) as a new horizontal axis X; using the angle θ of step 4.5 as the projection angle, the pixel coordinates (x, y) of all points of step 3.2 in the pixel plane and the coordinate X of the corresponding point on the axis along edge normal (3) satisfy the projection formula:
X = x cos θ + y sin θ  ……(5)
In formula (5), X is the coordinate reading, on the new axis X of step 4.6, of the corresponding projected point (4) after a point of window image (2) has been projected at the angle θ;
Step 5: one-dimensional edge point detection based on orthogonal polynomial fitting, to determine the intersection of edge normal (3) and edge line (1) to be detected; it comprises the following steps:
Step 5.1: the Z axis of step 3.2 and the horizontal axis X of step 4.6 form the plane rectangular coordinate system XOZ required for one-dimensional edge detection, i.e. projection plane (6);
According to projection formula (5), project the center of each pixel cell of window image (2) onto the horizontal axis X of step 4.6; in the coordinate system XOZ, obtain the discrete distribution (X_i, Z_i), i = 1, …, 2n+1, of the gray values (5) of the projected points on edge normal (3);
Step 5.2: normalization of the projected-point coordinates: if the number of projected points (4) formed on the axis X of step 5.1 is 2n+1, let L = (X_{2n+1} − X_1)/2 and x_i = X_i / L, i = 1, …, 2n+1; then x_i ∈ [−1, 1];
Step 5.3: compute the system of orthogonal polynomials: using Schmidt orthogonalization, compute the system of orthogonal polynomials on the interval [−1, 1] with respect to the weight function ρ(x) = 1;
Step 5.4: take the first four orthogonal basis functions of step 5.3,
φ_0(x) = 1,  φ_1(x) = x,  φ_2(x) = x² − 1/3,  φ_3(x) = x³ − (3/5)x  ……(7)
as the fitting basis and carry out a cubic fit; the fitting polynomial is
p(x) = a_1 φ_0(x) + a_2 φ_1(x) + a_3 φ_2(x) + a_4 φ_3(x)  ……(8)
where the coefficients {a_1, a_2, a_3, a_4} of the fitting polynomial in formula (8) are the fitting parameters to be determined;
Step 5.5: from formula (8), the normal equations corresponding to the fitting polynomial can be derived as
(φ_{k−1}, φ_{k−1}) a_k = (Z, φ_{k−1}),  k = 1, 2, 3, 4  ……(9)
In formula (9), (φ_{k−1}, φ_{k−1}) = Σ_{i=1}^{2n+1} φ_{k−1}(x_i)² denotes the inner product of the discrete vector φ_{k−1} = [φ_{k−1}(x_1), …, φ_{k−1}(x_{2n+1})]^T with itself, Z = [Z_1, Z_2, …, Z_{2n+1}]^T is the discrete gray-value vector, and (Z, φ_{k−1}) = Σ_{i=1}^{2n+1} Z_i φ_{k−1}(x_i) denotes the inner product of Z with φ_{k−1};
Step 5.6: from the system of equations (9), the coefficients {a_1, a_2, a_3, a_4} of the fitting polynomial are given by
a_k = (Z, φ_{k−1}) / (φ_{k−1}, φ_{k−1}),  k = 1, 2, 3, 4  ……(10)
Substituting the known quantities into formula (10) gives a_1, a_2, a_3 and a_4;
Step 5.7: substituting the coefficients obtained in step 5.6 into formula (8) as known quantities, the fitting polynomial becomes
p(x) = a_1 + a_2 x + a_3 (x² − 1/3) + a_4 (x³ − (3/5)x)  ……(11)
Step 5.8: compute the zero of the second derivative of the cubic polynomial (11), that is
p″(x) = 2a_3 + 6a_4 x = 0  ……(12)
and solve for the coordinate at which the second derivative in formula (12) vanishes;
Step 5.9: solve formula (12) and de-normalize the result to obtain the position coordinate R of the one-dimensional edge point on the axis X, that is
R = −L a_3 / (3 a_4)  ……(13)
Formula (13) is the analytic formula for the one-dimensional edge point position, and this edge point is the intersection of edge normal (3) and edge line (1) to be detected;
Step 6: use formula (13) to determine the intersection, inside window image (2), of the edge normal (3) through the origin with edge line (1) to be detected; with the direction angle θ of edge normal (3) obtained in step 4.5, determine the equation of edge line (1) to be detected in the xoy coordinate system, thereby achieving sub-pixel positioning of edge line (1) to be detected in window image (2);
Step 7: let the operator select another whole-pixel cell of step 1 and repeat steps 2 to 6 to obtain the sub-pixel positioning of edge line (1) to be detected in the newly obtained window image (2);
Step 8: when the operator judges that sub-pixel positioning of all required edge positions in the entire image has been completed, the detection method ends.
CN201310097073.2A 2013-03-25 2013-03-25 Rapid edge detecting method based on orthogonal polynomial fitting Expired - Fee Related CN103136758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310097073.2A CN103136758B (en) 2013-03-25 2013-03-25 Rapid edge detecting method based on orthogonal polynomial fitting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310097073.2A CN103136758B (en) 2013-03-25 2013-03-25 Rapid edge detecting method based on orthogonal polynomial fitting

Publications (2)

Publication Number Publication Date
CN103136758A true CN103136758A (en) 2013-06-05
CN103136758B CN103136758B (en) 2015-05-27

Family

ID=48496548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310097073.2A Expired - Fee Related CN103136758B (en) 2013-03-25 2013-03-25 Rapid edge detecting method based on orthogonal polynomial fitting

Country Status (1)

Country Link
CN (1) CN103136758B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104715491A (en) * 2015-04-09 2015-06-17 大连理工大学 Subpixel edge detection method based on one-dimensional gray moment
CN105005985A (en) * 2015-06-19 2015-10-28 沈阳工业大学 Backlight image micron-order edge detection method
CN105740869A (en) * 2016-01-28 2016-07-06 北京工商大学 Square operator edge extraction method and system based on multiple scales and multiple resolutions
CN105761256A (en) * 2016-02-05 2016-07-13 成都康烨科技有限公司 Image sub-pixel edge line acquisition method and device
CN106408583A (en) * 2016-08-25 2017-02-15 凌云光技术集团有限责任公司 Multi-edge defect detecting method and device
CN106651828A (en) * 2016-09-21 2017-05-10 哈尔滨工业大学 Product dimension sub-pixel measurement method under industrial microscale motion blurred imaging condition
CN108256522A (en) * 2018-01-09 2018-07-06 北京航空航天大学 A kind of SPR and the two-dimentional recognition methods of waveguide blended absorbent spectrum signature position
CN109522893A (en) * 2018-10-08 2019-03-26 武汉工程大学 A kind of quick detection method of aerial noncooperative target balloon key point
CN109934820A (en) * 2019-03-22 2019-06-25 大连大学 Linear edge sub-pixel detection method in a kind of laser assembly solder part to be welded image
CN111667534A (en) * 2020-05-21 2020-09-15 常州大学 Rapid sub-pixel reed dent positioning method and device applied to high-speed drawing-in machine

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
MIROSLAV HAGARA AND PETER KULLA: "Edge Detection with Sub-pixel Accuracy Based on Approximation of Edge with Erf Function", 《RADIOENGINEERING》 *
Sun Qiucheng et al.: "An edge detection method with sub-pixel accuracy", Journal of Beijing University of Technology *
Li Qingli et al.: "An improved sub-pixel subdivision algorithm based on polynomial interpolation", Journal of University of Science and Technology Beijing *
Wang Sheyang et al.: "Improved spatial moment sub-pixel edge detection algorithm", Journal of Harbin Institute of Technology *
Guo Yubo et al.: "Sub-pixel location algorithm for circular-marker centers based on spatial moments", Journal of Jilin University (Engineering and Technology Edition) *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104715491A (en) * 2015-04-09 2015-06-17 大连理工大学 Subpixel edge detection method based on one-dimensional gray moment
CN104715491B (en) * 2015-04-09 2017-07-21 大连理工大学 A kind of sub-pixel edge detection method based on one-dimensional Gray Moment
CN105005985A (en) * 2015-06-19 2015-10-28 沈阳工业大学 Backlight image micron-order edge detection method
CN105005985B (en) * 2015-06-19 2017-10-31 沈阳工业大学 Backlight image micron order edge detection method
CN105740869A (en) * 2016-01-28 2016-07-06 北京工商大学 Square operator edge extraction method and system based on multiple scales and multiple resolutions
CN105740869B (en) * 2016-01-28 2019-04-12 北京工商大学 A kind of rectangular operator edge extracting method and system based on multiple dimensioned multiresolution
CN105761256B (en) * 2016-02-05 2019-02-26 成都康烨科技有限公司 The sub-pixel edge straight line acquisition methods and device of image
CN105761256A (en) * 2016-02-05 2016-07-13 成都康烨科技有限公司 Image sub-pixel edge line acquisition method and device
CN106408583A (en) * 2016-08-25 2017-02-15 凌云光技术集团有限责任公司 Multi-edge defect detecting method and device
CN106408583B (en) * 2016-08-25 2019-01-25 凌云光技术集团有限责任公司 A kind of multiple edge defect inspection method and device
CN106651828A (en) * 2016-09-21 2017-05-10 哈尔滨工业大学 Product dimension sub-pixel measurement method under industrial microscale motion blurred imaging condition
CN106651828B (en) * 2016-09-21 2020-05-26 哈尔滨工业大学 Method for measuring sub-pixel of product size under industrial small-scale motion blur imaging condition
CN108256522A (en) * 2018-01-09 2018-07-06 北京航空航天大学 A kind of SPR and the two-dimentional recognition methods of waveguide blended absorbent spectrum signature position
CN108256522B (en) * 2018-01-09 2020-05-19 北京航空航天大学 Two-dimensional identification method for SPR (surface plasmon resonance) and waveguide mixed absorption spectrum characteristic position
CN109522893A (en) * 2018-10-08 2019-03-26 武汉工程大学 A kind of quick detection method of aerial noncooperative target balloon key point
CN109522893B (en) * 2018-10-08 2022-12-06 武汉工程大学 Method for rapidly detecting key points of aerial non-cooperative target balloons
CN109934820A (en) * 2019-03-22 2019-06-25 大连大学 Linear edge sub-pixel detection method in a kind of laser assembly solder part to be welded image
CN109934820B (en) * 2019-03-22 2020-10-09 大连大学 Method for detecting linear edge sub-pixels in laser tailor-welded workpiece image
CN111667534A (en) * 2020-05-21 2020-09-15 常州大学 Rapid sub-pixel reed dent positioning method and device applied to high-speed drawing-in machine

Also Published As

Publication number Publication date
CN103136758B (en) 2015-05-27

Similar Documents

Publication Publication Date Title
CN103136758B (en) Rapid edge detecting method based on orthogonal polynomial fitting
CN103727930B (en) A kind of laser range finder based on edge matching and camera relative pose scaling method
CN104484868B (en) The moving target of a kind of combination template matches and image outline is taken photo by plane tracking
CN103871039B (en) Generation method for difference chart in SAR (Synthetic Aperture Radar) image change detection
Wang et al. A dual quaternion-based, closed-form pairwise registration algorithm for point clouds
CN101408985A (en) Method and apparatus for extracting circular luminous spot second-pixel center
Bian et al. 3D reconstruction of single rising bubble in water using digital image processing and characteristic matrix
CN108230375B (en) Registration method of visible light image and SAR image based on structural similarity rapid robustness
Pinggera et al. High-performance long range obstacle detection using stereo vision
Báčová et al. A GIS method for volumetric assessments of erosion rills from digital surface models
US11182942B2 (en) Map generation system and method for generating an accurate building shadow
CN102496181A (en) True-orthophotomap making method oriented to large-scale production
Xiaoqi et al. Centroid automatic extraction of spaceborne laser spot image
CN104715491A (en) Subpixel edge detection method based on one-dimensional gray moment
CN104615880A (en) Rapid ICP (inductively coupled plasma) method for point cloud matching of three-dimensional laser radar
CN102927973B (en) Quick edge locating method of sub pixel image of target celestial body for deep space exploration autonomous navigation
CN106529548A (en) Sub-pixel level multi-scale Harris corner detection algorithm
CN104268837A (en) Method for extracting phase position information of electronic speckle interference fringe pattern
Song et al. Flexible line-scan camera calibration method using a coded eight trigrams pattern
Zhang et al. Enhancement of measurement accuracy of discontinuous specular objects with stereo vision deflectometer
CN102789644B (en) Novel camera calibration method based on two crossed straight lines
Ma et al. Variable subset DIC algorithm for measuring discontinuous displacement based on pixel-level ZNCC value distribution map
CN102637094A (en) Correction information calculation method and system applied to optical touch device
Cui et al. The high precision positioning algorithm of circular landmark center in visual measurement
CN105631849A (en) Polygon object change detection method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: CHANGCHUN NORMAL UNIVERSITY

Free format text: FORMER OWNER: CHANGCHUN POLYTECHNIC UNIV.

Effective date: 20150324

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 130012 CHANGCHUN, JILIN PROVINCE TO: 130032 CHANGCHUN, JILIN PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20150324

Address after: No. 677 Changji Road, Erdao District, Changchun, Jilin 130032

Applicant after: CHANGCHUN NORMAL University

Address before: No. 2055 Yan'an Street, Chaoyang District, Changchun, Jilin 130012

Applicant before: Changchun University of Technology

C53 Correction of patent of invention or patent application
CB03 Change of inventor or designer information

Inventor after: Sun Qiucheng

Inventor after: Liu Jianhao

Inventor after: Liu Renyun

Inventor after: Yu Fanhua

Inventor after: Li Chunjing

Inventor after: Cao Lei

Inventor before: Sun Qiucheng

Inventor before: Li Chunjing

Inventor before: Cao Lei

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: SUN QIUCHENG LI CHUNJING CAO LEI TO: SUN QIUCHENG LIU JIANHAO LIU RENYUN YU FANHUA LI CHUNJING CAO LEI

C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150527

CF01 Termination of patent right due to non-payment of annual fee