CN101334263A - Circle center positioning method for circular targets - Google Patents

Circle center positioning method for circular targets

Info

Publication number
CN101334263A
CN101334263A, CN101334263B
Authority
CN
China
Prior art keywords
point
pixel
edge
gradient direction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008100227931A
Other languages
Chinese (zh)
Other versions
CN101334263B (en)
Inventor
达飞鹏
张虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Outpace Building Material Equipment Co ltd
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN2008100227931A
Publication of CN101334263A
Application granted
Publication of CN101334263B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a circle center positioning method for circular targets, applicable to a wide range of target recognition and target localization tasks. First, the center of the circle in the image is coarsely located with the simple contour centroid method, a key square region is extracted as the region of interest according to the coarse center and coarse radius, and pixel-level edge localization of the circular target within the region of interest is performed with the Canny operator. Next, sub-pixel localization of the circular target is carried out according to the geometric properties of the circle and the gray information of the image, yielding accurate coordinates of the sub-pixel edge points. The 'isolated points' and noise appearing among the sub-pixel edge points are then removed by curvature filtering and mean filtering, respectively. Finally, a circle is fitted to the filtered sub-pixel edge points by least squares to obtain the final center and radius. The method not only effectively improves the accuracy of circle center positioning but also improves its robustness, thereby further improving the measurement accuracy of a measurement system and the stability of the system.

Description

Circle center positioning method for circular targets
Technical field
The present invention relates to a method for sub-pixel edge localization of circular targets and for filtering of sub-pixel edge points, and in particular to a circle center positioning method for circular targets.
Background art
In many image analysis and processing systems, obtaining the center of a circle quickly and accurately is widely used in target recognition and localization; for example, in three-dimensional reconstruction systems, accurately obtaining the circle centers of the calibration board has a fundamental influence on the overall reconstruction accuracy. The common algorithms for finding the circle center are the centroid method, the Hough transform, Gaussian surface fitting, and least-squares curve fitting. The centroid method requires a fairly uniform gray distribution in the image, otherwise it produces large errors. The Hough transform must vote and record point by point over all boundary points, so its computation time is long and its memory consumption large, which restricts its practical use. Gaussian surface fitting exploits the fact that the gray distribution of a circular light spot approximates a Gaussian model; its positioning accuracy is high, but for a large circular area the computation becomes very heavy and slow, making it unsuitable for real-time optical inspection systems. Least-squares curve fitting is the most common method; it is simple to implement and computationally cheap, and is therefore widely used in practice. However, the circle is usually fitted to pixel-level edge points, which limits the fitting accuracy; obtaining an accurate sub-pixel circular edge in the image is therefore important for improving the fitting accuracy of the circle center.
The common sub-pixel edge localization algorithms are parameter fitting methods, moment methods, interpolation methods, and methods exploiting other geometric properties of the image. The paper "Subpixel edge localization and the interpolation of still images" (K. Jensen, D. Anastassiou, IEEE Trans. on PAMI, 1995, 4(3): 285-295) is an interpolation method: a pixel-level edge localization algorithm is applied to an interpolation-enlarged image, and edge localization reaches sub-pixel accuracy provided the interpolation rule is consistent with the real image gray distribution. The paper "Detecting sub-pixel edges using tangential direction information" (Zhang Yujin et al., Pattern Recognition and Artificial Intelligence, 1997, 10(1): 69-74) mainly uses tangential direction information to localize sub-pixel edges of targets of known shape; this method depends too heavily on the coarsely located center and radius and does not make full use of the gray-value information of the image. The paper "A fast subpixel edge detection method using Sobel-Zernike moments operator" (Qu Ying-Dong, Cui Cheng-Song, Chen San-Ben, Li Jin-Quan, Image and Vision Computing, 2005, 23: 11-17) achieves sub-pixel edge localization with moments. Moment-based methods are now widely applied, but because they require several 7×7 templates, they are unsuitable for real-time processing of large images. The paper "Fast sub-pixel edge detection of images" (Liu Lishuan et al., Journal of Optoelectronics·Laser, 2005, 16(8): 993-996) introduces the principle of sub-pixel edge parameter fitting, realizing sub-pixel edge localization with the Sobel operator and parameter fitting. This approach has two problems: first, the Sobel operator cannot achieve accurate pixel-level edge localization on most images, which makes the sub-pixel positioning inaccurate; second, the method cannot guarantee edge accuracy for conic-type edges.
Summary of the invention
In view of the shortcomings and limitations of the prior art, the object of the present invention is to provide a circle center positioning method for circular targets that can improve the accuracy of a measurement system.
The present invention first performs coarse circle center positioning on the circular target in the image; after obtaining the pixel-level edge points of the circular target with the Canny operator, it determines the sub-pixel edge point coordinates by jointly considering the geometric properties of the circle and the gray distribution of the edge; it then filters the accurate sub-pixel edge points, and finally obtains the center of the circular target by fitting a circle to the filtered sub-pixel edge points with least squares. The present invention adopts the following technical scheme:
A circle center positioning method for a circular target, whose main steps are:
Step 1: Coarsely locate the center and radius of the circular target in the image, obtaining the coarse center (x_oc, y_oc) and the coarse radius R_c, where (x_oc, y_oc) is the coordinate of the center in the image coordinate system. The concrete method is as follows:
Step 1.1: Denoise and threshold the image, obtaining a binary image in which every pixel has gray value 255 or 0;
Step 1.2: Perform boundary extraction and boundary tracking on the circular target in the binary image;
Step 1.3: Fit a circle to the tracked boundary points with the contour centroid method, obtaining the coarse center and coarse radius of the circular target, and store the coarse center coordinates (x_oc, y_oc) and coarse radius R_c;
Step 2: Perform pixel-level edge localization on the circular target in the image, with the following concrete steps:
Step 2.1: Using the coarse center and coarse radius, extract a square region from the image, called the "region of interest": its center is the coarse center of the circular target, and its side length is the coarse diameter of the circular target plus 2~6 pixels;
Step 2.2: Apply the Canny operator to the extracted region of interest to detect the pixel-level edge of the circular target, then track the boundary formed by the pixel-level edge points extracted by the Canny operator, obtain the coordinates of each pixel-level edge point, and store them in clockwise order;
Step 3: Perform sub-pixel edge localization on the circular target in the image, with the following concrete steps:
Step 3.1: Divide the edge pixels of the circular target into regions as follows. Take the coarse center of the circular target as the origin; the x axis is the horizontal line through it, positive to the right, and the y axis is the vertical line through it, positive upward, both in pixel units. Let θ be the counterclockwise rotation angle from the positive x axis about the coarse center. The edge pixels with θ ∈ [0°, 45°] ∪ [135°, 225°] ∪ [315°, 360°] form region 1, and those with θ ∈ (45°, 135°) ∪ (225°, 315°) form region 2; the line joining the coarse center to an edge pixel is the gradient direction of that edge pixel;
Step 3.2: Find the neighbor points of each edge pixel along its gradient direction, where (x_p, y_p) is the coordinate, in pixel units, of the edge point P. If P(x_p, y_p) is in region 1, the intersections of its gradient line with the lines x = x_p+1, x = x_p+2, x = x_p+3 are its first, second, and third right neighbor points A_1(x_a1, y_a1), B_1(x_b1, y_b1), F_1(x_f1, y_f1), and the intersections with x = x_p-1, x = x_p-2, x = x_p-3 are its first, second, and third left neighbor points C_1(x_c1, y_c1), D_1(x_d1, y_d1), E_1(x_e1, y_e1). If P(x_p, y_p) is in region 2, the intersections of its gradient line with the lines y = y_p+1, y = y_p+2, y = y_p+3 are its first, second, and third upper neighbor points A_2(x_a2, y_a2), B_2(x_b2, y_b2), F_2(x_f2, y_f2), and the intersections with y = y_p-1, y = y_p-2, y = y_p-3 are its first, second, and third lower neighbor points C_2(x_c2, y_c2), D_2(x_d2, y_d2), E_2(x_e2, y_e2);
Step 3.3: Use linear gray-level interpolation to obtain the gray values of the neighbor points, along the gradient direction, of the edge pixel P(x_p, y_p), where (x_p, y_p) is the coordinate of P in pixel units, f(x, y) denotes the gray value of the pixel at coordinates (x, y), and [·] denotes the integer part. The linear interpolation used to obtain the neighbor point gray values is as follows:
(1) If the edge pixel P(x_p, y_p) is in region 1:
Neighbor point A_1(x_a1, y_a1): f(x_a1, y_a1) = (1-λ)·f(x_a1, [y_a1]) + λ·f(x_a1, [y_a1]+1), λ = y_a1 - [y_a1]
Neighbor point B_1(x_b1, y_b1): f(x_b1, y_b1) = (1-λ)·f(x_b1, [y_b1]) + λ·f(x_b1, [y_b1]+1), λ = y_b1 - [y_b1]
Neighbor point F_1(x_f1, y_f1): f(x_f1, y_f1) = (1-λ)·f(x_f1, [y_f1]) + λ·f(x_f1, [y_f1]+1), λ = y_f1 - [y_f1]
Neighbor point C_1(x_c1, y_c1): f(x_c1, y_c1) = (1-λ)·f(x_c1, [y_c1]) + λ·f(x_c1, [y_c1]+1), λ = y_c1 - [y_c1]
Neighbor point D_1(x_d1, y_d1): f(x_d1, y_d1) = (1-λ)·f(x_d1, [y_d1]) + λ·f(x_d1, [y_d1]+1), λ = y_d1 - [y_d1]
Neighbor point E_1(x_e1, y_e1): f(x_e1, y_e1) = (1-λ)·f(x_e1, [y_e1]) + λ·f(x_e1, [y_e1]+1), λ = y_e1 - [y_e1]
(2) If the edge pixel P(x_p, y_p) is in region 2:
Neighbor point A_2(x_a2, y_a2): f(x_a2, y_a2) = (1-λ)·f([x_a2], y_a2) + λ·f([x_a2]+1, y_a2), λ = x_a2 - [x_a2]
Neighbor point B_2(x_b2, y_b2): f(x_b2, y_b2) = (1-λ)·f([x_b2], y_b2) + λ·f([x_b2]+1, y_b2), λ = x_b2 - [x_b2]
Neighbor point F_2(x_f2, y_f2): f(x_f2, y_f2) = (1-λ)·f([x_f2], y_f2) + λ·f([x_f2]+1, y_f2), λ = x_f2 - [x_f2]
Neighbor point C_2(x_c2, y_c2): f(x_c2, y_c2) = (1-λ)·f([x_c2], y_c2) + λ·f([x_c2]+1, y_c2), λ = x_c2 - [x_c2]
Neighbor point D_2(x_d2, y_d2): f(x_d2, y_d2) = (1-λ)·f([x_d2], y_d2) + λ·f([x_d2]+1, y_d2), λ = x_d2 - [x_d2]
Neighbor point E_2(x_e2, y_e2): f(x_e2, y_e2) = (1-λ)·f([x_e2], y_e2) + λ·f([x_e2]+1, y_e2), λ = x_e2 - [x_e2]
Step 3.4: From the gray values, obtained in step 3.3, of the neighbor points of the edge point along the gradient direction, together with the gray value of the edge point P(x_p, y_p) itself, take the mean of the forward difference and the backward difference of the gray values as the gray difference associated with the edge point and with each of its neighbor points along the gradient direction, where f(x, y) denotes the gray value of the pixel at (x, y). The gray differences are computed as follows:
(1) If the edge pixel P(x_p, y_p) is in region 1:
Gray difference of neighbor point A_1: f_a1 = |f(x_p, y_p) - f(x_a1, y_a1)|/2 + |f(x_a1, y_a1) - f(x_b1, y_b1)|/2
Gray difference of neighbor point B_1: f_b1 = |f(x_a1, y_a1) - f(x_b1, y_b1)|/2 + |f(x_b1, y_b1) - f(x_f1, y_f1)|/2
Gray difference of edge point P: f_p1 = |f(x_c1, y_c1) - f(x_p, y_p)|/2 + |f(x_p, y_p) - f(x_a1, y_a1)|/2
Gray difference of neighbor point C_1: f_c1 = |f(x_d1, y_d1) - f(x_c1, y_c1)|/2 + |f(x_c1, y_c1) - f(x_p, y_p)|/2
Gray difference of neighbor point D_1: f_d1 = |f(x_e1, y_e1) - f(x_d1, y_d1)|/2 + |f(x_d1, y_d1) - f(x_c1, y_c1)|/2
(2) If the edge pixel P(x_p, y_p) is in region 2:
Gray difference of neighbor point A_2: f_a2 = |f(x_p, y_p) - f(x_a2, y_a2)|/2 + |f(x_a2, y_a2) - f(x_b2, y_b2)|/2
Gray difference of neighbor point B_2: f_b2 = |f(x_a2, y_a2) - f(x_b2, y_b2)|/2 + |f(x_b2, y_b2) - f(x_f2, y_f2)|/2
Gray difference of edge point P: f_p2 = |f(x_c2, y_c2) - f(x_p, y_p)|/2 + |f(x_p, y_p) - f(x_a2, y_a2)|/2
Gray difference of neighbor point C_2: f_c2 = |f(x_d2, y_d2) - f(x_c2, y_c2)|/2 + |f(x_c2, y_c2) - f(x_p, y_p)|/2
Gray difference of neighbor point D_2: f_d2 = |f(x_e2, y_e2) - f(x_d2, y_d2)|/2 + |f(x_d2, y_d2) - f(x_c2, y_c2)|/2
Step 3.5: From the gray differences, obtained in step 3.4, of the edge point and of its neighbor points along the gradient direction, compute the distance δ between the sub-pixel edge point P′ and the pixel-level edge point P:
δ = (0.1 ln f_d + 0.05 ln f_c - 0.05 ln f_a - 0.1 ln f_b) / (0.1429 ln f_d - 0.0714 ln f_c - 0.1429 ln f_p - 0.0714 ln f_a + 0.1429 ln f_b)
where each gray difference carries the region suffix 1 or 2. If any of the gray differences of the edge point or of its neighbor points along the gradient direction is 0, set δ = 0, and record the number of occurrences of δ = 0 as Count_zero;
Step 3.6: Compute the coordinates of the sub-pixel edge point P′. From the distance δ of step 3.5 and the slope k_gradient of the gradient line of the pixel-level edge point P, obtain the coordinate differences δ_x and δ_y between P and P′ in the x and y directions:
δ_x = δ / √(k_gradient² + 1)
δ_y = k_gradient · δ / √(k_gradient² + 1)
where the slope k_gradient of the gradient line of P is obtained from its coordinates (x_p, y_p) and the coarse center coordinates (x_oc, y_oc):
k_gradient = (y_oc - y_p) / (x_oc - x_p)
Thus, for the pixel P(x_p, y_p), the corresponding sub-pixel edge point is P′(x_p + δ_x, y_p + δ_y);
Step 3.7: For every edge pixel P(x_p, y_p), compute its corresponding sub-pixel edge point P′(x_p + δ_x, y_p + δ_y) according to steps 3.2 to 3.6, where (x_p + δ_x, y_p + δ_y) is the coordinate of P′;
Step 4: Filter the sub-pixel edge points, handling the 'isolated points' and the noise among them with curvature filtering and mean filtering, respectively;
Step 4.1: Remove the 'isolated points' among the sub-pixel edge points by curvature filtering, with the following concrete steps:
(1) Compute the curvature of every sub-pixel edge point as follows. Among the sequentially stored sub-pixel edge points, let (x′_p2, y′_p2) be the coordinates of the sub-pixel edge point P′, (x′_p1, y′_p1) those of the point immediately before P′, and (x′_p3, y′_p3) those of the point immediately after P′. The curvature of P′ is then
k = 1/r = 1/√((x_0 - x′_p2)² + (y_0 - y′_p2)²)
where (x_0, y_0) is the center of the circle formed by P′, the point immediately before it, and the point immediately after it:
x_0 = (a - b + c)/d,  y_0 = (e - f + g)/(-d),
a = (x′_p1 + x′_p2)(x′_p2 - x′_p1)(y′_p3 - y′_p2)
b = (x′_p2 + x′_p3)(x′_p3 - x′_p2)(y′_p2 - y′_p1)
c = (y′_p1 - y′_p3)(y′_p2 - y′_p1)(y′_p3 - y′_p2)
d = 2[(x′_p2 - x′_p1)(y′_p3 - y′_p2) - (x′_p3 - x′_p2)(y′_p2 - y′_p1)]
e = (y′_p1 + y′_p2)(y′_p2 - y′_p1)(x′_p3 - x′_p2)
f = (y′_p2 + y′_p3)(y′_p3 - y′_p2)(x′_p2 - x′_p1)
g = (x′_p1 - x′_p3)(x′_p2 - x′_p1)(x′_p3 - x′_p2)
(2) Filter all sub-pixel edge points according to their curvature, with the following criterion: first sort the curvatures of the sub-pixel edge points in descending order and take as the curvature threshold the 3n-th largest curvature value, where n is the number of isolated points, equal to the count Count_zero of occurrences of δ = 0 during sub-pixel localization in step 3.5. Then partition the curvatures by this threshold: if the curvature of a sub-pixel edge point P exceeds the threshold and also exceeds the curvatures of the points immediately before and after it, P is deemed an isolated point and removed;
Step 4.2: Apply mean filtering to all sub-pixel edge points remaining after the 'isolated points' are removed: replace the abscissa and ordinate of each sub-pixel edge point with the average of its own abscissa and ordinate and those of the two points immediately before and after it;
Step 5: Fit a circle to the filtered sub-pixel edge points by least squares, finally obtaining the center and radius of the circular target.
Compared with the prior art, the present invention has the following advantages:
(1) Compared with conventional two-dimensional image edge localization methods, the proposed sub-pixel edge localization operates on the region of interest determined by the coarse center, and when fitting the edge gray differences it reduces Gaussian surface fitting to Gaussian curve fitting along the gradient direction, which simplifies the computation and improves the speed of the algorithm.
(2) When considering the gradient direction of each edge pixel, the proposed sub-pixel edge localization method is not limited to the few special directions determined by conventional edge detection operators; it uses the specific gradient direction of each edge pixel, derived from the geometry of the circle, and obtains the gray values of the neighbor points along the gradient direction by linear interpolation, making the edge localization more accurate.
(3) For the 'isolated points' and noise appearing among the sub-pixel edge points, a simple and practical filtering method is proposed; its principle and algorithm are simple, and its filtering effect is good.
(4) Compared with existing methods, the proposed circle center positioning method for circular targets has higher accuracy and better stability, and is highly practical.
Description of drawings
Fig. 1 is the flowchart of the concrete steps of the circle center positioning method.
Fig. 2 illustrates the extraction of the region of interest.
Fig. 3 illustrates non-maximum suppression: Fig. 3(a) shows the four sectors divided according to gradient direction; Fig. 3(b) shows the 3×3 neighborhood.
Fig. 4(a) and Fig. 4(b) illustrate the candidate neighbor points of an edge pixel P along its gradient direction when P is in region 1 and in region 2, respectively.
Fig. 5(a) and Fig. 5(b) show the definition of the neighbor points of the edge pixel P along its gradient direction when P is in region 1 and in region 2, respectively.
Fig. 6 illustrates the computation of the sub-pixel edge point.
Embodiment
The specific embodiment of the present invention is further described below with reference to the accompanying drawings. Following the method above, circle center positioning of circular targets was implemented in C++ on the VC++ 6.0 platform under the Windows operating system.
Locating the center of a circular target with this method comprises five steps: coarse center positioning, pixel-level edge localization, sub-pixel edge localization, sub-pixel edge point filtering, and least-squares fitting of the center. The flowchart of the concrete steps is shown in Fig. 1. The concrete steps are as follows (unless otherwise specified, all units in the steps are pixels):
Step 1: Coarsely locate the center and radius of the circular target in the image, obtaining the coarse center (x_oc, y_oc) and the coarse radius R_c, where (x_oc, y_oc) is the coordinate of the center in the image coordinate system. The concrete steps are as follows:
Step 1.1: Denoise and threshold the image, obtaining a binary image in which every pixel has gray value 255 or 0. Thresholding uses the iterative threshold method, which computes a suitable segmentation threshold automatically. Its steps are:
(1) Choose an initial threshold T, usually the average gray value of the image;
(2) Use the threshold T to divide the pixels of the image into two groups R_1 and R_2;
(3) Compute the average gray values μ_1 and μ_2 of the two groups;
(4) Update the threshold: T = (μ_1 + μ_2)/2;
Repeat (2) to (4) until μ_1 and μ_2 no longer change; the resulting T is the required threshold, with which the image is then binarized.
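As an illustration, a minimal C++ sketch of this iterative threshold selection might look as follows (the grayscale image is assumed to be an 8-bit buffer; the function name and container are illustrative, not from the patent):

```cpp
#include <vector>
#include <cstdint>

// Iterative threshold selection: split pixels at T, recompute T as the
// mean of the two group averages, repeat until T stabilizes.
int iterativeThreshold(const std::vector<uint8_t>& img) {
    long long sum = 0;
    for (uint8_t v : img) sum += v;
    int T = static_cast<int>(sum / static_cast<long long>(img.size()));
    while (true) {
        long long s1 = 0, n1 = 0, s2 = 0, n2 = 0;
        for (uint8_t v : img) {
            if (v > T) { s1 += v; ++n1; } else { s2 += v; ++n2; }
        }
        const int mu1 = n1 ? static_cast<int>(s1 / n1) : T;
        const int mu2 = n2 ? static_cast<int>(s2 / n2) : T;
        const int Tnew = (mu1 + mu2) / 2;
        if (Tnew == T) return T;   // group means no longer change
        T = Tnew;
    }
}
```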
Step 1.2: Perform boundary extraction and boundary tracking on the circular target in the binary image. In the binary image, the pixels with gray value 255 belong to the circular target. Boundary extraction proceeds as follows: if a point of the original image is white and its 8 neighboring pixels are all white, its gray value is set to 0, so that only the contour remains. After boundary extraction, tracking and storing the target contour yields the boundary points of the circular target and their coordinates.
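A minimal sketch of this interior-erasing boundary extraction, assuming a row-major 0/255 buffer (names are illustrative):

```cpp
#include <vector>
#include <cstdint>

// Boundary extraction on a binary image: a white pixel whose 8 neighbors
// are all white is an interior point and is erased, leaving the contour.
std::vector<uint8_t> extractBoundary(const std::vector<uint8_t>& bin,
                                     int width, int height) {
    std::vector<uint8_t> out(bin);
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            if (bin[y * width + x] != 255) continue;
            bool interior = true;
            for (int dy = -1; dy <= 1 && interior; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (bin[(y + dy) * width + x + dx] != 255) {
                        interior = false;
                        break;
                    }
            if (interior) out[y * width + x] = 0;  // erase interior point
        }
    }
    return out;
}
```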
Step 1.3: Fit a circle to the tracked boundary points with the contour centroid method, obtaining the coarse center and coarse radius of the circular target, and store the coarse center coordinates (x_oc, y_oc) and coarse radius R_c. The contour centroid method is the simplest sub-pixel positioning algorithm: let the tracked boundary points have coordinates (i, j) and total number N; then by the contour centroid method the center of the circular target is
x_oc = Σi / N,  y_oc = Σj / N
and the radius of the circular target is the mean distance from the boundary points to the center:
R_c = Σ√((i - x_oc)² + (j - y_oc)²) / N
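A corresponding sketch (the Point2d struct and function name are illustrative):

```cpp
#include <vector>
#include <cmath>

struct Point2d { double x, y; };

// Contour centroid fit: center = mean of the boundary points,
// radius = mean distance from the boundary points to that center.
void contourCentroid(const std::vector<Point2d>& boundary,
                     double& xoc, double& yoc, double& Rc) {
    double sx = 0.0, sy = 0.0;
    for (const Point2d& p : boundary) { sx += p.x; sy += p.y; }
    const double N = static_cast<double>(boundary.size());
    xoc = sx / N;
    yoc = sy / N;
    double sr = 0.0;
    for (const Point2d& p : boundary)
        sr += std::hypot(p.x - xoc, p.y - yoc);
    Rc = sr / N;
}
```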
Step 2: Perform pixel-level edge localization on the circular target in the image, with the following concrete steps:
Step 2.1: Using the coarse center and coarse radius, extract a square region from the image, called the "region of interest": its center is the coarse center of the circular target, and its side length is the coarse diameter of the circular target plus 2~6 pixels. As shown in Fig. 2, the circle represents the circular target in the image and the square region is the region of interest, where O is the coarse center, R_c the coarse radius, and d is generally 2~6 pixels. All subsequent steps operate only on the region of interest.
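A sketch of the region-of-interest computation (the Rect struct and the default margin are illustrative):

```cpp
struct Rect { int x, y, w, h; };  // top-left corner, width, height

// Square ROI centered on the coarse center; side = coarse diameter + d.
Rect regionOfInterest(double xoc, double yoc, double Rc, int d = 4) {
    const int side = static_cast<int>(2.0 * Rc) + d;
    return { static_cast<int>(xoc) - side / 2,
             static_cast<int>(yoc) - side / 2,
             side, side };
}
```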
Step 2.2: Apply the Canny operator to the region of interest extracted from the image to detect the pixel-level edge of the circular target, then track the boundary formed by the pixel-level edge points extracted by the Canny operator, obtain the coordinates of each pixel-level edge point, and store them in clockwise order. Canny edge detection optimizes the product of signal-to-noise ratio and localization and belongs to the smooth-first, differentiate-afterwards class of methods; its main steps and basic principle are as follows:
(1) Smooth the image with a Gaussian filter. The Gaussian smoothing function is
H(x, y) = exp(-(x² + y²) / (2σ²))
where σ is the standard deviation of the Gaussian and x, y are the abscissa and ordinate of a pixel in the image. Gaussian filtering convolves the image f_I(x, y) with H(x, y), giving the smoothed image G(x, y):
G(x, y) = f_I(x, y) * H(x, y)
(2) Compute the gradient magnitude and direction with finite differences of the first-order partial derivatives. The first-difference convolution templates are
H_1 = [-1 -1; 1 1],  H_2 = [1 -1; 1 -1]
Convolving the smoothed image G(x, y) with these two templates gives the gradient images G_x(x, y) and G_y(x, y) in the horizontal and vertical directions, from which the gradient magnitude M(x, y) and direction θ(x, y) are estimated as
M(x, y) = √(G_x(x, y)² + G_y(x, y)²)
θ(x, y) = arctan(G_y(x, y) / G_x(x, y))
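A sketch of this gradient computation with the 2×2 difference templates (the buffer layout, averaging factor, and sign convention are assumptions):

```cpp
#include <vector>
#include <cmath>

// Gradient magnitude M and direction theta from 2x2 first-difference
// templates applied to the smoothed image G (row-major, width w, height h).
void gradient(const std::vector<float>& G, int w, int h,
              std::vector<float>& M, std::vector<float>& theta) {
    M.assign(static_cast<size_t>(w) * h, 0.f);
    theta.assign(static_cast<size_t>(w) * h, 0.f);
    for (int y = 0; y + 1 < h; ++y) {
        for (int x = 0; x + 1 < w; ++x) {
            const float gx = 0.5f * (G[y * w + x + 1] + G[(y + 1) * w + x + 1]
                                   - G[y * w + x] - G[(y + 1) * w + x]);
            const float gy = 0.5f * (G[(y + 1) * w + x] + G[(y + 1) * w + x + 1]
                                   - G[y * w + x] - G[y * w + x + 1]);
            M[y * w + x] = std::sqrt(gx * gx + gy * gy);
            theta[y * w + x] = std::atan2(gy, gx);
        }
    }
}
```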
(3) Apply non-maximum suppression (NMS) to the gradient magnitude. Fig. 3 illustrates NMS: according to the gradient direction, four sectors labeled 0 to 3 are defined, as shown in Fig. 3(a), corresponding to the four possible comparison combinations in the 3×3 neighborhood shown in Fig. 3(b). At each pixel, the center pixel P_cp of the neighborhood is compared with the two pixels along the gradient line; if the gradient magnitude of P_cp is not larger than the gradient magnitudes of both neighbors along the gradient line, the pixel is not an edge point.
(4) Detect and connect edges with a dual-threshold algorithm. The dual-threshold algorithm applies two thresholds τ1 and τ2, with 2τ1 ≈ τ2, to the non-maximum-suppressed image, yielding two thresholded edge images N_1[i, j] and N_2[i, j]. Because N_2[i, j] is obtained with the high threshold, it contains few false edges but has gaps. The dual-threshold method links the edges in N_2[i, j] into contours; on reaching the end point of a contour, the algorithm searches the 8-neighborhood positions in N_1[i, j] for an edge that can be connected to the contour. In this way the algorithm keeps collecting edges from N_1[i, j] until the gaps in N_2[i, j] are bridged.
Step 3: Perform sub-pixel edge localization on the circular target in the image, with the following concrete steps:
Step 3.1: Divide the edge pixels of the circular target into regions as follows. Take the coarse center of the circular target as the origin; the x axis is the horizontal line through it, positive to the right, and the y axis is the vertical line through it, positive upward, both in pixel units. Let θ be the counterclockwise rotation angle from the positive x axis about the coarse center. The edge pixels with θ ∈ [0°, 45°] ∪ [135°, 225°] ∪ [315°, 360°] form region 1, and those with θ ∈ (45°, 135°) ∪ (225°, 315°) form region 2; the line joining the coarse center to an edge pixel is the gradient direction of that edge pixel.
Step 3.2: Find the neighbor points of each edge pixel along its gradient direction. The principle is as follows. If the edge pixel P(x_p, y_p) is in region 1, Fig. 4(a) illustrates the candidate neighbor points along the gradient direction: (x_p, y_p) is the coordinate of the edge point P in pixel units, L_G is the gradient line through P, the four horizontal lines y_1, y_2, y_3, y_4 nearest P at unit pixel spacing intersect L_G at B′_1, A′_1, C′_1, D′_1, and the four vertical lines x_1, x_2, x_3, x_4 nearest P intersect L_G at D_1, C_1, A_1, B_1. Both groups {B′_1, A′_1, C′_1, D′_1} and {B_1, A_1, C_1, D_1} are candidates for the neighbor points of the edge point along the gradient direction. Since P is in region 1, the gradient direction is dominated by the x direction, i.e., the absolute value of the slope of L_G is less than 1; the candidates {B′_1, A′_1, C′_1, D′_1} are therefore clearly farther from P than the candidates {B_1, A_1, C_1, D_1}, and choosing them as the neighbor points along the gradient direction would make the edge localization insufficiently accurate, so {B_1, A_1, C_1, D_1} are chosen. If the edge pixel P(x_p, y_p) is in region 2, Fig. 4(b) illustrates the candidates analogously: the horizontal lines y_1, ..., y_4 intersect L_G at B_2, A_2, C_2, D_2, and the vertical lines x_1, ..., x_4 intersect L_G at D′_2, C′_2, A′_2, B′_2. Since P is in region 2, the gradient direction is dominated by the y direction, i.e., the absolute value of the slope of L_G is greater than 1, so the candidates {B′_2, A′_2, C′_2, D′_2} are clearly farther from P than {B_2, A_2, C_2, D_2}, and {B_2, A_2, C_2, D_2} are chosen as the neighbor points.
Accordingly, for the pixels of the two regions divided in step 3.1, the neighbor points along the gradient direction are defined as follows, with (x_p, y_p) the coordinate of the edge point P in pixel units. If P(x_p, y_p) is in region 1, Fig. 5(a) shows the definition, where L_G is the gradient line of P and L_1, ..., L_6 denote the lines x = x_p-3, x = x_p-2, x = x_p-1, x = x_p+1, x = x_p+2, x = x_p+3: the intersections of the gradient line with x = x_p+1, x = x_p+2, x = x_p+3 are the first, second, and third right neighbor points A_1(x_a1, y_a1), B_1(x_b1, y_b1), F_1(x_f1, y_f1), and the intersections with x = x_p-1, x = x_p-2, x = x_p-3 are the first, second, and third left neighbor points C_1(x_c1, y_c1), D_1(x_d1, y_d1), E_1(x_e1, y_e1). If P(x_p, y_p) is in region 2, Fig. 5(b) shows the definition, where L_G is the gradient line of P: the intersections of the gradient line with y = y_p+1, y = y_p+2, y = y_p+3 are the first, second, and third upper neighbor points A_2(x_a2, y_a2), B_2(x_b2, y_b2), F_2(x_f2, y_f2), and the intersections with y = y_p-1, y = y_p-2, y = y_p-3 are the first, second, and third lower neighbor points C_2(x_c2, y_c2), D_2(x_d2, y_d2), E_2(x_e2, y_e2). A sketch of this construction follows.
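A minimal C++ sketch computing the six neighbor points of an edge pixel along its gradient line, whose slope comes from the coarse center as in step 3.6 (names and the region test are illustrative):

```cpp
#include <array>
#include <cmath>

struct Point2d { double x, y; };

// Six neighbor points of edge pixel P along its gradient line, in the
// order E, D, C, A, B, F (offsets -3, -2, -1, +1, +2, +3).
// Region 1 (|slope| <= 1): intersect the vertical lines x = xp +/- k.
// Region 2 (|slope| > 1):  intersect the horizontal lines y = yp +/- k.
std::array<Point2d, 6> neighborPoints(double xp, double yp,
                                      double xoc, double yoc) {
    const double dx = xoc - xp, dy = yoc - yp;
    const bool region1 = std::fabs(dy) <= std::fabs(dx);
    const double k = region1 ? dy / dx : dx / dy;  // slope along the walk axis
    std::array<Point2d, 6> nb;
    int i = 0;
    for (int off : { -3, -2, -1, 1, 2, 3 }) {
        if (region1) nb[i++] = { xp + off, yp + k * off };
        else         nb[i++] = { xp + k * off, yp + off };
    }
    return nb;
}
```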
Step 3.3: Use linear gray-level interpolation to obtain the gray values of the neighbor points, along the gradient direction, of the edge pixel P(x_p, y_p). The concrete method is: first find the two integer pixels nearest each neighbor point along the gradient direction, then linearly interpolate the gray values of these two pixels; the interpolation result is the gray value of that neighbor point. Here (x_p, y_p) is the coordinate of the edge pixel P in pixel units, f(x, y) denotes the gray value of the pixel at coordinates (x, y), and [·] denotes the integer part. The linear interpolation used to obtain the neighbor point gray values is as follows:
(1) If the edge pixel P(x_p, y_p) is in region 1:
Neighbor point A_1(x_a1, y_a1): f(x_a1, y_a1) = (1-λ)·f(x_a1, [y_a1]) + λ·f(x_a1, [y_a1]+1), λ = y_a1 - [y_a1]
Neighbor point B_1(x_b1, y_b1): f(x_b1, y_b1) = (1-λ)·f(x_b1, [y_b1]) + λ·f(x_b1, [y_b1]+1), λ = y_b1 - [y_b1]
Neighbor point F_1(x_f1, y_f1): f(x_f1, y_f1) = (1-λ)·f(x_f1, [y_f1]) + λ·f(x_f1, [y_f1]+1), λ = y_f1 - [y_f1]
Neighbor point C_1(x_c1, y_c1): f(x_c1, y_c1) = (1-λ)·f(x_c1, [y_c1]) + λ·f(x_c1, [y_c1]+1), λ = y_c1 - [y_c1]
Neighbor point D_1(x_d1, y_d1): f(x_d1, y_d1) = (1-λ)·f(x_d1, [y_d1]) + λ·f(x_d1, [y_d1]+1), λ = y_d1 - [y_d1]
Neighbor point E_1(x_e1, y_e1): f(x_e1, y_e1) = (1-λ)·f(x_e1, [y_e1]) + λ·f(x_e1, [y_e1]+1), λ = y_e1 - [y_e1]
(2) If the edge pixel P(x_p, y_p) is in region 2:
Neighbor point A_2(x_a2, y_a2): f(x_a2, y_a2) = (1-λ)·f([x_a2], y_a2) + λ·f([x_a2]+1, y_a2), λ = x_a2 - [x_a2]
Neighbor point B_2(x_b2, y_b2): f(x_b2, y_b2) = (1-λ)·f([x_b2], y_b2) + λ·f([x_b2]+1, y_b2), λ = x_b2 - [x_b2]
Neighbor point F_2(x_f2, y_f2): f(x_f2, y_f2) = (1-λ)·f([x_f2], y_f2) + λ·f([x_f2]+1, y_f2), λ = x_f2 - [x_f2]
Neighbor point C_2(x_c2, y_c2): f(x_c2, y_c2) = (1-λ)·f([x_c2], y_c2) + λ·f([x_c2]+1, y_c2), λ = x_c2 - [x_c2]
Neighbor point D_2(x_d2, y_d2): f(x_d2, y_d2) = (1-λ)·f([x_d2], y_d2) + λ·f([x_d2]+1, y_d2), λ = x_d2 - [x_d2]
Neighbor point E_2(x_e2, y_e2): f(x_e2, y_e2) = (1-λ)·f([x_e2], y_e2) + λ·f([x_e2]+1, y_e2), λ = x_e2 - [x_e2]
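A sketch of this interpolation: in region 1 the neighbor point has an integral abscissa and a fractional ordinate, so interpolation runs along y; in region 2 the roles are swapped (buffer layout and names are illustrative):

```cpp
#include <vector>
#include <cmath>
#include <cstdint>

// Gray value of a neighbor point by linear interpolation between the two
// nearest integer pixels. Region 1: x integral, interpolate along y.
// Region 2: y integral, interpolate along x.
double grayAt(const std::vector<uint8_t>& img, int w,
              double x, double y, bool region1) {
    if (region1) {
        const int yi = static_cast<int>(std::floor(y));  // [y]
        const double lam = y - yi;                       // lambda = y - [y]
        const int xi = static_cast<int>(x);
        return (1.0 - lam) * img[yi * w + xi] + lam * img[(yi + 1) * w + xi];
    }
    const int xi = static_cast<int>(std::floor(x));      // [x]
    const double lam = x - xi;                           // lambda = x - [x]
    const int yi = static_cast<int>(y);
    return (1.0 - lam) * img[yi * w + xi] + lam * img[yi * w + xi + 1];
}
```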
Step 3.4: From the gray values, obtained in step 3.3, of the neighbor points of the edge point along the gradient direction, together with the gray value of the edge point P(x_p, y_p) itself, take the mean of the forward difference and the backward difference of the gray values as the gray difference associated with the edge point and with each of its neighbor points along the gradient direction, where f(x, y) denotes the gray value of the pixel at (x, y). The gray differences are computed as follows:
(1) If the edge pixel P(x_p, y_p) is in region 1:
Gray difference of neighbor point A_1: f_a1 = |f(x_p, y_p) - f(x_a1, y_a1)|/2 + |f(x_a1, y_a1) - f(x_b1, y_b1)|/2
Gray difference of neighbor point B_1: f_b1 = |f(x_a1, y_a1) - f(x_b1, y_b1)|/2 + |f(x_b1, y_b1) - f(x_f1, y_f1)|/2
Gray difference of edge point P: f_p1 = |f(x_c1, y_c1) - f(x_p, y_p)|/2 + |f(x_p, y_p) - f(x_a1, y_a1)|/2
Gray difference of neighbor point C_1: f_c1 = |f(x_d1, y_d1) - f(x_c1, y_c1)|/2 + |f(x_c1, y_c1) - f(x_p, y_p)|/2
Gray difference of neighbor point D_1: f_d1 = |f(x_e1, y_e1) - f(x_d1, y_d1)|/2 + |f(x_d1, y_d1) - f(x_c1, y_c1)|/2
(2) If the edge pixel P(x_p, y_p) is in region 2:
Gray difference of neighbor point A_2: f_a2 = |f(x_p, y_p) - f(x_a2, y_a2)|/2 + |f(x_a2, y_a2) - f(x_b2, y_b2)|/2
Gray difference of neighbor point B_2: f_b2 = |f(x_a2, y_a2) - f(x_b2, y_b2)|/2 + |f(x_b2, y_b2) - f(x_f2, y_f2)|/2
Gray difference of edge point P: f_p2 = |f(x_c2, y_c2) - f(x_p, y_p)|/2 + |f(x_p, y_p) - f(x_a2, y_a2)|/2
Gray difference of neighbor point C_2: f_c2 = |f(x_d2, y_d2) - f(x_c2, y_c2)|/2 + |f(x_c2, y_c2) - f(x_p, y_p)|/2
Gray difference of neighbor point D_2: f_d2 = |f(x_e2, y_e2) - f(x_d2, y_d2)|/2 + |f(x_d2, y_d2) - f(x_c2, y_c2)|/2
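A sketch computing the five gray differences from seven gray samples along the gradient line (ordering and names are illustrative):

```cpp
#include <array>
#include <cmath>

// Mean of the forward and backward absolute differences for the five
// central samples. g holds gray values at offsets -3..+3 along the
// gradient line: g[0]=E, g[1]=D, g[2]=C, g[3]=P, g[4]=A, g[5]=B, g[6]=F.
// Returns {f_d, f_c, f_p, f_a, f_b}.
std::array<double, 5> grayDifferences(const std::array<double, 7>& g) {
    std::array<double, 5> f;
    for (int i = 1; i <= 5; ++i)
        f[i - 1] = 0.5 * std::fabs(g[i] - g[i - 1])
                 + 0.5 * std::fabs(g[i + 1] - g[i]);
    return f;
}
```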
Step 3.5: Compute the distance between the sub-pixel edge point and the pixel-level edge point. The method and its derivation are as follows. By the square-aperture principle, the gray output of a pixel can be expressed as
f(i, j) = ∫_{j-0.5}^{j+0.5} ∫_{i-0.5}^{i+0.5} g(x, y) dx dy
where g(x, y) is the light distribution of the continuous image and f(i, j) is the combined effect of the light intensity over the pixel's photosensitive area; the sampled result is a discrete matrix of gray values. Owing to the convolution effect of the optical components and optical diffraction, a sharp gray transition in object space becomes gradual after optical imaging, so an edge appears in the image as a gray distribution. The gray variation across the image edge follows a Gaussian distribution, and the position corresponding to the peak of the Gaussian curve is the position of the true edge point. For a circular image, the gradient direction of a pixel-level edge point is the direction of the line from the center to that point, so only Gaussian curve fitting is needed to find the position of the sub-pixel edge point: the two-dimensional Gaussian surface fit can be reduced to a one-dimensional Gaussian curve fit over the pixel-level edge point and its neighbor points along the gradient direction. As shown in Fig. 6, P(x_p, y_p) is the pixel-level edge point and L_G the gradient line of P; if P is in region 1, B_1, A_1, C_1, D_1 are its neighbor points along the gradient direction, and if P is in region 2, B_2, A_2, C_2, D_2 are. The point P′ corresponding to the Gaussian peak M is the true edge point position, and the distance between P and P′ is δ. The one-dimensional Gaussian curve is
ỹ = (1/(√(2π)·σ_G)) · exp(-(δ - μ)² / (2σ_G²))    (1)
where μ is the mean and σ_G the standard deviation of the Gaussian.
For convenience of calculation, take the logarithm of both sides and let ỹ* = ln ỹ; then (1) becomes
ỹ* = m_11 δ² + m_12 δ + m_13    (2)
By the square-aperture sampling theorem, the pixel gray difference is
ỹ*(n) = ∫_{n-0.5}^{n+0.5} (m_11 δ² + m_12 δ + m_13) dδ    (3)
Let the sequence number of the edge point P be 0. If P is in region 1, let the gray difference of P be f_p1 and let its neighbor points D_1, C_1, A_1, B_1 have sequence numbers -2, -1, 1, 2 with gray differences f_d1, f_c1, f_a1, f_b1. By formula (3):
f_d1 = ∫_{-2.5}^{-1.5} (m_11 δ² + m_12 δ + m_13) dδ = (49/12) m_11 - 2 m_12 + m_13    (4)
f_c1 = ∫_{-1.5}^{-0.5} (m_11 δ² + m_12 δ + m_13) dδ = (13/12) m_11 - m_12 + m_13    (5)
f_p1 = ∫_{-0.5}^{0.5} (m_11 δ² + m_12 δ + m_13) dδ = (1/12) m_11 + m_13    (6)
f_a1 = ∫_{0.5}^{1.5} (m_11 δ² + m_12 δ + m_13) dδ = (13/12) m_11 + m_12 + m_13    (7)
f_b1 = ∫_{1.5}^{2.5} (m_11 δ² + m_12 δ + m_13) dδ = (49/12) m_11 + 2 m_12 + m_13    (8)
Solving equations (4)~(8) simultaneously by least squares gives m_11, m_12, m_13 as expressions in f_d1, f_c1, f_p1, f_a1, f_b1; substituting into the parabola vertex coordinate δ = -m_12/(2m_11) gives
δ = -(-0.2 f_d1 - 0.1 f_c1 + 0.1 f_a1 + 0.2 f_b1) / (2(0.1429 f_d1 - 0.0714 f_c1 - 0.1429 f_p1 - 0.0714 f_a1 + 0.1429 f_b1))
Note that in passing from formula (1) to formula (2) a natural logarithm was taken, so the pixel gray differences must also be logarithmed; the distance between P and P′ is therefore
δ = (0.1 ln f_d1 + 0.05 ln f_c1 - 0.05 ln f_a1 - 0.1 ln f_b1) / (0.1429 ln f_d1 - 0.0714 ln f_c1 - 0.1429 ln f_p1 - 0.0714 ln f_a1 + 0.1429 ln f_b1)
If P is in region 2, let the gray difference of P be f_p2 and let its neighbor points D_2, C_2, A_2, B_2 have sequence numbers -2, -1, 1, 2 with gray differences f_d2, f_c2, f_a2, f_b2. By formula (3):
f_d2 = ∫_{-2.5}^{-1.5} (m_11 δ² + m_12 δ + m_13) dδ = (49/12) m_11 - 2 m_12 + m_13    (9)
f_c2 = ∫_{-1.5}^{-0.5} (m_11 δ² + m_12 δ + m_13) dδ = (13/12) m_11 - m_12 + m_13    (10)
f_p2 = ∫_{-0.5}^{0.5} (m_11 δ² + m_12 δ + m_13) dδ = (1/12) m_11 + m_13    (11)
f_a2 = ∫_{0.5}^{1.5} (m_11 δ² + m_12 δ + m_13) dδ = (13/12) m_11 + m_12 + m_13    (12)
f_b2 = ∫_{1.5}^{2.5} (m_11 δ² + m_12 δ + m_13) dδ = (49/12) m_11 + 2 m_12 + m_13    (13)
Solving equations (9)~(13) simultaneously by least squares gives m_11, m_12, m_13 as expressions in f_d2, f_c2, f_p2, f_a2, f_b2; substituting into the parabola vertex coordinate δ = -m_12/(2m_11) and taking logarithms as before, the distance between P and P′ is
δ = (0.1 ln f_d2 + 0.05 ln f_c2 - 0.05 ln f_a2 - 0.1 ln f_b2) / (0.1429 ln f_d2 - 0.0714 ln f_c2 - 0.1429 ln f_p2 - 0.0714 ln f_a2 + 0.1429 ln f_b2)
In summary, according to the gray differences, obtained in step 3.4, of the edge point and of its neighbor points along the gradient direction, the distance δ between the sub-pixel edge point P′ and the pixel-level edge point P is computed by the formula above, with region suffix 1 or 2 as appropriate. If any of the gray differences of the edge point or of its neighbor points along the gradient direction is 0, set δ = 0, and record the number of occurrences of δ = 0 as Count_zero.
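A sketch of this vertex computation from the five gray differences (the zero-difference guard sets δ = 0 and counts the occurrence, as in the text; names are illustrative):

```cpp
#include <array>
#include <cmath>

// Distance delta between the sub-pixel edge point P' and the pixel-level
// edge point P, from the gray differences {f_d, f_c, f_p, f_a, f_b}.
// A zero gray difference yields delta = 0 and increments countZero.
double subpixelDelta(const std::array<double, 5>& f, int& countZero) {
    for (double v : f)
        if (v == 0.0) { ++countZero; return 0.0; }
    const double d = std::log(f[0]), c = std::log(f[1]), p = std::log(f[2]),
                 a = std::log(f[3]), b = std::log(f[4]);
    const double num = 0.1 * d + 0.05 * c - 0.05 * a - 0.1 * b;
    const double den = 0.1429 * d - 0.0714 * c - 0.1429 * p
                     - 0.0714 * a + 0.1429 * b;
    return den != 0.0 ? num / den : 0.0;  // guard against a flat fit
}
```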
Step 3.6: Compute the coordinates of the sub-pixel edge point P′. The concrete method is: from the distance δ, obtained in step 3.5, between the sub-pixel edge point P′ and the pixel-level edge point P, and the slope k_gradient of the gradient line of P, obtain the coordinate differences δ_x and δ_y between P and P′ in the x and y directions:
δ_x = δ / √(k_gradient² + 1)
δ_y = k_gradient · δ / √(k_gradient² + 1)
where the slope k_gradient of the gradient line of the pixel-level edge point P is obtained from its coordinates (x_p, y_p) and the coarse center coordinates (x_oc, y_oc):
k_gradient = (y_oc - y_p) / (x_oc - x_p)
Thus, for the pixel P(x_p, y_p), the corresponding sub-pixel edge point is P′(x_p + δ_x, y_p + δ_y).
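A sketch decomposing δ along the gradient line (the guard for a vertical gradient line, where the slope is undefined and the limit gives δ_x = 0, δ_y = δ, is an added assumption):

```cpp
#include <cmath>

// Decompose delta along the gradient line into x and y offsets;
// slope k = (yoc - yp) / (xoc - xp), P' = (xp + dx, yp + dy).
void subpixelOffset(double delta, double xp, double yp,
                    double xoc, double yoc, double& dx, double& dy) {
    if (xoc == xp) { dx = 0.0; dy = delta; return; }  // vertical gradient line
    const double k = (yoc - yp) / (xoc - xp);
    const double s = std::sqrt(k * k + 1.0);
    dx = delta / s;
    dy = k * delta / s;
}
```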
Step 3.7: For every edge pixel P(x_p, y_p), compute its corresponding sub-pixel edge point P′(x_p + δ_x, y_p + δ_y) according to steps 3.2 to 3.6, where (x_p + δ_x, y_p + δ_y) is the coordinate of the sub-pixel edge point P′.
Step 4: Filter the sub-pixel edge points, handling the 'isolated points' and the noise among them with curvature filtering and mean filtering, respectively.
Step 4.1: Remove the 'isolated points' among the sub-pixel edge points by curvature filtering, with the following concrete steps:
(1) Compute the curvature of every sub-pixel edge point as follows. Among the sequentially stored sub-pixel edge points, let (x′_p2, y′_p2) be the coordinates of the sub-pixel edge point P′, (x′_p1, y′_p1) those of the point immediately before P′, and (x′_p3, y′_p3) those of the point immediately after P′. The curvature of P′ is then
k = 1/r = 1/√((x_0 - x′_p2)² + (y_0 - y′_p2)²)
where (x_0, y_0) is the center of the circle formed by P′, the point immediately before it, and the point immediately after it:
x_0 = (a - b + c)/d,  y_0 = (e - f + g)/(-d),
a = (x′_p1 + x′_p2)(x′_p2 - x′_p1)(y′_p3 - y′_p2)
b = (x′_p2 + x′_p3)(x′_p3 - x′_p2)(y′_p2 - y′_p1)
c = (y′_p1 - y′_p3)(y′_p2 - y′_p1)(y′_p3 - y′_p2)
d = 2[(x′_p2 - x′_p1)(y′_p3 - y′_p2) - (x′_p3 - x′_p2)(y′_p2 - y′_p1)]
e = (y′_p1 + y′_p2)(y′_p2 - y′_p1)(x′_p3 - x′_p2)
f = (y′_p2 + y′_p3)(y′_p3 - y′_p2)(x′_p2 - x′_p1)
g = (x′_p1 - x′_p3)(x′_p2 - x′_p1)(x′_p3 - x′_p2)
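A sketch of this three-point curvature, using the circumcenter formulas above (the zero-denominator guard for collinear points is an added assumption):

```cpp
#include <cmath>

struct Point2d { double x, y; };

// Curvature of p2 as 1/r of the circle through consecutive sub-pixel
// edge points p1, p2, p3 (a..g as in the circumcenter formulas above).
double threePointCurvature(const Point2d& p1, const Point2d& p2,
                           const Point2d& p3) {
    const double d = 2.0 * ((p2.x - p1.x) * (p3.y - p2.y)
                          - (p3.x - p2.x) * (p2.y - p1.y));
    if (d == 0.0) return 0.0;  // collinear points: zero curvature
    const double a = (p1.x + p2.x) * (p2.x - p1.x) * (p3.y - p2.y);
    const double b = (p2.x + p3.x) * (p3.x - p2.x) * (p2.y - p1.y);
    const double c = (p1.y - p3.y) * (p2.y - p1.y) * (p3.y - p2.y);
    const double e = (p1.y + p2.y) * (p2.y - p1.y) * (p3.x - p2.x);
    const double f = (p2.y + p3.y) * (p3.y - p2.y) * (p2.x - p1.x);
    const double g = (p1.x - p3.x) * (p2.x - p1.x) * (p3.x - p2.x);
    const double x0 = (a - b + c) / d;
    const double y0 = (e - f + g) / (-d);
    return 1.0 / std::hypot(x0 - p2.x, y0 - p2.y);
}
```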
(2) Filter all sub-pixel edge points according to their curvature, with the following criterion:
First sort the curvatures of the sub-pixel edge points in descending order and take as the curvature threshold the 3n-th largest curvature value, where n is the number of isolated points. An 'isolated point' arises when, in step 3.5, an edge pixel has one or more neighbor points along the gradient direction whose corresponding gray difference is 0; the number n of isolated points therefore equals the count Count_zero of occurrences of δ = 0 during sub-pixel localization in step 3.5. Then partition the curvatures by this threshold: if the curvature of a sub-pixel edge point P exceeds the curvature threshold and also exceeds the curvatures of the points immediately before and after it, P is deemed an isolated point and removed.
Step 4.2: Apply mean filtering to all sub-pixel edge points remaining after the 'isolated points' are removed. The concrete method is: replace the abscissa and ordinate of each sub-pixel edge point with the average of its own abscissa and ordinate and those of the two points immediately before and after it.
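A sketch of this three-point mean filtering over the sequentially stored contour (the wrap-around indexing for a closed contour is an assumption):

```cpp
#include <vector>
#include <cstddef>

struct Point2d { double x, y; };

// Replace each point by the average of itself and its two neighbors
// along the contour (indices wrap, assuming a closed contour).
std::vector<Point2d> meanFilter(const std::vector<Point2d>& pts) {
    const std::size_t n = pts.size();
    std::vector<Point2d> out(n);
    for (std::size_t i = 0; i < n; ++i) {
        const Point2d& a = pts[(i + n - 1) % n];
        const Point2d& b = pts[i];
        const Point2d& c = pts[(i + 1) % n];
        out[i] = { (a.x + b.x + c.x) / 3.0, (a.y + b.y + c.y) / 3.0 };
    }
    return out;
}
```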
The 5th step: filtered sub-pixel edge point is justified with least square fitting, finally obtained the center of circle and the radius of circular target.The algorithm of least square fitting circle is as follows:
Let N_sub be the number of filtered sub-pixel edge points. The general equation of the conic is

$x^2 + 2B_e xy + C_e y^2 + 2D_e x + 2E_e y + F_e = 0$

When the N_sub edge points are used for curve fitting, the sum of squared errors is

$e^2 = \sum_{i=1}^{N_{sub}} \left( x_i^2 + 2B_e x_i y_i + C_e y_i^2 + 2D_e x_i + 2E_e y_i + F_e \right)^2$

Taking the partial derivatives of this expression with respect to $B_e$, $C_e$, $D_e$, $E_e$, and $F_e$ and setting each to zero yields a determined system of 5 equations in 5 unknowns, which can be solved by matrix inversion, Gaussian elimination with column pivoting, or similar methods. The final center coordinates $(x_{cf}, y_{cf})$ are

$x_{cf} = \frac{B_e E_e - C_e D_e}{C_e - B_e^2}$

$y_{cf} = \frac{B_e D_e - E_e}{C_e - B_e^2}$
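The same fit can be posed as an ordinary linear least-squares problem; the sketch below solves it with NumPy's lstsq instead of explicit normal equations, and estimates the radius as the mean point-to-center distance (the patent derives only the center):

    import numpy as np

    def fit_circle(pts):
        # Fit x^2 + 2B*x*y + C*y^2 + 2D*x + 2E*y + F = 0 to the filtered
        # sub-pixel edge points in the least-squares sense.
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([2 * x * y, y ** 2, 2 * x, 2 * y,
                             np.ones_like(x)])
        B, C, D, E, F = np.linalg.lstsq(A, -(x ** 2), rcond=None)[0]
        xc = (B * E - C * D) / (C - B ** 2)       # center of the conic
        yc = (B * D - E) / (C - B ** 2)
        r = np.mean(np.hypot(x - xc, y - yc))     # radius: mean distance
        return (xc, yc), r

For a perfect circle the fit returns B ≈ 0 and C ≈ 1, and the center formulas reduce to the usual circle center.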

Claims (2)

1. A circle center locating method for a circular target, characterized by the following steps:
Step 1: coarsely locate the center and radius of the circular target in the image, obtaining the coarse center (x_oc, y_oc) and the coarse radius R_c of the circular target, where (x_oc, y_oc) are the coordinates of the center in the image coordinate system,
Step 2: perform pixel-level edge location on the circular target in the image, with the following concrete steps:
Step 2.1: using the coarse center and coarse radius, extract from the image a square region called the "region of interest": the coarse center of the circular target is taken as the center point of the square, and the coarse diameter of the circular target plus 2 to 6 pixels is taken as the side length of the square,
Step 2.2: apply the Canny operator to the extracted region of interest to detect the pixel-level edge of the circular target, then perform boundary tracking on the pixel edge points extracted by the Canny operator to obtain the coordinates of every pixel edge point, and store the coordinates of the pixel edge points in clockwise order,
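A minimal sketch of steps 2.1 and 2.2, assuming OpenCV; the margin of 4 pixels and the Canny thresholds 50/150 are illustrative choices (the patent only fixes the 2-6 pixel range), and the clockwise boundary tracking is not shown:

    import cv2
    import numpy as np

    def pixel_level_edges(img, x_oc, y_oc, r_c, margin=4):
        # Step 2.1: square region of interest centered on the coarse
        # center, side = coarse diameter + 2..6 pixels.
        side = int(round(2 * r_c + margin))
        x0 = int(round(x_oc)) - side // 2
        y0 = int(round(y_oc)) - side // 2
        roi = img[y0:y0 + side, x0:x0 + side]
        # Step 2.2: Canny edge detection inside the region of interest
        edges = cv2.Canny(roi, 50, 150)
        ys, xs = np.nonzero(edges)
        # edge pixel coordinates mapped back to full-image coordinates
        return np.column_stack([xs + x0, ys + y0])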
Step 3: perform sub-pixel edge location on the circular target in the image, with the following concrete steps:
Step 3.1: divide the edge pixels of the circular target into regions as follows: take the coarse center of the circular target as the origin, the horizontal rightward direction as positive, and one pixel as the unit length to define the x axis; take the coarse center as the origin, the vertical upward direction as positive, and one pixel as the unit length to define the y axis. With the coarse center as the pivot, let θ be the counterclockwise angle of rotation from the positive x axis. According to θ, the edge pixels of the circular target are divided into two parts: the edge pixels corresponding to θ ∈ [0°, 45°] ∪ [135°, 225°] ∪ [315°, 360°], called the edge pixels of region 1, and the edge pixels corresponding to θ ∈ (45°, 135°) ∪ (225°, 315°), called the edge pixels of region 2. The line joining the coarse center to an edge pixel is the gradient direction of that edge pixel,
Step 3.2: find the neighbor points of each edge pixel along its gradient direction, where (x_p, y_p) are the coordinates, in pixel units, of the edge point P. If the edge pixel P(x_p, y_p) lies in region 1, take the intersection A_1(x_a1, y_a1) of its gradient-direction line with the line x = x_p + 1 as the first neighbor point to the right along the gradient direction; the intersection B_1(x_b1, y_b1) with x = x_p + 2 as the second neighbor point to the right; the intersection F_1(x_f1, y_f1) with x = x_p + 3 as the third neighbor point to the right; the intersection C_1(x_c1, y_c1) with x = x_p - 1 as the first neighbor point to the left; the intersection D_1(x_d1, y_d1) with x = x_p - 2 as the second neighbor point to the left; and the intersection E_1(x_e1, y_e1) with x = x_p - 3 as the third neighbor point to the left. If the edge pixel P(x_p, y_p) lies in region 2, take the intersection A_2(x_a2, y_a2) of its gradient-direction line with the line y = y_p + 1 as the first neighbor point above along the gradient direction; the intersection B_2(x_b2, y_b2) with y = y_p + 2 as the second neighbor point above; the intersection F_2(x_f2, y_f2) with y = y_p + 3 as the third neighbor point above; the intersection C_2(x_c2, y_c2) with y = y_p - 1 as the first neighbor point below; the intersection D_2(x_d2, y_d2) with y = y_p - 2 as the second neighbor point below; and the intersection E_2(x_e2, y_e2) with y = y_p - 3 as the third neighbor point below,
Step 3.3: use linear gray-level interpolation to obtain the gray values of the neighbor points of the edge pixel P(x_p, y_p) along the gradient direction, where (x_p, y_p) are the coordinates, in pixel units, of the edge pixel P, f(x, y) denotes the gray value of the pixel at (x, y), and [·] denotes the integer part. The linear gray-level interpolation used to obtain the neighbor-point gray values is as follows:
(1) If the edge pixel P(x_p, y_p) lies in region 1:

Gray value of neighbor point A_1(x_a1, y_a1):

$f(x_{a1}, y_{a1}) = (1-\lambda)\,f(x_{a1}, [y_{a1}]) + \lambda\,f(x_{a1}, [y_{a1}]+1), \quad \lambda = y_{a1} - [y_{a1}]$

Gray value of neighbor point B_1(x_b1, y_b1):

$f(x_{b1}, y_{b1}) = (1-\lambda)\,f(x_{b1}, [y_{b1}]) + \lambda\,f(x_{b1}, [y_{b1}]+1), \quad \lambda = y_{b1} - [y_{b1}]$

Gray value of neighbor point F_1(x_f1, y_f1):

$f(x_{f1}, y_{f1}) = (1-\lambda)\,f(x_{f1}, [y_{f1}]) + \lambda\,f(x_{f1}, [y_{f1}]+1), \quad \lambda = y_{f1} - [y_{f1}]$

Gray value of neighbor point C_1(x_c1, y_c1):

$f(x_{c1}, y_{c1}) = (1-\lambda)\,f(x_{c1}, [y_{c1}]) + \lambda\,f(x_{c1}, [y_{c1}]+1), \quad \lambda = y_{c1} - [y_{c1}]$

Gray value of neighbor point D_1(x_d1, y_d1):

$f(x_{d1}, y_{d1}) = (1-\lambda)\,f(x_{d1}, [y_{d1}]) + \lambda\,f(x_{d1}, [y_{d1}]+1), \quad \lambda = y_{d1} - [y_{d1}]$

Gray value of neighbor point E_1(x_e1, y_e1):

$f(x_{e1}, y_{e1}) = (1-\lambda)\,f(x_{e1}, [y_{e1}]) + \lambda\,f(x_{e1}, [y_{e1}]+1), \quad \lambda = y_{e1} - [y_{e1}]$
(2) If the edge pixel P(x_p, y_p) lies in region 2:

Gray value of neighbor point A_2(x_a2, y_a2):

$f(x_{a2}, y_{a2}) = (1-\lambda)\,f([x_{a2}], y_{a2}) + \lambda\,f([x_{a2}]+1, y_{a2}), \quad \lambda = x_{a2} - [x_{a2}]$

Gray value of neighbor point B_2(x_b2, y_b2):

$f(x_{b2}, y_{b2}) = (1-\lambda)\,f([x_{b2}], y_{b2}) + \lambda\,f([x_{b2}]+1, y_{b2}), \quad \lambda = x_{b2} - [x_{b2}]$

Gray value of neighbor point F_2(x_f2, y_f2):

$f(x_{f2}, y_{f2}) = (1-\lambda)\,f([x_{f2}], y_{f2}) + \lambda\,f([x_{f2}]+1, y_{f2}), \quad \lambda = x_{f2} - [x_{f2}]$

Gray value of neighbor point C_2(x_c2, y_c2):

$f(x_{c2}, y_{c2}) = (1-\lambda)\,f([x_{c2}], y_{c2}) + \lambda\,f([x_{c2}]+1, y_{c2}), \quad \lambda = x_{c2} - [x_{c2}]$

Gray value of neighbor point D_2(x_d2, y_d2):

$f(x_{d2}, y_{d2}) = (1-\lambda)\,f([x_{d2}], y_{d2}) + \lambda\,f([x_{d2}]+1, y_{d2}), \quad \lambda = x_{d2} - [x_{d2}]$

Gray value of neighbor point E_2(x_e2, y_e2):

$f(x_{e2}, y_{e2}) = (1-\lambda)\,f([x_{e2}], y_{e2}) + \lambda\,f([x_{e2}]+1, y_{e2}), \quad \lambda = x_{e2} - [x_{e2}]$
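Steps 3.2 and 3.3 can be sketched together in Python as below; sampling both regions with a single signed offset loop, and looking up the center point P directly, are our own arrangement:

    import numpy as np

    def neighbor_grays(img, xp, yp, k_grad, region):
        # Steps 3.2-3.3: sample the gradient-direction line through the
        # edge pixel P at offsets -3..3 (points E, D, C, P, A, B, F) and
        # interpolate the gray value linearly in one direction.
        # img: float gray image indexed img[row, col] = f(col, row).
        def interp(x, y):
            if region == 1:               # x is integer, interpolate in y
                j = int(np.floor(y)); lam = y - j
                return (1 - lam) * img[j, int(x)] + lam * img[j + 1, int(x)]
            else:                         # y is integer, interpolate in x
                i = int(np.floor(x)); lam = x - i
                return (1 - lam) * img[int(y), i] + lam * img[int(y), i + 1]
        grays = []
        for o in (-3, -2, -1, 0, 1, 2, 3):
            if region == 1:               # intersect with x = xp + o
                x, y = xp + o, yp + k_grad * o
            else:                         # intersect with y = yp + o
                x, y = xp + o / k_grad, yp + o
            grays.append(float(img[yp, xp]) if o == 0 else interp(x, y))
        return grays                      # [f_E, f_D, f_C, f_P, f_A, f_B, f_F]

In region 2 the slope k_grad satisfies |k_grad| ≥ 1 by construction, so the division is safe.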
Step 3.4: from the gray values of the neighbor points along the gradient direction obtained in step 3.3 and the gray value of the edge point P(x_p, y_p) itself, take the mean of the forward and backward differences of the gray values as the gray difference associated with the edge point and with each neighbor point along the gradient direction, where (x_p, y_p) are the coordinates, in pixel units, of the edge pixel P, and f(x, y) denotes the gray value of the pixel at (x, y). The gray differences are computed as follows:
(1) If the edge pixel P(x_p, y_p) lies in region 1:

Gray difference of neighbor point A_1(x_a1, y_a1):

$f_{a1} = \frac{|f(x_p, y_p) - f(x_{a1}, y_{a1})|}{2} + \frac{|f(x_{a1}, y_{a1}) - f(x_{b1}, y_{b1})|}{2}$

Gray difference of neighbor point B_1(x_b1, y_b1):

$f_{b1} = \frac{|f(x_{a1}, y_{a1}) - f(x_{b1}, y_{b1})|}{2} + \frac{|f(x_{b1}, y_{b1}) - f(x_{f1}, y_{f1})|}{2}$

Gray difference of edge point P(x_p, y_p):

$f_{p1} = \frac{|f(x_{c1}, y_{c1}) - f(x_p, y_p)|}{2} + \frac{|f(x_p, y_p) - f(x_{a1}, y_{a1})|}{2}$

Gray difference of neighbor point C_1(x_c1, y_c1):

$f_{c1} = \frac{|f(x_{d1}, y_{d1}) - f(x_{c1}, y_{c1})|}{2} + \frac{|f(x_{c1}, y_{c1}) - f(x_p, y_p)|}{2}$

Gray difference of neighbor point D_1(x_d1, y_d1):

$f_{d1} = \frac{|f(x_{e1}, y_{e1}) - f(x_{d1}, y_{d1})|}{2} + \frac{|f(x_{d1}, y_{d1}) - f(x_{c1}, y_{c1})|}{2}$
(2) If the edge pixel P(x_p, y_p) lies in region 2:

Gray difference of neighbor point A_2(x_a2, y_a2):

$f_{a2} = \frac{|f(x_p, y_p) - f(x_{a2}, y_{a2})|}{2} + \frac{|f(x_{a2}, y_{a2}) - f(x_{b2}, y_{b2})|}{2}$

Gray difference of neighbor point B_2(x_b2, y_b2):

$f_{b2} = \frac{|f(x_{a2}, y_{a2}) - f(x_{b2}, y_{b2})|}{2} + \frac{|f(x_{b2}, y_{b2}) - f(x_{f2}, y_{f2})|}{2}$

Gray difference of edge point P(x_p, y_p):

$f_{p2} = \frac{|f(x_{c2}, y_{c2}) - f(x_p, y_p)|}{2} + \frac{|f(x_p, y_p) - f(x_{a2}, y_{a2})|}{2}$

Gray difference of neighbor point C_2(x_c2, y_c2):

$f_{c2} = \frac{|f(x_{d2}, y_{d2}) - f(x_{c2}, y_{c2})|}{2} + \frac{|f(x_{c2}, y_{c2}) - f(x_p, y_p)|}{2}$

Gray difference of neighbor point D_2(x_d2, y_d2):

$f_{d2} = \frac{|f(x_{e2}, y_{e2}) - f(x_{d2}, y_{d2})|}{2} + \frac{|f(x_{d2}, y_{d2}) - f(x_{c2}, y_{c2})|}{2}$
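Given the seven gray values sampled along the gradient direction, the five gray differences reduce to one averaged central difference per interior point, as in this sketch (list layout as in the previous sketch):

    def gray_differences(grays):
        # Step 3.4: gray difference at each of D, C, P, A, B is the mean
        # of its backward and forward absolute gray-value differences.
        # grays: [f_E, f_D, f_C, f_P, f_A, f_B, f_F] from the sketch above.
        return [abs(grays[i] - grays[i - 1]) / 2.0
                + abs(grays[i + 1] - grays[i]) / 2.0
                for i in range(1, 6)]     # returns [f_d, f_c, f_p, f_a, f_b]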
Step 3.5: from the gray differences of the edge point and of its neighbor points along the gradient direction obtained in step 3.4, compute the distance δ between the sub-pixel edge point P' and the pixel-level edge point P as

δ = [formula published only as an image in the original document]

If any of the gray differences of the edge point and of its neighbor points along the gradient direction equals 0, set δ = 0, and record the number of occurrences of δ = 0 as Count_zero,
Step 3.6: compute the coordinates of the sub-pixel edge point P'. From the distance δ between the sub-pixel edge point P' and the pixel edge point P obtained in step 3.5 and the slope k_gradient of the gradient-direction line of P, the coordinate differences δ_x and δ_y between P and P' in the x and y directions are

$\delta_x = \frac{\delta}{\sqrt{k_{gradient}^2 + 1}}$

$\delta_y = \frac{k_{gradient}\,\delta}{\sqrt{k_{gradient}^2 + 1}}$

where the slope k_gradient of the gradient-direction line of the pixel edge point P is obtained from the coordinates (x_p, y_p) of the edge pixel and the coarse center coordinates (x_oc, y_oc):

$k_{gradient} = \frac{y_{oc} - y_p}{x_{oc} - x_p}$

Thus, for the pixel P(x_p, y_p), the corresponding sub-pixel edge point is P'(x_p + δ_x, y_p + δ_y),
Step 3.7: for every edge pixel P(x_p, y_p), apply steps 3.2 to 3.6 in turn to obtain the corresponding sub-pixel edge point P'(x_p + δ_x, y_p + δ_y), where (x_p + δ_x, y_p + δ_y) are the coordinates of the sub-pixel edge point P',
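A sketch of step 3.6, with k_gradient computed inline from the coarse center (the degenerate vertical-gradient case x_oc = x_p is only flagged in a comment, not handled):

    import numpy as np

    def subpixel_edge_point(xp, yp, delta, x_oc, y_oc):
        # Step 3.6: resolve the offset delta found along the gradient
        # direction into x and y components and add them to the pixel
        # edge point P(xp, yp).
        k = (y_oc - yp) / (x_oc - xp)     # k_gradient; x_oc == xp (a
                                          # vertical gradient) needs its
                                          # own branch in real code
        denom = np.sqrt(k ** 2 + 1.0)
        return xp + delta / denom, yp + k * delta / denom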
Step 4: filter the sub-pixel edge points: the "isolated points" and the noise appearing among the sub-pixel edge points are handled by curvature filtering and by mean filtering, respectively,
Step 4.1: remove the "isolated points" appearing among the sub-pixel edge points by curvature filtering, with the following concrete steps:
(1) compute the curvature of every sub-pixel edge point as follows: among the sequentially stored sub-pixel edge points, let $(x'_{p2}, y'_{p2})$ be the coordinates of a sub-pixel edge point $P'$, $(x'_{p1}, y'_{p1})$ the coordinates of the point immediately before $P'$, and $(x'_{p3}, y'_{p3})$ the coordinates of the point immediately after $P'$; the curvature of the sub-pixel edge point $P'$ is then

$k = \frac{1}{r} = \frac{1}{\sqrt{(x_0 - x'_{p2})^2 + (y_0 - y'_{p2})^2}}$

where $(x_0, y_0)$ is the center of the circle formed by $P'$, the point immediately before it, and the point immediately after it:

$x_0 = \frac{a - b + c}{d}, \qquad y_0 = \frac{e - f + g}{-d}$

$a = (x'_{p1} + x'_{p2})(x'_{p2} - x'_{p1})(y'_{p3} - y'_{p2})$

$b = (x'_{p2} + x'_{p3})(x'_{p3} - x'_{p2})(y'_{p2} - y'_{p1})$

$c = (y'_{p1} - y'_{p3})(y'_{p2} - y'_{p1})(y'_{p3} - y'_{p2})$

$d = 2[(x'_{p2} - x'_{p1})(y'_{p3} - y'_{p2}) - (x'_{p3} - x'_{p2})(y'_{p2} - y'_{p1})]$

$e = (y'_{p1} + y'_{p2})(y'_{p2} - y'_{p1})(x'_{p3} - x'_{p2})$

$f = (y'_{p2} + y'_{p3})(y'_{p3} - y'_{p2})(x'_{p2} - x'_{p1})$

$g = (x'_{p1} - x'_{p3})(x'_{p2} - x'_{p1})(x'_{p3} - x'_{p2})$
(2) filter all sub-pixel edge points according to their curvatures, with the following criterion:
first sort the curvatures of all sub-pixel edge points in descending order and take as the curvature threshold the 3·n-th element of the sorted sequence, where n is the number of isolated points and equals Count_zero, the number of occurrences of δ = 0 recorded during sub-pixel edge location in step 3.5; then split the curvatures with this threshold: if the curvature of a sub-pixel edge point P exceeds the threshold and also exceeds the curvatures of the points immediately before and after it, P is regarded as an isolated point and is filtered out,
Step 4.2: smooth all sub-pixel edge points remaining after the isolated points are filtered out with a mean filter: the new abscissa and ordinate of each sub-pixel edge point are the averages of the abscissas and ordinates of the two immediately adjacent points and of the point itself,
Step 5: fit a circle to the filtered sub-pixel edge points by least squares, finally obtaining the center and radius of the circular target.
2. The circle center locating method for a circular target according to claim 1, characterized in that the coarse positioning of the center and radius of the circular target in the image in step 1 is performed as follows:
Step 1.1: denoise and threshold-segment the image to obtain a binary image in which the gray value of every pixel is either 255 or 0,
Step 1.2: perform boundary extraction and boundary tracking on the circular target in the binary image,
Step 1.3: fit a circle to the tracked boundary points with the contour centroid method, obtaining the coarse center and coarse radius of the circular target, and store the coarse center coordinates (x_oc, y_oc) and the coarse radius R_c.
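For illustration, claim 2 might be realized with OpenCV roughly as follows; Gaussian blur, Otsu thresholding, and the mean-distance radius estimate are our assumptions, since the claim does not fix them:

    import cv2
    import numpy as np

    def coarse_locate(img):
        # Steps 1.1-1.3: denoise, threshold to a 0/255 binary image,
        # extract the target boundary, and take the contour centroid as
        # the coarse center (x_oc, y_oc) with coarse radius R_c.
        blur = cv2.GaussianBlur(img, (5, 5), 0)
        _, binary = cv2.threshold(blur, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # OpenCV 4 return convention: (contours, hierarchy)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)
        x_oc, y_oc = boundary.mean(axis=0)        # contour centroid
        r_c = float(np.mean(np.hypot(boundary[:, 0] - x_oc,
                                     boundary[:, 1] - y_oc)))
        return (float(x_oc), float(y_oc)), r_c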
CN2008100227931A 2008-07-22 2008-07-22 Circular target circular center positioning method Expired - Fee Related CN101334263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008100227931A CN101334263B (en) 2008-07-22 2008-07-22 Circular target circular center positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008100227931A CN101334263B (en) 2008-07-22 2008-07-22 Circular target circular center positioning method

Publications (2)

Publication Number Publication Date
CN101334263A true CN101334263A (en) 2008-12-31
CN101334263B CN101334263B (en) 2010-09-15

Family

ID=40197024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100227931A Expired - Fee Related CN101334263B (en) 2008-07-22 2008-07-22 Circular target circular center positioning method

Country Status (1)

Country Link
CN (1) CN101334263B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102072707B (en) * 2011-01-12 2012-05-30 河南理工大学 Method for quickly detecting center and radius of circle in digital image
CN109344785B (en) * 2018-10-12 2021-10-01 北京航空航天大学 High-precision planet center positioning method in deep space autonomous optical navigation

Cited By (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101902548A (en) * 2009-05-27 2010-12-01 索尼公司 Image processing equipment, imaging device and image processing method
CN101604380B (en) * 2009-07-21 2011-07-20 上海理工大学 Method for identifying human head by diameter searching
CN102213591A (en) * 2010-04-01 2011-10-12 安鹏科技股份有限公司 Digital image analysis device
CN102213591B (en) * 2010-04-01 2013-10-23 安鹏科技股份有限公司 Digital image analysis device
CN102135416B (en) * 2010-12-30 2012-10-03 天津普达软件技术有限公司 Online image detecting system and method for bottle covers
CN102135416A (en) * 2010-12-30 2011-07-27 天津普达软件技术有限公司 Online image detecting system and method for bottle covers
CN102721364A (en) * 2011-03-30 2012-10-10 比亚迪股份有限公司 Positioning method and positioning device for workpiece
CN102721364B (en) * 2011-03-30 2015-12-02 比亚迪股份有限公司 A kind of localization method of workpiece and device thereof
CN102252661A (en) * 2011-04-02 2011-11-23 华南理工大学 Globoid three-dimensional positioning method for machine vision
CN102252661B (en) * 2011-04-02 2013-04-10 华南理工大学 Globoid three-dimensional positioning method for machine vision
CN102496003A (en) * 2011-11-21 2012-06-13 中国科学院自动化研究所 Identification block and target locating method based on identification block
CN102590052A (en) * 2012-02-28 2012-07-18 清华大学 Method for measuring particulate size of foreign matters in liquid
CN102637300A (en) * 2012-04-26 2012-08-15 重庆大学 Improved Zernike moment edge detection method
CN102637300B (en) * 2012-04-26 2014-08-06 重庆大学 Improved Zernike moment edge detection method
CN102706280A (en) * 2012-06-21 2012-10-03 中国地质大学(武汉) Automatic centering method based on laser measurement
CN102706280B (en) * 2012-06-21 2014-05-14 中国地质大学(武汉) Automatic centering method based on laser measurement
CN102855608B (en) * 2012-07-18 2015-01-28 奇瑞汽车股份有限公司 Method and device for weakening image
CN102855608A (en) * 2012-07-18 2013-01-02 奇瑞汽车股份有限公司 Method and device for weakening image
CN103235939A (en) * 2013-05-08 2013-08-07 哈尔滨工业大学 Datum point positioning method based on machine vision
CN103245309A (en) * 2013-05-21 2013-08-14 杭州鼎热科技有限公司 Method for compensating laser flatness measurement error
CN103512494A (en) * 2013-07-16 2014-01-15 宁波职业技术学院 Visual inspection system and method for scale micro changes of plant fruits
CN103512494B (en) * 2013-07-16 2017-02-08 宁波职业技术学院 Visual inspection system and method for scale micro changes of plant fruits
CN103632152A (en) * 2013-12-04 2014-03-12 天津普达软件技术有限公司 Method for detecting forks for drummed instant noodle
CN104778677A (en) * 2014-01-13 2015-07-15 联想(北京)有限公司 Positioning method, device and equipment
CN103729655B (en) * 2014-01-22 2017-03-01 哈尔滨工业大学 A kind of detection method for slice component vision localization
CN104408465A (en) * 2014-11-01 2015-03-11 西南大学 Canny-matrix-pulse-edge-mode-based silkworm chrysalis male-female identification method
CN104359403A (en) * 2014-11-21 2015-02-18 天津工业大学 Plane part size measurement method based on sub-pixel edge algorithm
CN104359403B (en) * 2014-11-21 2017-03-29 天津工业大学 Planar part dimension measurement method based on sub-pixel edge algorithm
CN105005985A (en) * 2015-06-19 2015-10-28 沈阳工业大学 Backlight image micron-order edge detection method
CN105005985B (en) * 2015-06-19 2017-10-31 沈阳工业大学 Backlight image micron order edge detection method
CN105457908A (en) * 2015-11-12 2016-04-06 孙高磊 Sorting and quick locating method and system for small-size glass panels on basis of monocular CCD
CN105457908B (en) * 2015-11-12 2018-04-13 孙高磊 The sorting method for rapidly positioning and system of small size glass panel based on monocular CCD
WO2017118285A1 (en) * 2016-01-05 2017-07-13 北京度量科技有限公司 Method for rapidly extracting central point of circular image
CN105635583A (en) * 2016-01-27 2016-06-01 宇龙计算机通信科技(深圳)有限公司 Shooting method and device
CN105956536A (en) * 2016-04-26 2016-09-21 北京森科赛德科技有限公司 Pretreatment method and device for iris recognition
CN106651959A (en) * 2016-11-15 2017-05-10 东南大学 Optical field camera micro-lens array geometric parameter calibration method
CN106651959B (en) * 2016-11-15 2019-05-31 东南大学 A kind of scaling method of light-field camera microlens array geometric parameter
CN106878701A (en) * 2016-12-31 2017-06-20 歌尔科技有限公司 The detection method and device of a kind of TVLine
CN106815585A (en) * 2017-01-20 2017-06-09 浙江大学 A kind of high-precision vision positioning method of complex dynamic environment hole characteristic
CN106815585B (en) * 2017-01-20 2020-01-10 浙江大学 High-precision visual positioning method for hole characteristics of complex dynamic environment
CN107301636A (en) * 2017-05-17 2017-10-27 华南理工大学 A kind of high density circuit board circular hole sub-pixel detection method based on Gauss curve fitting
WO2018209941A1 (en) * 2017-05-17 2018-11-22 华南理工大学 High-density circuit board circular hole sub-pixel detection method based on gaussian fitting
CN107577979A (en) * 2017-07-26 2018-01-12 中科创达软件股份有限公司 DataMatrix type Quick Response Codes method for quickly identifying, device and electronic equipment
CN107577979B (en) * 2017-07-26 2020-07-03 中科创达软件股份有限公司 Method and device for quickly identifying DataMatrix type two-dimensional code and electronic equipment
CN107516325A (en) * 2017-08-22 2017-12-26 上海理工大学 Center of circle detection method based on sub-pixel edge
CN107678551B (en) * 2017-10-19 2021-12-28 京东方科技集团股份有限公司 Gesture recognition method and device and electronic equipment
CN107678551A (en) * 2017-10-19 2018-02-09 京东方科技集团股份有限公司 Gesture identification method and device, electronic equipment
US11402918B2 (en) 2017-10-19 2022-08-02 Boe Technology Group Co., Ltd. Method for controlling terminal apparatus, apparatus for controlling terminal apparatus, and computer-program product
CN107845098A (en) * 2017-11-14 2018-03-27 南京理工大学 Liver cancer image full-automatic partition method based on random forest and fuzzy clustering
CN108226915A (en) * 2017-12-25 2018-06-29 中国人民解放军63921部队 A kind of quantitatively characterizing space multiple target spatial distribution method
CN108226915B (en) * 2017-12-25 2021-07-30 中国人民解放军63921部队 Quantitative representation space multi-target spatial distribution method
CN109084675A (en) * 2018-06-04 2018-12-25 哈尔滨工业大学 Center of circle positioning device and method based on Embedded geometrical characteristic in conjunction with Zernike square
CN108921865A (en) * 2018-06-27 2018-11-30 南京大学 A kind of jamproof sub-pix line fitting method
CN108921865B (en) * 2018-06-27 2022-03-18 南京大学 Anti-interference sub-pixel straight line fitting method
CN109035230A (en) * 2018-07-19 2018-12-18 中导光电设备股份有限公司 A kind of Circularhole diameter vision measuring method
CN109035230B (en) * 2018-07-19 2021-11-09 中导光电设备股份有限公司 Round hole diameter visual measurement method
CN109011654A (en) * 2018-09-05 2018-12-18 浙江大丰实业股份有限公司 Peoperty walking identification mechanism
CN109011654B (en) * 2018-09-05 2020-05-29 浙江大丰实业股份有限公司 Stage property walking identification mechanism
CN109410268B (en) * 2018-11-06 2020-06-23 温州高视雷蒙光电科技有限公司 Method and system for determining circle center of concentric ring of corneal topography
CN109410268A (en) * 2018-11-06 2019-03-01 温州雷蒙光电科技有限公司 A kind of determination method and system in the concentric loop center of circle of corneal topography
CN109990936A (en) * 2019-03-12 2019-07-09 高新兴创联科技有限公司 High speed railway track stress automated watch-keeping facility and method
CN109900711A (en) * 2019-04-02 2019-06-18 天津工业大学 Workpiece, defect detection method based on machine vision
CN110223339A (en) * 2019-05-27 2019-09-10 盐城工学院 One kind being based on machine vision thermal protector calibration point center positioning method
CN110223339B (en) * 2019-05-27 2021-07-16 盐城工学院 Thermal protector calibration point center positioning method based on machine vision
CN110349199A (en) * 2019-06-25 2019-10-18 杭州汇萃智能科技有限公司 A kind of object roundness measurement method
CN110349199B (en) * 2019-06-25 2021-07-30 杭州汇萃智能科技有限公司 Object roundness measuring method
CN110335322A (en) * 2019-07-09 2019-10-15 成都理工大学 Roads recognition method and road Identification device based on image
CN110335322B (en) * 2019-07-09 2024-03-01 成都理工大学 Road recognition method and road recognition device based on image
CN110533682B (en) * 2019-08-30 2023-02-14 福建省德腾智能科技有限公司 Image edge real-time extraction method based on curvature filtering
CN110533682A (en) * 2019-08-30 2019-12-03 福建省德腾智能科技有限公司 A kind of image border real time extracting method based on curvature filtering
CN110930423A (en) * 2019-11-26 2020-03-27 广州敏视数码科技有限公司 Object edge feature recognition and extraction method
CN111539972B (en) * 2020-04-24 2023-04-18 大连理工大学 Method for segmenting cerebellar lumbricus in ultrasonic image
CN111539972A (en) * 2020-04-24 2020-08-14 大连理工大学 Method for segmenting earthworm part in ultrasound image
CN112066874A (en) * 2020-08-14 2020-12-11 苏州环球科技股份有限公司 Multi-position 3D scanning online detection method
CN112116667B (en) * 2020-09-22 2023-11-24 扬州大学 Method for measuring diameter of machined hole on surface of engine
CN112116667A (en) * 2020-09-22 2020-12-22 扬州大学 Engine surface machining hole diameter measurement algorithm
CN112478779A (en) * 2020-11-27 2021-03-12 北京石油化工学院 Base plate visual positioning method and system and base plate carrying joint robot device
CN112478779B (en) * 2020-11-27 2022-07-12 北京石油化工学院 Base plate visual positioning method and system and base plate carrying joint robot device
CN113192120A (en) * 2021-04-25 2021-07-30 无锡信捷电气股份有限公司 Circle positioning algorithm based on two-dimensional edge measurement and least square principle
CN113284154A (en) * 2021-05-25 2021-08-20 武汉钢铁有限公司 Steel coil end face image segmentation method and device and electronic equipment
CN113284154B (en) * 2021-05-25 2022-04-26 武汉钢铁有限公司 Steel coil end face image segmentation method and device and electronic equipment
CN113470102A (en) * 2021-06-23 2021-10-01 依未科技(北京)有限公司 Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision
CN113470102B (en) * 2021-06-23 2024-06-11 依未科技(北京)有限公司 Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision
CN113487589A (en) * 2021-07-22 2021-10-08 上海嘉奥信息科技发展有限公司 Sub-pixel circle center detection method and system
CN113487589B (en) * 2021-07-22 2024-04-19 上海嘉奥信息科技发展有限公司 Sub-pixel circle center detection method and system
CN113406093A (en) * 2021-08-19 2021-09-17 苏州维嘉科技股份有限公司 Optical detection equipment and method and device for measuring object attribute thereof
CN113406093B (en) * 2021-08-19 2021-11-30 苏州维嘉科技股份有限公司 Optical detection equipment and method and device for measuring object attribute thereof
CN113610881A (en) * 2021-08-25 2021-11-05 浙江大华技术股份有限公司 Target object determination method and device, storage medium and electronic device
CN113610881B (en) * 2021-08-25 2024-03-01 浙江华感科技有限公司 Target object determination method and device, storage medium and electronic device
CN113720280A (en) * 2021-09-03 2021-11-30 北京机电研究所有限公司 Bar center positioning method based on machine vision
CN113470056B (en) * 2021-09-06 2021-11-16 成都新西旺自动化科技有限公司 Sub-pixel edge point detection method based on Gaussian model convolution
CN113470056A (en) * 2021-09-06 2021-10-01 成都新西旺自动化科技有限公司 Sub-pixel edge point detection method based on Gaussian model convolution
CN114923417B (en) * 2022-07-22 2022-10-14 沈阳和研科技有限公司 Method and system for positioning multiple circular workpieces for dicing saw
CN114923417A (en) * 2022-07-22 2022-08-19 沈阳和研科技有限公司 Method and system for positioning multiple circular workpieces for dicing saw
CN115082552B (en) * 2022-07-25 2022-12-27 荣耀终端有限公司 Marking hole positioning method and device, assembly equipment and storage medium
CN115082552A (en) * 2022-07-25 2022-09-20 荣耀终端有限公司 Marking hole positioning method and device, assembly equipment and storage medium
CN116168025A (en) * 2023-04-24 2023-05-26 日照金果粮油有限公司 Oil curtain type fried peanut production system
CN117291972A (en) * 2023-11-23 2023-12-26 湖南科天健光电技术有限公司 Sub-pixel positioning method and device for circular mark, electronic equipment and medium
CN117291972B (en) * 2023-11-23 2024-02-13 湖南科天健光电技术有限公司 Sub-pixel positioning method and device for circular mark, electronic equipment and medium

Also Published As

Publication number Publication date
CN101334263B (en) 2010-09-15

Similar Documents

Publication Publication Date Title
CN101334263B (en) Circular target circular center positioning method
CN103400151B (en) The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method
CN104751187A (en) Automatic meter-reading image recognition method
Timofte et al. Multi-view traffic sign detection, recognition, and 3D localisation
CN107330376A (en) A kind of Lane detection method and system
CN103077529B (en) Based on the plant leaf blade characteristic analysis system of image scanning
US8073233B2 (en) Image processor, microscope system, and area specifying program
CN107092871B (en) Remote sensing image building detection method based on multiple dimensioned multiple features fusion
CN108764229A (en) A kind of water gauge automatic distinguishing method for image based on computer vision technique
CN106558072A (en) A kind of method based on SIFT feature registration on remote sensing images is improved
CN102693423A (en) Method for precise positioning of license plate in strong light conditions
CN114627052A (en) Infrared image air leakage and liquid leakage detection method and system based on deep learning
CN104835175A (en) Visual attention mechanism-based method for detecting target in nuclear environment
CN110491132A (en) Vehicle based on video frame picture analyzing, which is disobeyed, stops detection method and device
CN102110227A (en) Compound method for classifying multiresolution remote sensing images based on context
CN111382658B (en) Road traffic sign detection method in natural environment based on image gray gradient consistency
CN112734729B (en) Water gauge water level line image detection method and device suitable for night light supplement condition and storage medium
CN106815583A (en) A kind of vehicle at night license plate locating method being combined based on MSER and SWT
CN107480585A (en) Object detection method based on DPM algorithms
Galsgaard et al. Circular hough transform and local circularity measure for weight estimation of a graph-cut based wood stack measurement
CN104851089A (en) Static scene foreground segmentation method and device based on three-dimensional light field
Yuan et al. Road segmentation in aerial images by exploiting road vector data
CN115841669A (en) Pointer instrument detection and reading identification method based on deep learning technology
CN112396580B (en) Method for detecting defects of round part
CN101430789A (en) Image edge detection method based on Fast Slant Stack transformation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SOWTHEAST UNIV.

Effective date: 20131018

Owner name: NANTONG OUTE CONSTRUCTION MATERIALS EQUIPMENT CO.,

Free format text: FORMER OWNER: SOWTHEAST UNIV.

Effective date: 20131018

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 210096 NANJING, JIANGSU PROVINCE TO: 226600 NANTONG, JIANGSU PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20131018

Address after: 226600 Jiangsu city of Nantong province Haian County baiding town into the Industrial Park

Patentee after: NANTONG OUTPACE BUILDING MATERIAL EQUIPMENT CO.,LTD.

Patentee after: SOUTHEAST University

Address before: 210096 Jiangsu city Nanjing Province four pailou No. 2

Patentee before: Southeast University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100915

Termination date: 20210722

CF01 Termination of patent right due to non-payment of annual fee