CN101465002A - Method for orientating secondary pixel edge of oval-shaped target - Google Patents


Info

Publication number
CN101465002A
CN101465002A · CN200910028023A
Authority
CN
China
Prior art keywords
target
edge
ellipse
pixel
sub
Prior art date
Legal status
Granted
Application number
CNA2009100280232A
Other languages
Chinese (zh)
Other versions
CN101465002B (en)
Inventor
Da Feipeng (达飞鹏)
Zhang Hu (张虎)
Current Assignee
Haian Shenling Electrical Appliance Manufacturing Co., Ltd.
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN2009100280232A priority Critical patent/CN101465002B/en
Publication of CN101465002A publication Critical patent/CN101465002A/en
Application granted granted Critical
Publication of CN101465002B publication Critical patent/CN101465002B/en
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

A sub-pixel edge localization method for elliptical targets, mainly relating to image processing and machine vision tasks such as accurate parameter computation for elliptical targets and camera calibration and matching. The method comprises three main steps: first, denoise the image, detect edges with the Sobel operator, and extract the edge points of the elliptical target; second, compute the geometric parameters of the elliptical target; third, locate the sub-pixel edge. The third step divides into four parts: computing the target and background gray levels of the edge model, computing the edge angle, computing the distance between each edge point and the true edge point, and computing the exact positions of the sub-pixel edge points. The method combines the geometric parameters of the elliptical target, its gray-level distribution characteristics, and a two-dimensional edge model. It not only effectively improves the accuracy and robustness of edge localization but also greatly reduces the amount of computation, improving speed.

Description

Sub-pixel edge localization method for elliptical targets
Technical field
The present invention relates to a sub-pixel edge localization method for elliptical targets, and in particular to a moment-based sub-pixel edge localization method for elliptical targets.
Background technology
Edges are the most basic feature of an image, and locating edges quickly and accurately plays an important role in image processing and computer vision. The circle is a common figure; under perspective projection it maps to an ellipse, so accurately obtaining the edge of an elliptical target is an important problem in present-day optical measurement. Edge detection has reached sub-pixel accuracy, and common sub-pixel edge localization methods fall into three classes: fitting methods, interpolation methods, and moment-based methods. "High-accuracy edge detection with Blurred Edge Model" (Ye J, Fu G K, Poudel U P. Image and Vision Computing, 2005, 23(5): 453–467) is a fitting method: the image gray values are fitted to an edge model so as to minimize the edge-point error, yielding an edge of sub-pixel precision; its localization accuracy is high, but it is rather time-consuming. "Non-linear fourth-order image interpolation for subpixel edge detection and localization" (Hermosilla T, Bermejo E, Balaguer A, Ruiz L A. Image and Vision Computing, 2008, 26(9): 1240–1248) is an interpolation method: the relevant gray values are interpolated according to the actual gray-level distribution to obtain the sub-pixel edge; its running time is short, but its noise immunity is weak. Because moments are based on integral operations, good noise immunity is the outstanding advantage of all moment-based methods. "Edge location to subpixel values in digital imagery" (Tabatabai A J, Mitchell O R. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1984, 6(2): 188–201) first proposed using moments for sub-pixel edge localization; its essence is solving for the four parameters of the edge model. "Subpixel Measurements Using a Moment-Based Edge Operator" (Lyvers E P, Mitchell O R, Akey M L, Reeves A P. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1989, 11(12): 1293–1309) proposed using gray spatial moments for sub-pixel edge localization, but this method needs six moment templates to convolve with the image gray values when solving for the exact sub-pixel position, so the algorithm is time-consuming. "A fast edge detector with subpixel accuracy" (Lee C K, So W C. In: Proceedings of IEEE Record of the 1992 International Power Electronics and Motion Control Conference on Industrial Electronics, Control, Instrumentation and Automation. IEEE, 1992. 710–715) obtains the background and target gray levels of the edge model separately from the double-peak property of the histogram, so only three moment templates are needed to obtain the two remaining edge-model parameters. Owing to the orthogonality and rotation invariance of Zernike moments, "Orthogonal moment operators for subpixel edge detection" (Ghosal S, Mehrotra R. Pattern Recognition, 1993, 26(2): 295–306) proposed a sub-pixel edge localization algorithm based on orthogonal Zernike moments; it needs only three moment templates, and its computational efficiency is higher than that of spatial moments.
In recent years many researchers have improved moment-based sub-pixel edge localization. "Subpixel edge location based on orthogonal Fourier-Mellin moments" (Bin T J, Lei A, Cui J W, Kang W J, Liu D D. Image and Vision Computing, 2008, 26(4): 563–569) proposed a sub-pixel edge localization method based on orthogonal Fourier-Mellin moments; because these moments describe small objects particularly well, the paper reports a localization accuracy higher than that of orthogonal Zernike moments. "An improved Zernike orthogonal moment sub-pixel edge detection algorithm" (Li Jinquan, Wang Jianwei, Chen Shanben, Wu Lin. Optical Technique, 2003, 29(4): 500–504) and "An improved gray moment sub-pixel edge detection algorithm" (Luo Jun, Hou Yan, Fu Li. Journal of Chongqing University, 2008, 31(5): 549–586) considered the dilating effect of the template and proposed more practical threshold selection. "A fast subpixel edge detection method using Sobel-Zernike moments operator" (Qu Y D, Cui C S, Chen S B, Li J Q. Image and Vision Computing, 2005, 23(1): 11–17) and "A novel fast subpixel edge location method based on Sobel-OFMM" (Hu Z F, Dang H S, Li X R. In: Proceedings of the IEEE International Conference on Automation and Logistics. Qingdao, China: IEEE, 2008. 828–832) perform the moment computation only at the edge points obtained by coarse Sobel localization, greatly reducing the number of pixels taking part in the mask convolution and improving the speed of the algorithm. "Improvement of the image sub-pixel edge detection algorithm based on Zernike orthogonal moments" (Gao Shiyi, Zhao Mingyang, Zhang Lei, Zou Yuanyuan. Acta Automatica Sinica, 2008, 34(9): 1163–1168) expanded the dimension and number of the Zernike moment templates and obtained higher localization accuracy, but both the complexity and the running time of the algorithm increase. All of the above moment-based methods convolve the image with several templates, so they remain time-consuming.
Summary of the invention
In view of the shortcomings and limitations of the prior art, the object of the present invention is to provide a sub-pixel edge localization method for elliptical targets that can improve the accuracy of a measuring system.
The present invention provides a sub-pixel edge localization method for elliptical targets. The method first detects image edges with the Sobel edge detection operator; after extracting the elliptical target edge, the elliptical edge points are fitted by least squares to obtain the geometric parameters of the elliptical target; the edge-model parameters are then obtained from the geometric parameters and the principle of image moments, from which the exact sub-pixel edge positions follow. The present invention adopts the following technical scheme:
A sub-pixel edge localization method for an elliptical target, whose main steps are:
Step 1: Denoise the image and detect edges with the Sobel operator, then extract the edge points of the elliptical target; record the total number of edge points count_pixel and store the pixel edge point coordinates, denoted (x_p, y_p), p = 1, 2, ..., count_pixel;
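As an illustration of step 1, here is a minimal sketch of Sobel-based integer-pixel edge extraction. The function name and threshold handling are illustrative, not from the patent; a real implementation would add denoising and restrict the detected points to the elliptical target.

```python
import numpy as np

def sobel_edge_points(img, thresh):
    """Step 1 sketch: coarse integer-pixel edge detection with 3x3 Sobel
    kernels; returns (x_p, y_p) pairs whose gradient magnitude exceeds thresh."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mag[i, j] = np.hypot((kx * patch).sum(), (ky * patch).sum())
    ys, xs = np.nonzero(mag > thresh)
    return list(zip(xs.tolist(), ys.tolist()))
```

Non-maximum suppression, which a practical detector would add to thin the edges, is omitted for brevity.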
Step 2: Using the edge points of the elliptical target extracted in step 1, solve for the coefficients A_e, B_e, C_e, D_e, E_e of the general ellipse equation x² + A_e xy + B_e y² + C_e x + D_e y + E_e = 0, and then obtain the ellipse centre (x_co, y_co):

$$x_{co} = \frac{2B_eC_e - A_eD_e}{A_e^2 - 4B_e}, \qquad y_{co} = \frac{2D_e - A_eC_e}{A_e^2 - 4B_e}$$
The major-axis length of the ellipse is:

$$\text{longaxis} = \sqrt{\frac{2\left(A_eC_eD_e - B_eC_e^2 - D_e^2 + 4B_eE_e - A_e^2E_e\right)}{\left(A_e^2 - 4B_e\right)\left(B_e - \sqrt{A_e^2 + (1 - B_e)^2} + 1\right)}}$$

The minor-axis length of the ellipse is:

$$\text{shortaxis} = \sqrt{\frac{2\left(A_eC_eD_e - B_eC_e^2 - D_e^2 + 4B_eE_e - A_e^2E_e\right)}{\left(A_e^2 - 4B_e\right)\left(B_e + \sqrt{A_e^2 + (1 - B_e)^2} + 1\right)}}$$
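A sketch of the step 2 parameter computation from the fitted coefficients. The function name is mine, and the axis-length formulas assume the square root of the quotient given in the text; this reading reproduces the worked numbers of Embodiment 2 to within rounding.

```python
import math

def ellipse_params(Ae, Be, Ce, De, Ee):
    """Centre and axis lengths from x^2 + Ae*x*y + Be*y^2 + Ce*x + De*y + Ee = 0."""
    den = Ae ** 2 - 4.0 * Be
    x_co = (2.0 * Be * Ce - Ae * De) / den
    y_co = (2.0 * De - Ae * Ce) / den
    num = 2.0 * (Ae * Ce * De - Be * Ce ** 2 - De ** 2
                 + 4.0 * Be * Ee - Ae ** 2 * Ee)
    root = math.sqrt(Ae ** 2 + (1.0 - Be) ** 2)
    longaxis = math.sqrt(num / (den * (Be - root + 1.0)))
    shortaxis = math.sqrt(num / (den * (Be + root + 1.0)))
    return x_co, y_co, longaxis, shortaxis
```

A quick consistency check: for the circle x² + y² − 2x − 2y + 1 = 0 (centre (1, 1), radius 1) the formulas return centre (1, 1) and both axis lengths equal to 1.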
Step 3: Perform sub-pixel edge localization on the elliptical target, as follows:
Step 3.1: Obtain the target gray level h_1 and background gray level h_2 of the elliptical target, as follows:
Step 3.1.1: Around the target edge choose two squares, both centred on the preliminarily obtained centre (x_co, y_co): the inner square is contained by the elliptical target and the outer square contains the elliptical target. The inner side length is twice the difference between the minor-axis length of the target and an allowance of 2–3 pixels; the outer side length is twice the sum of the major-axis length of the target and an allowance of 2–3 pixels;
Step 3.1.2: Record the numbers of image pixels on the four sides of the inner and outer squares, denoted count_inside and count_outside respectively, and store their gray values in the arrays ain_i, i = 1, 2, ..., count_inside, and aout_j, j = 1, 2, ..., count_outside;
Step 3.1.3: Sort ain_i and aout_j by value. Take the mean of the middle N_inside elements of ain_i as the target gray level h_1, and the mean of the middle N_outside elements of aout_j as h_2, where N_inside is 80% × count_inside rounded to an integer and N_outside is 80% × count_outside rounded to an integer;
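The trimmed mean of step 3.1.3 can be sketched as follows; the function name and the `keep` parameter are mine, with `keep=0.8` matching the 80% middle portion used in the text.

```python
import numpy as np

def trimmed_mean_gray(values, keep=0.8):
    """Middle trimmed mean used for h1/h2 in step 3.1.3: sort the boundary
    gray values, keep the central keep-fraction, and average it."""
    v = np.sort(np.asarray(values, float))
    n = int(round(keep * len(v)))
    start = (len(v) - n) // 2
    return float(v[start:start + n].mean())
```

Discarding the extreme 20% of the sorted gray values makes the estimate robust to isolated noisy pixels on the square boundary.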
Step 3.1.4: For a pixel edge point (x_p, y_p) of the elliptical target, if x_p > x_co the point lies in the right half-plane of the target; if x_p ≤ x_co it lies in the left half-plane. If the gray level of the elliptical target is greater than the background gray level, swap the background and target gray levels for the left half-plane, leaving the right half-plane unchanged; if the target gray level is smaller than the background gray level, swap them for the right half-plane, leaving the left half-plane unchanged;
Step 3.2: Compute the edge angle θ at the pixel edge point (x_p, y_p):

$$\theta = \arctan\left(\frac{A_e x_p + 2B_e y_p + D_e}{2x_p + A_e y_p + C_e}\right)$$

where A_e, B_e, C_e, D_e, E_e are the coefficients of the general ellipse equation obtained in step 2;
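The step 3.2 formula transcribes directly (the function name is illustrative; E_e does not appear in the gradient, and the denominator is assumed non-zero — a vertical normal would need special-casing):

```python
import math

def edge_angle(xp, yp, Ae, Be, Ce, De):
    """Step 3.2: edge-normal angle theta at the pixel edge point (xp, yp)."""
    return math.atan((Ae * xp + 2.0 * Be * yp + De) /
                     (2.0 * xp + Ae * yp + Ce))
```

For the circle x² + y² − 25 = 0 at the point (3, 4), this gives arctan(8/6) = arctan(4/3), the angle of the outward normal (3, 4), as expected.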
Step 3.3: Obtain the distance l between the pixel edge point and the true edge point, as follows:
Step 3.3.1: Choose the template dimension N_c (3, 5 or 7) according to the accuracy and speed required by the actual system, and generate the zeroth-order moment template of the corresponding dimension;
Step 3.3.2: Convolve the gray values of the N_c × N_c neighbourhood centred on the pixel edge point with the zeroth-order moment template chosen in step 3.3.1 to obtain the zeroth-order moment value M_00;
Step 3.3.3: Compute the area S_2 of the background region in the edge model:

$$S_2 = \frac{M_{00} - \pi h_1}{h_2 - h_1}$$

where M_00 is the zeroth-order moment value from step 3.3.2 and h_1, h_2 are the target and background gray levels of the elliptical target from step 3.1;
Step 3.3.4: Solve the following equation for the variable β with a look-up table:

$$S_2 = \beta - \tfrac{1}{2}\sin(2\beta)$$

where S_2 is the background-region area of the edge model from step 3.3.3;
Step 3.3.5: The distance between the true edge and the pixel edge point is:

$$l = \cos\beta$$
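Steps 3.3.3–3.3.5 can be sketched as follows. The patent solves S_2 = β − sin(2β)/2 with a look-up table for speed; bisection is used here instead as an equivalent, since the left-hand side increases monotonically in β on [0, π] (its derivative 1 − cos 2β is non-negative). Function names are mine.

```python
import math

def background_area(M00, h1, h2):
    """Step 3.3.3: S2 from M00 = S1*h1 + S2*h2 together with S1 + S2 = pi."""
    return (M00 - math.pi * h1) / (h2 - h1)

def beta_from_S2(S2):
    """Step 3.3.4: solve S2 = beta - sin(2*beta)/2 for beta in [0, pi]
    by bisection (the patent uses a look-up table here for speed)."""
    lo, hi = 0.0, math.pi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid - 0.5 * math.sin(2.0 * mid) < S2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def edge_distance(S2):
    """Step 3.3.5: normalised distance l = cos(beta)."""
    return math.cos(beta_from_S2(S2))
```

Sanity checks: S_2 = 0 (no background inside the unit circle) gives β = 0 and l = 1; S_2 = π/2 gives β = π/2 and l = 0, the edge passing through the centre; S_2 = π gives l = −1.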
Step 3.4: For the pixel edge point (x_p, y_p), the sub-pixel coordinates (x_sub, y_sub) of the corresponding true image edge are:

$$x_{sub} = x_p + \frac{N_c}{2}\, l\cos\theta, \qquad y_{sub} = y_p + \frac{N_c}{2}\, l\sin\theta$$

where N_c is the chosen template dimension, l is the distance between the true edge and the pixel edge point from step 3.3.5, and θ is the edge angle from step 3.2;
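The step 3.4 correction is a one-liner (illustrative function name):

```python
import math

def subpixel_point(xp, yp, Nc, l, theta):
    """Step 3.4: shift the integer edge point along the edge normal,
    scaled by Nc/2 to undo the template's dilating effect."""
    return (xp + Nc / 2.0 * l * math.cos(theta),
            yp + Nc / 2.0 * l * math.sin(theta))
```

For example, with N_c = 5, l = 0.2 and θ = 0 the point (10, 20) moves to (10.5, 20).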
Step 3.5: Repeat steps 3.1.4–3.4 for every pixel edge point stored in step 1 to obtain the exact positions of all sub-pixel edge points of the elliptical target.
Compared with the prior art, the present invention has the following advantages:
(1) Compared with existing moment-based two-dimensional edge localization methods, the sub-pixel edge localization proposed here relies on the geometric parameters, so the edge-model parameters are obtained more accurately and stably, making the final edge localization result more accurate and stable. The experiments and experimental data are as follows, where SGM (Spatial Gray Moment) denotes the method based on spatial gray moments, ZOM (Zernike Orthogonal Moment) the method based on orthogonal Zernike moments, and OFMM (Orthogonal Fourier-Mellin Moment) the method based on orthogonal Fourier-Mellin moments.
1. Simulation experiments
Table 1 shows the results of the present invention and the other methods on synthetic images containing an elliptical target; SNR is the image signal-to-noise ratio. The results show that the present method achieves the best accuracy and noise immunity.
2. Real-image experiments
The present invention and the other methods were each applied to edge localization on one real image; the original image and the localization results are shown in Fig. 1. They were then applied to several real images; the fitting errors of the localization results are shown in Fig. 2. The real-image results show that the fitting error of the present invention is smaller than that of the other methods, with the highest localization accuracy and the best stability.
Table 1. Comparison of elliptical-target sub-pixel edge localization results (unit: 0.01 pixel)
Figure A200910028023D00101
(2) The sub-pixel edge localization proposed here computes moments only at the edge points obtained by the coarse Sobel step, and reduces the number of templates taking part in the image convolution to one, which greatly reduces the running time and improves the speed of the algorithm. Table 2 compares the convolution times of the different methods.

Table 2. Algorithm time comparison

Method                               Additions      Multiplications
SGM (5 × 5 template)                 25 × 6 × N_p   25 × 6 × N_p
ZOM (5 × 5 template)                 25 × 3 × N_p   25 × 3 × N_p
OFMM (5 × 5 template)                25 × 6 × N_p   25 × 6 × N_p
Present invention (5 × 5 template)   25 × 1 × N_p   25 × 1 × N_p
(3) The present invention inherits the good noise immunity and high localization accuracy of moment-based sub-pixel edge localization methods while overcoming their long running time, and is therefore highly practical.
Description of drawings
Fig. 1 is the comparison result of the present invention and other methods on a single real image.
Fig. 2 is the comparison result of the present invention and other methods on multiple real images.
Fig. 3 is the flow chart of the concrete sub-pixel edge localization steps.
Fig. 4 is a schematic diagram of the two-dimensional continuous edge model.
Fig. 5 is a schematic diagram of target and background gray-level estimation.
Fig. 6 is a top view of the rotated edge model.
Fig. 7 is the original image actually processed.
Fig. 8 shows the extracted pixel edge points.
Fig. 9 shows the chosen inner and outer squares.
Fig. 10 shows the sub-pixel edge localization result for the elliptical target.
Embodiment
Embodiment 1
The specific embodiment of the present invention is further described below with reference to the drawings. Sub-pixel edge localization of an elliptical target with this method comprises three operation steps: detecting the pixel edge of the elliptical target, obtaining the geometric parameters of the ellipse, and locating the sub-pixel edge. The flow chart of the concrete steps is shown in Fig. 3; the concrete steps are as follows:
Step 1: Denoise the image and detect edges with the Sobel operator, then extract the edge points of the elliptical target; record the total number of edge points count_pixel and store the pixel edge point coordinates, denoted (x_p, y_p), p = 1, 2, ..., count_pixel, where a pixel edge point has integer-pixel coordinates;
Step 2: Using the edge points of the elliptical target extracted in step 1, fit the edge points by least squares to obtain the coefficients A_e, B_e, C_e, D_e, E_e of the general ellipse equation x² + A_e xy + B_e y² + C_e x + D_e y + E_e = 0. Fitting the count_pixel edge points by least squares, the sum of squared residuals is:

$$e^2 = \sum_{p=1}^{count\_pixel} \left(x_p^2 + A_e x_p y_p + B_e y_p^2 + C_e x_p + D_e y_p + E_e\right)^2$$

Taking the partial derivative of this expression with respect to each of A_e, B_e, C_e, D_e, E_e and setting each to zero yields a determinate system of five equations in five unknowns; solving it by matrix inversion or Gaussian elimination with column pivoting gives the coefficients A_e, B_e, C_e, D_e, E_e of the general ellipse equation. The ellipse centre (x_co, y_co) is then:
$$x_{co} = \frac{2B_eC_e - A_eD_e}{A_e^2 - 4B_e}, \qquad y_{co} = \frac{2D_e - A_eC_e}{A_e^2 - 4B_e}$$

The major-axis length of the ellipse is:

$$\text{longaxis} = \sqrt{\frac{2\left(A_eC_eD_e - B_eC_e^2 - D_e^2 + 4B_eE_e - A_e^2E_e\right)}{\left(A_e^2 - 4B_e\right)\left(B_e - \sqrt{A_e^2 + (1 - B_e)^2} + 1\right)}}$$

The minor-axis length of the ellipse is:

$$\text{shortaxis} = \sqrt{\frac{2\left(A_eC_eD_e - B_eC_e^2 - D_e^2 + 4B_eE_e - A_e^2E_e\right)}{\left(A_e^2 - 4B_e\right)\left(B_e + \sqrt{A_e^2 + (1 - B_e)^2} + 1\right)}}$$
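The least-squares fit described above can be sketched by moving the fixed x_p² term to the right-hand side and solving the resulting overdetermined linear system; here `numpy.linalg.lstsq` stands in for the explicit normal equations solved by matrix inversion or Gaussian elimination, and the function name is mine.

```python
import numpy as np

def fit_ellipse_ls(xs, ys):
    """Step 2 sketch: least-squares fit of the coefficients of
    x^2 + Ae*x*y + Be*y^2 + Ce*x + De*y + Ee = 0. Minimising e^2 over the
    five coefficients is linear least squares once x^2 moves to the RHS."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    M = np.column_stack([xs * ys, ys ** 2, xs, ys, np.ones_like(xs)])
    b = -xs ** 2
    coef, *_ = np.linalg.lstsq(M, b, rcond=None)
    return coef  # Ae, Be, Ce, De, Ee
```

With noise-free points on the circle (x − 1)² + (y − 1)² = 4 the fit recovers A_e = 0, B_e = 1, C_e = −2, D_e = −2, E_e = −2 exactly.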
Step 3: Fig. 4 shows the two-dimensional edge model: h_1 is the target gray level, h_2 the background gray level, and l the normalized distance from the true edge to the origin, where the origin represents the pixel edge point of the elliptical target extracted in step 1; θ ∈ (−π/2, π/2) is the angle between the edge normal and the x-axis, and S_1, S_2 denote the areas of the target and background regions of the edge model, respectively. Sub-pixel edge localization of the elliptical target proceeds as follows:
Step 3.1: Obtain the target gray level h_1 and background gray level h_2 of the elliptical target, as follows:
Step 3.1.1: Following the gray-level distribution characteristics of the elliptical target, take the means of the pixel gray values on two squares around the target edge as the target and background gray levels. The two squares are chosen as follows: both are centred on the preliminarily obtained centre (x_co, y_co); the inner square is contained by the elliptical target and the outer square contains it; the inner side length is twice the difference between the minor-axis length and an allowance of 2–3 pixels, and the outer side length is twice the sum of the major-axis length and an allowance of 2–3 pixels. Fig. 5 is a schematic diagram of the chosen squares: O is the origin of the entire image, taken as its upper-left corner; the two thick-line squares are the chosen inner and outer squares, centred on the target centre O_c; shortaxis and longaxis denote the minor- and major-axis lengths of the elliptical target. The inner side length l_1 and outer side length l_2 satisfy:

$$l_1 = 2\,(\text{shortaxis} - c_2), \qquad l_2 = 2\,(\text{longaxis} + c_1)$$

where c_1 and c_2 are the chosen allowances;
Step 3.1.2: Record the numbers of image pixels on the four sides of the inner and outer squares, denoted count_inside and count_outside respectively, and store their gray values in the arrays ain_i, i = 1, 2, ..., count_inside, and aout_j, j = 1, 2, ..., count_outside;
Step 3.1.3: To suppress noise in the real image, first sort ain_i and aout_j by value; take the mean of the middle N_inside elements of ain_i as the target gray level h_1 and the mean of the middle N_outside elements of aout_j as h_2, where N_inside is 80% × count_inside rounded to an integer and N_outside is 80% × count_outside rounded to an integer;
Step 3.1.4: For a pixel edge point (x_p, y_p) of the elliptical target, if x_p > x_co the point lies in the right half-plane of the target; if x_p ≤ x_co it lies in the left half-plane. As the edge model moves along the target edge, the background and target gray levels obtained above may disagree with those assumed by the edge model. To keep them consistent, adjust as follows: if the gray level of the elliptical target is greater than the background gray level, swap the background and target gray levels for the left half-plane and leave the right half-plane unchanged; if the target gray level is smaller than the background gray level, swap them for the right half-plane and leave the left half-plane unchanged;
Step 3.2: The gradient direction at an edge pixel of the elliptical target is the normal direction of the elliptic curve at that point. The edge angle θ of the pixel edge point (x_p, y_p) is obtained as follows: by elementary geometry, the slope k_grad of the normal at (x_p, y_p) is:

$$k\_grad = \left.\frac{\partial G(x,y)/\partial y}{\partial G(x,y)/\partial x}\right|_{x = x_p,\, y = y_p} = \frac{A_e x_p + 2B_e y_p + D_e}{2x_p + A_e y_p + C_e}$$

where A_e, B_e, C_e, D_e, E_e are the coefficients of the general ellipse equation G(x, y) = x² + A_e xy + B_e y² + C_e x + D_e y + E_e = 0 obtained in step 2. The edge angle of this edge point (x_p, y_p) is then:

$$\theta = \arctan k\_grad = \arctan\left(\frac{A_e x_p + 2B_e y_p + D_e}{2x_p + A_e y_p + C_e}\right)$$
Step 3.3: Obtain the distance l between the pixel edge point and the true edge point, as follows:
Step 3.3.1: The ideal edge model must be discretized into an N_c × N_c image, where N_c is the template dimension, chosen as 3, 5 or 7. A higher template dimension gives higher accuracy but increases algorithm complexity, so N_c is chosen according to the accuracy and speed required by the actual system. Since only one of the four edge parameters remains unknown, a single zeroth-order moment template suffices to obtain the edge parameter l; therefore, for the chosen N_c, generate the zeroth-order moment template of the corresponding dimension. The 3 × 3, 5 × 5 and 7 × 7 zeroth-order moment templates are shown in Tables 3–5;
Table 3. 3 × 3 zeroth-order moment template
Figure A200910028023D00133
Table 4. 5 × 5 zeroth-order moment template
Figure A200910028023D00141
Table 5. 7 × 7 zeroth-order moment template
Figure A200910028023D00142
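The tables above are given as images in the original. Under the usual interpretation of a zeroth-order moment mask, entry (i, j) is the area of the unit disk falling in cell (i, j) of an N_c × N_c grid over [−1, 1]²; that can be approximated numerically, as in the supersampling sketch below (an approximation, not the patent's exact table values; function names are mine).

```python
import numpy as np

def zeroth_moment_template(Nc, oversample=100):
    """Approximate the Nc x Nc zeroth-order moment mask: entry (i, j) is
    the unit-disk area inside cell (i, j) of a grid over [-1, 1]^2,
    estimated by midpoint supersampling."""
    t = np.zeros((Nc, Nc))
    cell = 2.0 / Nc
    sub = (np.arange(oversample) + 0.5) / oversample  # midpoints in a cell
    for i in range(Nc):
        for j in range(Nc):
            u = -1.0 + cell * (i + sub)
            v = -1.0 + cell * (j + sub)
            uu, vv = np.meshgrid(u, v)
            t[i, j] = np.mean(uu ** 2 + vv ** 2 <= 1.0) * cell ** 2
    return t

def zeroth_moment(patch, template):
    """Step 3.3.2: M00 as the correlation of the Nc x Nc gray
    neighbourhood with the mask."""
    return float((np.asarray(patch, float) * template).sum())
```

As a check, the entries of the mask sum to the unit-disk area π, so a uniform neighbourhood of gray g yields M_00 ≈ πg, consistent with M_00 = S_1h_1 + S_2h_2 when h_1 = h_2 = g.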
Step 3.3.2: In the discrete case the moment computation is a correlation, so convolve the gray values of the N_c × N_c neighbourhood centred on the pixel edge point with the zeroth-order moment template chosen in step 3.3.1 to obtain the zeroth-order moment value M_00;
Step 3.3.3: From the edge model diagram, the zeroth-order moment of the continuous two-dimensional edge model is:

$$M_{00} = \iint f(x, y)\,dx\,dy = S_1 h_1 + S_2 h_2$$

Since the edge model is projected onto a unit circle:

$$S_1 + S_2 = \pi$$

Therefore, the area S_2 of the background region in the edge model is:

$$S_2 = \frac{M_{00} - \pi h_1}{h_2 - h_1}$$

where M_00 is the zeroth-order moment value from step 3.3.2 and h_1, h_2 are the target and background gray levels of the elliptical target from step 3.1;
Step 3.3.4: Rotate the edge model clockwise by θ; the top view of the rotated edge model is shown in Fig. 6. O_p is the pixel edge point and l is the distance between the pixel edge point and the true edge point; the shaded arc region represents the background region of the elliptical target, meeting the outer circle at G_c and W_c, with O_pT_c ⊥ G_cW_c and T_c on the outer circle. The area of the shaded arc region is the S_2 obtained in step 3.3.3, and β is the central angle subtended by the arc G_cT_c. From simple geometry:

$$S_2 = \beta - \tfrac{1}{2}\sin(2\beta)$$

This is a nonlinear equation; to keep the algorithm fast, the variable β is solved with a look-up table;
Step 3.3.5: As shown in Fig. 6, the distance between the true edge and the pixel edge point is:

$$l = \cos\beta$$
Step 3.4: According to the edge model, the sub-pixel coordinates (x_sub, y_sub) of the true image edge corresponding to the pixel edge point (x_p, y_p) are:

$$x_{sub} = x_p + l\cos\theta, \qquad y_{sub} = y_p + l\sin\theta$$

Taking into account the dilating effect of the template, this is rewritten as:

$$x_{sub} = x_p + \frac{N_c}{2}\, l\cos\theta, \qquad y_{sub} = y_p + \frac{N_c}{2}\, l\sin\theta$$

where N_c is the chosen template dimension, l is the distance between the true edge and the pixel edge point from step 3.3.5, and θ is the edge angle from step 3.2;
Step 3.5: Repeat steps 3.1.4–3.4 for every pixel edge point stored in step 1 to obtain the exact positions of all sub-pixel edge points of the elliptical target.
Embodiment 2
Following the method above, the present invention is applied to sub-pixel edge localization of a real image containing an elliptical target, shown in Fig. 7; the concrete flow is as follows:
Step 1: Denoise the image and detect edges with the Sobel operator, then extract the edge points of the elliptical target; the extracted elliptical edge points are shown in Fig. 8. Record the total number of edge points count_pixel, here count_pixel = 47, and store the pixel edge point coordinates, denoted (x_p, y_p), p = 1, 2, ..., count_pixel, where a pixel edge point has integer-pixel coordinates; the stored pixel edge point coordinates are listed in Table 6;
Step 2: Using the edge points of the elliptical target extracted in step 1, fit the edge points by least squares to obtain the coefficients of the general ellipse equation x² + A_e xy + B_e y² + C_e x + D_e y + E_e = 0; here A_e = −0.0841, B_e = 0.7570, C_e = −80.7437, D_e = −72.2997, E_e = 3470.1. The ellipse centre (x_co, y_co) is then:

$$x_{co} = \frac{2B_eC_e - A_eD_e}{A_e^2 - 4B_e} = 42.4800, \qquad y_{co} = \frac{2D_e - A_eC_e}{A_e^2 - 4B_e} = 50.1164$$

The major-axis length of the ellipse is:

$$\text{longaxis} = \sqrt{\frac{2\left(A_eC_eD_e - B_eC_e^2 - D_e^2 + 4B_eE_e - A_e^2E_e\right)}{\left(A_e^2 - 4B_e\right)\left(B_e - \sqrt{A_e^2 + (1 - B_e)^2} + 1\right)}} = 8.6844$$

The minor-axis length of the ellipse is:

$$\text{shortaxis} = \sqrt{\frac{2\left(A_eC_eD_e - B_eC_e^2 - D_e^2 + 4B_eE_e - A_e^2E_e\right)}{\left(A_e^2 - 4B_e\right)\left(B_e + \sqrt{A_e^2 + (1 - B_e)^2} + 1\right)}} = 7.4940$$
Table 6. Pixel edge point coordinates (unit: pixel)
p 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
x p 35 36 36 37 38 39 40 41 42 43 44 45 46 47 48
y p 47 46 45 44 43 42 42 42 41 42 42 42 43 43 44
p 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
x p 48 49 49 50 50 50 50 50 50 49 49 48 47 46 45
y p 45 46 47 48 49 50 51 52 53 54 55 56 57 58 58
p 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
x p 44 43 42 41 40 39 38 37 37 36 36 35 35 35 35
y p 59 59 59 58 58 58 57 56 55 54 53 52 51 50 49
p 46 47
x p 35 35
y p 48 47
Step 3: Fig. 4 shows the two-dimensional edge model: h_1 is the target gray level, h_2 the background gray level, and l the normalized distance from the true edge to the origin, where the origin represents the pixel edge point of the elliptical target extracted in step 1; θ ∈ (−π/2, π/2) is the angle between the edge normal and the x-axis, and S_1, S_2 denote the areas of the target and background regions of the edge model, respectively. Sub-pixel edge localization of the elliptical target proceeds as follows:
Step 3.1: determine the target gray level h_1 and the background gray level h_2 of the elliptical target, as follows:
Step 3.1.1: based on the gray-level distribution of the elliptical target, take the mean gray value of the pixels on each of two squares surrounding the target edge, one inside and one outside, as the target and background gray levels. The two squares are chosen as follows: both are centered on the provisionally obtained target center (x_co, y_co); the inner square is contained in the elliptical target, and the outer square contains it. The side length of the inner square is twice the difference between the target's minor-axis length and a margin of 2~3 pixels, and the side length of the outer square is twice the sum of the target's major-axis length and a margin of 2~3 pixels. The two white squares in Fig. 9 are the chosen inner and outer squares;
Step 3.1.2: record the numbers of image pixels on the four sides of the inner and outer squares, denoted count_inside and count_outside respectively; here count_inside = 48 and count_outside = 88. Store the gray values of those pixels in the arrays ain_i, i = 1, 2, 3, ..., count_inside and aout_j, j = 1, 2, 3, ..., count_outside;
Step 3.1.3: to suppress noise in real images, first sort the arrays ain_i and aout_j by value. Take the mean of the middle N_inside elements of ain_i as the target gray level h_1 and the mean of the middle N_outside elements of aout_j as h_2, where N_inside is 80% × count_inside rounded and N_outside is 80% × count_outside rounded. Here h_1 = 120, h_2 = 30, N_inside = 38, N_outside = 70;
Step 3.1.4: for a pixel edge point (x_p, y_p) of the elliptical target, if x_p > x_co the point lies in the right half-plane of the target, and if x_p ≤ x_co it lies in the left half-plane. As the edge model moves along the target edge, the background and target gray levels indicated by the model may not match those determined above. To match the model's convention, adjust as follows: if the target gray value is larger than the background gray value, swap the background and target gray levels for the left half-plane of the target and leave the right half-plane unchanged; if the target gray value is smaller than the background gray value, swap them for the right half-plane and leave the left half-plane unchanged;
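The trimmed mean of step 3.1.3 can be sketched as follows. Interpreting "rounded" as truncation reproduces the example's N_inside = 38 and N_outside = 70; the function name and the sample gray values are ours.

```python
import numpy as np

def trimmed_mean(samples, keep=0.8):
    """Step 3.1.3: mean of the middle keep-fraction of the sorted gray values."""
    v = np.sort(np.asarray(samples, dtype=float))
    n_keep = int(keep * len(v))           # "rounded" interpreted as truncation
    lo = (len(v) - n_keep) // 2
    return float(v[lo:lo + n_keep].mean())

# 48 inner-square border samples as in the example (count_inside = 48 -> N_inside = 38);
# the two outliers are discarded by the trim
h1 = trimmed_mean([120.0]*46 + [255.0, 0.0])
```

The trim drops the extreme 20% of samples, which is what makes the gray estimates robust to isolated noisy border pixels.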
Step 3.2: using the fact that the gradient direction at an edge pixel of the elliptical target is the normal direction of the elliptic curve at that point, compute the edge angle θ of the pixel edge point (x_p, y_p). By elementary geometry, the normal-direction slope k_grad at the edge point (x_p, y_p) is:

$$k\_grad = \left.\frac{\partial G(x,y)/\partial y}{\partial G(x,y)/\partial x}\right|_{x = x_p,\, y = y_p} = \frac{A_e x_p + 2B_e y_p + D_e}{2x_p + A_e y_p + C_e}$$

where A_e, B_e, C_e, D_e, E_e are the coefficients of the general ellipse equation G(x, y) = x^2 + A_e xy + B_e y^2 + C_e x + D_e y + E_e = 0 obtained in step 2.

The edge angle at this edge point (x_p, y_p) is:

$$\theta = \arctan k\_grad = \arctan\left(\frac{A_e x_p + 2B_e y_p + D_e}{2x_p + A_e y_p + C_e}\right)$$
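A direct transcription of the step 3.2 formula; the function name and the sample point are ours, the coefficients are those of the worked example.

```python
import math

def edge_angle(x_p, y_p, A_e, B_e, C_e, D_e):
    """Step 3.2: angle between the edge normal and the x axis at (x_p, y_p)."""
    k_grad = (A_e*x_p + 2*B_e*y_p + D_e) / (2*x_p + A_e*y_p + C_e)
    return math.atan(k_grad)              # theta in (-pi/2, pi/2)

# near the rightmost point of the example ellipse the normal is nearly horizontal,
# so theta should be close to zero
theta = edge_angle(50, 50, -0.0841, 0.7570, -80.7437, -72.2997)
```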
Step 3.3: determine the distance l between the pixel edge point and the true edge point, as follows:
Step 3.3.1: the ideal edge model must be discretized into an N_c × N_c image, where N_c is the template dimension, taken as 3, 5, or 7. A larger template dimension gives higher precision but also higher algorithmic complexity, so N_c is chosen according to the precision and speed requirements of the actual system. In this example N_c = 5 is chosen; the 5 × 5 zeroth-order moment template is given in Table 4;
Step 3.3.2: in the discrete case the moment computation becomes a correlation, so convolve the gray values of the N_c × N_c neighborhood centered on the pixel edge point with the zeroth-order moment template selected in step 3.3.1, obtaining the zeroth-order moment value M_00;
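Table 4 is not reproduced in this text, so the sketch below constructs an N_c × N_c zeroth-order moment template by area sampling, one common construction for such templates: each entry approximates the area of the corresponding cell of [−1, 1]² that falls inside the unit circle, so the entries sum to π and convolving the template with the gray neighborhood approximates M_00. The supersampling resolution is an assumption of this sketch.

```python
import numpy as np

def zeroth_moment_template(n_c, sub=200):
    """N_c x N_c zeroth-order moment template over [-1,1]^2: each entry is the
    area of that cell lying inside the unit circle (midpoint area sampling)."""
    t = np.zeros((n_c, n_c))
    cell = 2.0 / n_c
    offs = (np.arange(sub) + 0.5) * cell / sub     # midpoint subsample offsets
    for i in range(n_c):
        for j in range(n_c):
            xs = -1.0 + j*cell + offs
            ys = -1.0 + i*cell + offs
            X, Y = np.meshgrid(xs, ys)
            # fraction of the cell inside the unit circle, times the cell area
            t[i, j] = np.mean(X**2 + Y**2 <= 1.0) * cell * cell
    return t

T5 = zeroth_moment_template(5)
```

The center cell of the 5 × 5 template lies entirely inside the unit circle, so its entry is exactly the cell area (2/5)² = 0.16, and the whole template sums to approximately π, consistent with S_1 + S_2 = π in step 3.3.3.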
Step 3.3.3: from the edge model diagram, the zeroth-order moment of the continuous two-dimensional edge model is:

$$M_{00} = \iint f(x,y)\,dx\,dy = S_1 h_1 + S_2 h_2$$

Projecting the edge model onto a unit circle gives:

$$S_1 + S_2 = \pi$$

Therefore the area S_2 of the background region within the edge model is:

$$S_2 = \frac{M_{00} - \pi h_1}{h_2 - h_1}$$

where M_00 is the zeroth-order moment value obtained in step 3.3.2, and h_1, h_2 are the target and background gray levels of the elliptical target obtained in step 3.1;
Step 3.3.4: rotate the edge model clockwise by θ; the top view of the rotated edge model is shown in Fig. 6. O_p is the pixel edge point, l is the distance between the pixel edge point and the true edge point, and the shaded arc-shaped region represents the background region of the elliptical target, meeting the outer circle at G_c and W_c, with O_p T_c ⊥ G_c W_c and T_c on the outer circle. The area of the shaded region is the S_2 obtained in step 3.3.3, and β is the central angle subtended by the arc G_c T_c. By simple geometry:

$$S_2 = \beta - \tfrac{1}{2}\sin(2\beta)$$

This is a nonlinear equation in β; to keep the algorithm fast, β is obtained by table look-up;
Step 3.3.5: as shown in Fig. 6, the distance l between the true edge and the pixel edge point is:

$$l = \cos\beta$$
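The look-up inversion of S_2 = β − ½·sin(2β) in step 3.3.4 and the distance l = cos β of step 3.3.5 can be sketched as follows. The table size and the linear interpolation between bracketing entries are implementation assumptions; the patent only specifies that a look-up table is used. The left-hand side is monotonically increasing on [0, π] (its derivative 1 − cos 2β is nonnegative), so the inversion is well defined.

```python
import bisect
import math

# precomputed look-up table for S2 = beta - 0.5*sin(2*beta), beta in [0, pi]
_N = 4096
_BETAS = [math.pi * k / (_N - 1) for k in range(_N)]
_S2S = [b - 0.5*math.sin(2.0*b) for b in _BETAS]    # monotonically increasing

def solve_beta(s2):
    """Step 3.3.4: invert S2 = beta - sin(2*beta)/2 by table look-up with
    linear interpolation between the bracketing entries."""
    k = min(max(bisect.bisect_left(_S2S, s2), 1), _N - 1)
    b0, b1 = _BETAS[k - 1], _BETAS[k]
    v0, v1 = _S2S[k - 1], _S2S[k]
    t = 0.0 if v1 == v0 else (s2 - v0) / (v1 - v0)
    return b0 + t*(b1 - b0)

def edge_distance(s2):
    """Step 3.3.5: l = cos(beta)."""
    return math.cos(solve_beta(s2))
```

For β = π/2 the segment covers half the unit circle (S_2 = π/2) and l = cos β = 0, i.e. the pixel edge point already sits on the true edge.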
Step 3.4: according to the edge model, for a pixel edge point (x_p, y_p) the sub-pixel coordinates (x_sub, y_sub) of the corresponding true image edge are:

$$x_{sub} = x_p + l\cos\theta, \qquad y_{sub} = y_p + l\sin\theta$$

Accounting for the magnification of the template, the formula is rewritten as:

$$x_{sub} = x_p + \frac{N_c}{2}\,l\cos\theta, \qquad y_{sub} = y_p + \frac{N_c}{2}\,l\sin\theta$$

where N_c is the chosen template dimension, l is the distance between the true edge and the pixel edge point obtained in step 3.3.5, and θ is the edge angle obtained in step 3.2.
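The step 3.4 correction itself is a one-liner once θ, l, and N_c are known; a sketch (the function name is ours):

```python
import math

def subpixel_point(x_p, y_p, theta, l, n_c):
    """Step 3.4: shift the pixel edge point by (N_c/2)*l along the edge normal."""
    return (x_p + 0.5*n_c*l*math.cos(theta),
            y_p + 0.5*n_c*l*math.sin(theta))

# a horizontal normal (theta = 0) with l = 0.2 and N_c = 5 shifts x by 0.5 pixel
x_sub, y_sub = subpixel_point(50, 50, 0.0, 0.2, 5)
```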
Step 3.5: for each pixel edge point stored in step 1, repeat steps 3.1.4~3.4 to obtain the positions of all accurate sub-pixel edge points of the elliptical target; the final result is shown in Fig. 10.

Claims (1)

1. A sub-pixel edge localization method for an elliptical target, characterized in that:
Step 1: denoise the image, apply Sobel edge detection, then extract the edge points of the elliptical target, record the total number of edge points count_pixel, and store the coordinates of the pixel edge points, denoted (x_p, y_p), p = 1, 2, 3, ..., count_pixel;
Step 2: using the edge points of the elliptical target extracted in step 1, determine the coefficients A_e, B_e, C_e, D_e, E_e of the general ellipse equation x^2 + A_e xy + B_e y^2 + C_e x + D_e y + E_e = 0, and from them the ellipse center (x_co, y_co):

$$x_{co} = \frac{2B_eC_e - A_eD_e}{A_e^2 - 4B_e}$$

$$y_{co} = \frac{2D_e - A_eC_e}{A_e^2 - 4B_e}$$

the ellipse major-axis length:

$$longaxis = \sqrt{\frac{2\left(A_eC_eD_e - B_eC_e^2 - D_e^2 + 4B_eE_e - A_e^2E_e\right)}{\left(A_e^2 - 4B_e\right)\left(B_e - \sqrt{A_e^2 + (1 - B_e)^2} + 1\right)}}$$

and the ellipse minor-axis length:

$$shortaxis = \sqrt{\frac{2\left(A_eC_eD_e - B_eC_e^2 - D_e^2 + 4B_eE_e - A_e^2E_e\right)}{\left(A_e^2 - 4B_e\right)\left(B_e + \sqrt{A_e^2 + (1 - B_e)^2} + 1\right)}}$$
Step 3: perform sub-pixel edge localization on the elliptical target, as follows:
Step 3.1: determine the target gray level h_1 and the background gray level h_2 of the elliptical target, as follows:
Step 3.1.1: choose two squares surrounding the target edge, both centered on the provisionally obtained target center (x_co, y_co); the inner square is contained in the elliptical target and the outer square contains it; the side length of the inner square is twice the difference between the target's minor-axis length and a margin of 2~3 pixels, and the side length of the outer square is twice the sum of the target's major-axis length and a margin of 2~3 pixels;
Step 3.1.2: record the numbers of image pixels on the four sides of the inner and outer squares, denoted count_inside and count_outside respectively, and store the gray values of those pixels in the arrays ain_i, i = 1, 2, 3, ..., count_inside and aout_j, j = 1, 2, 3, ..., count_outside;
Step 3.1.3: sort the arrays ain_i and aout_j by value; take the mean of the middle N_inside elements of ain_i as the target gray level h_1 and the mean of the middle N_outside elements of aout_j as h_2, where N_inside is 80% × count_inside rounded and N_outside is 80% × count_outside rounded;
Step 3.1.4: for a pixel edge point (x_p, y_p) of the elliptical target, if x_p > x_co the point lies in the right half-plane of the target, and if x_p ≤ x_co it lies in the left half-plane; if the target gray value is larger than the background gray value, swap the background and target gray levels for the left half-plane of the target and leave the right half-plane unchanged; if the target gray value is smaller than the background gray value, swap them for the right half-plane and leave the left half-plane unchanged;
Step 3.2: compute the edge angle θ of the pixel edge point (x_p, y_p) as:

$$\theta = \arctan\left(\frac{A_e x_p + 2B_e y_p + D_e}{2x_p + A_e y_p + C_e}\right)$$

where A_e, B_e, C_e, D_e, E_e are the coefficients of the general ellipse equation obtained in step 2;
Step 3.3: determine the distance l between the pixel edge point and the true edge point, as follows:
Step 3.3.1: choose the template dimension N_c (3, 5, or 7) according to the precision and speed requirements of the actual system, and generate the zeroth-order moment template of that dimension;
Step 3.3.2: convolve the gray values of the N_c × N_c neighborhood centered on the pixel edge point with the zeroth-order moment template selected in step 3.3.1, obtaining the zeroth-order moment value M_00;
Step 3.3.3: compute the area S_2 of the background region within the edge model as:

$$S_2 = \frac{M_{00} - \pi h_1}{h_2 - h_1}$$

where M_00 is the zeroth-order moment value obtained in step 3.3.2, and h_1, h_2 are the target and background gray levels of the elliptical target obtained in step 3.1;
Step 3.3.4: obtain the variable β by table look-up from:

$$S_2 = \beta - \tfrac{1}{2}\sin(2\beta)$$

where S_2 is the area of the background region within the edge model obtained in step 3.3.3;
Step 3.3.5: compute the distance l between the true edge and the pixel edge point as:

$$l = \cos\beta$$
Step 3.4: for a pixel edge point (x_p, y_p), obtain the sub-pixel coordinates (x_sub, y_sub) of the corresponding true image edge as:

$$x_{sub} = x_p + \frac{N_c}{2}\,l\cos\theta, \qquad y_{sub} = y_p + \frac{N_c}{2}\,l\sin\theta$$

where N_c is the chosen template dimension, l is the distance between the true edge and the pixel edge point obtained in step 3.3.5, and θ is the edge angle obtained in step 3.2;
Step 3.5: for each pixel edge point stored in step 1, repeat steps 3.1.4~3.4 to obtain the positions of all accurate sub-pixel edge points of the elliptical target.
CN2009100280232A 2009-01-05 2009-01-05 Method for orientating secondary pixel edge of oval-shaped target Expired - Fee Related CN101465002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100280232A CN101465002B (en) 2009-01-05 2009-01-05 Method for orientating secondary pixel edge of oval-shaped target

Publications (2)

Publication Number Publication Date
CN101465002A true CN101465002A (en) 2009-06-24
CN101465002B CN101465002B (en) 2010-09-01

Family

ID=40805569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100280232A Expired - Fee Related CN101465002B (en) 2009-01-05 2009-01-05 Method for orientating secondary pixel edge of oval-shaped target

Country Status (1)

Country Link
CN (1) CN101465002B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976440A (en) * 2010-11-09 2011-02-16 东华大学 Sobel operator-based extraction method of profile and detail composite characteristic vector used for representing fabric texture
CN101996315A (en) * 2009-08-18 2011-03-30 通用电气公司 System, method and program product for camera-based object analysis
CN102061517A (en) * 2010-12-13 2011-05-18 浙江长兴众成电子有限公司 Czochralski single crystal silicon diameter measurement method
CN102611887A (en) * 2011-01-21 2012-07-25 华为技术有限公司 Method and device for rounding coordinate value of non-integer pixel position motion vector
CN102637300A (en) * 2012-04-26 2012-08-15 重庆大学 Improved Zernike moment edge detection method
CN103035004A (en) * 2012-12-10 2013-04-10 浙江大学 Circular target centralized positioning method under large visual field
CN103559506A (en) * 2013-11-19 2014-02-05 中国科学院地理科学与资源研究所 Sub-pixel drawing method based on vector boundaries
CN103632366A (en) * 2013-11-26 2014-03-12 清华大学 Parameter identification method for elliptical target
CN104715491A (en) * 2015-04-09 2015-06-17 大连理工大学 Subpixel edge detection method based on one-dimensional gray moment
CN104751151A (en) * 2015-04-28 2015-07-01 苏州安智汽车零部件有限公司 Method for identifying and tracing multiple lanes in real time
CN105005985A (en) * 2015-06-19 2015-10-28 沈阳工业大学 Backlight image micron-order edge detection method
CN105719298A (en) * 2016-01-22 2016-06-29 北京航空航天大学 Edge detection technology based line diffusion function extracting method
CN105913082A (en) * 2016-04-08 2016-08-31 北京邦焜威讯网络技术有限公司 Method and system for classifying objects in image
CN106680086A (en) * 2016-12-29 2017-05-17 上海大学 Video extensometer applied to high-speed tensile experiment of plastic material
CN106778541A (en) * 2016-11-28 2017-05-31 华中科技大学 A kind of identification in the multilayer beam China and foreign countries beam hole of view-based access control model and localization method
CN109934819A (en) * 2019-03-22 2019-06-25 大连大学 Curved edge sub-pixel detection method in a kind of laser assembly solder part to be welded image
CN110428435A (en) * 2019-07-18 2019-11-08 哈尔滨工业大学 A kind of generation method of ideal edge emulating image
CN113487594A (en) * 2021-07-22 2021-10-08 上海嘉奥信息科技发展有限公司 Sub-pixel angular point detection method, system and medium based on deep learning
CN113793309A (en) * 2021-08-27 2021-12-14 西北工业大学 Sub-pixel level ellipse detection method based on morphological characteristics
CN117611651A (en) * 2023-11-23 2024-02-27 湖南科天健光电技术有限公司 Detection method, detection system, detection equipment and electronic medium for subpixel ellipse center

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101000655A (en) * 2007-01-12 2007-07-18 浙江工业大学 Positioning method of spherical-like fruit and vegetable

Also Published As

Publication number Publication date
CN101465002B (en) 2010-09-01

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SOWTHEAST UNIV.

Effective date: 20131022

Owner name: SHENLING ELECTRIC MANUFACTURING CO., LTD., HAIAN

Free format text: FORMER OWNER: SOWTHEAST UNIV.

Effective date: 20131022

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 210096 NANJING, JIANGSU PROVINCE TO: 226600 NANTONG, JIANGSU PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20131022

Address after: 226600 Haian, Jiangsu province Haian Zhenhai Road, No. 88, South Road, No.

Patentee after: Haian Shenling Electrical Appliance Manufacturing Co., Ltd.

Patentee after: Southeast University

Address before: 210096 Jiangsu city Nanjing Province four pailou No. 2

Patentee before: Southeast University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100901

Termination date: 20190105

CF01 Termination of patent right due to non-payment of annual fee