CN103035004B - Method for locating the center of a circular target under a large field of view - Google Patents

Method for locating the center of a circular target under a large field of view Download PDF

Info

Publication number
CN103035004B
CN103035004B (application CN201210535881.8A)
Authority
CN
China
Prior art keywords
area
circular target
rectangular area
suitable rectangular
visual angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210535881.8A
Other languages
Chinese (zh)
Other versions
CN103035004A (en
Inventor
徐之海
陈阔
冯华君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201210535881.8A priority Critical patent/CN103035004B/en
Publication of CN103035004A publication Critical patent/CN103035004A/en
Application granted granted Critical
Publication of CN103035004B publication Critical patent/CN103035004B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for locating the center of a circular target under a large field of view, comprising the following steps: 1) extract from the input image a suitable rectangular area containing the circular target, compute the horizontal width and vertical height of the suitable rectangular area, and record the initial offset between the center of the circular target and the center of the input image; 2) extract the background regions at the four corners of the suitable rectangular area and compute their corresponding pixel areas; 3) build an area feature vector L from the pixel areas obtained in step 2), and solve iteratively for the center coordinates of the circular target with a nonlinear optimization algorithm. By indirectly measuring the pixel areas of these regions and estimating the target center coordinates from them, the present invention reduces the measurement error by a factor on the order of one hundred.

Description

Method for locating the center of a circular target under a large field of view
Technical field
The present invention relates to methods for locating the center of a circular target, and in particular to a method for locating the center of a circular target under a large field of view.
Background technology
To guarantee the correct flight trajectory of a spacecraft in deep-space exploration missions, autonomous navigation has become an essential safeguard. Optical autonomous navigation uses the on-board payload to photograph the target celestial body and its background stars, and determines the spacecraft's flight trajectory in real time by the image-processing methods of celestial-body center positioning and star-map matching. In image processing the target celestial body is treated as a circular target, so celestial-body center positioning reduces to locating the center of a circular target. When the spacecraft approaches the target body, observing the whole body usually requires an imaging system with a large field of view, and the images captured at this stage therefore exhibit large distortion. In the approach phase, guaranteeing the correct flight trajectory thus requires locating the center of a circular target that suffers from large distortion.
Traditional methods for locating the center of a circular target mainly comprise the centroid method, the Gaussian-surface method, and the edge-fitting method. The centroid method is based on the gray-level distribution of the target and suits small targets with a uniform gray-level distribution; the most widely used variant is the improved squared-gray weighted centroid method, which uses the square of the gray value as the weight, emphasizing the contribution of pixels close to the center. In star-image applications for star trackers the centroid method has shown good practical value. The Gaussian-surface method assumes that, after passing through the imaging system, the light collected by the image sensor follows a Gaussian distribution, so that in the final digital image the gray-level distribution of the target body is a Gaussian surface; the parameters of the Gaussian-surface function are computed by least squares, yielding the center coordinates of the target body, but when the target body is large the computational cost becomes very high. The edge-fitting method detects the edge points of the target body by edge detection, refines them to sub-pixel precision by moment, fitting, or interpolation methods, and finally fits a circle to the edge points by least squares to obtain the center. It is insensitive to the gray-level distribution of the target, but to detect the edge reliably it requires the target's pixel area to be sufficiently large.
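As an illustrative sketch only (not part of the patent), the squared-gray weighted centroid described above can be written in a few lines; the function name, the threshold parameter, and the use of NumPy are assumptions:

```python
import numpy as np

def weighted_centroid(img, threshold=0):
    """Squared-gray weighted centroid: weights are the squared gray values
    of pixels above the threshold, emphasizing pixels near the bright center."""
    img = np.asarray(img, dtype=float)
    w = np.where(img > threshold, img ** 2, 0.0)   # squared gray value as weight
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total
```

For a symmetric star spot the weighted centroid lands on the brightness peak; under the large, asymmetric distortion discussed below this assumption breaks down, which motivates the corner-area method of the invention.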
However, when the spacecraft approaches the target body, observing the whole body usually requires a large-field-of-view imaging system, and given the overall design of the imaging system the photographed body almost fills the entire field, sometimes even extending partly beyond it. With a very large field of view, lens distortion inevitably becomes the crux of the problem: the center of the field can be regarded as distortion-free, but the amount of distortion in the image plane grows with the field angle. Despite the progress of distortion-correction techniques, distortion under a fairly large field of view (e.g. a half field of view of 75 degrees) is difficult to correct to sub-pixel level, which to some degree limits the use of the traditional center-positioning algorithms. Specifically, the centroid method requires a fairly symmetric gray-level distribution, and uncorrectable lens distortion inevitably makes its positioning accuracy poor; the Gaussian-surface method deviates badly when the target body is over-exposed or strongly distorted, and its computational cost for large targets is high; the edge-fitting method strictly requires the target to be exactly circular, a criterion that uncorrectable distortion rules out entirely.
Summary of the invention
The technical solution of the present invention sidesteps the fact that the distortion cannot be corrected. After pre-processing the target image, the background regions at the four corners of the image are extracted. Because camera alignment during assembly ensures that the optical axis of the lens coincides with the physical center of the image sensor, if the center of the circular target in the input image coincides with the center of the input image, the pixel areas of the four corners must be equal; otherwise, the offset of the target center relative to the image center can be deduced from the corner areas, which accomplishes the center positioning of the circular target.
A method for locating the center of a circular target under a large field of view comprises the following steps:
1) Extract from the input image a suitable rectangular area containing the circular target, compute the horizontal width W and the vertical height H of the suitable rectangular area, and let the initial offset between the center of the circular target and the center of the input image be Δ0 = (x0, y0);
The condition that the suitable rectangular area must satisfy is: the upper limit of its side length is the side length of the square circumscribing the circular target, and the lower limit is the side length of the square inscribed in the circular target;
When the input image already satisfies the condition of the suitable rectangular area, no processing of the input image is needed to obtain the suitable rectangular area, and the initial offset between the target center and the image center is Δ0 = (0, 0);
When the input image does not satisfy the condition of the suitable rectangular area, binarize the input image to obtain a binary image and erode the binary image according to the criterion √2·R ≤ N ≤ 2·R to obtain the suitable rectangular area, where R is the radius of the circular target and N is the side length of the suitable rectangular area.
Otsu's method (maximum between-class variance) is applied to the input image to obtain the optimal binarization threshold. To filter out noise and interference from non-target objects, only the largest connected component of the binary image is retained after binarization, and it is taken to be the circular target;
Compute the horizontal width W and the vertical height H of the largest connected component, estimate the radius R of the circular target by least squares, and erode the binary image with a circular template so that W and H shrink to 0.8 times their original values or to about 1.6 times R. The rectangular area extracted at this point is the suitable rectangular area; record its top-left coordinates (x0, y0) and its horizontal width W and vertical height H, and note the initial offset Δ0 = (x0, y0).
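A minimal sketch of the pre-processing in step 1), assuming an 8-bit single-channel image: Otsu's threshold is computed from the histogram and the largest 4-connected component is kept, as the patent specifies. The helper names and the BFS labeling are illustrative, not the patent's implementation (which additionally erodes with a circular template):

```python
import numpy as np
from collections import deque

def otsu_threshold(img):
    """Maximum between-class variance (Otsu) threshold for an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def largest_component_bbox(binary):
    """BFS-label 4-connected components; return the bounding box
    (x, y, width, height) of the largest one. Assumes at least one
    foreground pixel."""
    seen = np.zeros_like(binary, dtype=bool)
    best = None
    h, w = binary.shape
    for sy, sx in zip(*np.nonzero(binary)):
        if seen[sy, sx]:
            continue
        q, pix = deque([(sy, sx)]), []
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            pix.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if best is None or len(pix) > len(best):
            best = pix
    ys, xs = zip(*best)
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
```

The bounding box of the largest component gives the minimal rectangle containing the target, from which W and H are read off before the erosion step.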
2) Extract the background regions at the four corners of the suitable rectangular area, compute their corresponding pixel areas, and denote them S_upper-left, S_lower-left, S_upper-right, S_lower-right respectively;
Scan the suitable rectangular area pixel by pixel, row by row. Each row yields a gray-level curve S_g, from which the corresponding gradient curve S_d is obtained. The gradient curve takes one of two forms. The first is bimodal: the valley regions at both sides and in the middle correspond to the flat background and the flat target interior, while the peak regions correspond to the transition zones at the target edge. In the second form the whole curve is flat, indicating that the row of image data lies entirely inside the circular target. Both forms of the gradient curve, and their correspondence to the image content, can be read directly from the image;
The circular target is separated from the background row by row by finding the optimal split points on the gray-level curve S_g: pixels whose gray level lies below the split-point gray level are treated as background and labeled 1, while the rest are labeled 0 and treated as the circular target region. The optimal split points are determined as follows: compute the histogram of the gradient curve S_d and divide S_d into two layers with Otsu's method. The upper layer is the large-gradient layer, corresponding to the peak regions, and the lower layer is the small-gradient layer, corresponding to the valley regions. The center points of the peak regions, denoted x1 and x2, are the optimal split points of that row of image data;
After the whole image has been scanned, the optimally segmented binary image is obtained. Region segmentation divides it into five regions: the first is the circular target, and the remaining four are the background regions near the four corners, denoted region upper-left, region lower-left, region upper-right, and region lower-right. The areas of the four background regions are computed in units of pixels, i.e. by counting the pixels of each region, and are denoted S_upper-left, S_lower-left, S_upper-right, S_lower-right respectively.
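A hedged sketch of the row-wise split-point search: the patent applies Otsu's method to the gradient histogram, whereas this illustration substitutes a simple half-maximum threshold and a `min_jump` parameter (both assumptions made here) to classify a row as edge-bearing or all-target:

```python
import numpy as np

def row_split_points(row, min_jump=10):
    """Split one image row into background/target via its gradient peaks.
    Returns (x1, x2), the centers of the two large-gradient groups,
    or None when the gradient curve is flat (row lies inside the target)."""
    grad = np.abs(np.diff(row.astype(float)))
    if grad.max() < min_jump:          # flat curve: all-target row
        return None
    thresh = grad.max() / 2.0          # stand-in for Otsu on the gradient histogram
    big = np.nonzero(grad >= thresh)[0]
    gap = np.argmax(np.diff(big)) + 1  # split indices into the two peak groups
    left, right = big[:gap], big[gap:]
    return int(round(left.mean())), int(round(right.mean()))
```

Applying this to every row and labeling pixels between x1 and x2 as target reproduces the five-region segmentation described above.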
3) Denote the pixel areas S_upper-left, S_lower-left, S_upper-right, S_lower-right as S1, S2, S3, S4 respectively, and build the area feature vector L:
L = [L1, L2]
Wherein:
L1 = [P12 − P34, S13, S24]
L2 = [P13 − P24, S12, S34]
P12 = |S1 + S2|
P34 = |S3 + S4|
S13 = |S1 − S3|
S24 = |S2 − S4|
P13 = |S1 + S3|
P24 = |S2 + S4|
S12 = |S1 − S2|
S34 = |S3 − S4|
Choose the background region with the smallest area among the four, fix the two square boundaries of that smallest background region, and move the other two boundaries of the suitable rectangular area toward the smallest background region. Let the displacement be (x′, y′) and use the area feature vector L to build the following constraint:
(x′, y′) = arg min { ‖L1‖² + ‖L2‖² }
The optimal solution (x′, y′) of the above formula is found with a nonlinear optimization algorithm, and the final positioning result for the center of the circular target is:
Δ = Δ0 + 0.5 · (W − x′, H − y′)
where W and H are respectively the horizontal width and the vertical height of the suitable rectangular area.
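The whole of step 3) can be sketched on a synthetic binary image. The patent solves for (x′, y′) with a nonlinear optimization algorithm; this illustration substitutes an exhaustive grid search over window positions and adds a guard that the window center must lie on the target, both of which are assumptions made here so the demo is self-contained:

```python
import numpy as np

def feature_norm(S1, S2, S3, S4):
    """||L1||^2 + ||L2||^2 from the area feature vector of the patent."""
    L1 = [abs(S1 + S2) - abs(S3 + S4), abs(S1 - S3), abs(S2 - S4)]
    L2 = [abs(S1 + S3) - abs(S2 + S4), abs(S1 - S2), abs(S3 - S4)]
    return sum(v * v for v in L1) + sum(v * v for v in L2)

def locate_center(binary, win):
    """Grid search (stand-in for a nonlinear optimizer) over win x win
    window positions so the four corner background areas balance.
    Returns the top-left (x', y') of the best window."""
    H, W = binary.shape
    half = win // 2
    best = None
    for x in range(W - win + 1):
        for y in range(H - win + 1):
            sub = binary[y:y + win, x:x + win]
            if sub[half, half] == 0:      # window center must lie on the target
                continue
            S1 = np.count_nonzero(sub[:half, :half] == 0)   # upper-left background
            S2 = np.count_nonzero(sub[half:, :half] == 0)   # lower-left
            S3 = np.count_nonzero(sub[:half, half:] == 0)   # upper-right
            S4 = np.count_nonzero(sub[half:, half:] == 0)   # lower-right
            v = feature_norm(S1, S2, S3, S4)
            if best is None or v < best[0]:
                best = (v, x, y)
    return best[1], best[2]
```

For a target symmetric about its center, the objective reaches zero exactly when the window is centered on the target, which is the balance condition the constraint above expresses.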
The main ideas of the method for locating the center of a circular target under a large field of view of the present invention are:
1. When the center of the circular target coincides with the image center, the background areas of the four corners are exactly equal; if the target center shifts by a small amount Δ, the corner areas change by a relatively large amount, which can be analyzed as follows:
Suppose the image size is N × N. To ensure that the image is itself a suitable rectangular area for the circular target, the photographed target radius R must satisfy N/2 ≤ R ≤ √2·N/2. Suppose the offset between the target center and the image center is Δ = (Δx, Δy). The upper-left corner area and the upper-right corner area can then be computed respectively as
S1 = (N/2 − y1) · (N/2 + x1) / 2
S3 = (N/2 − x2) · (N/2 − y2) / 2
Wherein
x1 = Δx − √(R² − (N/2 − Δy)²)
y1 = Δy + √(R² − (N/2 + Δx)²)
x2 = Δx + √(R² − (N/2 − Δy)²)
y2 = Δy + √(R² − (N/2 − Δx)²)
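The two corner-area formulas above can be checked numerically; `corner_areas` below is an illustrative helper (valid when R ≥ N/2 so the square roots are real). At zero offset S1 = S3, exactly as the equal-corner argument requires, and a rightward shift enlarges the upper-left corner:

```python
import math

def corner_areas(N, R, dx, dy):
    """Upper-left (S1) and upper-right (S3) background areas for an
    N x N image containing a circle of radius R offset by (dx, dy)."""
    s = math.sqrt(R * R - (N / 2 - dy) ** 2)
    x1 = dx - s
    x2 = dx + s
    y1 = dy + math.sqrt(R * R - (N / 2 + dx) ** 2)
    y2 = dy + math.sqrt(R * R - (N / 2 - dx) ** 2)
    S1 = (N / 2 - y1) * (N / 2 + x1) / 2
    S3 = (N / 2 - x2) * (N / 2 - y2) / 2
    return S1, S3
```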
To solve for the offset Δ, what the image processing actually measures is the area difference ΔS = S1 − S3, from which the estimate Δ is computed. A function can be set up,
ΔS = f(Δx, Δy)
which expresses the difference between the upper-left and upper-right corner background areas as a binary function of the target center offset Δ = (Δx, Δy). Suppose that in some captured image the target center offset is (Δx0, Δy0). Fix Δy0 and let the target move in the x direction, i.e. vary the offset Δx, and observe the relation between the measured area difference ΔS and the variable Δx; this yields the magnification between the measured value ΔS and the estimate Δ. Expanding f(Δx, Δy) in a Taylor series at the point (0, Δy0) and keeping the linear term gives
ΔS = [Δy0 − N/2 + √(R² − N²/4) − N · (N/2 − √(R² − (N/2 − Δy0)²)) / (2 · √(R² − N²/4))] · Δx
Define the ratio of the measured value to the estimate as the magnification β. With the target center fixed at Δy0 in the y direction while free to move by Δx in the x direction, the magnification between the measured upper-left/upper-right area difference ΔS and the x-direction displacement Δx is
β = Δy0 − N/2 + √(R² − N²/4) − N · (N/2 − √(R² − (N/2 − Δy0)²)) / (2 · √(R² − N²/4))
Because the initial offset satisfies Δy0 ≪ N, the magnification approximately satisfies the relation
β ≈ (N² − 2R²) / √(4R² − N²)
When the circular target radius R satisfies N/2 < R < √2·N/2, β ranges over (0, +∞) and decreases as R increases. Step 1) of the present invention extracts the suitable rectangular area and shrinks the horizontal width W and vertical height H to 0.8 times their original values precisely so that R equals the mean of N/2 and √2·N/2, giving a magnification β = 0.4N. With an image sensor resolution of 2048 × 2048 the magnification is β ≈ 822; its physical meaning is that an error of 1 pixel made while computing the area feature vector L of the image in an experiment produces an error of only about 0.0012 pixel in the center-positioning result of the circular target.
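A quick numerical check of the approximate magnification formula, with R taken as the mean of the criterion bounds N/2 and √2·N/2 as the text prescribes (the function name is illustrative):

```python
import math

def beta(N, R):
    """Approximate magnification beta ~ (N^2 - 2R^2)/sqrt(4R^2 - N^2),
    valid for a small initial offset dy0 << N."""
    return (N * N - 2 * R * R) / math.sqrt(4 * R * R - N * N)

N = 2048.0
R = (N / 2 + math.sqrt(2) * N / 2) / 2   # mean of the criterion bounds
b = beta(N, R)                           # close to 0.4 * N, about 822
```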
Therefore, when the circular target undergoes a small displacement Δx, the measured difference ΔS between the upper-left and upper-right background areas changes by β times the offset Δx. The estimated offset Δx is thus magnified many times in the measurement, so any error incurred in the indirect measurement of ΔS is divided by the same factor when propagated back to Δx; when the circular target size is suitable, the center found with the present invention therefore achieves high precision.
2. When lens distortion is ignored, the offset Δ of the circular target relative to the input image center and the measured value ΔS satisfy an analytic functional relationship, so in theory the offset Δ can be recovered exactly once ΔS has been measured. But a lens with a fairly large field of view exhibits large distortion. If the distortion is not corrected, the influence of distortion on the algorithm of the present invention must be analyzed; the analysis is as follows:
Suppose the center of the circular target and the center of the input image have an offset Δ = (Δx, Δy). The edge of the circular target then intersects the boundary of the suitable rectangular area at 8 points. Take the 4 points A, B, C, D of the upper half, where A and B belong to the upper-left background region and C and D to the upper-right background region. The field percentages of A, B, C, D can be computed as θ1, θ2, θ3, θ4. Denote the average field percentage of the upper-left edge segment of the circular target as θL = (θ1 + θ2)/2 and that of the upper-right edge segment as θR = (θ3 + θ4)/2. The Taylor expansion at the point (0, Δy0) of the average field-percentage difference Δθ between the upper-left edge and the upper-right edge is then
Δθ = (√2 / (2N)) · [ 2√(R² − (N/2 − Δy0)²) / √(N²/4 − (N/2 − Δy0)² + R²) + N · (Δy0 + √(R² − N²/4)) / (√((Δy0 + √(R² − N²/4))² + N²/4) · √(R² − N²/4)) ] · Δx
Because the initial offset satisfies Δy0 ≪ N, this Taylor expansion simplifies to:
Δθ = √2 · (N + √(4R² − N²)) / (2NR) · Δx.
If R equals the mean of N/2 and √2·N/2, then Δθ = 1.96·Δx/N; assuming a sensor resolution of 2048 × 2048, Δθ = 0.096%·Δx, so the offset of the circular target relative to the input image center has a very small influence on the average field-percentage difference at the image corners. By this method one can therefore compute, in theory, the average field-percentage difference between the upper-left edge and the upper-right edge of the circular target whenever the target center and image center are offset. Because of this difference, after imaging through the large-field-of-view optical system the upper-left and upper-right edges are distorted by different amounts, which finally causes the areas of the upper-left and upper-right background regions to deviate from their theoretical values by an error ΔD. This ΔD is exactly the error between the computed area feature vector L of the present invention and its theoretical value, and its size is β times the resulting center-positioning error. The error ΔD is analyzed as follows:
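The stated 1.96·Δx/N coefficient follows from the simplified Δθ expression above; a numerical check (the helper name is illustrative):

```python
import math

def dtheta_coeff(N, R):
    """Coefficient of Delta_x in the simplified field-percentage difference:
    sqrt(2) * (N + sqrt(4R^2 - N^2)) / (2*N*R), valid for dy0 << N."""
    return math.sqrt(2) * (N + math.sqrt(4 * R * R - N * N)) / (2 * N * R)

N = 2048.0
R = (N / 2 + math.sqrt(2) * N / 2) / 2   # mean of the criterion bounds
c = dtheta_coeff(N, R)                   # about 1.96 / N, i.e. ~0.096% per pixel
```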
Ignoring distortion, when the circular target center and the input image center have an offset Δ = (Δx, Δy), the area difference ΔS between the upper-left and upper-right background regions satisfies the functional relationship:
ΔS = f(Δx, Δy)
Considering distortion, suppose the upper-left edge of the circular target in the image moves on average by Δr1 toward the upper-left corner because of distortion, and the upper-right edge moves on average by Δr2 toward the upper-right corner; these two quantities can be computed from Δθ and the distortion curve of the optical system. The upper-left corner background area S′1 and the upper-right corner background area S′3 are then computed in the same way,
S′1 = (N/2 − y′1) · (N/2 + x′1) / 2
S′3 = (N/2 − x′2) · (N/2 − y′2) / 2
Wherein
x′1 = Δx − √((R + Δr1)² − (N/2 − Δy)²)
y′1 = Δy + √((R + Δr1)² − (N/2 + Δx)²)
x′2 = Δx + √((R + Δr2)² − (N/2 − Δy)²)
y′2 = Δy + √((R + Δr2)² − (N/2 − Δx)²)
Likewise, when distortion is considered, the difference Δ′S between the areas of the upper-left and upper-right background regions satisfies the functional relationship
Δ′S = S′1 − S′3 = g(Δx, Δy)
Hence the error ΔD between the experimental area feature vector L′ and the theoretical area feature vector L, obtained with the method of the present invention, can be written as
ΔD = Δ′S − ΔS = g(Δx, Δy) − f(Δx, Δy)
In summary, given the distortion curve of an optical system, one can compute, for a circular target observed with a given offset between target center and image center, the theoretical and measured values of the area feature vector L both with and without distortion. Since the purpose of the invention in practical applications is to avoid the distortion-correction step, the measured feature vector is used directly in place of the theoretical one when the optimization is carried out. The theory proposed by the present invention allows the error between the theoretical and measured feature vectors, and hence the error of the method, to be computed for the actual application conditions, which makes the method sufficiently feasible in engineering applications.
Accompanying drawing explanation
Fig. 1 is a flow chart of the method for locating the center of a circular target under a large field of view according to the present invention.
Fig. 2 is an input image of the present invention.
Fig. 3 is the optimal binary image of the input image of the present invention.
Fig. 4 is the suitable rectangular area containing the circular target of the present invention.
Fig. 5 shows the gray-level curves of certain rows of image data of the present invention.
Fig. 6 shows the gradient curves of the gray-level curves in Fig. 5.
Fig. 7 shows the optimal segmentation result of the gradient curves in Fig. 6.
Fig. 8 shows the result of the row-by-row separation of target and background regions of the present invention.
Embodiment
To make the implementation of the present invention easier to understand, the method for locating the center of a circular target under a large field of view is described in detail below with reference to the accompanying drawings. The flow chart of the concrete steps is shown in Fig. 1; applying the method of the present invention, the concrete steps for finding the center of a circular target under a large field of view are as follows:
Step 1: Photograph the circular target with the large-field-of-view optical system to obtain a target image with large distortion; this is the input image, a simulated example of which is shown in Fig. 2.
Step 2: Binarize the input image (Fig. 2) with Otsu's method to obtain the result shown in Fig. 3, extract the minimal rectangular area containing the circular target from the largest connected component, and compute its horizontal width W and vertical height H.
Step 3: Scan, one by one, all outer edge points of the target region in Fig. 3, i.e. the boundary line between background and target, rejecting candidate points that may lie on horizontal or vertical edges (such edges appear when the circular target exceeds the field of view of the optical system); fit the circle radius R to the remaining edge candidate points by least squares.
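Step 3's least-squares radius estimate can be sketched with the algebraic (Kasa) circle fit, one common least-squares formulation; the patent does not specify which formulation it uses, so this particular choice is an assumption:

```python
import math
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 + a*x + b*y + c = 0 for a, b, c, then recover center and radius."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs ** 2 + ys ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2, -b / 2
    r = math.sqrt(cx * cx + cy * cy - c)
    return cx, cy, r
```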
Step 4: According to the criterion √2·R ≤ N ≤ 2·R, the side length of the suitable rectangular area is set near the middle of the criterion interval, about 1.6·R. If no horizontal or vertical edge points were found in Step 3, erode the binary image obtained in Step 2 so that W and H both shrink to 0.8 times their original values; if Step 3 did find horizontal or vertical edge points, shrink W and H to 1.6 times R. AND the eroded binary template with Fig. 2 to obtain the suitable rectangular area containing the circular target, record the top-left coordinates (x0, y0) of the suitable rectangular area in Fig. 2, note the initial offset Δ0 = (x0, y0), crop out the suitable rectangular area to obtain Fig. 4, and update the horizontal width W and vertical height H of the suitable rectangular area.
Step 5: Scan Fig. 4 row by row; two types of gray-level curve are obtained, as shown in Fig. 5. The abscissa is the column index of the image and the ordinate is the gray value of the row. The curve marked with circles indicates a row containing both low-gray background and high-gray target, such as the row marked by the solid black line in Fig. 4; the curve marked with dots indicates a row that is entirely high-gray target, such as the row marked by the black dashed line in Fig. 4.
Step 6: Compute the gradient of every gray-level curve obtained by the row-by-row scan in Step 5; two types of gradient curve result, as shown in Fig. 6. The abscissa is the column index of the image and the ordinate is the gradient of the row's gray levels; the circle-marked and dot-marked curves correspond to those in Fig. 5. As can be seen, the circle-marked curve is bimodal: the valley regions at both sides and in the middle correspond respectively to the flat background and the flat interior of the circular target, while the peak regions correspond to the transition zones at the target edge. The dot-marked curve is flat throughout, indicating that the row lies entirely inside the circular target.
Step 7: The gradient values in Fig. 6 fall clearly into two classes: the first class, labeled 1, represents the edge transition zones of the circular target, and the second class, labeled 0, represents the flat regions of the image. The classification result obtained with Otsu's method is shown in Fig. 7. Compute the center points of the first-class runs as the split points between the circular target and the background, denoted x1 and x2. Processing row by row yields the split points of every row of image data; finally the background is labeled 1 and the target 0, with the result shown in Fig. 8.
Step 8: Count the pixel areas of the background regions at the four corners in Fig. 8, denoting them S1, S2, S3, S4 for the pixel areas of region upper-left, region lower-left, region upper-right, and region lower-right respectively, and build the area feature vector L:
L = [L1, L2]
Wherein:
L1 = [P12 − P34, S13, S24]
L2 = [P13 − P24, S12, S34]
P12 = |S1 + S2|
P34 = |S3 + S4|
S13 = |S1 − S3|
S24 = |S2 − S4|
P13 = |S1 + S3|
P24 = |S2 + S4|
S12 = |S1 − S2|
S34 = |S3 − S4|
Find the smallest of S1, S2, S3, S4; suppose it is S1. Fix the two square boundaries adjacent to S1, move the other two boundaries toward S1, and let the displacement be (x′, y′). Compute the area feature vector L at this displacement and build the following constraint:
(x′, y′) = arg min { ‖L1‖² + ‖L2‖² }
A nonlinear optimization algorithm is used to find the optimal solution (x′, y′); the final center coordinate Δ of the circular target is then computed by the following formula:
Δ = Δ0 + 0.5 · (W − x′, H − y′)
where W and H are respectively the horizontal width and the vertical height of the suitable rectangular area.

Claims (7)

1. A method for locating the center of a circular target under a large field of view, characterized in that it comprises the following steps:
1) extracting from the input image a suitable rectangular area containing the circular target, computing the horizontal width W and the vertical height H of the suitable rectangular area, and letting the initial offset between the center of the circular target and the center of the input image be Δ0 = (x0, y0);
2) extracting the background regions at the four corners of the suitable rectangular area, computing their corresponding pixel areas, and denoting them S_upper-left, S_lower-left, S_upper-right, S_lower-right respectively;
3) denoting the pixel areas S_upper-left, S_lower-left, S_upper-right, S_lower-right as S1, S2, S3, S4 respectively, and building the area feature vector L:
L = [L1, L2]
Wherein:
L1 = [P12 − P34, S13, S24]
L2 = [P13 − P24, S12, S34]
P12 = |S1 + S2|
P34 = |S3 + S4|
S13 = |S1 − S3|
S24 = |S2 − S4|
P13 = |S1 + S3|
P24 = |S2 + S4|
S12 = |S1 − S2|
S34 = |S3 − S4|
choosing the background region with the smallest area among the four, fixing the two square boundaries of that smallest background region, moving the other two boundaries of the suitable rectangular area toward the smallest background region, letting the displacement be (x′, y′), and using the area feature vector L to build the following constraint:
(x′, y′) = arg min { ‖L1‖² + ‖L2‖² }
solving the above formula for the optimal solution (x′, y′) with a nonlinear optimization algorithm, the final positioning result for the center of the circular target being:
Δ = Δ0 + 0.5 · (W − x′, H − y′)
where W and H are respectively the horizontal width and the vertical height of the suitable rectangular area.
2. the method for circular target centralized positioning under Large visual angle as claimed in claim 1, it is characterized in that, the condition that described suitable rectangular area should meet is: suitably the length of side upper limit of rectangular area is the circumscribed square length of side of circular target, and its lower limit is connect the square length of side in described circular target.
3. the method for circular target centralized positioning under Large visual angle as claimed in claim 2, is characterized in that, when described input picture meets the condition of described suitable rectangular area, and described initial offset Δ 0=(0,0).
4. The method for locating the center of a circular target under a large field of view according to claim 2, characterized in that, when the input image does not satisfy the condition of the suitable rectangular area, the input image is binarized to obtain a binary image, and the binary image is eroded according to the criterion √2·R ≤ N ≤ 2·R to obtain the suitable rectangular area, where R is the radius of the circular target and N is the side length of the suitable rectangular area.
5. The method for locating the center of a circular target under a large field of view according to claim 4, characterized in that the suitable rectangular area is scanned pixel by pixel, row by row, to obtain the gray-level curve of each row, and the circular target in the suitable rectangular area is separated from the background region by using the optimal split points of the gray-level curves.
6. The method for locating the center of a circular target under a large field of view according to claim 5, characterized in that the corresponding gradient curve is obtained from the gray-level curve, the gradient curve is divided into a large-gradient layer and a small-gradient layer by Otsu's method, and the center points of the regions corresponding to the large-gradient layer are computed as the optimal split points.
7. The method for locating the center of a circular target under a large field of view according to claim 6, wherein the corresponding pixel areas S_top-left, S_bottom-left, S_top-right, and S_bottom-right are obtained by counting the pixels of the background areas at the four corners of the suitable rectangular area.
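For illustration only (not part of the claims), a sketch of the corner-area statistic of claim 7 on a synthetic binary image; splitting the square into four equal quadrants is an assumption about how the corner background regions are delimited:

```python
import numpy as np

def corner_background_areas(binary):
    """Count background pixels (zeros) in the four corners of the square area.

    Returns the pixel areas S_top-left, S_bottom-left, S_top-right and
    S_bottom-right used to build the area-feature vector L.
    """
    h, w = binary.shape
    hy, hx = h // 2, w // 2
    quadrants = {
        "top-left": binary[:hy, :hx],
        "bottom-left": binary[hy:, :hx],
        "top-right": binary[:hy, hx:],
        "bottom-right": binary[hy:, hx:],
    }
    return {name: int((q == 0).sum()) for name, q in quadrants.items()}

# Centered circle of radius 20 in a 40x40 suitable rectangular area:
yy, xx = np.mgrid[0:40, 0:40]
binary = ((yy - 19.5) ** 2 + (xx - 19.5) ** 2 <= 20.0 ** 2).astype(np.uint8)
areas = corner_background_areas(binary)
print(areas)  # all four corner areas equal for a perfectly centered circle
```

When the circle is off-center, the four areas become unequal, which is the signal the nonlinear optimization in the abstract exploits.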
CN201210535881.8A 2012-12-10 2012-12-10 The method of circular target centralized positioning under a kind of Large visual angle Expired - Fee Related CN103035004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210535881.8A CN103035004B (en) 2012-12-10 2012-12-10 The method of circular target centralized positioning under a kind of Large visual angle


Publications (2)

Publication Number Publication Date
CN103035004A CN103035004A (en) 2013-04-10
CN103035004B true CN103035004B (en) 2015-08-12

Family

ID=48021871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210535881.8A Expired - Fee Related CN103035004B (en) 2012-12-10 2012-12-10 The method of circular target centralized positioning under a kind of Large visual angle

Country Status (1)

Country Link
CN (1) CN103035004B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301636A (en) * 2017-05-17 2017-10-27 华南理工大学 A kind of high density circuit board circular hole sub-pixel detection method based on Gauss curve fitting
CN108062771B (en) * 2018-01-09 2019-12-31 北京航空航天大学 Cross-correlation-based two-dimensional SPR absorption spectrum optimal excitation angle position identification method
CN110887474B (en) * 2019-11-19 2023-03-21 中国科学院国家天文台长春人造卫星观测站 Star map identification method for precision tracking telescope
CN111127546B (en) * 2019-11-25 2023-04-28 南京航空航天大学 Circular target center positioning method and system based on polar coordinate transformation
CN111161852B (en) * 2019-12-30 2023-08-15 北京双翼麒电子有限公司 Endoscope image processing method, electronic equipment and endoscope system
CN111735460B (en) * 2020-08-05 2020-11-27 北京控制与电子技术研究所 Spacecraft navigation method, system and device based on small celestial body center extraction
CN113362253B (en) * 2021-06-30 2023-10-13 成都纵横自动化技术股份有限公司 Image shading correction method, system and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101206116A (en) * 2007-12-07 2008-06-25 北京机械工业学院 Goal spot global automatic positioning method
CN101465002A (en) * 2009-01-05 2009-06-24 东南大学 Method for orientating secondary pixel edge of oval-shaped target

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP5153593B2 (en) * 2008-12-02 2013-02-27 株式会社Pfu Image processing apparatus and image processing method


Non-Patent Citations (1)

Title
Research on Optical Imaging Sensors in Deep Space Exploration; Chen Kuo; Proceedings of the 9th Academic Annual Conference of the Deep Space Exploration Technology Committee, Chinese Society of Astronautics; 2012-10-17; pp. 470-477 *

Also Published As

Publication number Publication date
CN103035004A (en) 2013-04-10

Similar Documents

Publication Publication Date Title
CN103035004B (en) The method of circular target centralized positioning under a kind of Large visual angle
US9582885B2 (en) Zonal underground structure detection method based on sun shadow compensation
CN103839265A (en) SAR image registration method based on SIFT and normalized mutual information
CN103324936B (en) A kind of vehicle lower boundary detection method based on Multi-sensor Fusion
CN101137003B (en) Gray associated analysis based sub-pixel fringe extracting method
CN105354815B (en) It is a kind of that localization method is accurately identified based on flat-type micro part
CN101826157B (en) Ground static target real-time identifying and tracking method
CN109523585B (en) Multisource remote sensing image feature matching method based on direction phase consistency
CN102800097A (en) Multi-feature multi-level visible light and infrared image high-precision registering method
CN105389774A (en) Method and device for aligning images
CN102034101A (en) Method for quickly positioning circular mark in PCB visual detection
Jeong et al. Improved multiple matching method for observing glacier motion with repeat image feature tracking
CN103136525A High-accuracy positioning method for hetero-type extended targets based on the generalized Hough transform
CN107967692A (en) A kind of target following optimization method based on tracking study detection
CN102819839A (en) High-precision registration method for multi-characteristic and multilevel infrared and hyperspectral images
CN102800099A (en) Multi-feature multi-level visible light and high-spectrum image high-precision registering method
CN111652896A (en) Inertial navigation auxiliary meteorite crater coarse-to-fine detection method
CN108562900B (en) SAR image geometric registration method based on elevation correction
CN103854290A (en) Extended target tracking method based on combination of skeleton characteristic points and distribution field descriptors
Sun et al. An improved FAST feature extraction based on RANSAC method of vision/SINS integrated navigation system in GNSS-denied environments
He et al. Centroid extraction algorithm based on grey-gradient for autonomous star sensor
CN103247032A (en) Method for positioning slight expanded target based on gesture compensation
CN110390338B (en) SAR high-precision matching method based on nonlinear guided filtering and ratio gradient
CN113793309B (en) Subpixel level ellipse detection method based on morphological characteristics
Ren et al. Automated SAR reference image preparation for navigation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150812

Termination date: 20151210

EXPY Termination of patent right or utility model