CN103049921B — Method for determining image centroid of small irregular celestial body for deep space autonomous navigation


Info

Publication number: CN103049921B (application CN201210519685.1A; published as CN103049921A)
Authority: CN (China)
Inventors: 毛晓艳, 黄翔宇, 王大轶
Assignee (original and current): Beijing Institute of Control Engineering
Filing date: 2012-11-30
Grant publication date: 2015-07-08
Legal status: Active (granted)

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for determining the image centroid of a small irregular celestial body for deep space autonomous navigation. A centroid coordinate system is first fixed on the three-dimensional shape model of the small body generated from ground data; from imagery of the body at different orientations, the seven Hu normalized invariant central moments are computed for each orientation, together with a centroid correction factor mapping the form center to the centroid in that orientation, and these are stored in a model bank. At run time, the contour of the small body is extracted from the image captured at the current moment and converted to a binary contour image. The first-order moment of the contour image gives the form center, which is taken as the center for computing the Hu normalized class-one and class-two second-order invariant central moments. These second-order moments are matched against the model bank first; when the match fails, the class-three to class-seven third-order central moments are also computed and a complementary match is performed. When the similarity to a stored entry satisfies the set threshold condition, the current position is considered matched and the form center is corrected with the stored centroid factor. The method has the advantage of guaranteeing consistency of the centroid output across different orientations.

Description

Method for determining the image centroid of a small irregular celestial body for deep space autonomous navigation
Technical field
The present invention relates to a method for determining the image centroid of a small irregular celestial body for deep space autonomous navigation, and belongs to the field of image processing in space technology.
Background technology
Small celestial bodies are the leftover material from the formation of the solar system and an important component of it. They preserve the primordial state of the early solar system and therefore carry important information about its formation and evolution. Studying small bodies has great scientific value for revealing the origin and evolution of the solar system and for exploring the origin of life, and it has practical significance for exploring and exploiting the solar system and for responding to possible Earth-impact events.
A small body, however, is not necessarily close to a spheroid; it may be an irregular solid. During a flyby or rendezvous, the spacecraft images the body with an optical sensor and must supply a fixed center point of the body as the observation used for real-time update of the pointing information. Because of the body's spin and the relative motion, images taken at different times correspond to different orientations of the body: not only does the outline differ, but the surface texture is also inconsistent, and the feature points, the gray-scale centroid, and the form center all change. How to extract an invariant center point from the image is therefore a major difficulty in small-body exploration.
Many current Chinese studies use small-body pointing information for navigation, but most simply assume the pointing information is available, without stating how it is obtained. In the foreign literature surveyed, the center of the small body is defined as the center of the largest bright blob in the image and then compensated, but the choice and computation of the compensation coefficient are not described in detail.
Summary of the invention
The technical problem solved by the invention is: for a small body with an irregular outline, extract the contour of the body in the current orientation at the imaging moment, compute multi-order central moments about its form center, and exploit the translation, scale, and rotation invariance of the invariant central moments to match against the moment features of the body at different orientations stored in a model bank. This yields the current orientation of the body, so that its centroid coordinates can be corrected with the stored coefficient, guaranteeing consistency of the centroid across orientations.
The technical solution of the invention, a method for determining the image centroid of a small irregular celestial body for deep space autonomous navigation, comprises the following steps:
Step 1: determine the centroid coordinate system of the small body from the three-dimensional shape model generated from ground data; from imagery at different orientations, build the seven Hu normalized invariant central moments and the scale factor for each orientation, and record under that orientation the centroid correction factor corresponding to the current form center;
Step 2: extract the contour of the small body from the image captured at the current time and convert the image to a binary image containing only the contour information. Compute the first-order moment of the contour image, i.e. the form center, and, taking it as the center, compute the Hu normalized class-one and class-two second-order invariant central moments. To limit computation, only the second-order central moments are computed initially and the matching of Step 3 is attempted; if the matching threshold is not satisfied, the match is considered failed, the class-three to class-seven third-order central moments are computed, and a complementary match is performed;
Step 3: match the computed central moments against those stored in the model bank. When the similarity satisfies the set threshold, the current position is considered matched; the current form center is then corrected with the centroid factor and the pointing information of the centroid is output.
The model bank of Step 1 is computed as follows:
(11) Project the small-body model into images at the equivalent orbit altitude to generate sample data.
The three-dimensional shape model of the small body is known. Projection imaging of the body is performed at the nominal orbit altitude, directly producing contour images as sample images, and the centroid (x_{o1}, y_{o1}) is projected into the image plane at the same time. The number of sample images is determined by the pointing accuracy requirement; for example, for an accuracy requirement of 1 degree, the angular interval between sample images should be less than 0.5 degree.
(12) Compute the seven Hu normalized invariant central moments and the scale factor.
Let the input binary contour image be $f(x,y)$, $x = 1, 2, \dots, M$, $y = 1, 2, \dots, N$, with image size $M \times N$. First compute its first-order moments, i.e. the form center $(x_{c1}, y_{c1})$:

$$x_{c1} = \bar{x} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} m\, f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)}, \qquad y_{c1} = \bar{y} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} n\, f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)}.$$
Then compute the normalized second-order central moments:

$$\eta_{20} = \frac{u_{20}}{u_{00}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})^2 f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)}, \qquad \eta_{02} = \frac{u_{02}}{u_{00}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (n-\bar{y})^2 f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)},$$

$$\eta_{11} = \frac{u_{11}}{u_{00}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})(n-\bar{y}) f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)},$$

where $u_{pq} = \sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})^p (n-\bar{y})^q f(m,n)$.
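For illustration, a minimal Python sketch of these definitions (NumPy assumed; the function names are illustrative, not prescribed by the method), computing $u_{pq}$ and the normalized second-order moments of a binary contour image:

```python
import numpy as np

def central_moment(f, p, q):
    """u_pq = sum_m sum_n (m - xbar)^p (n - ybar)^q f(m, n)."""
    M, N = f.shape
    m = np.arange(1, M + 1, dtype=float)[:, None]   # row index m = 1..M
    n = np.arange(1, N + 1, dtype=float)[None, :]   # column index n = 1..N
    s = f.sum()
    xbar = (m * f).sum() / s    # first-order moments: the form center
    ybar = (n * f).sum() / s
    return ((m - xbar) ** p * (n - ybar) ** q * f).sum()

def normalized_second_order(f):
    """eta_20, eta_02, eta_11 with eta_pq = u_pq / u_00 for p + q = 2."""
    u00 = float(f.sum())
    return (central_moment(f, 2, 0) / u00,
            central_moment(f, 0, 2) / u00,
            central_moment(f, 1, 1) / u00)
```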
Then compute the independent class-one second-order invariant moment $\sigma_1$ and class-two second-order invariant moment $\sigma_2$:

$$\sigma_1 = \eta_{20} + \eta_{02}, \qquad \sigma_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,$$

and then the scale-invariant factor:

$$\kappa_i = \frac{\sqrt{\sigma_2}}{\sigma_1}.$$
Finally compute the third-order moments:

$$\eta_{30} = \frac{u_{30}}{u_{00}^{3/2}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})^3 f(m,n)}{\left(\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)\right)^{3/2}}, \qquad \eta_{03} = \frac{u_{03}}{u_{00}^{3/2}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (n-\bar{y})^3 f(m,n)}{\left(\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)\right)^{3/2}},$$

$$\eta_{12} = \frac{u_{12}}{u_{00}^{3/2}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})(n-\bar{y})^2 f(m,n)}{\left(\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)\right)^{3/2}}, \qquad \eta_{21} = \frac{u_{21}}{u_{00}^{3/2}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})^2(n-\bar{y}) f(m,n)}{\left(\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)\right)^{3/2}}.$$
From these, compute the independent normalized class-three, class-four, class-five, class-six, and class-seven third-order invariant moments and store them in the database:

$$\sigma_3 = (\eta_{30} - 3\eta_{12})^2 + (\eta_{03} - 3\eta_{21})^2$$
$$\sigma_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{03} + \eta_{21})^2$$
$$\sigma_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{03} + \eta_{21})^2\right] + (\eta_{03} - 3\eta_{21})(\eta_{03} + \eta_{21})\left[(\eta_{03} + \eta_{21})^2 - 3(\eta_{30} + \eta_{12})^2\right]$$
$$\sigma_6 = (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{03} + \eta_{21})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{03} + \eta_{21})$$
$$\sigma_7 = (\eta_{30} - 3\eta_{12})(\eta_{03} + \eta_{21})\left[(\eta_{03} + \eta_{21})^2 - 3(\eta_{30} + \eta_{12})^2\right] - (\eta_{03} - 3\eta_{21})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{03} + \eta_{21})^2\right]$$
Store the computed $\kappa_i$ and $\sigma_1$ through $\sigma_7$ at the corresponding position.
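The seven invariants and the scale-invariant factor follow directly from the formulas above; a sketch building on `central_moment` from the previous snippet (again illustrative; $\kappa$ uses the reconstructed form $\sqrt{\sigma_2}/\sigma_1$, which is consistent with the numerical example in the embodiment below):

```python
import math

def hu_invariants(f):
    """kappa and sigma_1..sigma_7 of a binary contour image f (a sketch,
    not the patent's reference implementation)."""
    u00 = float(f.sum())
    e20 = central_moment(f, 2, 0) / u00
    e02 = central_moment(f, 0, 2) / u00
    e11 = central_moment(f, 1, 1) / u00
    # third-order moments, normalized by u00^(3/2) as defined above
    e30 = central_moment(f, 3, 0) / u00 ** 1.5
    e03 = central_moment(f, 0, 3) / u00 ** 1.5
    e12 = central_moment(f, 1, 2) / u00 ** 1.5
    e21 = central_moment(f, 2, 1) / u00 ** 1.5
    s1 = e20 + e02
    s2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    s3 = (e30 - 3 * e12) ** 2 + (e03 - 3 * e21) ** 2
    s4 = (e30 + e12) ** 2 + (e03 + e21) ** 2
    s5 = ((e30 - 3 * e12) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e03 + e21) ** 2)
          + (e03 - 3 * e21) * (e03 + e21) * ((e03 + e21) ** 2 - 3 * (e30 + e12) ** 2))
    s6 = ((e20 - e02) * ((e30 + e12) ** 2 - (e03 + e21) ** 2)
          + 4 * e11 * (e30 + e12) * (e03 + e21))
    s7 = ((e30 - 3 * e12) * (e03 + e21) * ((e03 + e21) ** 2 - 3 * (e30 + e12) ** 2)
          - (e03 - 3 * e21) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e03 + e21) ** 2))
    kappa = math.sqrt(s2) / s1          # scale-invariant factor
    return kappa, (s1, s2, s3, s4, s5, s6, s7)
```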
(13) Record the correction factor from the form center to the centroid in the current orientation.
Compute the correction constants between the form center and the centroid in the image,

$$\Delta x = x_{c1} - x_{o1}, \qquad \Delta y = y_{c1} - y_{o1},$$

where $\Delta x$ and $\Delta y$ are the correction constants in the two directions, stored with each sample.
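Steps (11)–(13) can then be combined into a model-bank record; the layout below is an assumption of this sketch (the patent specifies only that $\kappa_i$, $\sigma_1$–$\sigma_7$, $\Delta x$, and $\Delta y$ are stored per sample):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ModelBankEntry:
    kappa: float        # scale-invariant factor kappa_i of the sample
    sigma: tuple        # (sigma_1, ..., sigma_7)
    dx: float           # correction constant Delta x = x_c1 - x_o1
    dy: float           # correction constant Delta y = y_c1 - y_o1

def build_entry(contour_image, centroid_xy):
    """One model-bank entry from a projected sample contour image and the
    projected centroid (x_o1, y_o1)."""
    f = contour_image.astype(float)
    M, N = f.shape
    m = np.arange(1, M + 1, dtype=float)[:, None]
    n = np.arange(1, N + 1, dtype=float)[None, :]
    u00 = f.sum()
    xc1, yc1 = (m * f).sum() / u00, (n * f).sum() / u00   # form center
    kappa, sigma = hu_invariants(f)
    xo1, yo1 = centroid_xy
    return ModelBankEntry(kappa, sigma, xc1 - xo1, yc1 - yo1)
```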
Step 2 is implemented as follows:
(21) Extract the contour edges of the captured image with a typical boundary-extraction algorithm.
(22) Compute the class-one and class-two second-order invariant moments of the contour image.
Let the input binary contour image be $f(x,y)$, $x = 1, \dots, M$, $y = 1, \dots, N$, of size $M \times N$. First compute its first-order moments, i.e. the form center $\bar{x}$, $\bar{y}$; then the normalized second-order central moments $\eta_{20}$, $\eta_{02}$, and $\eta_{11}$; and finally the independent class-one second-order invariant moment $\sigma_1$ and class-two second-order invariant moment $\sigma_2$, all exactly as defined in step (12).
Carry out the matching of Step 3; if no match is found, proceed to the next step.
(23) Compute the other higher-order invariant moments.
If the match cannot be confirmed with $\sigma_1$ and $\sigma_2$ alone, continue with the third-order moments $\eta_{30}$, $\eta_{03}$, $\eta_{12}$, and $\eta_{21}$ of the small-body contour image, and from them the independent normalized class-three, class-four, class-five, class-six, and class-seven third-order invariant moments $\sigma_3$ through $\sigma_7$, all as defined in step (12).
Step 3 is implemented as follows:
(31) Compare the class-one and class-two second-order invariant moments.
Compute the scale-invariant factor $\kappa = \sqrt{\sigma_2}/\sigma_1$ of the current image. Search the model bank for entries with a close scale-invariant factor, $|\kappa - \kappa_i| < \varepsilon_\kappa$, where $i$ is the position number in the model bank, running from 1 to $num$, and $\varepsilon_\kappa$ is the set similarity threshold. The $n_\kappa$ candidate positions that satisfy this are then matched more precisely. Compute the zoom factor between the model bank and the real image, $\lambda_i = \sigma_{1i}/\sigma_1$, where $\sigma_{1i}$ is the class-one second-order invariant moment of the $i$-th candidate, $i = 1, \dots, n_\kappa$. Then compute $\delta_i = \left|\lambda_i^2 \sigma_2 - \sigma_{2i}\right|$, where $\sigma_{2i}$ is the class-two second-order invariant moment of the $i$-th candidate. If the ratio of the sub-minimum $\delta_j$ to the minimum exceeds the threshold $\varepsilon$, the shape corresponding to the minimum is taken as the matched shape and the match is considered correct.
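A sketch of this two-stage match (the residual $\delta_i = |\lambda_i^2\sigma_2 - \sigma_{2i}|$ is a reconstruction from the scaling behavior of the moments, $\sigma_1 \propto s^2$ and $\sigma_2 \propto s^4$ for image scale $s$; the threshold values are illustrative only):

```python
def match_profile(kappa, sigma, bank, eps_kappa=0.05, eps=2.0):
    """Two-stage match of an observed profile against the model bank.
    kappa, sigma: invariants of the observed contour (from hu_invariants).
    bank: list of ModelBankEntry. Returns (entry, lambda_i) or None."""
    s1, s2 = sigma[0], sigma[1]
    # stage 1: pre-screen by the scale-invariant factor kappa
    candidates = [e for e in bank if abs(kappa - e.kappa) < eps_kappa]
    if not candidates:
        return None
    # stage 2: zoom factor lambda_i = sigma_1i / sigma_1, then the
    # residual delta_i = |lambda_i^2 * sigma_2 - sigma_2i| (reconstructed form)
    scored = []
    for e in candidates:
        lam = e.sigma[0] / s1
        delta = abs(lam ** 2 * s2 - e.sigma[1])
        scored.append((delta, lam, e))
    scored.sort(key=lambda t: t[0])
    if len(scored) == 1 or scored[1][0] > eps * scored[0][0]:
        # the minimum is well separated from the sub-minimum: accept it
        _, lam, best = scored[0]
        return best, lam
    return None  # ambiguous: fall back to the third-order moments
```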
(32) Compare the other invariant moments.
If matching cannot be completed with $\sigma_1$ and $\sigma_2$, use the computed $\eta_{30}$, $\eta_{03}$, $\eta_{12}$, and $\eta_{21}$ to evaluate $\sigma_3$ through $\sigma_7$ with the formulas above and redo the match for confirmation: take the minimum and again judge whether the multiple between the minimum and the sub-minimum satisfies $\varepsilon$. If the match still cannot be confirmed, output $\bar{x}$ and $\bar{y}$ directly, flagged as the form center.
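The higher-order fallback can be sketched the same way; the per-invariant scale exponents below are derived from the normalizations above ($\sigma_3, \sigma_4 \propto \lambda_i^2$, $\sigma_6 \propto \lambda_i^3$, $\sigma_5, \sigma_7 \propto \lambda_i^4$) and are an assumption of this sketch, not a formula quoted from the patent:

```python
def match_profile_highorder(kappa, sigma, bank, eps_kappa=0.05, eps=2.0):
    """Fallback match using sigma_3..sigma_7 when the second-order match
    is ambiguous. Scale exponents per invariant are derived, not quoted."""
    powers = (1, 2, 2, 2, 4, 3, 4)   # lambda exponent for sigma_1..sigma_7
    candidates = [e for e in bank if abs(kappa - e.kappa) < eps_kappa]
    scored = []
    for e in candidates:
        lam = e.sigma[0] / sigma[0]
        # relative residual summed over all seven invariants
        delta = sum(abs(lam ** p * s - si) / (abs(si) + 1e-12)
                    for p, s, si in zip(powers, sigma, e.sigma))
        scored.append((delta, lam, e))
    scored.sort(key=lambda t: t[0])
    if len(scored) >= 2 and scored[1][0] <= eps * scored[0][0]:
        return None   # still ambiguous: output the form center uncorrected
    return (scored[0][2], scored[0][1]) if scored else None
```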
(33) Correct the form center.
From the matching result, take the correction factor stored in the model bank and compute

$$x_o = x_c + \lambda \Delta x, \qquad y_o = y_c + \lambda \Delta y,$$

where $x_c$ and $y_c$ are the form center of the small body, $\Delta x$ and $\Delta y$ are the correction constants in the two directions, and $\lambda = 1/\lambda_i$ is the zoom factor.
The centroid pointing is then computed from $(x_o\,dx,\ y_o\,dy,\ f)$, where $dx$ and $dy$ are the pixel sizes in the horizontal and vertical directions and $f$ is the camera focal length. After normalization this is

$$\left(\frac{x_o\,dx}{\sqrt{(x_o\,dx)^2 + (y_o\,dy)^2 + f^2}},\ \frac{y_o\,dy}{\sqrt{(x_o\,dx)^2 + (y_o\,dy)^2 + f^2}},\ \frac{f}{\sqrt{(x_o\,dx)^2 + (y_o\,dy)^2 + f^2}}\right).$$
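The correction and pointing computation as a short sketch under the same assumptions (`dx`, `dy`, and `f` are the camera parameters defined above):

```python
import numpy as np

def centroid_pointing(xc, yc, entry, lam_i, dx, dy, f):
    """Correct the form center with the stored factors and return the
    normalized centroid pointing vector (unit vector, camera frame)."""
    lam = 1.0 / lam_i                      # zoom factor lambda = 1 / lambda_i
    xo = xc + lam * entry.dx               # corrected centroid, image coords
    yo = yc + lam * entry.dy
    v = np.array([xo * dx, yo * dy, f])    # pixel sizes dx, dy; focal length f
    return v / np.linalg.norm(v)           # normalize to a unit vector
```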
The advantages of the present invention are:
(1) The invention represents the contour shape of the small body with the Hu normalized central moments and matches it against the model-bank data, solving the problem that, for an irregular small-body outline, the form center does not coincide with the centroid and cannot be extracted accurately.
(2) Using the seven invariant moments as the stored content of the model bank greatly reduces the data volume of the model bank and improves matching efficiency.
Brief description of the drawings
Fig. 1 is the flow chart of the centroid extraction method of the invention;
Fig. 2 is a small-body image generated by mathematical simulation in the invention;
Fig. 3 is the contour-extraction result for the small body in the invention.
Embodiments
As shown in Fig. 1, the invention is implemented as follows:
Step 1: contour extraction of the small body.
Adopt the Sobel operator templates:

$$h_1 = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix}, \qquad h_2 = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}.$$

Applying Sobel edge extraction to the original image matrix $F$ gives

$$E_1 = F \otimes h_1, \qquad E_2 = F \otimes h_2,$$
$$E = \sqrt{E_1 \cdot E_1 + E_2 \cdot E_2},$$

where $\otimes$ denotes convolution and the products in $E$ are taken element-wise. Thresholding the matrix $E$ yields the binary contour map. Fig. 2 shows an original image and Fig. 3 the corresponding contour image, with the extracted contour pixels shown in white.
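A minimal sketch of this step, with SciPy's `convolve2d` standing in for the convolution (an implementation assumption; the threshold is user-chosen):

```python
import numpy as np
from scipy.signal import convolve2d

def sobel_contour(F, threshold):
    """Binary contour map of image F via the Sobel templates h1, h2 above."""
    h1 = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)
    h2 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    E1 = convolve2d(F, h1, mode="same")
    E2 = convolve2d(F, h2, mode="same")
    E = np.sqrt(E1 * E1 + E2 * E2)      # element-wise gradient magnitude
    return (E > threshold).astype(np.uint8)
```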
Step 2: computation of the normalized central moments of the small body.
Let the input binary contour image be $f(x,y)$, $x = 1, \dots, M$, $y = 1, \dots, N$, of size $M \times N$. Compute the form center $\bar{x}$, $\bar{y}$, the normalized second-order central moments $\eta_{20}$, $\eta_{02}$, and $\eta_{11}$, and finally the independent class-one and class-two second-order invariant moments $\sigma_1$ and $\sigma_2$, exactly as in step (12) above. The values of $\sigma_1$ and $\sigma_2$ for the contour in Fig. 3 are computed accordingly.
Step 3: matching against the normalized central moments of the model bank.
Compute the scale-invariant factor $\kappa = \sqrt{\sigma_2}/\sigma_1$; for the contour in Fig. 3, $\kappa = 0.2661$. Search the model bank for close scale-invariant factors, $|\kappa - \kappa_i| < \varepsilon_\kappa$, with $i$ from 1 to $num$ and $\varepsilon_\kappa$ the set similarity threshold, and match the resulting $n_\kappa$ candidate positions precisely as in step (31): compute the zoom factor $\lambda_i = \sigma_{1i}/\sigma_1$ and the residual $\delta_i = |\lambda_i^2\sigma_2 - \sigma_{2i}|$ for each candidate; if the sub-minimum exceeds the minimum by more than the factor $\varepsilon$, the shape corresponding to the minimum is the matched shape. Take the centroid correction factor stored at that position together with its zoom factor $\lambda$, and correct the center.
For the contour image of Fig. 3, the closest model-bank entries by $\kappa$ are those at $\kappa = 0.27$ and $\kappa = 0.30$. The image at $\kappa = 0.27$ has $\sigma_{11} = 2653.9329$ and $\sigma_{21} = 519088.4199$; the image at $\kappa = 0.30$ has $\sigma_{12} = 8415.2522$ and $\sigma_{22} = 6697671.0698$. Computing $\lambda_1 = 0.1073$ and $\lambda_2 = 0.3401$, the position at $\kappa = 0.27$ is taken as the match for Fig. 3.
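The worked numbers above can be replayed through the `match_profile` sketch; here $\sigma_1 \approx 24734$ for the Fig. 3 contour is reconstructed from $\lambda_1 = \sigma_{11}/\sigma_1 = 0.1073$, $\sigma_2$ follows from $\kappa = 0.2661$, and the remaining fields are placeholders, so the whole snippet is illustrative rather than data quoted from the patent:

```python
# Hypothetical replay of the worked example (sigma_1 of the observed
# contour reconstructed from lambda_1 = sigma_11 / sigma_1 = 0.1073).
bank = [
    ModelBankEntry(kappa=0.27, sigma=(2653.9329, 519088.4199, 0, 0, 0, 0, 0),
                   dx=-0.719, dy=0.638),   # dx, dy back-computed, illustrative
    ModelBankEntry(kappa=0.30, sigma=(8415.2522, 6697671.0698, 0, 0, 0, 0, 0),
                   dx=0.0, dy=0.0),        # placeholder correction constants
]
sigma_obs = (24734.0, 4.33e7, 0, 0, 0, 0, 0)   # sigma_2 from kappa = 0.2661
best = match_profile(0.2661, sigma_obs, bank)
# expected: the kappa = 0.27 entry wins, with lambda_i close to 0.1073
```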
Step 4: correct the center with the centroid correction factor and output the pointing vector.
From the matching result, take the correction factor stored in the model bank and compute

$$x_o = x_c + \lambda \Delta x, \qquad y_o = y_c + \lambda \Delta y,$$

where $x_c$ and $y_c$ are the form center of the small body, $\Delta x$ and $\Delta y$ are the correction constants in the two directions, and $\lambda = 1/\lambda_i$ is the zoom factor. For Fig. 3 the result is

$$x_o = 251.79 + \Delta x / 0.1073 = 245.09, \qquad y_o = 247.5 + \Delta y / 0.1073 = 253.45.$$
The centroid pointing is then computed from $(x_o\,dx,\ y_o\,dy,\ f)$, where $dx$ and $dy$ are the pixel sizes in the two directions and $f$ is the camera focal length; after normalization,

$$\left(\frac{x_o\,dx}{\sqrt{(x_o\,dx)^2 + (y_o\,dy)^2 + f^2}},\ \frac{y_o\,dy}{\sqrt{(x_o\,dx)^2 + (y_o\,dy)^2 + f^2}},\ \frac{f}{\sqrt{(x_o\,dx)^2 + (y_o\,dy)^2 + f^2}}\right).$$
The resulting pointing for the small body of Fig. 3 is $(-0.0007439, -0.0001739, 0.99999971)$.
With the above algorithm, images of the small body taken at different orientations and distances yield a consistent centroid pointing.

Claims (3)

1. A method for determining the image centroid of a small irregular celestial body for deep space autonomous navigation, characterized in that its implementation steps are as follows:
Step 1: determine the centroid coordinate system of the small body from the three-dimensional shape model generated from ground data; from imagery at different orientations, build the seven Hu normalized invariant central moments and the scale factor for each orientation, and record under that orientation the centroid correction factor corresponding to the current form center;
Step 2: extract the contour of the small body from the image captured at the current time; convert the image to a binary contour image containing only the contour information; compute the first-order moment of the binary contour image, i.e. the form center, and with it as the center compute the Hu normalized class-one and class-two second-order invariant central moments; to limit computation, only the second-order central moments are computed initially and the subsequent matching is attempted; if the matching threshold is not satisfied, the match is considered failed, the class-three to class-seven third-order central moments are computed, and a complementary match is performed;
Step 3: match the computed central moments against those stored in the model bank; when the similarity satisfies the set threshold, the current position is considered matched, the current form center is corrected with the centroid factor, and the pointing information of the centroid is output;
The seven Hu normalized invariant central moments and the scale factor in Step 1 are computed as follows:
(21) Project the small-body model into images at the equivalent orbit altitude and design the sample data size;
the three-dimensional shape model of the small body is known; projection imaging of the body is performed at the nominal orbit altitude, directly producing contour images as sample images, and at the same time the centroid $(x_{o1}, y_{o1})$ is projected into the image plane; the number of sample images is determined by the pointing accuracy requirement;
(22) Compute the seven Hu normalized invariant central moments and the scale factor;
let the input binary contour image be $f(x,y)$, $x = 1, 2, \dots, M$, $y = 1, 2, \dots, N$, with image size $M \times N$; first compute its first-order moments, i.e. the form center $(x_{c1}, y_{c1})$:

$$x_{c1} = \bar{x} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} m\, f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)}, \qquad y_{c1} = \bar{y} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} n\, f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)};$$

then compute the normalized second-order central moments:

$$\eta_{20} = \frac{u_{20}}{u_{00}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})^2 f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)}, \qquad \eta_{02} = \frac{u_{02}}{u_{00}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (n-\bar{y})^2 f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)},$$

$$\eta_{11} = \frac{u_{11}}{u_{00}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})(n-\bar{y}) f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)},$$

where $u_{pq} = \sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})^p (n-\bar{y})^q f(m,n)$;
then compute the independent class-one second-order invariant moment $\sigma_1$ and class-two second-order invariant moment $\sigma_2$:

$$\sigma_1 = \eta_{20} + \eta_{02}, \qquad \sigma_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,$$

then compute the scale-invariant factor:

$$\kappa_i = \frac{\sqrt{\sigma_2}}{\sigma_1};$$
finally compute the third-order moments:

$$\eta_{30} = \frac{u_{30}}{u_{00}^{3/2}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})^3 f(m,n)}{\left(\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)\right)^{3/2}}, \qquad \eta_{03} = \frac{u_{03}}{u_{00}^{3/2}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (n-\bar{y})^3 f(m,n)}{\left(\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)\right)^{3/2}},$$

$$\eta_{12} = \frac{u_{12}}{u_{00}^{3/2}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})(n-\bar{y})^2 f(m,n)}{\left(\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)\right)^{3/2}}, \qquad \eta_{21} = \frac{u_{21}}{u_{00}^{3/2}} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (m-\bar{x})^2(n-\bar{y}) f(m,n)}{\left(\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)\right)^{3/2}};$$
from these, compute the independent normalized class-three, class-four, class-five, class-six, and class-seven third-order invariant moments and store them in the database:

$$\sigma_3 = (\eta_{30} - 3\eta_{12})^2 + (\eta_{03} - 3\eta_{21})^2$$
$$\sigma_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{03} + \eta_{21})^2$$
$$\sigma_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{03} + \eta_{21})^2\right] + (\eta_{03} - 3\eta_{21})(\eta_{03} + \eta_{21})\left[(\eta_{03} + \eta_{21})^2 - 3(\eta_{30} + \eta_{12})^2\right]$$
$$\sigma_6 = (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{03} + \eta_{21})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{03} + \eta_{21})$$
$$\sigma_7 = (\eta_{30} - 3\eta_{12})(\eta_{03} + \eta_{21})\left[(\eta_{03} + \eta_{21})^2 - 3(\eta_{30} + \eta_{12})^2\right] - (\eta_{03} - 3\eta_{21})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{03} + \eta_{21})^2\right]$$
store the computed $\kappa_i$ and $\sigma_1$ through $\sigma_7$ at the corresponding position;
(23) Record the correction factor from the form center to the centroid in the current orientation;
compute the correction constants between the form center and the centroid in the image, $\Delta x = x_{c1} - x_{o1}$, $\Delta y = y_{c1} - y_{o1}$, where $\Delta x$ and $\Delta y$ are the correction constants in the two directions, stored with each sample.
2. The method for determining the image centroid of a small irregular celestial body for deep space autonomous navigation according to claim 1, characterized in that Step 2 is implemented as follows:
(31) extract the contour edges of the captured image with a typical boundary-extraction algorithm;
(32) compute the class-one and class-two second-order invariant moments of the contour image:
let the input binary contour image be $f(x,y)$, $x = 1, 2, \dots, M$, $y = 1, 2, \dots, N$, with image size $M \times N$; first compute its first-order moments, i.e. the form center:

$$\bar{x} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} m\, f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)}, \qquad \bar{y} = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} n\, f(m,n)}{\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n)};$$

then compute the normalized second-order central moments $\eta_{20}$, $\eta_{02}$, and $\eta_{11}$, with $u_{pq}$ as defined in claim 1, and finally the independent class-one second-order invariant moment $\sigma_1$ and class-two second-order invariant moment $\sigma_2$;
carry out the matching of Step 3; if no match is found, proceed to step (33);
(33) compute the other higher-order invariant moments:
if the match cannot be confirmed with $\sigma_1$ and $\sigma_2$ alone, continue, for the small-body contour image, with its third-order moments $\eta_{30}$, $\eta_{03}$, $\eta_{12}$, and $\eta_{21}$ as defined in claim 1, and then compute the independent normalized class-three, class-four, class-five, class-six, and class-seven third-order invariant moments $\sigma_3$ through $\sigma_7$.
3. The method for determining the image centroid of a small irregular celestial body for deep space autonomous navigation according to claim 2, characterized in that Step 3 is implemented as follows:
(41) compare the class-one and class-two second-order invariant moments:
compute the scale-invariant factor $\kappa = \sqrt{\sigma_2}/\sigma_1$ of the current image; search the model bank for close scale-invariant factors, $|\kappa - \kappa_i| < \varepsilon_\kappa$, where $i$ is the position number stored in the model bank, running from 1 to $num$, and $\varepsilon_\kappa$ is the set similarity threshold; the $n_\kappa$ candidate positions obtained are matched further; compute the zoom factor between the model bank and the real image, $\lambda_i = \sigma_{1i}/\sigma_1$, where $\sigma_{1i}$ is the class-one second-order invariant moment of the $i$-th of the $n_\kappa$ candidate positions, $i = 1, \dots, n_\kappa$; compute $\delta_i = |\lambda_i^2 \sigma_2 - \sigma_{2i}|$, where $\sigma_{2i}$ is the class-two second-order invariant moment of the $i$-th candidate; if the ratio between the sub-minimum $\delta_j$ and the minimum is greater than the threshold $\varepsilon$, the shape corresponding to the minimum is the matched shape and the match is correct;
(42) compare the other invariant moments:
if matching cannot be completed with $\sigma_1$ and $\sigma_2$, use the computed $\eta_{30}$, $\eta_{03}$, $\eta_{12}$, and $\eta_{21}$ to evaluate $\sigma_3$ through $\sigma_7$ and redo the match for confirmation; take the minimum and judge whether the multiple between the minimum and the sub-minimum satisfies $\varepsilon$; if the match still cannot be confirmed, output $\bar{x}$ and $\bar{y}$ directly, flagged as the form center;
(43) correct the form center:
from the correct matching result, take the correction factor stored in the model bank and compute

$$x_o = x_c + \lambda \Delta x, \qquad y_o = y_c + \lambda \Delta y,$$

where $x_c$ and $y_c$ are the form center of the small body, $\Delta x$ and $\Delta y$ are the correction constants in the two directions, and $\lambda = 1/\lambda_i$ is the zoom factor;
the centroid pointing is then computed from $(x_o\,dx,\ y_o\,dy,\ f)$, where $dx$ and $dy$ are the pixel sizes in the horizontal and vertical directions and $f$ is the camera focal length; after normalization it is

$$\left(\frac{x_o\,dx}{\sqrt{(x_o\,dx)^2 + (y_o\,dy)^2 + f^2}},\ \frac{y_o\,dy}{\sqrt{(x_o\,dx)^2 + (y_o\,dy)^2 + f^2}},\ \frac{f}{\sqrt{(x_o\,dx)^2 + (y_o\,dy)^2 + f^2}}\right).$$
CN201210519685.1A 2012-11-30 2012-11-30 Method for determining image centroid of small irregular celestial body for deep space autonomous navigation Active CN103049921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210519685.1A CN103049921B (en) 2012-11-30 2012-11-30 Method for determining image centroid of small irregular celestial body for deep space autonomous navigation


Publications (2)

Publication Number — Publication Date
CN103049921A — 2013-04-17
CN103049921B — 2015-07-08

Family

ID=48062549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210519685.1A Active CN103049921B (en) 2012-11-30 2012-11-30 Method for determining image centroid of small irregular celestial body for deep space autonomous navigation

Country Status (1)

Country Link
CN (1) CN103049921B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106353032B * 2015-10-10 2019-03-29 北京控制与电子技术研究所 Rapid detection method for the centroid of a celestial body under poor illumination conditions
CN106530317B * 2016-09-23 2019-05-24 南京凡豆信息科技有限公司 Computer scoring and painting-assistance method for simple drawings
CN107220983B * 2017-04-13 2019-09-24 中国农业大学 Video-based live pig detection method and system
CN110648367A * 2019-08-15 2020-01-03 大连理工江苏研究院有限公司 Geometric object positioning method based on multi-layer depth and color visual information
CN111426333B * 2020-02-25 2022-03-04 上海航天控制技术研究所 Geometric-method-based accurate image-centroid correction method for a Mars navigation sensor
CN117115275B * 2023-10-25 2024-03-12 深圳明锐理想科技股份有限公司 Distortion parameter determination method and device, and computer equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101425174A (en) * 2007-10-31 2009-05-06 中国科学院沈阳自动化研究所 Rapid image geometric moment and central moment calculation structure
CN102081738A (en) * 2011-01-06 2011-06-01 西北工业大学 Method for positioning mass center of spatial object star image


Non-Patent Citations (2)

Title
孔祥元 et al., "地心和月心引力常数及月球形心与质心的确定" (Determination of the geocentric and selenocentric gravitational constants and of the lunar form center and mass center), 《大地测量与地球动力学》 (Journal of Geodesy and Geodynamics), vol. 26, no. 2, May 2006, section 4. *
丘江 et al., "基于高阶胡氏矩的多目标图象识别算法" (Multi-object image recognition algorithm based on high-order Hu moments), 《光子学报》 (Acta Photonica Sinica), vol. 30, no. 9, September 2001, pp. 1141–1144. *


Similar Documents

Publication Publication Date Title
CN103049921B (en) Method for determining image centroid of small irregular celestial body for deep space autonomous navigation
CN109410321B (en) Three-dimensional reconstruction method based on convolutional neural network
US9202144B2 (en) Regionlets with shift invariant neural patterns for object detection
CN110458939A (en) The indoor scene modeling method generated based on visual angle
CN112017220B (en) Point cloud accurate registration method based on robust constraint least square algorithm
Xiao et al. Uncalibrated perspective reconstruction of deformable structures
CN111862101A (en) 3D point cloud semantic segmentation method under aerial view coding visual angle
CN107122705A (en) Face critical point detection method based on three-dimensional face model
CN107680133A (en) A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm
CN107862744A (en) Aviation image three-dimensional modeling method and Related product
CN103822616A (en) Remote-sensing image matching method with combination of characteristic segmentation with topographic inequality constraint
Yin et al. Cae-lo: Lidar odometry leveraging fully unsupervised convolutional auto-encoder for interest point detection and feature description
CN106485676A (en) A kind of LiDAR point cloud data recovery method based on sparse coding
CN114119884A (en) Building LOD1 model construction method based on high-score seven-satellite image
CN111598995A (en) Self-supervision multi-view three-dimensional human body posture estimation method based on prototype analysis
CN111191704B (en) Foundation cloud classification method based on task graph convolutional network
CN104463962B (en) Three-dimensional scene reconstruction method based on GPS information video
CN104318552A (en) Convex hull projection graph matching based model registration method
CN113284249B (en) Multi-view three-dimensional human body reconstruction method and system based on graph neural network
CN112906573B (en) Planet surface navigation road sign matching method based on contour point set
CN117132737B (en) Three-dimensional building model construction method, system and equipment
Wang et al. LiDAR-SLAM loop closure detection based on multi-scale point cloud feature transformer
Jia et al. DispNet based stereo matching for planetary scene depth estimation using remote sensing images
Wang et al. A simple deep learning network for classification of 3D mobile LiDAR point clouds
EP4254354A1 (en) System and method using pyramidal and uniqueness matching priors for identifying correspondences between images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant