CN108981721B - Method for determining target angle of static infrared earth sensor for attitude determination of micro-nano satellite - Google Patents
- Publication number: CN108981721B
- Application number: CN201810820441.4A
- Authority: CN (China)
- Prior art keywords: micro, attitude determination, points, earth sensor, target angle
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01C21/24 — Navigation; navigational instruments not provided for in groups G01C1/00-G01C19/00, specially adapted for cosmonautical navigation
- G01C21/02 — Navigation; navigational instruments not provided for in groups G01C1/00-G01C19/00, by astronomical means
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- Astronomy & Astrophysics (AREA)
- Automation & Control Theory (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Navigation (AREA)
Abstract
The invention discloses a method for determining the target angle of a static infrared earth sensor used for micro-nano satellite attitude determination. The method is an improvement on the traditional approach: it avoids the intermediate step of computing the position of the circle center and directly computes the normal vector of the section circle. By combining the interval vector product with the mean over a t Location-Scale distribution confidence interval, it avoids the massive computation of the Hough transform in the traditional method, so the computational load is greatly reduced and the accuracy and efficiency of the static infrared earth sensor are further improved: attitude determination accuracy is improved by 50% and attitude determination speed by 87%, improving the satellite's attitude determination performance.
Description
Technical Field
The invention belongs to the technical field of satellite attitude determination, relates to a micro-nano satellite attitude determination method, and particularly relates to a method for determining a target angle of a static infrared earth sensor for micro-nano satellite attitude determination, wherein the method is used for improving the measurement precision and efficiency of the static infrared earth sensor.
Background
In recent years, micro-nano satellite technology has developed rapidly. Micro-nano satellites are characterized by light weight, small volume, and low power consumption, and have great potential in aerospace applications. At present, the variety and difficulty of the tasks performed by micro-nano satellites keep increasing; for example, micro-nano satellites are required to achieve large-angle rapid maneuvering, formation flying, binocular vision, and the like. Improving the maneuverability of the micro-nano satellite is what makes such demanding tasks achievable.
The mobility of the micro-nano satellite depends on the performances of an attitude sensor and an attitude controller. The satellite attitude sensor comprises an earth sensor, a sun sensor, a star sensor, a gyroscope and the like. In practical application, the attitude of the satellite can be accurately determined by the coordination of various sensors. The static infrared earth sensor is a frequently-used and reliable attitude determination sensor for the micro/nano satellite.
The static infrared earth sensor stares at and images the earth; the angle of the earth's centroid relative to the sensor is determined by computing the direction of the imaging vector, and the orientation of the satellite relative to the earth is finally determined through coordinate transformation. Compared with a dynamic scanning infrared earth sensor, it has the advantages of high speed, low power consumption, long service life, and small size. The traditional attitude determination algorithm for the static infrared earth sensor combines the least squares method with the Hough transform, and its attitude determination accuracy can reach 0.1°.
In order to improve the accuracy of determining the attitude of the micro-nano satellite, the invention provides a micro-nano satellite attitude determination method based on a static infrared earth sensor. The method adopts an algorithm combining the interval vector product and the t Location-Scale distribution confidence interval mean value, so that the attitude determination precision is improved by 50%, the attitude determination speed is improved by 87%, and the attitude determination performance of the satellite is improved.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for determining the target angle of a static infrared earth sensor used for micro-nano satellite attitude determination. The static infrared earth sensor is an attitude-detection component of the micro-satellite; the traditional attitude determination algorithm combines the Hough transform with the least squares method and reaches an accuracy of 0.1°. The present method improves on the traditional algorithm by combining the interval vector product with the mean over a t Location-Scale distribution confidence interval, so that the accuracy and efficiency of the static infrared earth sensor can be further improved.
The measurement target of the static infrared earth sensor (sensor for short) is shown in figure 1.
The imaging center of the sensor is O_M and the imaging viewing angle is 2θ; the incident light is tangent to the earth at points A and B. The circular surface on the earth with AB as its diameter (defined as the E plane) is then the infrared earth imaging surface detectable by the sensor, with center O_E. Taking the sensor O_M as the origin and the imaging axis as the Z axis, a rectangular spatial coordinate system is established. Then O_C O_E (O_C being the earth's center) is the geocentric vector, and the included angle β between O_C O_E and the Z axis is the pitch angle of the geocentric vector, i.e., the target angle to be measured.
The sensor adopts a panoramic fisheye infrared lens, and the imaging rule is as follows:
y=f·θ. (1)
in the formula: y is the distance between the imaging center and the pixel point imaged by the incident light, f is the focal length, and theta is the included angle between the incident light and the normal of the lens.
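As a quick illustration of formula (1), a ray arriving θ radians off the optical axis of an f·θ (equidistant) fisheye lens lands a distance f·θ from the imaging center. A minimal sketch — the focal-length value below is illustrative, not taken from the patent:

```python
import math

def fisheye_radius(theta_rad: float, f: float) -> float:
    """Equidistant (f-theta) fisheye model of formula (1): distance y from
    the imaging center at which a ray with incidence angle theta lands."""
    return f * theta_rad

# A ray 30 degrees off-axis through a hypothetical f = 2.0 mm lens
# lands about 1.047 mm from the imaging center.
y = fisheye_radius(math.radians(30.0), f=2.0)
```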
The image radius of the infrared sensor is positively correlated with θ, so the imaged surface of the earth is the E plane regardless of the direction in which the sensor rotates. The projection of the E plane onto the imaging plane of the optical system is shown in fig. 2.
Coordinates of an imaging point on an imaging plane of the optical system are (X, Y), and the purpose of a system algorithm is to obtain a beta value through coordinate values (X, Y) of an imaging edge point.
The traditional algorithm for determining the attitude of the static infrared earth sensor is shown in fig. 3 and comprises the following steps:
1) Edge extraction: obtain the imaging points (X, Y) through image enhancement, edge detection, and erosion;
2) Distortion removal: map each imaging point (X, Y) onto a spherical surface of radius R; the resulting edge points (x, y, z) form a section circle;
3) Hough transform: given the distance r between the center of the section circle and the center of the imaging sphere, process (x, y, z) with the Hough transform to obtain a rough β value;
4) Interference-point removal: compute the center (x_o, y_o, z_o) of the section circle from the obtained β value, compare each edge point's distance to the center, and eliminate the interference points;
5) Least squares: perform a least-squares fit on the edge points remaining after interference-point removal, compute a new centroid vector (x_o, y_o, z_o), and output the angle β.
It can be seen that, in the conventional method, after the section circular ring is obtained, the idea is to calculate the circle center through points on the circular ring, and then calculate the attitude angle through the coordinates of the circle center. The invention provides a method for directly calculating a normal vector of a section ring, which omits the intermediate step of calculating the coordinates of the circle center and avoids a large amount of calculation of Hough transform.
The method of the present invention is specifically shown in fig. 4, and comprises:
1) imaging point pre-processing
Since the earth infrared imaging is a circular surface, and the subsequent calculation is performed based on the imaging edge, the edge extraction operation is performed first.
1.1 Edge erosion extraction
The combination of Sobel-operator strong edge detection and binary-image erosion is used here. Specifically, the 3×3 Sobel image neighborhood is
| z_1 z_2 z_3 |
| z_4 z_5 z_6 |   (2)
| z_7 z_8 z_9 |
In the formula: z_1, z_2, z_3, …, z_9 are the gray values of the pixel points at the corresponding positions.
The gradient is calculated as
f = |(z_7 + 2z_8 + z_9) − (z_1 + 2z_2 + z_3)| + |(z_3 + 2z_6 + z_9) − (z_1 + 2z_4 + z_7)|.   (3)
In the formula: f is the gradient of the gray value at the center position.
Here strong edges are desired while weak edges are ignored, so a threshold T is specified, and a pixel with f ≥ T is considered an edge pixel.
Then, for the binary image A, binary-image erosion is performed with the structuring element array B; since the image edge is annular, a matching structuring element array is selected.
Binary-image erosion is the Minkowski subtraction A ⊖ B applied to the image A.
In the formula: B_(X,Y) is the structuring element's pixel region placed at (X, Y); when it coincides with the corresponding region of A, the pixel point is retained.
1.2 distortion removal
Based on the imaging characteristics of the panoramic fisheye infrared lens, the imaging points (X, Y) are mapped onto a spherical surface of radius R, and the resulting edge points (x, y, z) form a section circle D. The imaging undistortion process is shown in fig. 5.
According to the lens imaging formula (1), we have
θ = √(X² + Y²) / f.   (6)
According to the physical imaging relationship, the relationship between the imaging-plane coordinates and the three-dimensional infrared imaging direction is obtained:
x = R sin(θ) · X / √(X² + Y²),   (7)
y = R sin(θ) · Y / √(X² + Y²),   (8)
z = R cos(θ).   (9)
In the formula: R is the radius of the mapping sphere; the point (x, y, z) is the mapping of the optical imaging point (X, Y) onto the sphere, and the mapped pixel points form a spatial section circle D.
2) Target angle calculation
The invention provides a method for directly calculating a normal vector of a circular ring, which avoids the middle step of calculating the position of the circle center. Specifically, a method of combining the interval vector product and the t Location-Scale distribution confidence interval mean value is adopted, so that the calculated amount is reduced, and the precision requirement of the infrared earth sensor is met.
2.1 space vector product
After the distortion-removal processing, the pixel points of the optical system's imaging plane have been projected onto the spatial sphere, forming the section circle D, as shown in fig. 6. Each pixel point on the circle D has a three-dimensional coordinate (x, y, z). However, owing to edge-extraction error, some pixels may not lie on the section circle; these are called interference points.
Arbitrarily selecting three points a, b, and c on the section circle D forms two vectors V_ba and V_bc. By the vector theorem, the vector product V_β of the two vectors V_ba and V_bc is perpendicular to the plane of D containing them. Hence V_β is parallel to the center vector r of the section circle D, and the angle between V_β and the Z axis is clearly the target value β.
Specifically, if the three points a, b, and c are chosen close together, the two vectors they form tend toward collinearity, the perpendicular vector from their cross product tends toward 0, and the subsequent calculation becomes inconvenient; therefore the image points should be selected at intervals. The spatial circle D is divided into three segments A, B, and C, represented as three sets A = {a_1, a_2, a_3, …, a_n}, B = {b_1, b_2, b_3, …, b_n}, C = {c_1, c_2, c_3, …, c_n}, each containing n pixel points, as shown in fig. 7.
Selecting three points a_k, b_k, and c_k from the three sets yields two vectors V_bak and V_bck, called a space vector pair. Taking points in sequence, at least n space vector pairs can be formed, all lying on the same spatial circle D. The vectors V_bak and V_bck lie in the same plane and are not collinear. By vector theory we obtain
V_β = {V_βk | V_βk = V_bak × V_bck, k = 1, 2, 3, …, n}.   (10)
In the formula: V_βk is a cross-product result, and V_β is the set composed of the n vectors V_βk.
The vectors V_βk in V_β are parallel to each other and perpendicular to the spatial circle D. The angle β is calculated as
β_k = arccos( z_βk / √(x_βk² + y_βk² + z_βk²) ),  B_t = {β_k | k = 1, 2, 3, …, n}.   (11)
In the formula: B_t is the set of β angles from a single infrared image, and (x_βk, y_βk, z_βk) are the coordinates of the vector V_βk in three-dimensional space.
Owing to the interference points, some image points are not distributed on the plane of D, causing differences among the angles β_k in the set B_t.
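The interval vector product over the three arc segments can be sketched with NumPy; the ring points D would come from the distortion-removal step:

```python
import numpy as np

def beta_samples(D):
    """Interval vector product: split the ring points D (an N x 3 array,
    N divisible by 3) into arcs A, B, C, form the space vector pairs
    (V_bak, V_bck), and return the angles beta_k in degrees."""
    n = len(D) // 3
    A, B, C = D[:n], D[n:2 * n], D[2 * n:3 * n]
    V = np.cross(A - B, C - B)      # normal candidates V_beta_k
    V[V[:, 2] < 0] *= -1            # orient every normal toward +Z
    cos_b = V[:, 2] / np.linalg.norm(V, axis=1)
    return np.degrees(np.arccos(cos_b))
```

For each k the cross product of (a_k − b_k) and (c_k − b_k) gives a normal candidate; the sign flip merely orients all normals toward +Z before measuring the angle against the Z axis, and interference points show up as outlying β_k values in the returned set.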
2.2 t Location-Scale distribution fitting
Due to the presence of interference points, the elements of the set B_t differ; their typical probability density distribution is shown in fig. 8.
As can be seen from fig. 8, the distribution of β is skewed, i.e., heavy-tailed. Compared with the normal distribution, the t Location-Scale distribution better models data with heavier tails (where abnormal values are more likely to occur). The probability density function of the t Location-Scale distribution is
f(x) = Γ((ν+1)/2) / (σ √(νπ) Γ(ν/2)) · [ (ν + ((x − μ)/σ)²) / ν ]^(−(ν+1)/2).   (12)
In the formula: Γ(·) denotes the Gamma function, −∞ < μ < +∞ is the location parameter, σ > 0 is the scale parameter, and ν > 0 is the degrees of freedom.
The parameters of the t Location-Scale distribution can be obtained by maximum-likelihood estimation from the likelihood function l(μ, σ, ν).
The partial derivatives of l(μ, σ, ν) with respect to μ and σ are set equal to 0 to obtain the estimates.
Equations (14)-(16) show that the parameters are coupled, but the estimates can still be obtained by iterative convergence; in the iteration formula, the superscript (i) denotes the i-th iteration.
The iteration terminates when the difference between two successive results is less than 10⁻⁴. Since the degree of freedom ν is unknown, it likewise requires an iterative maximum-likelihood estimation:
In the formula: φ(x) = d ln Γ(x)/dx, and φ(·) + ln(·) may be approximated to simplify the computation.
The μ and σ of the t Location-Scale distribution can thus be obtained from formulas (17) to (20).
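If SciPy is available, its Student-t maximum-likelihood fit is exactly a t Location-Scale fit: `scipy.stats.t.fit` returns the degrees of freedom ν, location μ, and scale σ. A sketch on synthetic heavy-tailed samples (the data below are invented, not from the patent's experiments):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic B_t set: heavy-tailed beta_k samples around a 30-degree target
samples = 30.0 + 0.05 * rng.standard_t(df=3, size=2000)

# maximum-likelihood estimate of (nu, mu, sigma)
nu, mu, sigma = stats.t.fit(samples)
```

This replaces the hand-rolled iteration of formulas (17)-(20) with SciPy's optimizer; the fitted μ and σ then feed the confidence-interval step.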
In concrete engineering applications, fitting the t Location-Scale distribution in real time occupies a certain amount of computing resources. The calculation can be simplified as follows: since B_t ∈ (0, 90)°, the interval (0, 90) is divided into h sub-intervals (90(i−1)/h, 90i/h), where i = 1, 2, 3, …, h.
The variance of the t Location-Scale distribution in each angle interval can be measured by ground experiment, and an empirical variance table is established, as shown in Table 1. In actual engineering, σ is then obtained from μ by table lookup. Hereinafter this variant is referred to simply as the engineering edition of the new algorithm.
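The engineering edition reduces the fit to a table lookup. A sketch with h = 9 ten-degree sub-intervals; the σ values below are placeholders, not the patent's Table 1:

```python
# Engineering edition sketch: sigma is looked up from a pre-measured table
# instead of being fitted online. H and the table values are illustrative.
H = 9
SIGMA_TABLE = [0.05 + 0.001 * i for i in range(H)]

def sigma_lookup(mu_deg: float) -> float:
    """Map the fitted mean mu in (0, 90) degrees to its sub-interval
    (90(i-1)/H, 90i/H] and return that interval's pre-measured sigma."""
    i = min(int(mu_deg * H / 90.0), H - 1)
    return SIGMA_TABLE[i]
```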
2.3 mean confidence interval
Because abnormal values occur easily in the B_t samples, estimating β directly from the overall mean μ is subject to their interference. Therefore the mean of the samples within a certain confidence interval is selected for the estimation. A wider confidence interval gives a higher confidence level but involves more interference points. According to the 3σ rule, the probability that the data fall in (μ − σ, μ + σ) is 68.27%, which is a relatively moderate acceptance domain. The final estimate of β is
β̂ = (1/m) Σ β_k,  β_k ∈ (μ − σ, μ + σ).   (21)
In the formula: m is the number of β_k falling in the acceptance domain.
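The acceptance-domain mean described above amounts to averaging only the β_k samples inside (μ − σ, μ + σ). A minimal sketch, where `betas` is the B_t sample array and `mu`, `sigma` are the fitted parameters:

```python
import numpy as np

def beta_estimate(betas, mu, sigma):
    """Final estimate: mean of the beta_k samples inside the 1-sigma
    acceptance domain (mu - sigma, mu + sigma)."""
    accepted = betas[(betas > mu - sigma) & (betas < mu + sigma)]
    return float(accepted.mean())
```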
The beneficial effects of the method are as follows:
compared with the traditional Hough transform and least square algorithm, the vector product mean algorithm reduces the calculated amount, does not need to give a relative projection distance, and improves the attitude determination precision and efficiency of the static infrared earth sensor.
Drawings
FIG. 1 is a schematic diagram of a static infrared earth sensor measurement;
FIG. 2 is an imaging schematic diagram of a static infrared earth sensor;
FIG. 3 is a flow chart of a traditional mode algorithm for determining the attitude of a static infrared earth sensor;
FIG. 4 is a flow chart of an algorithm for determining the attitude of the static infrared earth sensor;
FIG. 5 shows the imaging undistortion process of the image points;
FIG. 6 illustrates the mapping of pixel points to spatial locations;
FIG. 7 is a diagram of image points on a segmented cross-sectional circle D;
FIG. 8 is the 60° sample probability density distribution within B_t;
FIG. 9 Earth Infrared imaging;
FIG. 10 Sobel strong edge detection;
FIG. 11 is a diagram of the earth after binary image erosion;
FIG. 12 undistorted effect;
FIG. 13 shows the 10° B_t samples;
FIG. 14 is the 60° B_t sample probability density distribution;
FIG. 15 is the 30° B_t sample probability density distribution;
FIG. 16 is the 10° B_t sample probability density distribution;
FIG. 17 is the 1° B_t sample probability density distribution;
FIG. 18 is the 1σ-interval 60° B_t sample probability density distribution;
FIG. 19 shows the target angle estimation result of the original algorithm;
FIG. 20 shows the result of the new algorithm target angle estimation;
FIG. 21 is a graph showing the error of measurement of the original algorithm within + -10 deg.;
FIG. 22 is a graph showing the error of measurement of the new algorithm within ±10°.
Detailed Description
The method was tested with the infrared earth sensor aboard Pixing II of Zhejiang University; the test results are as follows:
3.1 edge extraction
As shown in FIG. 9, the earth infrared image has a 60-degree blind area in the middle, which must be removed. Sobel edge detection is then applied; as shown in FIG. 10, a strong-edge image is obtained, which retains more pixel points while facilitating the subsequent statistical analysis. Finally, binary-image erosion yields a clearer earth contour, as shown in fig. 11.
3.2 distortion removal
According to formulas (6) to (9), the earth contour is projected from the imaging plane onto the spatial sphere; the undistorted image points are shown in fig. 12. The circle formed by the red image points in the figure is the section circle D; it can be seen that the image points are unevenly dense and that some interference points lie on the edge.
3.3 space vector product and distribution results
The result of the interval vector product operation performed on the image points on the cross-sectional circle D is shown in fig. 13, taking the vicinity of 10 ° as an example.
Then the B_t sample probability density distributions and fitting results are obtained through t Location-Scale fitting. For example, when the target β value is 60°, 30°, 10°, and 1°, the results are as shown in figs. 14 to 17.
As can be seen from figs. 14 to 17, the B_t sample distributions are unimodal and skewed: at about 60° the B_t sample distribution is right-skewed, while within 30° the B_t sample distributions are left-skewed. The estimated parameters of the t Location-Scale distribution are shown in Table 1.
TABLE 1 t Location-Scale distribution parameter estimation
As can be seen from Table 1, as the β value decreases, the probability density around the mean increases significantly and the distribution becomes more concentrated. The B_t sample mean is computed over the 1σ interval; as shown in fig. 18, the mean is taken over the samples contained in the blue interval.
The estimated value of the target angle β can be obtained by performing the above processing for each infrared image.
3.4 comparison of Algorithm results
The test is carried out according to the above experimental method: infrared photographs of the earth are taken while a turntable rotates from −60° to 60°. The new algorithm of the invention is compared with the commonly used Hough transform + least squares method. The actual measurement results of the original algorithm and the new algorithm are shown in figs. 19 and 20.
Comparing figs. 19 and 20 shows that the test performance is maintained even when the measurement angle exceeds 40°. The measurement errors are further compared in the ±10° region; the results for 200 test points, obtained by rotating the turntable in 0.1° steps, are shown in figs. 21 and 22.
According to figs. 21 and 22, when the target angle is within ±10°, the standard deviation of the error angle is 0.06° for the original algorithm and 0.03° for the new algorithm, a 50% improvement in measurement accuracy. The accuracy and calculation speed of the algorithms in each interval are further compared in Table 2.
TABLE 2 comparison of the accuracy of the algorithm in different intervals
As can be seen from Table 2, the measurement accuracy of the new algorithm improves in every interval. Table 2 also lists the performance of the engineering edition of the new algorithm: its average processing time per infrared image is 87% shorter than that of the original algorithm, improving the efficiency of static infrared earth sensor attitude determination.
Claims (5)
1. A method for determining a target angle of a static infrared earth sensor for micro/nano satellite attitude determination is characterized by comprising the following steps:
1) Edge extraction: obtaining the imaging points (X, Y) through image enhancement, edge detection, and erosion;
2) Distortion removal: mapping each imaging point (X, Y) onto a spherical surface of radius R, the resulting edge points (x, y, z) forming a section circle D;
3) the space vector product: selecting three points on the section circular ring D at intervals to form two vectors, namely a group of space vector pairs, obtaining all the space vector pairs of the section circular ring, and calculating to obtain a set of target angles beta;
4) t Location-Scale distribution fitting: calculating the mean value and the variance of the set of target angles beta in t Location-Scale distribution;
5) mean confidence interval: and selecting a sample mean value in the 1 sigma confidence interval for estimation to obtain an estimation value of the target angle beta.
2. The method for determining the target angle of the static infrared earth sensor for micro/nano satellite attitude determination according to claim 1, wherein the step 3) is specifically as follows:
three points a, b, and c on the section circle D are selected at intervals: the spatial circle D is divided into three segments A, B, and C, expressed as three sets A = {a_1, a_2, a_3, …, a_n}, B = {b_1, b_2, b_3, …, b_n}, C = {c_1, c_2, c_3, …, c_n}, each segment containing n image points; three points a_k, b_k, and c_k are selected from the three sets to obtain two vectors V_bak and V_bck, called a space vector pair; taking points in sequence, at least n space vector pairs can be formed, all lying on the same spatial circle D; one obtains
V_β = {V_βk | V_βk = V_bak × V_bck, k = 1, 2, 3, …, n}.   (1)
In the formula: V_βk is a cross-product result, and V_β is the set composed of the n vectors V_βk;
the vectors V_βk in V_β are parallel to each other and perpendicular to the spatial circle D; the angle β then satisfies
β_k = arccos( z_βk / √(x_βk² + y_βk² + z_βk²) ),  B_t = {β_k | k = 1, 2, 3, …, n}.   (2)
In the formula: B_t is the set of β angles from a single infrared image, and (x_βk, y_βk, z_βk) are the coordinates of the vector V_βk in three-dimensional space.
3. The method for determining the target angle of the static infrared earth sensor for micro/nano satellite attitude determination according to claim 1, wherein the step 4) is specifically as follows:
the probability density function of the t Location-Scale distribution is
f(x) = Γ((ν+1)/2) / (σ √(νπ) Γ(ν/2)) · [ (ν + ((x − μ)/σ)²) / ν ]^(−(ν+1)/2).   (3)
In the formula: Γ(·) denotes the Gamma function, −∞ < μ < +∞ is the location parameter, σ > 0 is the scale parameter, and ν > 0 is the degrees of freedom;
the parameters of the t Location-Scale distribution are obtained by maximum-likelihood estimation from the likelihood function l(μ, σ, ν);
the partial derivatives of l(μ, σ, ν) with respect to μ and σ are set equal to 0 to obtain the estimates;
the estimates are obtained by an iterative convergence method; in the iteration formula, the superscript (i) denotes the i-th iteration;
the iteration terminates when the difference between two successive results is less than 10⁻⁴; the degree of freedom ν likewise requires an iterative maximum-likelihood estimation, in which φ(x) = d ln Γ(x)/dx and φ(·) + ln(·) may be approximated;
the mean μ and the variance σ of the t Location-Scale distribution are thus obtained.
4. The method for determining the target angle of the static infrared earth sensor for micro/nano satellite attitude determination according to claim 1, wherein the step 4) is obtained by a simplified method, and specifically comprises the following steps:
since B_t ∈ (0, 90)°, the interval (0, 90) is divided into h sub-intervals (90(i−1)/h, 90i/h), where i = 1, 2, 3, …, h;
the variance of the t Location-Scale distribution in each angle interval, measured by ground experiment, is used to establish an empirical variance table, and σ is obtained from μ by table lookup.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810820441.4A CN108981721B (en) | 2018-07-24 | 2018-07-24 | Method for determining target angle of static infrared earth sensor for attitude determination of micro-nano satellite |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108981721A CN108981721A (en) | 2018-12-11 |
CN108981721B true CN108981721B (en) | 2020-10-23 |
Family
ID=64550482
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810820441.4A Active CN108981721B (en) | 2018-07-24 | 2018-07-24 | Method for determining target angle of static infrared earth sensor for attitude determination of micro-nano satellite |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108981721B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113221059B (en) * | 2020-07-24 | 2023-01-17 | 哈尔滨工业大学(威海) | Fast conjugate gradient direction finding algorithm without constructing covariance matrix |
CN112927294B (en) * | 2021-01-27 | 2022-06-10 | 浙江大学 | Satellite orbit and attitude determination method based on single sensor |
JP2022157958A (en) * | 2021-04-01 | 2022-10-14 | 日本電気株式会社 | Satellite attitude estimation system, satellite attitude estimation method and satellite attitude estimation program |
CN116592899B (en) * | 2023-04-28 | 2024-03-29 | 哈尔滨工业大学 | Pose measurement system based on modularized infrared targets |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102721400B (en) * | 2012-05-09 | 2014-01-08 | 中国科学院上海技术物理研究所 | High-precision attitude detection method of static infrared earth sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||