CN114577208A - Navigation system error unified modeling method based on rotating reference coordinate system - Google Patents

Info

Publication number: CN114577208A (application CN202210107583.2A)
Authority: CN (China)
Legal status: Granted; Active
Other versions: CN114577208B (granted publication)
Other languages: Chinese (zh)
Prior art keywords: error, camera, image plane, equivalent, unified
Inventors: 王大轶, 孙博文, 李茂登, 邓润然, 朱卫红
Original and current assignee: Beijing Institute of Spacecraft System Engineering

Classifications

    • G01C21/20 — Instruments for performing navigational calculations
    • G01C21/24 — Navigation; navigational instruments specially adapted for cosmonautical navigation
    • G01C25/00 — Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G06T3/00 — Geometric image transformations in the plane of the image


Abstract

The invention discloses a navigation system error unified modeling method based on a rotating reference coordinate system, comprising the following steps: (1) represent the installation error of the camera with an equivalent rotation vector; (2) obtain the camera image plane translation error from the camera measurement principle; (3) convert the camera image plane translation error into equivalent-rotation-vector form using the rotating reference coordinate system; (4) obtain the unified-modeled system error from the equivalent-rotation-vector form of the image plane translation error and the installation error of the camera. The method overcomes the inability of existing methods to estimate the installation error and the image plane translation of a satellite-borne optical camera simultaneously, reduces the dimension of the system error, uses the rotating reference coordinate system to give a unified reduced-dimension representation of the satellite-borne optical camera system error, and realizes unified modeling of spacecraft autonomous navigation system errors.

Description

Navigation system error unified modeling method based on rotating reference coordinate system
Technical Field
The invention belongs to the technical field of space navigation, and particularly relates to a navigation system error unified modeling method based on a rotating reference coordinate system.
Background
In small-celestial-body exploration, a spacecraft operates in orbit for a long time, so the parameters of its measurement sensors tend to drift and produce systematic errors; for example, the satellite-borne optical camera may exhibit systematic errors such as installation errors and lens zero offset, which is a main factor limiting the accuracy of spacecraft autonomous relative navigation. Because spacecraft payload is limited, redundant measurement sensors cannot be carried, and components are difficult to replace on orbit, so the systematic error of the satellite-borne optical camera must be estimated on orbit. On-orbit estimation of satellite-borne optical camera system errors has therefore become a hot research problem.
At present, systematic errors are mostly estimated with augmented Kalman filtering: the systematic error variables to be estimated are appended to the state variables. Because the number of systematic errors is large, this greatly increases the computational complexity of the algorithm, while satellite computing resources are severely limited and cannot support large-volume computation; hence the traditional augmented-Kalman-filtering approach cannot estimate the systematic errors on orbit.
Disclosure of Invention
The technical problem solved by the invention is as follows: overcoming the defects of the prior art, a navigation system error unified modeling method based on a rotating reference coordinate system is provided. It overcomes the inability of existing methods to estimate the installation error and the image plane translation of a satellite-borne optical camera simultaneously, reduces the dimension of the system error, uses the rotating reference coordinate system to give a unified reduced-dimension representation of the satellite-borne optical camera system error, and realizes unified modeling of spacecraft autonomous navigation system errors.
The purpose of the invention is realized by the following technical scheme: a navigation system error unified modeling method based on a rotating reference coordinate system comprises the following steps: (1) representing the installation error of the camera by using the equivalent rotation vector; (2) obtaining a camera image plane translation error according to a camera measurement principle; (3) converting the camera image plane translation error into a camera image plane translation error in an equivalent rotation vector form by using a rotation reference coordinate system; (4) and obtaining a system error after unified modeling according to the camera image plane translation error in the equivalent rotation vector form and the installation error of the camera.
In the navigation system error unified modeling method based on the rotating reference coordinate system, in step (1), the installation error matrix C_ins of the camera is:

C_ins = I_3 + [θ×];

where I_3 is the 3×3 identity matrix, [θ×] is the cross-product (skew-symmetric) matrix of θ, and θ is the installation error of the camera.
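As an illustrative numerical sketch (not part of the patent), the first-order installation-error matrix C_ins = I_3 + [θ×] can be formed as follows; the error values are hypothetical:

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v x], so that skew(v) @ w == v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def installation_error_matrix(theta):
    """First-order approximation C_ins = I_3 + [theta x] of the installation-error rotation."""
    return np.eye(3) + skew(np.asarray(theta, dtype=float))

theta = np.array([1e-3, -2e-3, 0.5e-3])  # small installation error in rad (hypothetical values)
C_ins = installation_error_matrix(theta)
```

For errors of this magnitude C_ins is orthonormal up to second-order terms, which is exactly what the linearization neglects.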
In the navigation system error unified modeling method based on the rotating reference coordinate system, in step (3), converting the camera image plane translation error into equivalent-rotation-vector form using the rotating reference coordinate system comprises the following steps: (31) obtain the unified equivalent rotation axis; (32) obtain the rotation angle under the unified equivalent rotation axis from the camera image plane translation error and the camera focal length; (33) obtain the camera image plane translation error in equivalent-rotation-vector form from the unified equivalent rotation axis and the rotation angle under it.
In the method for unified modeling of navigation system errors based on the rotating reference coordinate system, in step (31), the unified equivalent rotation axis is l = [δ_v, −δ_u, 0]^T; where l is the unified equivalent rotation axis, δ_v is the translation of the camera optical axis along the vertical direction of the image plane, and δ_u is the translation along the horizontal direction.
In the unified modeling method for navigation system errors based on the rotating reference coordinate system, in step (32), the rotation angle under the unified equivalent rotation axis is:

φ = √(δ_u² + δ_v²) / f;

where φ is the rotation angle under the unified equivalent rotation axis, f is the focal length of the camera, δ_v is the translation of the camera optical axis along the vertical direction of the image plane, and δ_u is the translation along the horizontal direction.
In the navigation system error unified modeling method based on the rotating reference coordinate system, in step (33), the camera image plane translation error in equivalent-rotation-vector form is:

β = φ · l / |l|;

where β is the camera image plane translation error in equivalent-rotation-vector form, l is the unified equivalent rotation axis, and φ is the rotation angle under the unified equivalent rotation axis.
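Steps (31)–(33) can be sketched numerically as below (an illustration with hypothetical values, not the patent's implementation); the axis sign follows the inner-product condition ⟨l, l1⟩ = ⟨l, l2⟩ used in the description:

```python
import numpy as np

def image_translation_to_rotation_vector(delta_u, delta_v, f):
    """Steps (31)-(33): unified axis l, rotation angle phi, equivalent rotation vector beta."""
    l = np.array([delta_v, -delta_u, 0.0])   # (31) unified equivalent rotation axis
    phi = np.hypot(delta_u, delta_v) / f     # (32) small rotation angle under that axis
    beta = phi * l / np.linalg.norm(l)       # (33) equivalent rotation vector
    return l, phi, beta

# hypothetical image plane translation (same length units as f)
l, phi, beta = image_translation_to_rotation_vector(0.02, 0.01, 10.0)
```

Note that the axis l is orthogonal to the image-plane shift direction [δ_u, δ_v, 0]^T, which is what makes a single rotation reproduce the shift.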
In the navigation system error unified modeling method based on the rotating reference coordinate system, in the step (4), the system error after unified modeling is as follows:
α=θ+β;
where α is the unified-modeled system error, θ is the installation error of the camera, and β is the camera image plane translation error in equivalent-rotation-vector form.
A system for unified modeling of navigation system errors based on a rotating reference frame, comprising: a first module for representing a mounting error of the camera using the equivalent rotation vector; the second module is used for obtaining the image plane translation error of the camera according to the camera measurement principle; the third module is used for converting the camera image plane translation error into a camera image plane translation error in an equivalent rotation vector form by utilizing a rotation reference coordinate system; and the fourth module is used for obtaining a system error after unified modeling according to the camera image plane translation error in the equivalent rotation vector form and the installation error of the camera.
In the navigation system error unified modeling system based on the rotating reference coordinate system, the installation error matrix C_ins of the camera is:

C_ins = I_3 + [θ×];

where I_3 is the 3×3 identity matrix and θ is the installation error of the camera.
In the navigation system error unified modeling system based on the rotating reference coordinate system, the step of converting the camera image plane translation error into the camera image plane translation error in the form of the equivalent rotating vector by using the rotating reference coordinate system comprises the following steps: (31) obtaining a uniform equivalent rotating shaft; (32) obtaining a unified rotation angle under an equivalent rotation axis according to the camera image plane translation error and the camera focal length; (33) and obtaining the image plane translation error of the camera in the form of an equivalent rotation vector according to the unified equivalent rotation axis and the rotation angle under the unified equivalent rotation axis.
Compared with the prior art, the invention has the following beneficial effects:
(1) Through the unified modeling of the system error, the invention realizes a unified reduced-dimension representation of the system error, reduces computational complexity, and fits within the limited computing capability on board a spacecraft;
(2) through observability analysis of the unified-modeled system error, the method gives the condition under which the system is observable, estimates only when system observability is satisfied, and thereby guarantees filter convergence;
(3) after unified modeling and observability analysis of the system error, corresponding filtering steps are given, enabling self-correction of spacecraft navigation system errors.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart of a unified modeling method for errors of a navigation system based on a rotating reference coordinate system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a filtered estimation error after uniform modeling of a system error according to an embodiment of the present invention;
FIG. 3 is a graph of the filtered estimation error of the relative navigation state variable r/|r| provided by an embodiment of the present invention;
FIG. 4 is a graph of the filtered estimation error of the relative navigation state variable ṙ/|r| provided by an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a flowchart of a navigation system error unified modeling method based on a rotating reference coordinate system according to an embodiment of the present invention. As shown in fig. 1, the method comprises the steps of:
(1) approximately express the installation error of the camera using the equivalent rotation vector, i.e. express the installation error linearly;
(2) on the basis of step (1), give a parametric representation of the camera image plane translation from the viewpoint of the camera measurement principle;
(3) on the basis of step (2), approximately convert the image plane translation into equivalent-rotation-vector form using the rotating reference coordinate system, which converts the additive image plane translation error into an equivalent rotation vector through a unified rotation;
(4) according to the conversion of the image plane translation error given in step (3) and the equivalent-rotation-vector representation of the installation error given in step (1), perform a unified reduced-dimension characterization (unified modeling) of the two systematic errors, reducing the number of parameters to be estimated and the computational complexity.
In the representation of the installation error in step (1), because the installation error is small, second-order terms are neglected; the error can then be expressed linearly by the equivalent rotation vector θ, and the rotation matrix of the installation error is represented as the identity matrix plus the cross-product matrix of θ, i.e. I_3 + [θ×].
In the image plane translation representation method in the step (2), a parameterization representation method is provided by combining the physical background of image plane translation from the angle of a camera imaging principle.
The approximate conversion method based on the rotating reference coordinate system in step (3) comprises the following steps:

(4.1) according to the characteristics of the camera image plane translation error, find the unified equivalent rotation axis l = [δ_v, −δ_u, 0]^T;

(4.2) from the magnitude of the image plane translation error and the camera focal length, compute the rotation angle under the axis of step (4.1): φ = √(δ_u² + δ_v²) / f;

(4.3) combine the axis of (4.1) and the angle of (4.2) to obtain the equivalent rotation vector of the image plane translation: β = φ · l / |l|.
In the unified modeling of the systematic error in step (4), the equivalent rotation vectors of the installation error and the image plane translation error are added on the basis of steps (1) and (3), giving the unified-modeled systematic error α = θ + β.
The specific steps are as follows:

Give the measurement equation. Let the camera focal length be f and the direction vector of the target in the camera frame be p = [p_x, p_y, p_z]^T. Without measurement noise, the camera measurement [u, v]^T satisfies:

u = f·p_x/p_z, v = f·p_y/p_z;

where p = r/|r| and r is the relative position vector of the non-cooperative target in the camera frame.
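A minimal sketch of this pinhole measurement model (illustrative values, not from the patent):

```python
import numpy as np

def project(p, f):
    """Pinhole measurement model: u = f*p_x/p_z, v = f*p_y/p_z."""
    px, py, pz = p
    return np.array([f * px / pz, f * py / pz])

# relative position of a hypothetical non-cooperative target in the camera frame
r = np.array([0.3, -0.2, 50.0])
p = r / np.linalg.norm(r)      # p = r/|r|: unit direction of the target
uv = project(p, f=10.0)
```

Because the model depends only on the ratios p_x/p_z and p_y/p_z, projecting r and projecting its unit direction p give the same image coordinates — which is why the camera alone cannot observe the distance scale.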
If the optical camera has an installation error, then:

p̃ = C_ins · C̃_o^c · (r/|r|);

where p̃ is the measured direction vector of the target, C̃_o^c is the rotation matrix from the ideal orbital frame to the camera frame, and C_ins is the installation error matrix.
Since the installation error is small, the equivalent rotation vector θ = [θ_x, θ_y, θ_z]^T can represent the installation error, and the installation error matrix can be expressed as:

C_ins = I_3 + [θ×].
if the optical camera has image plane translation delta [ delta ]uv]TThen, then
Figure BDA0003494433780000065
A unified modeling method is provided:
the image plane translation is converted into a rotation matrix form, and the condition that the rotation matrix form is satisfied by a vector l is easy to know1Rotated to vector l2Must have:
<l,l1>=<l,l2>
from the geometrical relationship, there is a common axis of rotation:
l=[δvu 0]T
assuming that the imaging point is made near the optical axis by the attitude control, the rotation angle is approximated by:
Figure BDA0003494433780000066
the equivalent rotation vector is approximated as:
Figure BDA0003494433780000071
obtaining a unified modeling result of the optical camera system error:
α=θ+β
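As a hedged numerical check (not part of the patent): under the ζ = [−u, −v, f]^T convention used later in the description, rotating the ideal measurement direction by C_β = I_3 + [β×] should reproduce, to first order, the measurement shifted by (δ_u, δ_v) for an imaging point near the optical axis:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

f = 10.0
u, v = 0.0, 0.0                       # imaging point on the optical axis
du, dv = 0.05, -0.03                  # hypothetical image plane translation

beta = np.array([dv, -du, 0.0]) / f   # beta = phi * l / |l| = [dv, -du, 0]/f

zeta_ideal = np.array([-u, -v, f]); zeta_ideal /= np.linalg.norm(zeta_ideal)
zeta_shifted = np.array([-(u + du), -(v + dv), f])
zeta_shifted /= np.linalg.norm(zeta_shifted)

# first-order rotation by the equivalent rotation vector
zeta_rotated = (np.eye(3) + skew(beta)) @ zeta_ideal
```

The two results agree to within second-order terms in δ/f, which is the accuracy level of the unified model itself.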
if the direction vector of the target under the camera system is p, the observation equation is:
Figure BDA0003494433780000072
wherein u is a coordinate value of the target on the horizontal axis of the image plane, v is a coordinate value of the target on the vertical axis of the image plane, f is the focal length of the camera, and p is a direction vector of the target under the camera system.
The method also includes the following step: construct a relative motion model without distance information and, on that basis, perform observability analysis of the unified-modeled navigation system error, specifically:

(2.1) Give the linearized relative orbital dynamics equation. Considering noise, the state consisting of the position vector r, the velocity vector ṙ, and the systematic error α can be expressed as:

d/dt [r; ṙ; α] = [0 I_3 0; F_1 F_2 0; 0 0 0] [r; ṙ; α] + w;

where ω is the spacecraft orbital angular velocity, F_1 is the first-order Jacobian matrix of r̈ with respect to r, and F_2 is the first-order Jacobian matrix of r̈ with respect to ṙ.
For convenience of observability analysis, take the normalized [−u, −v, f]^T as the measured value; then, without measurement noise, the measurement equation is:

ζ = C_α · C̃_o^c · (r/|r|);

where C_α = I_3 + [α×], ζ is the measured value of the target relative direction vector, C_α is the equivalent rotation matrix of the unified-modeled system error, C̃_o^c is the rotation matrix from the orbital frame to the camera frame under ideal conditions, and r/|r| is the unit direction vector of the target in the camera frame.
(2.2) Since the camera only measures the direction vector of the target, to analyze the observability of the system error intuitively, the distance information is removed from the relative motion, giving a relative motion equation without distance information:

d/dt (r/|r|) = ṙ/|r| − (r/|r|)·((r/|r|)^T (ṙ/|r|));
d/dt (ṙ/|r|) = F_1 (r/|r|) + F_2 (ṙ/|r|) − (ṙ/|r|)·((r/|r|)^T (ṙ/|r|)).
(2.3) Discretize the state equation without a distance scale and the measurement equation.

Discretized state equation without a distance scale:

X_{k+1} = Φ_{k+1,k} X_k, with X_k = [(r/|r|)_k; (ṙ/|r|)_k; α_k];

where (r/|r|)_{k+1} is the ratio of the position vector to the distance at time k+1, (ṙ/|r|)_{k+1} is the ratio of the derivative of the position vector to the distance at time k+1, α_k is the unified-modeled system error at time k (α_{k+1} = α_k, since the systematic error is constant), (r/|r|)_k and (ṙ/|r|)_k are the corresponding ratios at time k, and Φ_{k+1,k} is the state transition matrix from time k to time k+1.
Discretized measurement equation:

ζ_k = C_{α_k} · C̃_{o,k}^c · (r/|r|)_k;

where ζ_k is the measured value of the target relative direction vector at time k and C̃_{o,k}^c is the rotation matrix from the orbital frame to the camera frame under ideal conditions at time k.

The measurement matrix is:

H_k = [C̃_{o,k}^c   0_{3×3}   −[(C̃_{o,k}^c (r/|r|)_k)×]].
(2.4) Construct the observability matrix O and analyze the observability of the system:

O = [H_k; H_{k+1} Φ_{k+1,k}; H_{k+2} Φ_{k+2,k+1} Φ_{k+1,k}] = [O_11 O_12 O_13; O_21 O_22 O_23; O_31 O_32 O_33];

where H_k, H_{k+1}, H_{k+2} are the measurement matrices at times k, k+1, k+2; Φ_{k+1,k} and Φ_{k+2,k+1} are the state transition matrices from time k to k+1 and from k+1 to k+2; O_11 through O_33 are the 3×3 block matrices of the observability matrix O; C̃_{o,k}^c, C̃_{o,k+1}^c, C̃_{o,k+2}^c are the rotation matrices from the orbital frame to the camera frame at times k, k+1, k+2; and (r/|r|)_k, (r/|r|)_{k+1}, (r/|r|)_{k+2} are the unit direction vectors of the target in the camera frame at those times.
It can be seen that if the direction vector of the target in the camera frame is unchanged, the system is not observable. In that case, the direction vector of the target in the camera frame must be changed by attitude maneuver so that system observability is satisfied.
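This conclusion can be illustrated with a simplified rank test (a sketch under strong assumptions not taken from the patent: Φ = I over the window and an identity orbit-to-camera rotation). With a constant target direction the stacked observability matrix loses rank; changing the direction restores observability of α:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def H(C, s):
    """Measurement matrix H_k = [C, 0, -[(C s) x]] for the state [r/|r|, dr/dt/|r|, alpha]."""
    return np.hstack([C, np.zeros((3, 3)), -skew(C @ s)])

C = np.eye(3)                                   # ideal orbit-to-camera rotation (simplification)
s_const = [np.array([0.0, 0.0, 1.0])] * 3       # target direction unchanged over three epochs
s_var = [np.array([0.0, 0.0, 1.0]),
         np.array([0.1, 0.0, 1.0]) / np.linalg.norm([0.1, 0.0, 1.0]),
         np.array([0.0, 0.1, 1.0]) / np.linalg.norm([0.0, 0.1, 1.0])]

# state transition taken as identity over the short window (simplifying assumption)
O_const = np.vstack([H(C, s) for s in s_const])
O_var = np.vstack([H(C, s) for s in s_var])

rank_const = np.linalg.matrix_rank(O_const)
rank_var = np.linalg.matrix_rank(O_var)
```

With identical directions every block row repeats, so the rank stays at 3; with maneuvered directions the skew-matrix blocks differ and the rank rises, reflecting the observability condition stated above.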
The method further comprises the following step: give the filtering steps according to the distance-free relative kinematics equation, the unified-modeled observation equation, and the observability analysis result.
Step 1: one-step prediction of the state variable:

X_{k+1,k} = Φ_{k+1,k} X_k;

where X_{k+1,k} is the predicted value of the state variable at the next time given time k, X_k is the estimate of the state variable at time k, and α_k (contained in X_k) is the systematic error at time k.

Step 2: one-step prediction of the state covariance P_{k+1,k}:

P_{k+1,k} = Φ_{k+1,k} P_k Φ_{k+1,k}^T + Q_k;

where Φ_{k+1,k} is the state transition matrix from time k to k+1, Q_k is the state noise covariance at time k, P_{k+1,k} is the one-step prediction of the state covariance matrix, and P_k is the state covariance matrix at time k.

Step 3: compute the gain matrix K_k:

K_k = P_{k+1,k} H_k^T (H_k P_{k+1,k} H_k^T + R_k)^{−1};

where H_k is the measurement matrix at time k and R_k is the measurement noise covariance at time k.

Step 4: normalize the actual measurement to obtain ζ_{k+1}:

ζ_{k+1} = [−u_{k+1}, −v_{k+1}, f]^T / |[−u_{k+1}, −v_{k+1}, f]^T|;

where ζ_{k+1} is the measured value of the target relative direction vector at time k+1, u_{k+1} is the coordinate of the target on the horizontal axis of the image plane at time k+1, and v_{k+1} is the coordinate on the vertical axis at time k+1.

Step 5: update the state variable:

X_{k+1} = X_{k+1,k} + K_k (ζ_{k+1} − H_k X_{k+1,k}).

Step 6: update the state covariance:

P_{k+1} = (I_9 − K_k H_k) P_{k+1,k};

where P_{k+1} is the state covariance matrix at time k+1 and I_9 is the 9-dimensional identity matrix.
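The six filtering steps above can be sketched as one standard Kalman cycle (an illustration only; Φ, H, Q, R and the driver values below are placeholder assumptions, not the patent's matrices):

```python
import numpy as np

def normalize_measurement(u, v, f):
    """Step 4: zeta = [-u, -v, f]^T / ||.||."""
    m = np.array([-u, -v, f])
    return m / np.linalg.norm(m)

def kf_step(X, P, Phi, H, Q, R, zeta):
    """Steps 1-6 for the 9-dimensional state X = [r/|r|, dr/dt/|r|, alpha]."""
    X_pred = Phi @ X                    # step 1: state prediction
    P_pred = Phi @ P @ Phi.T + Q        # step 2: covariance prediction
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # step 3: gain
    X_new = X_pred + K @ (zeta - H @ X_pred)                 # step 5: state update
    P_new = (np.eye(9) - K @ H) @ P_pred                     # step 6: covariance update
    return X_new, P_new

# placeholder driver (hypothetical matrices and measurement)
Phi = np.eye(9)
H = np.hstack([np.eye(3), np.zeros((3, 6))])
X0, P0 = np.zeros(9), np.eye(9)
Q, R = 0.01 * np.eye(9), 0.01 * np.eye(3)
zeta = normalize_measurement(0.0, 0.0, 10.0)
X1, P1 = kf_step(X0, P0, Phi, H, Q, R, zeta)
```

Even with these placeholder matrices, one cycle pulls the predicted measurement toward the normalized observation, which is the behavior the patent's filtering step relies on.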
Test results

FIG. 2 is a graph of the filtered estimation error after unified modeling of the system error provided by the embodiment of the present invention; FIG. 3 is a graph of the filtered estimation error of the relative navigation state variable r/|r|; FIG. 4 is a graph of the filtered estimation error of the relative navigation state variable ṙ/|r|. As shown in FIGS. 2 to 4:
six orbits of the spacecraft and the non-cooperative target are shown in table 1, and parameters required by the experiment are shown in table 2.
TABLE 1 Six orbital elements
[table reproduced as an image in the original; values not recoverable]
TABLE 2 Experimental parameters
[table reproduced as an image in the original; values not recoverable]
According to the unified modeling method above, the unified-modeled equivalent rotation vector is α = [5.20, 8.69, 3.49]^T (unit: rad), and the statistics of the autonomous relative navigation estimation error based on unified modeling, obtained with the filtering method above, are shown in Table 3.
TABLE 3 Estimation error statistics
[table reproduced as an image in the original; values not recoverable]
This embodiment also provides a navigation system error unified modeling system based on the rotating reference coordinate system, which includes: a first module for representing the installation error of the camera using the equivalent rotation vector; a second module for obtaining the camera image plane translation error according to the camera measurement principle; a third module for converting the camera image plane translation error into equivalent-rotation-vector form using the rotating reference coordinate system; and a fourth module for obtaining the unified-modeled system error from the equivalent-rotation-vector form of the camera image plane translation error and the installation error of the camera.
According to the invention, through the unified modeling characteristic of the system error, the unified dimension reduction representation of the system error is realized, the calculation complexity is reduced, and the limited calculation capability on the spacecraft can be met; the method gives the condition that the system meets observability through observability analysis of system errors after unified modeling, estimates on the premise of meeting the observability of the system, and ensures the convergence of filtering; according to the invention, after unified modeling and observability analysis of system errors, corresponding filtering steps are given, and self-correction of spacecraft navigation system errors can be realized.
Although the present invention has been described with reference to preferred embodiments, it is not intended to limit the invention; those skilled in the art may make variations and modifications using the methods and technical content disclosed above without departing from the spirit and scope of the invention.

Claims (10)

1. A navigation system error unified modeling method based on a rotating reference coordinate system is characterized by comprising the following steps:
(1) representing the installation error of the camera by using the equivalent rotation vector;
(2) obtaining a camera image plane translation error according to a camera measurement principle;
(3) converting the camera image plane translation error into a camera image plane translation error in an equivalent rotation vector form by using a rotation reference coordinate system;
(4) and obtaining a system error after unified modeling according to the camera image plane translation error in the equivalent rotation vector form and the installation error of the camera.
2. The navigation system error unified modeling method based on a rotating reference coordinate system of claim 1, characterized in that: in step (1), the installation error matrix C_ins of the camera is:

C_ins = I_3 + [θ×];

where I_3 is the 3×3 identity matrix and θ is the installation error of the camera.
3. The method of claim 1, wherein the method comprises the steps of: in the step (3), the step of converting the camera image plane translation error into the camera image plane translation error in the form of the equivalent rotation vector by using the rotation reference coordinate system comprises the following steps:
(31) obtaining a unified equivalent rotation axis;
(32) obtaining the rotation angle about the unified equivalent rotation axis according to the camera image plane translation error and the camera focal length;
(33) obtaining the camera image plane translation error in equivalent rotation vector form according to the unified equivalent rotation axis and the rotation angle about it.
4. The method according to claim 3, wherein in step (31) the unified equivalent rotation axis is l = [δ_v δ_u 0]^T, where l is the unified equivalent rotation axis, δ_v is the translation of the camera optical axis along the longitudinal direction of the image plane, and δ_u is the translation of the camera optical axis along the transverse direction of the image plane.
5. The method according to claim 3, wherein in step (32) the rotation angle about the unified equivalent rotation axis is:
(formula image FDA0003494433770000021: φ expressed in terms of the image plane translations δ_u and δ_v and the camera focal length f)
wherein φ is the rotation angle about the unified equivalent rotation axis, f is the focal length of the camera, δ_v is the translation of the camera optical axis along the longitudinal direction of the image plane, and δ_u is the translation of the camera optical axis along the transverse direction of the image plane.
6. The method according to claim 3, wherein in step (33) the camera image plane translation error in equivalent rotation vector form is:
(formula image FDA0003494433770000022: β expressed in terms of the unified equivalent rotation axis l and the rotation angle φ)
wherein β is the camera image plane translation error in equivalent rotation vector form, l is the unified equivalent rotation axis, and φ is the rotation angle about the unified equivalent rotation axis.
7. The unified modeling method of navigation system errors based on a rotating reference coordinate system according to claim 1, wherein in step (4) the unified system error is:
α=θ+β;
wherein α is the unified system error, θ is the installation error of the camera, and β is the camera image plane translation error in equivalent rotation vector form.
8. A navigation system error unified modeling system based on a rotating reference coordinate system is characterized by comprising:
a first module for representing a mounting error of the camera using the equivalent rotation vector;
a second module for obtaining the camera image plane translation error according to the camera measurement principle;
a third module for converting the camera image plane translation error into a camera image plane translation error in equivalent rotation vector form using the rotating reference coordinate system; and
a fourth module for obtaining the unified system error from the camera image plane translation error in equivalent rotation vector form and the installation error of the camera.
9. The system according to claim 8, wherein the installation error matrix C_ins of the camera is:
C_ins = I_3 + [θ×];
wherein I_3 is the 3×3 identity matrix, θ is the installation error of the camera, and [θ×] is the skew-symmetric cross-product matrix of θ.
10. The system according to claim 8, wherein converting the camera image plane translation error into a camera image plane translation error in equivalent rotation vector form using the rotating reference coordinate system comprises the following steps:
(31) obtaining a unified equivalent rotation axis;
(32) obtaining the rotation angle about the unified equivalent rotation axis according to the camera image plane translation error and the camera focal length;
(33) obtaining the camera image plane translation error in equivalent rotation vector form according to the unified equivalent rotation axis and the rotation angle about it.
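As an illustration only (not part of the claims), the quantities in claims 2 and 4-7 can be chained into a small numeric sketch. The formulas of claims 5 and 6 appear above only as unreproduced formula images, so the sketch assumes the standard pinhole relation φ = arctan(√(δ_u² + δ_v²)/f) and the usual rotation-vector form β = φ·l/‖l‖; all numeric values are hypothetical.

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v x] of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def installation_error_matrix(theta):
    """Claim 2: first-order (small-angle) matrix C_ins = I_3 + [theta x]."""
    return np.eye(3) + skew(theta)

def unified_error(delta_u, delta_v, f, theta):
    """Claims 4-7 chained: unified system error alpha = theta + beta.

    delta_u, delta_v: optical-axis translation on the image plane (same units as f)
    f: camera focal length; theta: installation error rotation vector [rad]
    """
    l = np.array([delta_v, delta_u, 0.0])            # claim 4: unified equivalent rotation axis
    phi = np.arctan(np.hypot(delta_u, delta_v) / f)  # claim 5 (assumed pinhole form)
    beta = phi * l / np.linalg.norm(l)               # claim 6 (assumed rotation-vector form)
    return theta + beta                              # claim 7: alpha = theta + beta

theta = np.array([1e-3, 2e-3, 0.0])                  # hypothetical installation error [rad]
C_ins = installation_error_matrix(theta)
alpha = unified_error(delta_u=0.02, delta_v=-0.01, f=50.0, theta=theta)
```

Because the image plane translation only produces rotation components lying in the image plane, the third component of β is zero here and α shares its z component with θ; the small-angle matrix satisfies C_ins − I_3 = [θ×].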
CN202210107583.2A 2022-01-28 2022-01-28 Navigation system error unified modeling method based on rotating reference coordinate system Active CN114577208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210107583.2A CN114577208B (en) 2022-01-28 2022-01-28 Navigation system error unified modeling method based on rotating reference coordinate system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210107583.2A CN114577208B (en) 2022-01-28 2022-01-28 Navigation system error unified modeling method based on rotating reference coordinate system

Publications (2)

Publication Number Publication Date
CN114577208A true CN114577208A (en) 2022-06-03
CN114577208B CN114577208B (en) 2024-07-12

Family

ID=81771964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210107583.2A Active CN114577208B (en) 2022-01-28 2022-01-28 Navigation system error unified modeling method based on rotating reference coordinate system

Country Status (1)

Country Link
CN (1) CN114577208B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6336062B1 (en) * 1999-12-10 2002-01-01 Nec Corporation Attitude angle sensor correcting apparatus for an artificial satellite
CN103487032A (en) * 2013-08-08 2014-01-01 上海卫星工程研究所 Low earth orbit space camera free-pointing image motion vector calculation method
CN108645426A (en) * 2018-04-09 2018-10-12 北京空间飞行器总体设计部 A kind of in-orbit self-calibrating method of extraterrestrial target Relative Navigation vision measurement system
CN109631952A (en) * 2019-01-31 2019-04-16 中国人民解放军国防科技大学 Method for calibrating installation error of attitude reference mirror of optical gyro component for spacecraft
CN109696182A (en) * 2019-01-23 2019-04-30 张过 A kind of spaceborne push-broom type optical sensor elements of interior orientation calibrating method
CN111561935A (en) * 2020-05-19 2020-08-21 中国科学院微小卫星创新研究院 In-orbit geometric calibration method and system for rotary large-breadth optical satellite


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Ma Hongliang et al., "An algorithm and observability research of autonomous navigation and joint attitude determination in asteroid exploration descent stage", Science China Information Sciences, vol. 57, no. 10, 22 October 2014, pages 1-17, XP035386508, DOI: 10.1007/s11432-014-5091-y *
Liu Minghe et al., "Modeling of the image motion velocity vector for an Earth observation camera", Electronics Optics & Control, vol. 21, no. 1, pages 63-67 *
Wang Dayi et al., "A survey of state estimation methods for spacecraft autonomous navigation", Acta Aeronautica et Astronautica Sinica, vol. 42, no. 4, pages 1-18 *
Fan Chengcheng; Wang Mi; Zhao Weiwei; Yang Bo; Jin Shuying; Pan Jun, "A time-varying systematic error modeling and compensation method for high-resolution optical satellite imagery", Acta Optica Sinica, no. 12, pages 308-315 *

Also Published As

Publication number Publication date
CN114577208B (en) 2024-07-12

Similar Documents

Publication Publication Date Title
Pesce et al. Comparison of filtering techniques for relative attitude estimation of uncooperative space objects
CN107702709B (en) Time-frequency domain hybrid identification method for non-cooperative target motion and inertia parameters
CN111380518B (en) SINS/USBL tight combination navigation positioning method introducing radial velocity
Li et al. Using consecutive point clouds for pose and motion estimation of tumbling non-cooperative target
CN108376411B (en) Binocular vision-based non-cooperative target relative state resolving method
CN107525492B (en) Drift angle simulation analysis method suitable for agile earth observation satellite
CN113175929B (en) UPF-based spatial non-cooperative target relative pose estimation method
CN113291493B (en) Method and system for determining fusion attitude of multiple sensors of satellite
Wang et al. Variational Bayesian cubature RTS smoothing for transfer alignment of DPOS
CN112985421B (en) Spacecraft autonomous astronomical navigation method based on angle constraint auxiliary measurement
CN114111818A (en) Universal visual SLAM method
CN108458709A (en) The airborne distributed POS data fusion method and device of view-based access control model subsidiary
CN113074753A (en) Star sensor and gyroscope combined attitude determination method, combined attitude determination system and application
CN114543794A (en) Absolute positioning method for fusion of visual inertial odometer and discontinuous RTK
Nazemipour et al. MEMS gyro bias estimation in accelerated motions using sensor fusion of camera and angular-rate gyroscope
CN114047766B (en) Mobile robot data acquisition system and method for long-term application of indoor and outdoor scenes
CN115290118A (en) Satellite laser communication system pointing error correction method based on star sensor
Liu et al. Global estimation method based on spatial–temporal Kalman filter for DPOS
CN114577208A (en) Navigation system error unified modeling method based on rotating reference coordinate system
CN117471439A (en) External parameter calibration method for ship lock forbidden region non-overlapping vision field laser radar
CN114623832A (en) Observable capacity dimension reduction representation and analysis judgment method and system for autonomous navigation system
CN116047480A (en) External parameter calibration method from laser radar to attitude sensor
CN113252029B (en) Astronomical navigation attitude transfer method based on optical gyroscope measurement information
Candan et al. Estimation of attitude using robust adaptive Kalman filter
CN110793540B (en) Method for improving attitude measurement precision of multi-probe star sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant