CN110186465B - Monocular vision-based space non-cooperative target relative state estimation method - Google Patents

Monocular vision-based space non-cooperative target relative state estimation method Download PDF

Info

Publication number
CN110186465B
CN110186465B CN201910593684.3A CN201910593684A CN110186465B CN 110186465 B CN110186465 B CN 110186465B CN 201910593684 A CN201910593684 A CN 201910593684A CN 110186465 B CN110186465 B CN 110186465B
Authority
CN
China
Prior art keywords
target
coordinate system
equation
axis
relative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910593684.3A
Other languages
Chinese (zh)
Other versions
CN110186465A (en
Inventor
孟中杰
郭新程
黄攀峰
张夷斋
张帆
刘正雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201910593684.3A priority Critical patent/CN110186465B/en
Publication of CN110186465A publication Critical patent/CN110186465A/en
Application granted granted Critical
Publication of CN110186465B publication Critical patent/CN110186465B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/24Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for cosmonautical navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Navigation (AREA)

Abstract

The invention provides a monocular vision-based relative state estimation method for a space non-cooperative target, which adopts a monocular camera to realize relative navigation in the final approach process of the space non-cooperative target and accelerates filtering convergence by adding a constraint mode in an observation equation; establishing a kinematics and dynamics model of relative motion between the target and the service spacecraft, establishing an observation equation, adding a constraint relation equation between target feature points, designing a filter, and realizing estimation of a target relative state. The monocular camera is adopted to realize the relative navigation of the final approaching space non-cooperative target, and compared with a binocular camera (stereoscopic vision) and a laser imaging radar, the monocular camera has the advantages of small volume and mass, low power consumption, low economic cost and the like; and the constraint satisfied by the relative characteristics is added into the observation equation, so that the filter is faster in convergence.

Description

Monocular vision-based space non-cooperative target relative state estimation method
Technical Field
The invention relates to the technical field of relative navigation of spacecrafts, in particular to a target relative state estimation method.
Background
In order to be able to implement on-orbit services for non-cooperative targets, the target-to-service spacecraft relative position and attitude needs to be estimated as the space target is approached in close proximity. The spatial non-cooperative target mostly has the following characteristics: the surface is free of specific optical identification marks; communication cannot be performed; the object model is unknown, etc. This brings great difficulty to the estimation of the relative pose during the approaching operation of such targets.
From the perspective measurement sensors available for relative navigation, there are monocular cameras, binocular cameras (stereo vision), laser imaging radars, etc., and compared with the latter two, the monocular cameras have the advantages of simple technical implementation, small volume and mass, low power consumption, etc., and are more flexible and economical in economy, but have the defect that depth information cannot be directly obtained. Although the literature has demonstrated the observability of position-dependent states when cameras are mounted offset, it remains a difficult task to estimate the relative state of an object using a monocular camera.
In fact, the estimation problem of the relative state of the unknown non-cooperative target can be similar to the SLAM problem, that is, the relative pose of the target is estimated and the target structure is restored at the same time, and for a rolling target, information such as the inertia main axis of the target generally needs to be estimated. In contrast, in one method in the literature, image coordinates of feature points on the surface of a target are used as observed quantities, and a filtering algorithm is designed to estimate the relative state of the target by establishing a motion model of the feature points and a dynamic model of the target. But it has a major problem that the filtering converges slowly.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a monocular vision-based method for estimating the relative state of a spatial non-cooperative target. The invention aims to solve the technical problems that relative navigation in the final approximation process of a space non-cooperative target is realized by adopting a monocular camera, and filtering convergence is accelerated by adding a constraint mode into an observation equation.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
the method comprises the following steps: establishing a kinematics and dynamics model of the relative motion between the target and the service spacecraft;
the following coordinate system is defined:
(1) geocentric inertial coordinate system I: the origin is located at the earth mass center, the z-axis points to the north pole of the earth, the x-axis points to the spring break point, and the y-axis is determined by a right-hand rule;
(2) LVLH coordinate system H: the origin of the coordinate system is located at the centroid of the service spacecraft, the x axis points to the centroid of the service spacecraft from the centroid, the z axis is perpendicular to the orbital plane and is consistent with the orbital angular momentum direction, and the y axis is determined by a right-hand rule;
(2) serving spacecraft body coordinate system A: the origin of coordinates is fixed on the center of mass of the service spacecraft, and the coordinate axis is superposed with the inertia main shaft of the service spacecraft;
(3) eye specimen coordinate system B: the origin of coordinates is fixed on a target centroid, coordinate axes are overlapped with an inertia main axis of the target, a minimum inertia main axis is defined as an x axis, and a maximum inertia main axis is defined as a y axis.
(5) Camera coordinate system C: the coordinate origin is fixed on the optical center of the camera, the position vector on the service spacecraft body coordinate system is d, the known fixed quantity is obtained, the z axis is along the optical axis direction of the camera, and the x axis and the y axis are parallel to the imaging plane;
the relative attitude of the target specimen body coordinate system B relative to the service spacecraft body coordinate system A is expressed by a quaternion q which is expressed by
Figure BDA0002116934100000021
q v =[q 1 q 2 q 3 ] T As part of a quaternion vector, q 4 A scalar part which is a quaternion, and a rotation matrix corresponding to the quaternion q is:
Figure BDA0002116934100000022
wherein [ q ] is]Is vector q ═ q 1 q 2 q 3 ] T The antisymmetric matrix of (a):
Figure BDA0002116934100000023
the multiplication between quaternions b and q is defined as follows:
Figure BDA0002116934100000024
the representation of the angular speed of the serving spacecraft body coordinate system A and the target specimen coordinate system B under the respective body coordinate systems is defined as
Figure BDA0002116934100000025
The angular velocity ω of the target relative to the serving spacecraft is expressed in the target object coordinate system B as:
Figure BDA0002116934100000026
the kinematic equation of the relative attitude is:
Figure BDA0002116934100000027
wherein the angular velocity of the serving spacecraft
Figure BDA0002116934100000031
Measured by inertial equipment of a service spacecraft, and used as a known quantity in a model equation;
the attitude dynamics equation of the target is:
Figure BDA0002116934100000032
wherein ω is t Represents the target angular velocity, i.e. is
Figure BDA0002116934100000033
τ t As a noise term, J t An inertia matrix of the target, and J t =diag(J xx ,J yy ,J zz ),J xx ,J yy ,J zz The moment of inertia of the x axis, the y axis and the z axis of the target main shaft are respectively; order to
Figure BDA0002116934100000034
Then formula (6) is converted into:
Figure BDA0002116934100000035
the target moment of inertia ratio is a constant and should satisfy:
Figure BDA0002116934100000036
wherein the content of the first and second substances,
Figure BDA0002116934100000037
represents k 1 With respect to the rate of change of time,
Figure BDA0002116934100000038
represents the rate of change of k2 with respect to time;
definition of p 0 Is the position vector of the target centroid relative to the serving spacecraft centroid, expressed as [ x y z ] in LVLH coordinate system] T ρ when considering a close proximity of a serving spacecraft to a target 0 If small, the relative position kinetics equation is:
Figure BDA0002116934100000039
wherein:
Figure BDA00021169341000000310
to serve the true near point angular velocity, r, of the spacecraft orbit c To serve the spacecraft orbit radius, p c Is a semi-positive focal chord line,
Figure BDA00021169341000000311
respectively, second derivatives of x, y, z,
Figure BDA00021169341000000312
is represented by r c First derivative of, serving spacecraft orbit parameters
Figure BDA00021169341000000313
r c 、p c
Figure BDA00021169341000000314
The method is obtained through a service spacecraft orbit determination system and used as a known quantity in a model equation;
definition of p 1 ,…,p N The position vectors of the N characteristic points on the target relative to the target centroid are respectively expressed on the target ontology, and the rigid body hypothesis of the target is as follows:
Figure BDA00021169341000000315
wherein the content of the first and second substances,
Figure BDA00021169341000000316
to represent
Figure BDA00021169341000000317
Rate of change with respect to time;
finally selecting the state to be estimated
Figure BDA0002116934100000041
The relative motion dynamics model is then written as:
Figure BDA0002116934100000042
wherein: f (X) is obtained from formulae (5), (7), (8), (9) and (10), respectively; w (t) is the system noise term;
step two: establishing an observation equation, wherein a constraint relation equation between target characteristic points is added;
defining a position vector rho of a characteristic point i on a target under a camera coordinate system i Comprises the following steps:
Figure BDA0002116934100000043
in the formula:
Figure BDA0002116934100000044
a rotation matrix representing the coordinate system from the serving spacecraft body coordinate system to the camera coordinate system is a known quantity;
Figure BDA0002116934100000045
the rotation matrix from the LVLH coordinate system to the service spacecraft body coordinate system is represented and obtained through self-measuring equipment;
Figure BDA0002116934100000046
relating to the state q to be estimated, given by equation (1);
characteristic point p i Image projection coordinates of (2)
Figure BDA0002116934100000047
And rho i =[x i y i z i ] T The relationship is as follows:
Figure BDA0002116934100000048
wherein f is x ,f y ,c x ,c y The internal parameter of the used camera;
further, the relative feature ρ is defined in consideration of the feature points i, j, k ij Comprises the following steps:
Figure BDA0002116934100000049
same definition of rho jk And ρ ki ,ρ ij Projection coordinate in the image is y ij The two satisfy the following relation:
Figure BDA00021169341000000410
wherein u is ij =u i -u j ,v ij =v i -v j
Figure BDA00021169341000000411
Taking into account the vector sum p ijjkki When the formula (15) is 0, the constraint should be satisfied between the feature points i, j, and k:
Figure BDA00021169341000000412
is expressed as M (i, j, k) ═ y i,j,k
Numbering N characteristic points on a target by 1,2, … and N, and forming a constraint equation by taking three points as a group and adding the constraint equation into an observation equation; suppose s 1 ,…,s m Is a point selection combination mode, covers all the characteristic points to be estimated, and the point selection combination mode is s 1 =1,2,3,s 2 =2,3,4,…,s N-1 =N-1,N,1;
The final observation equation is:
Figure BDA0002116934100000051
in observation equation (17): z is all observed quantities, v is observed noise, and all ρ i 、ρ ij The related components are replaced by the formula (12) and the formula (14), and the mapping relation from the state to the observation can be obtained;
step three: designing a filter to realize the estimation of a target relative state;
equation (11) describes the equation of state of a continuous system, which is first transformed into a discrete model, and then estimated, where the discrete model is:
Figure BDA0002116934100000052
in the formula: x (k) is the state at time k, Δ t is the filtering period,
Figure BDA0002116934100000053
respectively expressing a system state equation and an observation equation for the formula (18) and the formula (17), and performing state estimation by adopting iterative extended Kalman filtering; the specific estimation steps are as follows:
step 3.1: initialization: setting initial filter estimation values
Figure BDA0002116934100000054
And an error variance matrix P (0);
step 3.2: the state prediction value is:
Figure BDA0002116934100000061
the prediction error variance matrix is:
Figure BDA0002116934100000062
in the formula: p (k +1/k) represents a prediction error variance matrix, and P (k/k) represents a filtering error variance matrix at the time k;
wherein:
Figure BDA0002116934100000063
Q k is the system noise variance;
step 3.3: and (3) state updating:
step 3.3.1: taking the predicted value as an iteration initial value:
Figure BDA0002116934100000064
P(k+1/k+1) 0 =P(k+1/k) (22)
step 3.3.2: in the iteration process, the ith iteration is as follows:
Figure BDA00021169341000000611
wherein
Figure BDA0002116934100000066
R k+1 For measuring the variance of the noise, K (K +1) i Represents the gain at the ith iteration;
Figure BDA0002116934100000067
wherein, (r (k) i Representing the observed residual at the ith iteration,
Figure BDA0002116934100000068
the iteration result of the (i-1) th iteration result is substituted into an observation equation (17) to obtain the result;
wherein z (k +1) is all observations at time k + 1;
Figure BDA0002116934100000069
P(k+1/k+1) i =[I-K(k+1) i H(k+1) i-1 ]P(k+1/k+1) i-1 (26)
step 3.3.3: when the iteration times reach the maximum iteration times or the state iteration error is smaller than a set threshold value, the iteration is terminated, and the state iteration error is defined as follows:
Figure BDA00021169341000000610
wherein | · | purple 2 Representing a vector two norm; assuming that iteration is terminated in the nth iteration, outputting the iteration result at the moment as a final estimation result:
Figure BDA0002116934100000071
P(k+1/k+1)=P(k+1/k+1) n (29)
wherein the content of the first and second substances,
Figure BDA0002116934100000072
i.e. the final state estimation result at the time k + 1.
The method has the advantages that the monocular camera is adopted to realize the relative navigation of the final approaching space non-cooperative target, and compared with a binocular camera (stereoscopic vision) and a laser imaging radar, the monocular camera has the advantages of small volume and mass, low power consumption, low economic cost and the like; and the constraint satisfied by the relative characteristics is added into the observation equation, so that the filter is faster in convergence.
Drawings
Fig. 1 is a schematic diagram of a body coordinate system and a measurement geometry relationship of a service spacecraft and a target.
Among them, 1-service spacecraft, 2-target satellite, 3-monocular camera.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The invention provides a method for estimating the relative state of a target and rapidly converging by using a monocular camera on the basis of the existing research. The procedure of the example is as follows:
the method comprises the following steps: establishing a kinematics and dynamics model of the relative motion between the target and the service spacecraft;
the following coordinate system is defined:
(1) geocentric inertial coordinate system I: the origin is located at the earth mass center, the z-axis points to the north pole of the earth, the x-axis points to the spring break point, and the y-axis is determined by a right-hand rule;
(2) LVLH coordinate system H: the origin of the coordinate system is located at the centroid of the service spacecraft, the x axis points to the centroid of the service spacecraft from the centroid, the z axis is perpendicular to the orbital plane and is consistent with the orbital angular momentum direction, and the y axis is determined by a right-hand rule;
(3) serving spacecraft body coordinate system A: the origin of coordinates is fixed on the center of mass of the service spacecraft, and the coordinate axis is superposed with the inertia main shaft of the service spacecraft;
(4) eye specimen coordinate system B: the origin of coordinates is fixed on a target centroid, coordinate axes are overlapped with an inertia main axis of the target, a minimum inertia main axis is defined as an x axis, and a maximum inertia main axis is defined as a y axis.
(5) Camera coordinate system C: the coordinate origin is fixed on the optical center of the camera, the position vector on the service spacecraft body coordinate system is d, the known fixed quantity is obtained, the z axis is along the optical axis direction of the camera, and the x axis and the y axis are parallel to the imaging plane;
target specimen coordinate system B relative to service spacecraftThe relative attitude of the body coordinate system A is expressed by a quaternion q which is a quaternion
Figure BDA0002116934100000081
q v =[q 1 q 2 q 3 ] T As part of a quaternion vector, q 4 A scalar part which is a quaternion, and a rotation matrix corresponding to the quaternion q is:
Figure BDA0002116934100000082
wherein [ q ] is]Is vector q ═ q 1 q 2 q 3 ] T The antisymmetric matrix of (a):
Figure BDA0002116934100000083
the multiplication between quaternions b and q is defined as follows:
Figure BDA0002116934100000084
the representation of the angular speed of the serving spacecraft body coordinate system A and the target specimen coordinate system B under the respective body coordinate systems is defined as
Figure BDA0002116934100000085
The angular velocity ω of the target relative to the serving spacecraft is expressed in the target object coordinate system B as:
Figure BDA0002116934100000086
the kinematic equation of the relative attitude is:
Figure BDA0002116934100000087
wherein the angular velocity of the serving spacecraftDegree of rotation
Figure BDA0002116934100000088
Measured by inertial equipment of a service spacecraft, and used as a known quantity in a model equation;
the attitude dynamics equation of the target is:
Figure BDA0002116934100000089
wherein ω is t Represents the target angular velocity, i.e. is
Figure BDA00021169341000000810
τ t As a noise term, J t An inertia matrix of the target, and J t =diag(J xx ,J yy ,J zz ),J xx ,J yy ,J zz Respectively the rotational inertia of the x axis, the y axis and the z axis of the target main shaft; since the target moment of inertia is unknown, the state estimation cannot be directly performed by using the formula (6), and the moment of inertia needs to be parameterized to make
Figure BDA0002116934100000091
Then formula (6) is converted into:
Figure BDA0002116934100000092
considering that the target is a rigid body, the mass and the distribution thereof do not change in a short period, and the target moment-of-inertia ratio is a constant, the following conditions should be satisfied:
Figure BDA0002116934100000093
wherein the content of the first and second substances,
Figure BDA0002116934100000094
represents k 1 With respect to the rate of change of time,
Figure BDA0002116934100000095
represents the rate of change of k2 with respect to time;
definition of p 0 Is the position vector of the target centroid relative to the serving spacecraft centroid, expressed as [ x y z ] in LVLH coordinate system] T ρ when considering a close proximity of a serving spacecraft to a target 0 If small, the relative position kinetics equation is:
Figure BDA0002116934100000096
wherein:
Figure BDA0002116934100000097
to serve the true near point angular velocity, r, of the spacecraft orbit c To serve the spacecraft orbit radius, p c Is a semi-positive focal chord line,
Figure BDA00021169341000000917
respectively, second derivatives of x, y, z,
Figure BDA0002116934100000099
is represented by r c First derivative of, serving spacecraft orbit parameters
Figure BDA00021169341000000910
r c 、p c
Figure BDA00021169341000000911
The method is obtained through a service spacecraft orbit determination system and used as a known quantity in a model equation;
definition of p 1 ,…,p N The position vectors of the N characteristic points on the target relative to the target centroid are respectively expressed on the target ontology, and the rigid body hypothesis of the target is as follows:
Figure BDA00021169341000000912
wherein the content of the first and second substances,
Figure BDA00021169341000000913
to represent
Figure BDA00021169341000000914
Rate of change with respect to time;
finally selecting the state to be estimated
Figure BDA00021169341000000915
The relative motion dynamics model is then written as:
Figure BDA00021169341000000916
wherein: f (X) is obtained from formulae (5), (7), (8), (9) and (10), respectively; w (t) is the system noise term;
step two: establishing an observation equation, wherein a constraint relation equation between target characteristic points is added;
the observed quantity is coordinates of target feature points extracted from the image, the number of the feature points to be selected and tracked is assumed to be N, and the problem of feature point shielding disappearance is not considered; defining a position vector rho of a characteristic point i on a target under a camera coordinate system i As shown in fig. 1, the following components are provided:
Figure BDA0002116934100000101
in the formula:
Figure BDA0002116934100000102
a rotation matrix representing the coordinate system of the serving spacecraft body to the coordinate system of the camera, is a known quantity;
Figure BDA0002116934100000103
the rotation matrix representing the LVLH coordinate system to the service spacecraft body coordinate system is related to the attitude motion and the orbit position of the service spacecraft and can be obtained through self measuring equipment;
Figure BDA0002116934100000104
relating to the state q to be estimated, given by equation (1);
characteristic point p i Image projection coordinates of
Figure BDA0002116934100000105
And rho i =[x i y i z i ] T The relationship is as follows:
Figure BDA0002116934100000106
wherein f is x ,f y ,c x ,c y The internal parameter of the used camera;
further, the relative feature ρ is defined in consideration of the feature points i, j, k ij Comprises the following steps:
Figure BDA0002116934100000107
similarly, p can be defined jk And ρ ki ,ρ ij Projection coordinate in the image is y ij The two satisfy the following relation:
Figure BDA0002116934100000108
wherein u is ij =u i -u j ,v ij =v i -v j
Figure BDA0002116934100000109
Taking into account the vector sum p ijjkki When the formula (15) is 0, the constraint should be satisfied between the feature points i, j, and k:
Figure BDA00021169341000001010
is expressed as M (i, j, k) ═ y i,j,k
Numbering N characteristic points on a target by 1,2, … and N, and forming a constraint equation by taking three points as a group and adding the constraint equation into an observation equation; suppose s 1 ,…,s m Is a point selection combination mode, covers all the characteristic points to be estimated, and the point selection combination mode is s 1 =1,2,3,s 2 =2,3,4,…,s N-1 =N-1,N,1;
The final observation equation is:
Figure BDA0002116934100000111
in observation equation (17): z is all observed quantities, v is observed noise, and all ρ i 、ρ ij The related components are replaced by the formula (12) and the formula (14), and the mapping relation from the state to the observation can be obtained;
step three: designing a filter to realize the estimation of a target relative state;
equation (11) describes the equation of state of a continuous system, which is first transformed into a discrete model, and then estimated, where the discrete model is:
Figure BDA0002116934100000112
in the formula: x (k) is the state at time k, Δ t is the filtering period,
Figure BDA0002116934100000113
respectively expressing a system state equation and an observation equation for the formula (18) and the formula (17), and performing state estimation by adopting iterative extended Kalman filtering; the specific estimation steps are as follows:
step 3.1: initialization: setting initial filter estimation values
Figure BDA0002116934100000114
And an error variance matrix P (0);
step 3.2: the state prediction value is:
Figure BDA0002116934100000115
the prediction error variance matrix is:
Figure BDA0002116934100000121
in the formula: p (k +1/k) represents a prediction error variance matrix, and P (k/k) represents a filtering error variance matrix at the time k;
wherein:
Figure BDA0002116934100000122
Q k is the system noise variance;
step 3.3: and (3) updating the state:
step 3.3.1: taking the predicted value as an iteration initial value:
Figure BDA0002116934100000123
P(k+1/k+1) 0 =P(k+1/k) (22)
step 3.3.2: in the iteration process, the ith iteration is as follows:
Figure BDA00021169341000001210
wherein
Figure BDA0002116934100000124
R k+1 For measuring the variance of the noise, K (K +1) i Represents the gain at the ith iteration;
Figure BDA0002116934100000125
wherein, (r (k) i Representing the observed residual at the ith iteration,
Figure BDA0002116934100000126
the iteration result of the (i-1) th iteration result is substituted into an observation equation (17) to obtain the result;
wherein z (k +1) is all observations at time k + 1;
Figure BDA0002116934100000127
P(k+1/k+1) i =[I-K(k+1) i H(k+1) i-1 ]P(k+1/k+1) i-1 (26)
step 3.3.3: when the iteration times reach the maximum iteration times or the state iteration error is smaller than a set threshold value, the iteration is terminated, and the state iteration error is defined as follows:
Figure BDA0002116934100000128
wherein | · | purple 2 Representing a vector two norm; assuming that iteration is terminated in the nth iteration, outputting the iteration result at the moment as a final estimation result:
Figure BDA0002116934100000129
P(k+1/k+1)=P(k+1/k+1) n (29)
wherein the content of the first and second substances,
Figure BDA0002116934100000131
i.e. the final state estimation result at the time k + 1.

Claims (1)

1. A monocular vision-based spatial non-cooperative target relative state estimation method is characterized by comprising the following steps:
the method comprises the following steps: establishing a kinematics and dynamics model of the relative motion between the target and the service spacecraft;
the following coordinate system is defined:
(1) geocentric inertial coordinate system I: the origin is located at the earth mass center, the z-axis points to the north pole of the earth, the x-axis points to the spring break point, and the y-axis is determined by a right-hand rule;
(2) LVLH coordinate system H: the origin of the coordinate system is located at the centroid of the service spacecraft, the x axis points to the centroid of the service spacecraft from the centroid, the z axis is perpendicular to the orbital plane and is consistent with the orbital angular momentum direction, and the y axis is determined by a right-hand rule;
(2) serving spacecraft body coordinate system A: the origin of coordinates is fixed on the center of mass of the service spacecraft, and the coordinate axis is superposed with the inertia main shaft of the service spacecraft;
(3) eye specimen coordinate system B: the origin of coordinates is fixed on a target centroid, coordinate axes are overlapped with an inertia main axis of a target, a minimum inertia main axis is defined as an x axis, and a maximum inertia main axis is defined as a y axis;
(5) camera coordinate system C: the origin of coordinates is fixed on the optical center of the camera, the z axis is along the optical axis direction of the camera, and the x axis and the y axis are parallel to the imaging plane;
the relative attitude of the target specimen body coordinate system B relative to the service spacecraft body coordinate system A is expressed by a quaternion q which is expressed by
Figure FDA0003674303050000011
q v =[q 1 q 2 q 3 ] T As part of a quaternion vector, q 4 A scalar part which is a quaternion, and a rotation matrix corresponding to the quaternion q is:
Figure FDA0003674303050000012
wherein [ q ] is]Is vector q ═ q 1 q 2 q 3 ] T The antisymmetric matrix of (a):
Figure FDA0003674303050000013
the multiplication between quaternions b and q is defined as follows:
Figure FDA0003674303050000014
the representation of the angular speed of the serving spacecraft body coordinate system A and the target specimen coordinate system B under the respective body coordinate systems is defined as
Figure FDA0003674303050000021
The angular velocity ω of the target relative to the serving spacecraft is expressed in the target object coordinate system B as:
Figure FDA0003674303050000022
the kinematic equation of the relative attitude is:
Figure FDA0003674303050000023
wherein the angular velocity of the serving spacecraft
Figure FDA0003674303050000024
Measured by inertial equipment of a service spacecraft, and used as a known quantity in a model equation;
the attitude dynamics equation of the target is:
Figure FDA0003674303050000025
wherein ω is t Represents the target angular velocity, i.e. is
Figure FDA0003674303050000026
τ t As a noise term, J t An inertia matrix of the target, and J t =diag(J xx ,J yy ,J zz ),J xx ,J yy ,J zz Respectively the rotational inertia of the x axis, the y axis and the z axis of the target main shaft; order to
Figure FDA0003674303050000027
Then the formula is converted to:
Figure FDA0003674303050000028
the target moment of inertia ratio is a constant and should satisfy:
Figure FDA0003674303050000029
wherein the content of the first and second substances,
Figure FDA00036743030500000210
represents k 1 With respect to the rate of change of time,
Figure FDA00036743030500000211
represents the rate of change of k2 with respect to time;
definition of p 0 Is the position vector of the target centroid relative to the serving spacecraft centroid, expressed as [ x y z ] in LVLH coordinate system] T ρ when considering a close proximity of a serving spacecraft to a target 0 If small, the relative position kinetics equation is:
Figure FDA00036743030500000212
wherein:
Figure FDA0003674303050000031
to serve the true near point angular velocity, r, of the spacecraft orbit c To serve the spacecraft orbit radius, p c Is a semi-positive focal length string of a straight line,
Figure FDA0003674303050000032
respectively, second derivatives of x, y, z,
Figure FDA0003674303050000033
is represented by r c First derivative of, serving spacecraft orbit parameters
Figure FDA0003674303050000034
r c 、p c
Figure FDA0003674303050000035
The method is obtained through a service spacecraft orbit determination system and used as a known quantity in a model equation;
definition of p 1 ,…,p N The position vectors of the N characteristic points on the target relative to the target centroid are respectively expressed on the target ontology, and the rigid body hypothesis of the target is as follows:
Figure FDA0003674303050000036
wherein the content of the first and second substances,
Figure FDA0003674303050000037
is represented by [ p ] 1 T ,…,p N T ] T Rate of change with respect to time;
finally selecting the state to be estimated
Figure FDA0003674303050000038
The relative motion dynamics model is then written as:
Figure FDA0003674303050000039
wherein: f (X) is obtained from formulae (5), (7), (8), (9) and (10), respectively; w (t) is a system noise term;
step two: establishing an observation equation, wherein a constraint relation equation between target characteristic points is added;
defining a position vector rho of a characteristic point i on a target under a camera coordinate system i Comprises the following steps:
Figure FDA00036743030500000310
in the formula:
Figure FDA00036743030500000311
a rotation matrix representing the coordinate system from the serving spacecraft body coordinate system to the camera coordinate system is a known quantity; d is a position vector of the origin of the camera coordinate system under the serving spacecraft body coordinate system, and is a known quantity;
Figure FDA00036743030500000312
the rotation matrix from the LVLH coordinate system to the service spacecraft body coordinate system is represented and obtained through self-measuring equipment;
Figure FDA00036743030500000313
relating to the state q to be estimated, given by equation (1);
characteristic point p i Image projection coordinates of
Figure FDA00036743030500000314
And rho i =[x i y i z i ] T The relationship is as follows:
Figure FDA00036743030500000315
wherein f is x ,f y ,c x ,c y The internal parameter of the used camera;
further, considering the feature points i, j, k, the relative feature ρ is defined ij Comprises the following steps:
Figure FDA00036743030500000316
definition of rho in the same way jk And ρ ki ,ρ ij Projection coordinate in the image is y ij The two satisfy the following relation:
Figure FDA0003674303050000041
wherein u is ij =u i -u j ,v ij =v i -v j
Figure FDA0003674303050000042
Taking into account the vector sum p ijjkki When the characteristic point i, j, k satisfies the constraint of 0:
Figure FDA0003674303050000043
is expressed as M (i, j, k) ═ y i,j,k
Numbering N characteristic points on a target by 1,2, … and N, and forming a constraint equation by taking three points as a group and adding the constraint equation into an observation equation; suppose s 1 ,…,s m Is a point selection combination mode, covers all the characteristic points to be estimated, and the point selection combination mode is s 1 =1,2,3,s 2 =2,3,4,…,s N-1 =N-1,N,1;
The final observation equation is:
Figure FDA0003674303050000044
in the observation equation: z is all observed quantities, v is observed noise, and all ρ i 、ρ ij The related components are replaced by applying formulas and formulas, and the mapping relation from the state to the observation can be obtained;
step three: designing a filter to realize the estimation of a target relative state;
the equation describes a continuous system state equation, which is converted into a discrete model, and then estimated, wherein the discrete model is as follows:
Figure FDA0003674303050000051
in the formula: x (k) is the state at time k, Δ t is the filtering period,
Figure FDA0003674303050000052
respectively expressing a system state equation and an observation equation by using the equation and the formula, and performing state estimation by using iterative extended Kalman filtering; the specific estimation steps are as follows:
step 3.1: initialization: setting initial filter estimation values
Figure FDA0003674303050000053
And an error variance matrix P (0);
step 3.2: the state prediction value is:
Figure FDA0003674303050000054
the prediction error variance matrix is:
Figure FDA0003674303050000055
in the formula: p (k +1/k) represents a prediction error variance matrix, and P (k/k) represents a filtering error variance matrix at the time k;
wherein:
Figure FDA0003674303050000056
Q k is the system noise variance;
step 3.3: and (3) updating the state:
step 3.3.1: taking the predicted value as an iteration initial value:
Figure FDA0003674303050000057
P(k+1/k+1) 0 =P(k+1/k) (22)
step 3.3.2: in the iteration process, the ith iteration is as follows:
Figure FDA0003674303050000058
wherein
Figure FDA0003674303050000059
R k+1 For measuring the variance of the noise, K (K +1) i Represents the gain at the ith iteration;
Figure FDA00036743030500000510
wherein, (r (k) i Representing the observed residual at the ith iteration,
Figure FDA00036743030500000511
) The iteration result of the (i-1) th iteration result is substituted into an observation equation (17) to obtain the result;
wherein z (k +1) is all observations at time k + 1;
Figure FDA0003674303050000061
P(k+1/k+1) i =[I-K(k+1) i H(k+1) i-1 ]P(k+1/k+1) i-1 (26)
step 3.3.3: when the iteration times reach the maximum iteration times or the state iteration error is smaller than a set threshold value, the iteration is terminated, and the state iteration error is defined as follows:
Figure FDA0003674303050000062
wherein | · | purple 2 Representing a vector two norm; assuming that iteration is terminated in the nth iteration, outputting the iteration result at the moment as a final estimation result:
Figure FDA0003674303050000063
P(k+1/k+1)=P(k+1/k+1) n (29)
wherein the content of the first and second substances,
Figure FDA0003674303050000064
i.e. the final state estimation result at the time k + 1.
CN201910593684.3A 2019-07-03 2019-07-03 Monocular vision-based space non-cooperative target relative state estimation method Active CN110186465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910593684.3A CN110186465B (en) 2019-07-03 2019-07-03 Monocular vision-based space non-cooperative target relative state estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910593684.3A CN110186465B (en) 2019-07-03 2019-07-03 Monocular vision-based space non-cooperative target relative state estimation method

Publications (2)

Publication Number Publication Date
CN110186465A CN110186465A (en) 2019-08-30
CN110186465B true CN110186465B (en) 2022-08-05

Family

ID=67724752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910593684.3A Active CN110186465B (en) 2019-07-03 2019-07-03 Monocular vision-based space non-cooperative target relative state estimation method

Country Status (1)

Country Link
CN (1) CN110186465B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110823214B (en) * 2019-10-18 2021-05-25 西北工业大学 Method for estimating relative pose and inertia of space complete non-cooperative target
CN113295171B (en) * 2021-05-19 2022-08-16 北京航空航天大学 Monocular vision-based attitude estimation method for rotating rigid body spacecraft
CN116958263B (en) * 2023-08-09 2024-04-12 苏州三垣航天科技有限公司 Monocular camera intelligent enhancement method in space observation target gesture recognition process

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104406598A (en) * 2014-12-11 2015-03-11 南京航空航天大学 Non-cooperative spacecraft attitude estimation method based on virtual sliding mode control
CN106548475A (en) * 2016-11-18 2017-03-29 西北工业大学 A kind of Forecasting Methodology of the target trajectory that spins suitable for space non-cooperative
CN107167145A (en) * 2017-05-25 2017-09-15 西北工业大学 A kind of morphological parameters measuring method of adaptive contactless inert satellite
CN107449402A (en) * 2017-07-31 2017-12-08 清华大学深圳研究生院 A kind of measuring method of the relative pose of noncooperative target
CN107702709A (en) * 2017-08-31 2018-02-16 西北工业大学 A kind of noncooperative target moves the time-frequency domain mixing discrimination method with inertial parameter
CN108804846A (en) * 2018-06-20 2018-11-13 哈尔滨工业大学 A kind of data-driven attitude controller design method of noncooperative target assembly spacecraft
CN108917772A (en) * 2018-04-04 2018-11-30 北京空间飞行器总体设计部 Noncooperative target Relative Navigation method for estimating based on sequence image

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104406598A (en) * 2014-12-11 2015-03-11 南京航空航天大学 Non-cooperative spacecraft attitude estimation method based on virtual sliding mode control
CN106548475A (en) * 2016-11-18 2017-03-29 西北工业大学 A kind of Forecasting Methodology of the target trajectory that spins suitable for space non-cooperative
CN107167145A (en) * 2017-05-25 2017-09-15 西北工业大学 A kind of morphological parameters measuring method of adaptive contactless inert satellite
CN107449402A (en) * 2017-07-31 2017-12-08 清华大学深圳研究生院 A kind of measuring method of the relative pose of noncooperative target
CN107702709A (en) * 2017-08-31 2018-02-16 西北工业大学 A kind of noncooperative target moves the time-frequency domain mixing discrimination method with inertial parameter
CN108917772A (en) * 2018-04-04 2018-11-30 北京空间飞行器总体设计部 Noncooperative target Relative Navigation method for estimating based on sequence image
CN108804846A (en) * 2018-06-20 2018-11-13 哈尔滨工业大学 A kind of data-driven attitude controller design method of noncooperative target assembly spacecraft

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Trajectory prediction of space robot for capturing non-cooperative target;Dong Han et.al;《2017 18th International Conference on Advanced Robotics (ICAR)》;20171231;第328-333页 *
对空间碎片的相对位姿估计;宋亮等;《宇航学报》;20150831;第36卷(第8期);第906-915页 *

Also Published As

Publication number Publication date
CN110186465A (en) 2019-08-30

Similar Documents

Publication Publication Date Title
US20230194266A1 (en) Vision-aided inertial navigation
Pesce et al. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects
Aghili et al. Robust relative navigation by integration of ICP and adaptive Kalman filter using laser scanner and IMU
CN110186465B (en) Monocular vision-based space non-cooperative target relative state estimation method
Aghili et al. Motion and parameter estimation of space objects using laser-vision data
Gross et al. A comparison of extended Kalman filter, sigma-point Kalman filter, and particle filter in GPS/INS sensor fusion
CN108645416B (en) Non-cooperative target relative navigation simulation verification method based on vision measurement system
Scandaroli et al. Nonlinear filter design for pose and IMU bias estimation
Clemens et al. Extended Kalman filter with manifold state representation for navigating a maneuverable melting probe
Moutinho et al. Attitude estimation in SO (3): A comparative UAV case study
CN115200578A (en) Polynomial optimization-based inertial-based navigation information fusion method and system
Cristofalo et al. Vision-based control for fast 3-d reconstruction with an aerial robot
Na et al. Vision-based relative navigation using dual quaternion for spacecraft proximity operations
Galante et al. Fast Kalman filtering for relative spacecraft position and attitude estimation for the raven ISS hosted payload
Song et al. Autonomous rendezvous and docking of an unknown tumbling space target with a monocular camera
Paul et al. Dual Kalman filters for autonomous terrain aided navigation in unknown environments
Ammann et al. Undelayed initialization of inverse depth parameterized landmarks in UKF-SLAM with error state formulation
Sveier et al. Pose estimation using dual quaternions and moving horizon estimation
Huntsberger et al. Sensory fusion for planetary surface robotic navigation, rendezvous, and manipulation operations
Villa et al. Optical navigation for autonomous approach of small unknown bodies
Li et al. A novel hybrid unscented particle filter based on firefly algorithm for tightly-coupled stereo visual-inertial vehicle positioning
Mazzucato et al. Stereoscopic vision-based relative navigation for spacecraft proximity operations
Rathinama et al. Vision based state estimation using a graph-SLAM approach for proximity operations near an asteroid
Yang et al. Inertial-aided vision-based localization and mapping in a riverine environment with reflection measurements
Marcus et al. MHN-SLAM FOR PLANETARY LANDING

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant