CN117647822A - Video imaging method and system based on multi-star pair non-cooperative targets - Google Patents

Info

Publication number: CN117647822A
Application number: CN202311359160.0A
Authority: CN (China)
Legal status: Pending
Prior art keywords: coordinate system, target, satellite, component, calculating
Other languages: Chinese (zh)
Inventors: 姚晓杰, 孔晓健, 任路明, 胡泽岩
Assignee: CASIC Space Engineering Development Co Ltd
Application filed by CASIC Space Engineering Development Co Ltd

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/01: Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/13: Receivers
    • G01S 19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS, GLONASS or GALILEO
    • G01S 19/393: Trajectory determination or predictive tracking, e.g. Kalman filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)

Abstract

An embodiment of the invention discloses a video imaging method and system in which multiple satellites observe a non-cooperative target. A target to be observed is selected, and once at least two satellites have captured the target, a positioning calculation yields the target's position in the geocentric inertial coordinate system. From the positions of the target and of the satellite in that frame, the target's position in the orbit coordinate system is computed, together with a first unit vector along the observation direction. A desired attitude maneuver quaternion is computed from the angle between the optical axis of the satellite's detector and the first unit vector, and from a second unit vector perpendicular to both; from this quaternion the desired roll and pitch angles are obtained, and the satellite's roll and pitch attitudes are adjusted. Using the positions, in the image coordinate system, of two background points from a landmark library in the current frame and in the next frame, together with the perpendicular bisectors of the segments joining them, the method judges whether the image shift exceeds 0.1 pixel; if it does, a yaw angle is calculated and the satellite's yaw attitude is adjusted.

Description

Video imaging method and system based on multiple satellites for non-cooperative targets
Technical Field
The present invention relates to the field of aerospace and satellite attitude kinematics, and more particularly to a video imaging method and system in which multiple satellites observe a non-cooperative target.
Background
A video imaging satellite observes the ground in video mode, an upgrade of the conventional optical satellite's Earth-observation technology. In the decade following the 2007 launch of the LAPAN-TUBSAT satellite, developed jointly by Indonesia and Germany, and the 2014 launch of the Tiantuo-2 experimental satellite developed by the National University of Defense Technology, video satellites have been widely used in scenarios such as real-time urban traffic monitoring, rapid response to natural disasters, public safety monitoring, and national defense. By continuously observing and imaging a specific area or target for a period of time, a video satellite can capture more of the target's motion. Combined with target tracking techniques, satellite video data is particularly suitable for monitoring moving targets. The low-orbit video satellite constellation, a new mode of ground and air target observation developed in recent years, is designed mainly for dynamic real-time monitoring of hot-spot areas and for target tracking on a global scale. Low-orbit video satellites are highly agile, observe continuously, and are inexpensive, giving them broad application potential in tracking and monitoring dynamic targets.
However, the video data acquired by a video satellite has low resolution and a wide imaging range; a target occupies only a tiny fraction of the pixels, and the background is complex. The interference that large numbers of target-irrelevant objects such as mountains, forests, and buildings inflict on a tracking algorithm far exceeds that in conventional video. Mature target tracking algorithms therefore perform worse on video satellite imagery than on conventional video, struggling to extract and identify target features and to summarize the target's motion pattern. For example, texture and shading on an airport apron, and complex building shapes, all contribute significant background interference when tracking aircraft in real time. No mature technology for real-time target tracking with high-resolution video satellites has yet been disclosed, and the problem remains open in the field of video applications.
Disclosure of Invention
The invention aims to provide a video imaging method and system based on multiple satellites observing non-cooperative targets, so as to solve at least one of the problems in the prior art.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A first aspect of the invention provides a multi-satellite video imaging method for non-cooperative targets, the method comprising:
Selecting a target to be observed, observing by utilizing satellites in an area where the target is located, and performing positioning calculation when at least two satellites capture the target to obtain the position of the target under a geocentric inertial coordinate system;
calculating the position of the target under the orbit coordinate system according to the position of the target under the geocentric inertial coordinate system and the position of the satellite under the geocentric inertial coordinate system, and calculating a first unit vector of the target along the observation direction under the orbit coordinate system according to the position of the target under the orbit coordinate system;
calculating an expected attitude maneuver quaternion according to an included angle between an optical axis of a detector in the satellite and the first unit vector and a second unit vector perpendicular to the optical axis and the first unit vector, calculating an expected roll angle and an expected pitch angle according to the expected attitude maneuver quaternion, and adjusting the roll attitude and the pitch attitude of the satellite according to the expected roll angle and the expected pitch angle so that the optical axis coincides with the first unit vector;
judging whether the image shift between a first perpendicular bisector and a second perpendicular bisector exceeds 0.1 pixel, wherein the first perpendicular bisector is that of the segment joining the positions, in the image coordinate system, of two background points from the landmark library in the current frame, and the second perpendicular bisector is that of the segment joining the positions of the same two background points in the next frame;
if the image shift exceeds 0.1 pixel, calculating a yaw angle from the positions, in the image coordinate system, of the two background points in the landmark library in the current frame and in the next frame, and adjusting the yaw attitude of the satellite according to the yaw angle.
Optionally, the step of performing the positioning calculation when at least two satellites capture the target, to obtain the position of the target in the geocentric inertial coordinate system, includes:
when two satellites capture the target, performing the positioning calculation, the position of the target in the geocentric inertial coordinate system being given by the following formula:
where m_sx1, m_sy1, and m_sz1 are the first, second, and third direction components, in the inertial coordinate system, of the target line of sight of the first satellite; m_sx2 and m_sy2 are the first and second direction components, in the inertial coordinate system, of the target line of sight of the second satellite; x_s1 and y_s1 are the first and second coordinate components of the position of the first satellite in the geocentric inertial coordinate system; and x_s2 and y_s2 are the first and second coordinate components of the position of the second satellite in the geocentric inertial coordinate system.
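The closed-form dual-satellite solution itself appears only as an image in the source. As a hedged sketch of the underlying geometry, the target can be estimated as the midpoint of the common perpendicular of the two lines of sight; the function name and the least-squares formulation below are illustrative, not the patent's exact formula.

```python
import numpy as np

def triangulate_two_los(p1, d1, p2, d2):
    """Estimate the target position from two satellite lines of sight.

    p1, p2: satellite positions in the geocentric inertial frame.
    d1, d2: line-of-sight direction vectors (the m_sx, m_sy, m_sz components).
    Returns the midpoint of the common perpendicular of the two lines,
    which coincides with the intersection when the lines meet exactly.
    """
    p1, d1, p2, d2 = (np.asarray(a, dtype=float) for a in (p1, d1, p2, d2))
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1 | -d2] t = p2 - p1 for the two line parameters (least squares).
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return 0.5 * ((p1 + t[0] * d1) + (p2 + t[1] * d2))
```

With noisy measurements the two lines of sight are skew rather than intersecting, and the midpoint of their common perpendicular is the natural symmetric compromise.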
Optionally, the step of performing the positioning calculation when at least two satellites capture the target, to obtain the position of the target in the geocentric inertial coordinate system, includes:
when N satellites capture the target, performing the positioning calculation, the position of the target in the geocentric inertial coordinate system being given by the following formula:
where N > 2; x_ij is the first position component of the target observation solved from any two target lines of sight; y_ij is the second position component; and z_ij is the third position component.
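The N-satellite formula (also an image in the source) combines the pairwise solutions x_ij, y_ij, z_ij obtained from any two lines of sight. A plausible sketch, assuming a plain average over all C(N, 2) pairs (the exact weighting in the patent is not reproduced):

```python
from itertools import combinations

import numpy as np

def los_midpoint(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two lines of sight."""
    p1 = np.asarray(p1, float); p2 = np.asarray(p2, float)
    d1 = np.asarray(d1, float); d2 = np.asarray(d2, float)
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    t, *_ = np.linalg.lstsq(np.stack([d1, -d2], axis=1), p2 - p1, rcond=None)
    return 0.5 * ((p1 + t[0] * d1) + (p2 + t[1] * d2))

def triangulate_n(positions, directions):
    """Average the pairwise target solutions over every satellite pair (i, j)."""
    pts = [los_midpoint(positions[i], directions[i], positions[j], directions[j])
           for i, j in combinations(range(len(positions)), 2)]
    return np.mean(pts, axis=0)
```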
Optionally, the position of the target in the orbit coordinate system is calculated from the position of the target in the geocentric inertial coordinate system and the position of the satellite in the geocentric inertial coordinate system by the formula:
where the first quantity in the formula is the position vector of the satellite in the geocentric inertial coordinate system, and the second is the coordinate transformation matrix from the geocentric inertial coordinate system to the orbit coordinate system.
Optionally, the first unit vector of the target along the observation direction in the orbit coordinate system is calculated from the position of the target in the orbit coordinate system by the formula:
where |·| denotes taking the modulus (norm) of a vector.
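Combining the two steps above, the relative position is rotated into the orbit frame and normalized. A minimal sketch; the matrix argument C_oi stands for the inertial-to-orbit transformation, a name chosen here rather than the patent's notation.

```python
import numpy as np

def observation_unit_vector(r_target_eci, r_sat_eci, C_oi):
    """First unit vector along the observation direction in the orbit frame.

    r_target_eci, r_sat_eci: target and satellite positions in the
    geocentric inertial frame; C_oi: inertial-to-orbit rotation matrix.
    """
    rel = np.asarray(C_oi, float) @ (np.asarray(r_target_eci, float)
                                     - np.asarray(r_sat_eci, float))
    return rel / np.linalg.norm(rel)  # |.| is the vector modulus
```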
Optionally, the expected attitude maneuver quaternion is calculated, from the angle between the optical axis of the detector in the satellite and the first unit vector and from the second unit vector perpendicular to both, by the formula:
where δ is the angle between the optical axis of the detector in the satellite and the first unit vector; e is the second unit vector, perpendicular to both the optical axis and the first unit vector; and q_c0, q_c1, q_c2, and q_c3 are the first, second, third, and fourth components of the desired attitude maneuver quaternion.
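Since δ is the rotation angle and e the rotation axis, the quaternion components q_c0 through q_c3 very likely follow the standard axis-angle construction; the sketch below assumes exactly that (scalar-first convention), which the source's formula image does not confirm.

```python
import numpy as np

def desired_maneuver_quaternion(optical_axis, first_unit_vector):
    """Desired attitude maneuver quaternion [q_c0, q_c1, q_c2, q_c3]
    rotating the detector's optical axis onto the observation direction.
    Assumes the two directions are nonzero and not (anti)parallel.
    """
    z = np.asarray(optical_axis, float)
    t = np.asarray(first_unit_vector, float)
    z = z / np.linalg.norm(z)
    t = t / np.linalg.norm(t)
    delta = np.arccos(np.clip(z @ t, -1.0, 1.0))  # angle between axis and target
    e = np.cross(z, t)                            # rotation axis, perpendicular to both
    e = e / np.linalg.norm(e)
    return np.array([np.cos(delta / 2.0), *(np.sin(delta / 2.0) * e)])
```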
Optionally, the expected roll angle is calculated from the expected attitude maneuver quaternion by the formula:
Optionally, the expected pitch angle is calculated from the expected attitude maneuver quaternion by the formula:
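The roll- and pitch-angle formulas are likewise images in the source. A common extraction, assuming an aerospace Z-Y-X (yaw-pitch-roll) Euler sequence with a scalar-first quaternion (this convention is a guess, not taken from the patent):

```python
import numpy as np

def roll_pitch_from_quaternion(q):
    """Roll and pitch angles (radians) from a quaternion [q0, q1, q2, q3]."""
    q0, q1, q2, q3 = q
    roll = np.arctan2(2.0 * (q0 * q1 + q2 * q3), 1.0 - 2.0 * (q1 * q1 + q2 * q2))
    pitch = np.arcsin(np.clip(2.0 * (q0 * q2 - q3 * q1), -1.0, 1.0))
    return roll, pitch
```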
Optionally, the yaw angle is calculated from the positions, in the image coordinate system, of the two background points in the landmark library in the current frame and in the next frame by the formula:
where u_1 and v_1 are the first and second position components, in the image coordinate system, of the first background point in the landmark library in the current frame; u_2 and v_2 are those of the second background point in the current frame; u_1' and v_1' are those of the first background point in the next frame; and u_2' and v_2' are those of the second background point in the next frame.
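One consistent reading of the yaw formula (whose image is not reproduced) is the signed angle between the segment joining the two landmark points in the current frame and the same segment in the next frame; the sketch below implements that reading.

```python
import math

def interframe_yaw(p1, p2, p1_next, p2_next):
    """Signed rotation (radians) of the landmark segment between frames.

    p1 = (u1, v1), p2 = (u2, v2): the two background points in the
    current frame; p1_next, p2_next: the same points (u', v') in the
    next frame.
    """
    ax, ay = p2[0] - p1[0], p2[1] - p1[1]                      # current segment
    bx, by = p2_next[0] - p1_next[0], p2_next[1] - p1_next[1]  # next segment
    return math.atan2(ax * by - ay * bx, ax * bx + ay * by)
```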
In a second aspect, the present invention provides a multi-satellite video imaging system for non-cooperative targets, comprising
The first calculation module is used for selecting a target to be observed, observing by utilizing satellites in an area where the target is located, and performing positioning calculation when at least two satellites capture the target to obtain the position of the target under a geocentric inertial coordinate system;
the second calculation module is used for calculating the position of the target under the orbit coordinate system according to the position of the target under the geocentric inertial coordinate system and the position of the satellite under the geocentric inertial coordinate system, and calculating a first unit vector of the target under the orbit coordinate system along the observation direction according to the position of the target under the orbit coordinate system;
the third calculation and adjustment module is used for calculating an expected attitude maneuver quaternion according to an included angle between an optical axis of a detector in the satellite and the first unit vector and a second unit vector perpendicular to the optical axis and the first unit vector, calculating an expected roll angle and an expected pitch angle according to the expected attitude maneuver quaternion, and adjusting the roll attitude and the pitch attitude of the satellite according to the expected roll angle and the expected pitch angle so that the optical axis coincides with the first unit vector;
the judging module is used for judging whether the image shift between a first perpendicular bisector and a second perpendicular bisector exceeds 0.1 pixel, wherein the first perpendicular bisector is that of the segment joining the positions, in the image coordinate system, of two background points from the landmark library in the current frame, and the second perpendicular bisector is that of the segment joining the positions of the same two background points in the next frame;
and the fourth calculation and adjustment module is used for calculating a yaw angle according to the position of the current frame and the position of the next frame in the image coordinate system of two background points in the landmark library if the image shift exceeds 0.1 pixel, and adjusting the yaw attitude of the satellite according to the yaw angle.
The beneficial effects of the invention are as follows:
The invention provides a multi-satellite video imaging method for non-cooperative targets that offers a task scenario and an implementation approach for the mission design of Earth-observation satellite constellation systems, achieves continuous tracking and monitoring of an unknown target, and fixes the target at the center of the field of view. By establishing a landmark library, effective observation information is extracted from the target-irrelevant background and used to eliminate the image rotation produced by moving-video imaging, yielding a clearer tracking video image, reducing interference with the target tracking algorithm during video processing, and improving the reliability of target identification and motion-pattern analysis.
Drawings
The following describes the embodiments of the present invention in further detail with reference to the drawings.
Fig. 1 shows a flowchart of a method for multi-star pair non-cooperative target based video imaging provided by an embodiment of the present invention.
Fig. 2 shows a schematic diagram of a double-star positioning model in a video imaging method based on a multi-star to non-cooperative target according to an embodiment of the present invention.
Fig. 3 is a schematic diagram showing a relationship among a pixel coordinate system, an image coordinate system and a camera coordinate system in a multi-star-to-non-cooperative target-based video imaging method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram showing the definition of azimuth and pitch angles in a satellite body coordinate system in a multi-satellite-to-non-cooperative-target-based video imaging method according to an embodiment of the present invention.
Fig. 5 shows a schematic diagram of background reference point image shift in a video imaging method based on multi-star pair non-cooperative targets according to an embodiment of the present invention.
Detailed Description
In order to more clearly illustrate the present invention, the present invention will be further described with reference to examples and drawings. Like parts in the drawings are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and that this invention is not limited to the details given herein.
Because the video data acquired by a video satellite has low resolution and a wide imaging range, a target occupies only a tiny fraction of the pixels and the background is complex. The interference that large numbers of target-irrelevant objects such as mountains, forests, and buildings inflict on a tracking algorithm far exceeds that in conventional video. Mature target tracking algorithms therefore perform worse on video satellite imagery than on conventional video, struggling to extract and identify target features and to summarize the target's motion pattern. For example, texture and shading on an airport apron, and complex building shapes, all contribute significant background interference when tracking aircraft in real time. No mature technology for real-time target tracking with high-resolution video satellites has yet been disclosed, and the problem remains open in the field of video applications.
In view of this, an embodiment of the present invention provides a multi-satellite video imaging method for non-cooperative targets. The method includes selecting a target to be observed, observing with the satellites in the target's area, and performing a positioning calculation once at least two satellites have captured the target, obtaining the target's position in the geocentric inertial coordinate system; calculating the target's position in the orbit coordinate system from the positions of the target and the satellite in the geocentric inertial coordinate system, and calculating from it a first unit vector along the observation direction in the orbit coordinate system; calculating a desired attitude maneuver quaternion from the angle between the optical axis of the detector in the satellite and the first unit vector and from a second unit vector perpendicular to both, calculating a desired roll angle and a desired pitch angle from that quaternion, and adjusting the satellite's roll and pitch attitudes accordingly so that the optical axis coincides with the first unit vector; and judging whether the image shift between a first perpendicular bisector and a second perpendicular bisector exceeds 0.1 pixel, where the first perpendicular bisector is that of the segment joining the positions, in the image coordinate system, of two background points from the landmark library in the current frame, and the second perpendicular bisector is that of the segment joining the positions of the same two points in the next frame.
If the image shift exceeds 0.1 pixel, a yaw angle is calculated from the positions, in the image coordinate system, of the two background points in the current frame and in the next frame, and the yaw attitude of the satellite is adjusted according to the yaw angle.
Specifically, in this embodiment, two or more video satellites observe cooperatively: the motion of an unknown aerial target is first identified, and continuous observation of the target by the spaceborne optical sensor is then maintained through satellite attitude maneuvers, achieving staring imaging of a non-cooperative moving target by a video satellite constellation.
Further, the low-orbit satellites of the video satellite constellation are distributed over several orbital planes and observe the target stereoscopically from multiple angles. According to their observation mission, the satellites fall into detection satellites, which acquire targets, and video satellites, which continuously monitor and image. The detection satellites carry infrared detectors and form a detection network of several satellites; the target's motion characteristics are determined from the target's azimuth information within the scanning field of view of the infrared detectors. The video satellites carry a high-resolution optical sensor on an agile platform and can rapidly maneuver to stare-image the target. The constellation communicates through inter-satellite cross links and relays the target's position information.
In a specific example, a single satellite can obtain only the relative azimuth of a target and cannot track and film it continuously, whereas two or more video satellites can determine multiple target lines of sight, recover more of the target's motion, and thus produce a more reliable staring-imaging result for a moving target lacking prior information. During imaging, the relative position between satellite and observed target changes continuously, so constant attitude maneuvers are required to keep the boresight pointed at the target and the observed target at the center of the video image. Since an aerial target in most cases does not lie on the satellite's sub-satellite point track, the projection of the observed area onto the sensor keeps rotating relative to the sensor, producing inter-frame image shift. Research shows that if the image shift caused by camera motion within the integration time exceeds 0.1 pixel, a clear image cannot be obtained, which hampers reading and analyzing the video; this example therefore requires the satellite to rotate about its boresight so that the filmed background stays fixed in the video.
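The 0.1-pixel criterion can be checked mechanically. A minimal sketch, reading the criterion as the displacement of the landmark segment's midpoint between consecutive frames (the midpoints lie on the first and second perpendicular bisectors; this reading is an assumption):

```python
import math

def image_shift_exceeds(p1, p2, p1_next, p2_next, threshold=0.1):
    """True if the midpoint of the two background points moves by more
    than `threshold` pixels between the current and the next frame."""
    mid_cur = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    mid_next = ((p1_next[0] + p2_next[0]) / 2.0, (p1_next[1] + p2_next[1]) / 2.0)
    return math.dist(mid_cur, mid_next) > threshold
```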
This embodiment can provide a task scenario and an implementation approach for the mission design of Earth-observation satellite constellation systems, achieving continuous tracking and monitoring of an unknown target and fixing the target at the center of the field of view. By establishing a landmark library, effective observation information is extracted from the target-irrelevant background and used to eliminate the image rotation produced by moving-video imaging, yielding a clearer tracking video image, reducing interference with the target tracking algorithm during video processing, and improving the reliability of target identification and motion-pattern analysis.
In a specific example, a non-cooperative target to be observed is selected on the ground; the detection satellites in the target's area are notified via satellite-ground communication to observe it, and when two or more satellites capture the target in their fields of view, the positioning calculation is performed.
In a specific example, the model for positioning a target T using two satellites is shown in Fig. 2, where O is the Earth's center of mass, O_s1 and O_s2 denote the centers of mass of the two observation satellites, and the positions of the two satellites in the geocentric inertial coordinate system OX_I Y_I Z_I are (x_s1, y_s1, z_s1) and (x_s2, y_s2, z_s2).
In a specific example, to describe how a satellite images a target, the relationships among the camera coordinate system, the pixel coordinate system, and the image-plane coordinate system must first be established. As shown in Fig. 3, the camera coordinate system O_c X_c Y_c Z_c takes the camera's optical axis as the O_c Z_c axis, with the origin O_c at the center of the camera's optical system.
Further, the camera coordinate system is described mathematically on the basis of the satellite body coordinate system O_s X_b Y_b Z_b; here, for brevity, the camera coordinate system is taken to coincide with the satellite body coordinate system.
Further, the pixel coordinate system o-uv describes pixel locations in the image, and the image coordinate system O_P X_P Y_P describes physical dimensions in the image. The origin O_P of the image coordinate system is the image center, with pixel coordinates (u_0, v_0); the O_P X_P axis is parallel to the o-u axis of the pixel coordinate system, and the O_P Y_P axis is parallel to the o-v axis.
In a specific example, the positional relationship between an object point and its image point can be represented intuitively in the camera coordinate system, as shown in fig. 4. If the pixel coordinates of the observation target T produced by the optical system are (u, v), its image coordinates are
where d_x and d_y are the first and second components of the physical size of one pixel of the photosensitive device in the optical system.
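The pixel-to-image conversion formula is an image in the source; given that O_P sits at pixel (u_0, v_0), the standard relation x = (u − u_0)·d_x, y = (v − v_0)·d_y is almost certainly intended, and it is what the sketch below assumes.

```python
def pixel_to_image(u, v, u0, v0, dx, dy):
    """Convert pixel coordinates (u, v) to physical image-plane
    coordinates, with (u0, v0) the pixel coordinates of the image
    center and (dx, dy) the physical size of one pixel of the
    photosensitive device."""
    return (u - u0) * dx, (v - v0) * dy
```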
Further, the azimuth angle alpha and the pitch angle beta of the target T in the satellite body coordinate system are obtained
Where f is the focal length of the optical system.
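The azimuth- and pitch-angle formulas themselves are images; a common pinhole-camera parameterization in terms of the image coordinates (x, y) and the focal length f is sketched below. The exact angle definitions of Fig. 4 are not reproduced, so this convention is an assumption.

```python
import math

def azimuth_pitch(x, y, f):
    """Azimuth and pitch (radians) of the target direction in the body
    frame, from image-plane coordinates (x, y) and focal length f."""
    alpha = math.atan2(x, f)                # azimuth about the boresight
    beta = math.atan2(y, math.hypot(x, f))  # elevation out of the azimuth plane
    return alpha, beta
```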
In one possible implementation, the step of computing the position of the target in the geocentric inertial coordinate system when at least two satellites capture the target includes: when two satellites capture the target, performing the positioning calculation, the position of the target in the geocentric inertial coordinate system being given by the formula:
where m_sx1, m_sy1, and m_sz1 are the first, second, and third direction components, in the inertial coordinate system, of the target line of sight of the first satellite; m_sx2 and m_sy2 are the first and second direction components, in the inertial coordinate system, of the target line of sight of the second satellite; x_s1 and y_s1 are the first and second coordinate components of the position of the first satellite in the geocentric inertial coordinate system; and x_s2 and y_s2 are the first and second coordinate components of the position of the second satellite in the geocentric inertial coordinate system.
Specifically, from the azimuth angle α and the pitch angle β of the target T in the satellite body coordinate system, the direction of the target in the inertial frame is obtained as
where the coordinate transformation matrix is obtained from the attitude quaternion Q = [q0, q1, q2, q3] of the satellite body relative to the inertial frame, as measured by the star sensor.
Further, the calculation formula of this coordinate transformation matrix is
where q0, q1, q2, and q3 are the first, second, third, and fourth components of the attitude quaternion.
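The quaternion-to-matrix step can be sketched with the standard scalar-first formula; the patent's sign and transpose conventions are assumed, not confirmed by the text.

```python
import numpy as np

def quat_to_dcm(q):
    """Direction-cosine (coordinate transformation) matrix from a
    scalar-first unit quaternion q = [q0, q1, q2, q3].

    This is the usual passive (frame-transformation) convention:
    C maps vector coordinates from the reference frame into the
    rotated (body) frame.
    """
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 + q0*q3),             2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2),             2*(q2*q3 - q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])
```

For a unit quaternion, the resulting matrix is orthonormal with determinant 1.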
Further, when two video satellites, whose position coordinates in the geocentric inertial coordinate system are (xs1, ys1, zs1) and (xs2, ys2, zs2), observe the target simultaneously, the directions of the two target lines of sight in inertial space are obtained as
where msz2 is the third direction component of the second satellite's target line of sight in the inertial coordinate system.
Further, from the point-direction form of the equation of a line in space,
where xT, yT, and zT are the first, second, and third coordinate components of the spatial position of the target in the geocentric inertial frame.
Further, the spatial position of the target in the geocentric inertial frame is obtained as
where
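The two-satellite triangulation described above can be sketched as a least-squares intersection of two lines of sight. Minimizing the sum of squared perpendicular distances to both lines is a common substitute for the patent's closed-form expression, which is not reproduced in the text.

```python
import numpy as np

def triangulate_two_lines(p1, d1, p2, d2):
    """Point closest (in least squares) to two satellite line-of-sight rays.

    p1, p2 : satellite positions in the geocentric inertial frame
    d1, d2 : line-of-sight direction vectors (msx, msy, msz components)
    """
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in ((p1, d1 / np.linalg.norm(d1)), (p2, d2 / np.linalg.norm(d2))):
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ p
    # A is invertible as long as the two lines are not parallel
    return np.linalg.solve(A, b)
```

When the two lines actually intersect, the solution is their intersection point; otherwise it is the point midway between them in the least-squares sense.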
In one possible implementation manner, performing the positioning calculation when at least two satellites capture the target includes: when N satellites capture the target, performing the positioning calculation, where the formula giving the position of the target in the geocentric inertial coordinate system is
where N > 2; xij, yij, and zij are the first, second, and third position components of the target observation solved from any two target lines of sight.
Specifically, when N satellites observe the target simultaneously, any two target lines of sight yield one observed value of the target position. Using least squares, the estimate of the target position is the point that minimizes the sum of squared distances to all observed values:
where (xij, yij, zij) are the three position components of the target observation solved from the two lines of sight lsi and lsj; xT1, yT1, and zT1 are the estimates of the first, second, and third position components of the target.
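The N-satellite fusion step can be sketched directly: the point minimizing the sum of squared distances to a set of points is their centroid, so the estimate reduces to a component-wise mean of the pairwise triangulation results.

```python
import numpy as np

def fuse_observations(obs):
    """Fuse pairwise triangulation results into one target-position estimate.

    obs : iterable of (x_ij, y_ij, z_ij) observations, one per satellite
          pair (i, j), e.g. produced by a two-line triangulation routine.

    The least-squares point (minimum sum of squared distances to all
    observed values) is the centroid, i.e. the component-wise mean.
    """
    obs = np.asarray(list(obs), float)
    return obs.mean(axis=0)
```

In practice the observations would be generated over all pairs, e.g. with `itertools.combinations(range(N), 2)`.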
In one possible implementation manner, the calculation formula for the position of the target in the orbit coordinate system, calculated from the position of the target in the geocentric inertial coordinate system and the position of the satellite in the geocentric inertial coordinate system, is
where the first quantity is the position vector of the satellite in the geocentric inertial coordinate system and the second is the coordinate transformation matrix from the geocentric inertial coordinate system to the orbital coordinate system.
Specifically, the calculation formula of this transformation matrix is
where Ω is the right ascension of the ascending node; u11 is the argument of latitude; i is the orbital inclination; Rx, Ry, and Rz denote the first, second, and third coordinate axes of the geocentric inertial coordinate system.
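The inertial-to-orbital transformation can be sketched as the standard 3-1-3 rotation sequence built from the right ascension of the ascending node, the inclination, and the argument of latitude. The axis order and sign conventions here are assumptions; the patent's exact matrix is not reproduced in the text.

```python
import numpy as np

def Rx(a):
    """Frame rotation about the first axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Rz(a):
    """Frame rotation about the third axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def eci_to_orbit(raan, inc, arg_lat):
    """Transformation matrix from the geocentric inertial frame to the
    orbital frame: rotate by RAAN about Z, inclination about X, then
    argument of latitude about Z (standard 3-1-3 construction)."""
    return Rz(arg_lat) @ Rx(inc) @ Rz(raan)
```

The result is always a proper rotation matrix (orthonormal, determinant 1), which is a useful sanity check in flight software.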
In one possible implementation manner, the calculation formula for the first unit vector of the target along the observation direction in the orbit coordinate system, calculated from the position of the target in the orbit coordinate system, is
where |·| denotes the modulus (vector norm).
Specifically, this yields the unit vector along the observation-vector direction in the orbit coordinate system.
In one possible implementation manner, the calculation formula for the expected attitude maneuver quaternion, calculated from the angle between the optical axis of the detector in the satellite and the first unit vector, together with the second unit vector perpendicular to both the optical axis and the first unit vector, is
where δ is the angle between the optical axis of the detector in the satellite and the first unit vector; e is the second unit vector perpendicular to both the optical axis and the first unit vector; qc0, qc1, qc2, and qc3 are the first, second, third, and fourth components of the expected attitude maneuver quaternion.
Specifically, to achieve target gaze imaging, the optical axis of the camera or detector in the satellite optical system must coincide with the observation vector (i.e., the first unit vector); the optical axis lies along the satellite's yaw (Z) axis.
Further, the calculation formulas of δ and e are, respectively,
where us is the optical axis vector.
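The δ/e construction and the resulting maneuver quaternion can be sketched in axis-angle form: δ is the angle between the optical axis and the observation vector, e is the normalized cross product, and q = [cos(δ/2), e·sin(δ/2)]. This standard form is assumed to match the patent's convention.

```python
import numpy as np

def desired_attitude_quat(u_s, u_t):
    """Expected attitude-maneuver quaternion that rotates the optical-axis
    vector u_s onto the observation unit vector u_t (the first unit vector).

    delta : angle between u_s and u_t
    e     : unit rotation axis perpendicular to both (undefined when the
            vectors are parallel; not handled in this sketch)
    """
    u_s = np.asarray(u_s, float) / np.linalg.norm(u_s)
    u_t = np.asarray(u_t, float) / np.linalg.norm(u_t)
    delta = np.arccos(np.clip(u_s @ u_t, -1.0, 1.0))
    cross = np.cross(u_s, u_t)
    e = cross / np.linalg.norm(cross)
    return np.concatenate(([np.cos(delta / 2)], e * np.sin(delta / 2)))
```

For example, rotating the Z axis onto the X axis gives δ = 90° about the Y axis.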
In one possible implementation, the calculation formula for calculating the desired roll angle according to the desired attitude maneuver quaternion is as follows
In one possible implementation manner, the calculation formula for calculating the expected pitch angle according to the expected attitude maneuver quaternion is as follows
Specifically, the coordinate transformation matrix from the orbit coordinate system to the satellite body coordinate system, obtained from the expected attitude quaternion, is
Further, the Euler-angle rotation sequence of the attitude from the orbit coordinate system to the satellite body coordinate system is X-Y-Z. The specific steps are: first rotate about the Xo axis by the angle φ to obtain coordinate system OS X0 Y1 Z1; then rotate about the Y1 axis by the angle θ to obtain coordinate system OS X2 Y1 Zb; finally rotate about the Zb axis by the angle ψ to obtain the satellite body coordinate system OS Xb Yb Zb, where φ, θ, and ψ are the roll, pitch, and yaw angles, respectively.
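The roll/pitch/yaw extraction implied by this X-Y-Z sequence can be sketched as follows, using frame-rotation matrices so that the orbit-to-body matrix is C = Rz(ψ)·Ry(θ)·Rx(φ). The convention is an assumption, and gimbal lock (θ = ±90°) is not handled.

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def euler_xyz_from_dcm(C):
    """Recover roll (phi), pitch (theta), yaw (psi) of the X-Y-Z sequence
    from an orbit-to-body transformation matrix C = Rz(psi) Ry(theta) Rx(phi).

    For this convention: C[2,0] = sin(theta), C[2,1] = -cos(theta)sin(phi),
    C[2,2] = cos(theta)cos(phi), C[1,0] = -sin(psi)cos(theta),
    C[0,0] = cos(psi)cos(theta).
    """
    theta = np.arcsin(np.clip(C[2, 0], -1.0, 1.0))
    phi = np.arctan2(-C[2, 1], C[2, 2])
    psi = np.arctan2(-C[1, 0], C[0, 0])
    return phi, theta, psi
```

A round trip (build the matrix from known angles, then extract them) is a quick self-check of the convention.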
In one possible implementation manner, the calculation formula for the yaw angle, calculated from the positions of two background points in the landmark library in the current frame and in the next frame in the image coordinate system, is
where u1 and v1 are the first and second position components, in the image coordinate system, of the first background point in the landmark library in the current frame; u2 and v2 are those of the second background point in the current frame; u1′ and v1′ are the first and second position components of the first background point in the next frame; u2′ and v2′ are those of the second background point in the next frame.
Specifically, while the satellite maintains gaze imaging of the target, the background blurs because of satellite attitude maneuvers and orbit-induced image rotation, so the satellite's yaw attitude must be adjusted to eliminate the rotation between the image of the observation area and the sensor. This embodiment corrects the rotation by identifying the drift of special background points across consecutive frames, coupling the data between the preceding and following frames. The special background points are landmark features on the ground within the field of view when the video satellite images the target. Landmarks are selected and matched from a global landmark library preloaded on the satellite; the library contains landmarks with distinctive geometric configuration and their longitude and latitude information, such as rivers, islands, coastlines, vegetation, and city buildings. Specific landmark selection and design are based on world coastline vector libraries, the World Data Bank, observation images from remote-sensing satellites, and the like.
Further, given that the pixel coordinates of the two background points are (u1, v1) and (u2, v2) in the current frame and change to (u1′, v1′) and (u2′, v2′) in the next frame, the adjusted yaw angle is calculated from these coordinate values.
In a specific example, as shown in fig. 5, the coordinates of the two background points are (u1, v1) and (u2, v2); taking their perpendicular bisector as the reference direction, the positions of the background points in the pixel coordinate system in the next frame are (u1′, v1′) and (u2′, v2′). When the image shift along the perpendicular-bisector direction of the two background points exceeds 0.1 pixel, the yaw attitude needs to be adjusted; the yaw angle is the angle between the two perpendicular bisectors.
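The yaw-correction step above can be sketched as follows. Two interpretations are assumed here: the angle between the two perpendicular bisectors equals the angle between the two segments (rotating a segment rotates its bisector by the same amount), and the "image shift along the bisector direction" is taken as the larger of the two background points' displacements projected onto that direction. Neither detail is pinned down by the text.

```python
import numpy as np

def yaw_correction(p1, p2, p1n, p2n, threshold_px=0.1):
    """Yaw-adjustment angle from the drift of two background landmarks
    between consecutive frames.

    p1, p2   : pixel coordinates of the two background points, current frame
    p1n, p2n : their pixel coordinates in the next frame
    Returns 0.0 when the image shift along the perpendicular-bisector
    direction stays at or below the 0.1-pixel threshold.
    """
    p1, p2, p1n, p2n = (np.asarray(p, float) for p in (p1, p2, p1n, p2n))
    d, dn = p2 - p1, p2n - p1n
    # Signed angle between the current and next segments
    angle = np.arctan2(d[0] * dn[1] - d[1] * dn[0], d @ dn)
    # Perpendicular-bisector direction of the current segment
    perp = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    # Image shift along that direction (assumed: max per-point projection)
    shift = max(abs((p1n - p1) @ perp), abs((p2n - p2) @ perp))
    return float(angle) if shift > threshold_px else 0.0
```

Rotating the point pair by a known angle about the segment midpoint recovers that angle, while a static background yields zero correction.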
This embodiment can provide a task scenario and an implementation approach for the mission design of Earth-observation satellite constellation systems, enabling continuous tracking and monitoring of an unknown target and fixing the target at the center of the field of view. By establishing a landmark library, effective observation information is extracted from the target-irrelevant background and used to eliminate the image rotation produced by maneuvering video imaging, yielding a clearer tracking video image, reducing interference with the target-tracking algorithm in video processing, and improving the reliability of target identification and motion-pattern analysis.
The invention further provides a video imaging system based on multiple satellites observing a non-cooperative target. The system comprises: a first calculation module for selecting a target to be observed, observing with satellites in the area where the target is located, and performing the positioning calculation when at least two satellites capture the target to obtain the position of the target in the geocentric inertial coordinate system; a second calculation module for calculating the position of the target in the orbit coordinate system from the position of the target and the position of the satellite in the geocentric inertial coordinate system, and calculating the first unit vector of the target along the observation direction in the orbit coordinate system from the position of the target in the orbit coordinate system; a third calculation and adjustment module for calculating the expected attitude maneuver quaternion from the angle between the optical axis of the detector in the satellite and the first unit vector and from the second unit vector perpendicular to both, calculating the expected roll angle and expected pitch angle from the expected attitude maneuver quaternion, and adjusting the satellite's roll and pitch attitudes accordingly so that the optical axis coincides with the first unit vector; a judging module for judging whether the image shift along the first and second perpendicular bisectors exceeds 0.1 pixel, based on the positions of two background points in the landmark library in the current frame in the image coordinate system with their first perpendicular bisector, and the positions of the two background points in the next frame in the image coordinate system; and a fourth calculation and adjustment module for calculating, if the image shift exceeds 0.1 pixel, the yaw angle from the positions of the two background points in the current frame and the next frame in the image coordinate system, and adjusting the satellite's yaw attitude according to the yaw angle.
Specifically, this embodiment constructs a low-orbit video satellite constellation, performs multi-angle stereoscopic observation of a target with unknown motion information, and achieves continuous calculation of target position information through a multi-satellite positioning algorithm and inter-satellite communication. A landmark library is designed, and the satellite attitude is corrected by eliminating the offset of landmark background points, thereby avoiding video image rotation caused by orbital motion and ground-track velocity.
This embodiment can provide a task scenario and an implementation approach for the mission design of Earth-observation satellite constellation systems, enabling continuous tracking and monitoring of an unknown target and fixing the target at the center of the field of view. By establishing a landmark library, effective observation information is extracted from the target-irrelevant background and used to eliminate the image rotation produced by maneuvering video imaging, yielding a clearer tracking video image, reducing interference with the target-tracking algorithm in video processing, and improving the reliability of target identification and motion-pattern analysis.
In the description of the present invention, it should be noted that orientations or positional relationships indicated by terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the apparatus or element in question must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Unless expressly specified or limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: for example, fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected, indirectly connected through an intermediate medium, or communication between two elements. The specific meanings of these terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
It is further noted that in the description of the present invention, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It should be understood that the foregoing examples of the present invention are provided merely for clearly illustrating the present invention and are not intended to limit the embodiments of the present invention, and that various other changes and modifications may be made therein by one skilled in the art without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (10)

1. A method for video imaging based on multi-star to non-cooperative targets, the method comprising
Selecting a target to be observed, observing by utilizing satellites in an area where the target is located, and performing positioning calculation when at least two satellites capture the target to obtain the position of the target under a geocentric inertial coordinate system;
calculating the position of the target under the orbit coordinate system according to the position of the target under the geocentric inertial coordinate system and the position of the satellite under the geocentric inertial coordinate system, and calculating a first unit vector of the target along the observation direction under the orbit coordinate system according to the position of the target under the orbit coordinate system;
calculating an expected attitude maneuver quaternion according to an included angle between an optical axis of a detector in the satellite and the first unit vector and a second unit vector perpendicular to the optical axis and the first unit vector, calculating an expected roll angle and an expected pitch angle according to the expected attitude maneuver quaternion, and adjusting the roll attitude and the pitch attitude of the satellite according to the expected roll angle and the expected pitch angle so that the optical axis coincides with the first unit vector;
judging whether the image shift along the first perpendicular bisector and the second perpendicular bisector exceeds 0.1 pixel, according to the positions of two background points in the landmark library in the current frame in the image coordinate system and the first perpendicular bisector of those positions, and the positions of the two background points in the next frame in the image coordinate system and the second perpendicular bisector of those positions;
if the image shift exceeds 0.1 pixel, calculating a yaw angle according to the positions of the two background points in the landmark library in the current frame and in the next frame in the image coordinate system, and adjusting the yaw attitude of the satellite according to the yaw angle.
2. The multi-star pair non-cooperative target based video imaging method of claim 1,
wherein performing the positioning calculation when at least two satellites capture the target to obtain the position of the target in the geocentric inertial coordinate system comprises:
when two satellites capture the target, performing the positioning calculation, where the formula giving the position of the target in the geocentric inertial coordinate system is
where msx1, msy1, and msz1 are the first, second, and third direction components of the first satellite's target line of sight in the inertial coordinate system; msx2 and msy2 are the first and second direction components of the second satellite's target line of sight in the inertial coordinate system; xs1 and ys1 are the first and second coordinate components of the first satellite's position in the geocentric inertial coordinate system; xs2 and ys2 are the first and second coordinate components of the second satellite's position in the geocentric inertial coordinate system.
3. The multi-star pair non-cooperative target based video imaging method of claim 1,
wherein performing the positioning calculation when at least two satellites capture the target to obtain the position of the target in the geocentric inertial coordinate system comprises:
when N satellites capture the target, performing the positioning calculation, where the formula giving the position of the target in the geocentric inertial coordinate system is
where N > 2; xij, yij, and zij are the first, second, and third position components of the target observation solved from any two target lines of sight.
4. A multi-star pair non-cooperative target based video imaging method of claim 2 or 3,
wherein the calculation formula for the position of the target in the orbit coordinate system, calculated from the position of the target in the geocentric inertial coordinate system and the position of the satellite in the geocentric inertial coordinate system, is
where the first quantity is the position vector of the satellite in the geocentric inertial coordinate system and the second is the coordinate transformation matrix from the geocentric inertial coordinate system to the orbital coordinate system.
5. The multi-star pair non-cooperative target based video imaging method of claim 4,
the calculation formula for calculating the first unit vector of the target along the observation direction under the orbit coordinate system according to the position of the target under the orbit coordinate system is as follows
where |·| denotes the modulus (vector norm).
6. The multi-star pair non-cooperative target based video imaging method of claim 5,
the calculation formula for calculating the expected attitude maneuver quaternion according to the included angle between the optical axis of the detector in the satellite and the first unit vector and the second unit vector perpendicular to the optical axis and the first unit vector is as follows
where δ is the angle between the optical axis of the detector in the satellite and the first unit vector; e is the second unit vector perpendicular to both the optical axis and the first unit vector; qc0, qc1, qc2, and qc3 are the first, second, third, and fourth components of the expected attitude maneuver quaternion.
7. The multi-star pair non-cooperative target based video imaging method of claim 6,
the calculation formula for calculating the expected roll angle according to the expected attitude maneuver quaternion is as follows
8. The multi-star pair non-cooperative target based video imaging method of claim 7,
the calculation formula for calculating the expected pitch angle according to the expected attitude maneuver quaternion is as follows
9. The multi-star pair non-cooperative target based video imaging method of claim 8,
wherein the calculation formula for the yaw angle, calculated from the positions of the two background points in the landmark library in the current frame and in the next frame in the image coordinate system, is
where u1 and v1 are the first and second position components, in the image coordinate system, of the first background point in the landmark library in the current frame; u2 and v2 are those of the second background point in the current frame; u1′ and v1′ are the first and second position components of the first background point in the next frame; u2′ and v2′ are those of the second background point in the next frame.
10. A multi-star pair non-cooperative target based video imaging system, the system comprising
The first calculation module is used for selecting a target to be observed, observing by utilizing satellites in an area where the target is located, and performing positioning calculation when at least two satellites capture the target to obtain the position of the target under a geocentric inertial coordinate system;
the second calculation module is used for calculating the position of the target under the orbit coordinate system according to the position of the target under the geocentric inertial coordinate system and the position of the satellite under the geocentric inertial coordinate system, and calculating a first unit vector of the target under the orbit coordinate system along the observation direction according to the position of the target under the orbit coordinate system;
the third calculation and adjustment module is used for calculating an expected attitude maneuver quaternion according to an included angle between an optical axis of a detector in the satellite and the first unit vector and a second unit vector perpendicular to the optical axis and the first unit vector, calculating an expected roll angle and an expected pitch angle according to the expected attitude maneuver quaternion, and adjusting the roll attitude and the pitch attitude of the satellite according to the expected roll angle and the expected pitch angle so that the optical axis coincides with the first unit vector;
the judging module is used for judging whether the image movement of the first perpendicular bisector and the second perpendicular bisector exceeds 0.1 pixel according to the position of the current frame of the two background points in the landmark library in the image coordinate system and the first perpendicular bisector of the positions of the two background points and the position of the next frame of the two background points in the landmark library in the image coordinate system;
and the fourth calculation and adjustment module is used for calculating a yaw angle according to the position of the current frame and the position of the next frame in the image coordinate system of two background points in the landmark library if the image shift exceeds 0.1 pixel, and adjusting the yaw attitude of the satellite according to the yaw angle.
CN202311359160.0A 2023-10-19 2023-10-19 Video imaging method and system based on multi-star pair non-cooperative targets Pending CN117647822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311359160.0A CN117647822A (en) 2023-10-19 2023-10-19 Video imaging method and system based on multi-star pair non-cooperative targets

Publications (1)

Publication Number Publication Date
CN117647822A true CN117647822A (en) 2024-03-05

Family

ID=90043978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311359160.0A Pending CN117647822A (en) 2023-10-19 2023-10-19 Video imaging method and system based on multi-star pair non-cooperative targets

Country Status (1)

Country Link
CN (1) CN117647822A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination