IL282979A - Method and system of determining miss-distance - Google Patents

Method and system of determining miss-distance

Info

Publication number
IL282979A
Authority
IL
Israel
Prior art keywords
relative position
platform
camera
intercepting
navigation
Prior art date
Application number
IL282979A
Other languages
Hebrew (he)
Other versions
IL282979B (en)
Inventor
Maidanik Michael
Golan Oded
Rovinsky Jacob
Livne Miki
Original Assignee
Israel Aerospace Ind Ltd
Maidanik Michael
Golan Oded
Rovinsky Jacob
Livne Miki
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Israel Aerospace Ind Ltd, Maidanik Michael, Golan Oded, Rovinsky Jacob, Livne Miki filed Critical Israel Aerospace Ind Ltd
Priority to IL282979A priority Critical patent/IL282979B/en
Publication of IL282979A publication Critical patent/IL282979A/en
Publication of IL282979B publication Critical patent/IL282979B/en


Classifications

    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41 WEAPONS
    • F41G WEAPON SIGHTS; AIMING
    • F41G7/00 Direction control systems for self-propelled missiles
    • F41G7/001 Devices or systems for testing or checking
    • F41G7/006 Guided missiles training or simulation devices
    • F41G7/34 Direction control systems for self-propelled missiles based on predetermined target position data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)

Description

METHOD AND SYSTEM OF DETERMINING MISS-DISTANCE

FIELD OF THE PRESENTLY DISCLOSED SUBJECT MATTER

The presently disclosed subject matter relates to the computer-implemented determination of distance between platforms.
BACKGROUND

In various applications it is desired to determine the distance (relative position) between two platforms while one platform passes by the other platform, or while both platforms pass one by the other. Normally, this distance is determined by comparing navigation data received from conventional positioning devices, such as GNSS and/or INS, installed onboard the two platforms.
GENERAL DESCRIPTION

The term “miss-distance” is used herein to refer to the shortest distance reached between two platforms in a scenario where one platform passes by another, stationary, platform, or where the two platforms are in motion and pass one by the other. In the event of impact, the miss-distance equals zero; otherwise the miss-distance is greater than zero. The term “miss-distance time (tI)” refers to the point in time at which the miss-distance occurred.
One example where it is desired to calculate the miss-distance is for determining the accuracy of an intercepting airborne platform aimed to fly by a target (a stationary target or a moving target). For instance, when an intercepting airborne platform such as a missile is being tested, it is sometimes preferred to avoid direct impact of the intercepting platform on the target. Calculating the miss-distance provides an alternative to target interception. This alternative reduces testing costs, as the target is not destroyed and can be reused more than once. This approach also assists when destruction of the target is not permitted for safety reasons (e.g. where impact between the intercepting platform and the target would occur over a populated area).
Furthermore, determining the miss-distance may also be required for debriefing purposes, in cases where the intercepting platform was aimed to hit the target but failed to do so.

As mentioned above, conventional methods of calculating miss-distance are based on the comparison of navigation data received from positioning devices onboard the platforms. Miss-distance calculation performed by conventional techniques suffers from the inherent inaccuracies of the measurements received from positioning devices (e.g. GNSS receivers).
The presently disclosed subject matter includes a computerized method and system for determining miss-distance between platforms. The proposed method and system make use of an electro-optic sensor (e.g. a camera) mounted on one of the platforms for obtaining additional data, which is used to improve the accuracy of positioning data obtained from conventional positioning devices.
More specifically, initially, a navigation error is calculated where the relative position of the two platforms is converted to the camera reference frame. Inaccuracies in this calculation are determined, based on the camera data in the focal-plane array (FPA) plane. The error is expressed in the camera X, Y and Z axes where Y and Z are in the camera image plane and X is perpendicular to that plane.
Now that the navigation error is available, it can be used to correct the measured miss-distance. A relative position is calculated at the miss-distance time, based on the onboard navigation systems. The relative position is corrected using the calculated navigation error value to obtain a more accurate value.
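The two-stage idea above can be sketched in a few lines of Python. This is a minimal illustration under assumed names and a simple per-axis subtraction; it is not the patent's notation or the actual implementation:

```python
import numpy as np

def corrected_miss_distance(rel_pos_nav, nav_error_yz):
    """Correct the navigation-derived relative position at miss-distance time.

    rel_pos_nav  -- assumed: relative position in camera axes [X, Y, Z] (metres)
    nav_error_yz -- assumed: estimated navigation error in the camera Y and Z axes
    The subtraction below is an illustrative correction, not the annex's formula.
    """
    corrected = np.asarray(rel_pos_nav, dtype=float).copy()
    corrected[1] -= nav_error_yz[0]  # correct the camera-Y component
    corrected[2] -= nav_error_yz[1]  # correct the camera-Z component
    return float(np.linalg.norm(corrected))

print(corrected_miss_distance([10.0, 4.0, 3.0], [1.0, -1.0]))
```

The X (boresight) component is left uncorrected in this sketch, mirroring the text's focus on errors measured in the FPA (Y/Z) plane.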
According to an aspect of the presently disclosed subject matter there is provided a computer-implemented method of determining miss-distance between an intercepting platform and a target platform; the intercepting platform is launched towards the target platform and comprises a camera and at least one positioning device; the method comprising using at least one processor for:

i. executing a navigation errors calculation process, comprising, for at least one image captured by the camera at a time tFi during flight of the intercepting platform towards the target platform: calculating relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transforming the relative position to the camera field of view (CFOV) reference frame; calculating rectified relative position values in the camera Y and Z components, based on the relative position in the camera field of view reference frame and camera line of sight (CLOS) unit vector components; and calculating relative position error values in the Y and Z axes in the FPA of the camera, based on a difference between the relative position and the rectified relative position values;

ii. executing a navigation miss-distance correction process, comprising: determining relative position of the intercepting platform and target platform at miss-distance time; and correcting the relative position using the relative position error values.
In addition to the above features, the method according to this aspect of the presently disclosed subject matter can optionally comprise one or more of features (a) to (h) below, in any technically possible combination or permutation:

a) wherein executing a navigation errors calculation process further comprises, for each captured image at time tFi: utilizing the at least one positioning device for determining navigation data of the intercepting platform; calculating attitude of the camera onboard the intercepting platform, in the reference frame of the intercepting platform; applying image processing on the captured image for determining CLOS; and transforming relative position to the CFOV reference frame.

b) wherein executing a navigation errors calculation process further comprises: capturing multiple images and calculating respective relative position error values, wherein each respective relative position error value is associated with a certain captured image.

c) The method further comprising: calculating final position error values based on the respective relative position error values.

d) The method further comprising an attitude errors determining process, comprising: selecting a first image at time tF1 and a second image at time tF2; for each one of the first image and the second image: determining relative position in the CFOV reference frame; calculating rectified relative position values in the camera Y and Z axes; and using LOS components uY and uZ at tF1 and tF2 to compute angle errors.

e) The method further comprising synchronizing clocks onboard the intercepting platform and the target platform.

f) The method further comprising determining whether one or more termination conditions have been fulfilled, and terminating the navigation errors calculation process if they have.

g) The method further comprising identifying miss-distance time.
h) wherein calculating attitude of the camera and calculating CLOS are executed on any one of the intercepting platform and a control station.
According to another aspect of the presently disclosed subject matter there is provided a computerized device configured for determining miss-distance between an intercepting platform and a target platform; the intercepting platform is launched towards the target platform and comprises a camera and at least one positioning device; the computerized device comprising at least one processor configured to:

i. execute a navigation errors calculation process, comprising, for at least one image captured by the camera at a time tFi during flight of the intercepting platform towards the target platform: calculate relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transform the relative position to the camera field of view (CFOV) reference frame; calculate rectified relative position values in the camera Y and Z components, based on the relative position in the camera field of view reference frame and camera line of sight (CLOS) unit vector components; and calculate relative position error values in the Y and Z axes in the FPA of the camera, based on a difference between the relative position and the rectified relative position values;

ii. execute a navigation miss-distance correction process, comprising: determine the relative position (tI) of the intercepting platform and target platform at miss-distance time tI; and correct the relative position (tI) using the relative position error values.
According to another aspect of the presently disclosed subject matter there is provided a system for determining miss-distance between an intercepting platform and a target platform; the system comprising a control station, an intercepting platform and a target platform; the intercepting platform is launched towards the target platform and comprises a camera and at least one positioning device; the system further comprises a computerized device configured to:

i. execute a navigation errors calculation process, comprising: operating the camera for capturing at least one image of the target platform, wherein each image is associated with a respective time tFi; the computerized device is configured, for each captured image at time tFi, to: utilize the at least one positioning device for determining navigation data of the intercepting platform; calculate attitude of the camera onboard the intercepting platform, in the reference frame of the intercepting platform; apply image processing on the captured image for determining CLOS; calculate relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transform the relative position to the camera field of view (CFOV) reference frame; calculate rectified relative position values in the camera Y and Z components, based on the relative position in the camera field of view reference frame and camera line of sight (CLOS) unit vector components; and calculate relative position error values in the Y and Z axes in the FPA of the camera, based on a difference between the relative position and the rectified relative position values;

ii. execute a navigation miss-distance correction process, comprising: determine the relative position (tI) of the intercepting platform and target platform at miss-distance time tI; and correct the relative position (tI) using the relative position error values.
According to another aspect of the presently disclosed subject matter there is provided a non-transitory program storage device readable by a computer, tangibly embodying a program of instructions executable by the computer to perform a method of determining miss-distance between an intercepting platform and a target platform; the method comprising:

i. executing a navigation errors calculation process, comprising, for at least one image captured by the camera at a time tFi during flight of the intercepting platform towards the target platform: calculating relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transforming the relative position to the camera field of view (CFOV) reference frame; calculating rectified relative position values in the camera Y and Z components, based on the relative position in the camera field of view reference frame and camera line of sight (CLOS) unit vector components; and calculating relative position error values in the Y and Z axes in the FPA of the camera, based on a difference between the relative position and the rectified relative position values;

ii. executing a navigation miss-distance correction process, comprising: determining the relative position (tI) of the intercepting platform and target platform at miss-distance time tI; and correcting the relative position (tI) using the relative position error values.
The computerized device, the system and the program storage device disclosed in accordance with the presently disclosed subject matter can optionally comprise one or more of features (a) to (h) listed above, mutatis mutandis, in any desired combination or permutation.
The presently disclosed subject matter further contemplates a system and method for maintaining a minimal distance between platforms in order to avoid impact between the platforms. The same principles mentioned above and described in detail below are used for calculating the miss-distance between two platforms (either two moving platforms, or a moving platform and a stationary platform). According to this implementation, it is desired to maintain one platform within a certain predefined distance from the other platform. If the distance between the platforms becomes smaller than the predefined minimal distance, maneuvering instructions for one or both of the platforms are generated (e.g. by computer 20) for controlling flight control subsystems onboard the platform and directing the platforms to avoid impact.
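The separation-keeping decision can be sketched as a simple threshold test. Names and the bare comparison are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def avoidance_required(rel_pos, min_distance_m):
    """Return True when the platforms are closer than the predefined minimum.

    rel_pos       -- assumed: (corrected) relative-position vector in metres
    min_distance_m -- assumed: predefined minimal allowed distance in metres
    """
    return float(np.linalg.norm(rel_pos)) < min_distance_m

print(avoidance_required([30.0, 40.0, 0.0], 60.0))  # distance 50 m, below minimum
print(avoidance_required([30.0, 40.0, 0.0], 40.0))  # distance 50 m, above minimum
```

In a real system this flag would trigger generation of maneuvering instructions to the flight control subsystems, as described above.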
BRIEF DESCRIPTION OF THE DRAWINGS

In order to understand the presently disclosed subject matter and to see how it may be carried out in practice, the subject matter will now be described, by way of non-limiting examples only, with reference to the accompanying drawings, in which:

Fig. 1a is a functional block diagram of a system, in accordance with an example of the presently disclosed subject matter;

Fig. 1b is a functional block diagram of a system, in accordance with another example of the presently disclosed subject matter;

Fig. 2 is a functional block diagram of a navigation error correction unit, in accordance with an example of the presently disclosed subject matter;

Fig. 3a is a flowchart illustrating operations related to a navigation errors calculation process, in accordance with an example of the presently disclosed subject matter;

Fig. 3b is a flowchart illustrating operations related to a navigation errors calculation process, in accordance with an example of the presently disclosed subject matter;

Fig. 4 is a flowchart illustrating operations related to a miss-distance correction process, in accordance with an example of the presently disclosed subject matter;

Fig. 5 is a flowchart illustrating operations related to an attitude error calculation process, in accordance with an example of the presently disclosed subject matter; and

Fig. 6 shows relative position in a CFOV frame, in accordance with an example of the presently disclosed subject matter.
DETAILED DESCRIPTION In the drawings and descriptions set forth, identical reference numerals indicate those components that are common to different embodiments or configurations.
Elements in the drawings are not necessarily drawn to scale.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "executing", "calculating", "transforming", "applying", "determining" or the like, include action and/or processes of a computer that manipulate and/or transform data into other data, said data represented as physical quantities, e.g. such as electronic quantities, and/or said data representing the physical objects.
The terms “computer”, "computerized device" or the like should be expansively construed to cover any kind of electronic device with data processing circuitry, including a processor (e.g. a digital signal processor (DSP), a microcontroller, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.) or any other electronic device or combination thereof comprising one or more processors providing the device with data processing capabilities.
As used herein, the phrases "for example", "such as", "for instance" and variants thereof describe non-limiting embodiments of the presently disclosed subject matter.
Reference in the specification to "one case", "some cases", "other cases" or variants thereof means that a particular feature, structure or characteristic described in connection with the embodiment(s) is included in at least one embodiment of the presently disclosed subject matter. Thus the appearance of the phrase "one case", "some cases", "other cases" or variants thereof does not necessarily refer to the same embodiment(s).
It is appreciated that certain features of the presently disclosed subject matter , which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the presently disclosed subject matter, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.
In embodiments of the presently disclosed subject matter, fewer, more and/or different stages than those shown in Figs. 3a, 3b, 4 and 5 may be executed. In embodiments of the presently disclosed subject matter one or more stages illustrated in Figs. 3a, 3b, 4 and 5 may be executed in a different order and/or one or more groups of stages may be executed simultaneously.
Figs. 1a, 1b and 2 illustrate a general schematic of the functional layout of the system architecture in accordance with examples of the presently disclosed subject matter. Elements in Figs. 1a, 1b and 2 can be made up of any combination of software and hardware and/or firmware that performs the functions as defined and explained herein. Elements in Figs. 1a, 1b and 2 can in some examples be centralized in one location and in other examples dispersed over more than one location. In other examples of the presently disclosed subject matter, the system may comprise fewer, more, and/or different elements than those shown in Figs. 1a, 1b and 2. For example, while navigation error processing unit 17 is described herein as configured both to calculate the navigation errors and to correct the miss-distance, in other examples these two functions can be assigned to two different, physically separated units, where one unit is configured to calculate navigation errors and the other unit is configured to use the calculated navigation errors for correcting the measured miss-distance.
Attention is now drawn to Fig. 1a, showing a functional block diagram of a system configured to calculate miss-distance, according to an example of the presently disclosed subject matter. Platform A schematically represents various types of airborne vehicles which are capable of being directed to fly towards a target. Platform A can be, for example, an aircraft, a missile, a rocket, or some other projectile. In some examples, platform B can also be an airborne vehicle, like platform A, or some other moving or stationary target. For instance, platform A can be an intercepting missile targeting another missile or an aircraft, represented by platform B. In other examples, platform B can be a stationary or mobile object which is not airborne. For instance, platform A can be a missile and platform B a stationary target of some kind.
Platform A comprises: computer 20a; positioning devices including, for example, GNSS device 11a and INS device 13a; an electro-optic sensor (referred to herein below as a "camera") 15a (e.g. a CCD or CMOS camera device); telemetry transmitter (Tx) 19a; and computer data-storage 5a. Platform B comprises: computer 20b; positioning devices including, for example, GNSS device 11b and INS device 13b; telemetry transmitter (Tx) 19b; and computer data-storage 5b.
According to the illustrated example, system 100 further comprises control station 110 configured to communicate over a communication network with platform A and platform B. Control station 110 comprises navigation error processing unit 17 operatively connected to computer 20c. Control station 110 can be located anywhere suitable, including on the ground or on a vehicle such as an aircraft, a satellite, a marine vessel or a ground vehicle.
Control station 110 is configured in general to receive telemetry data (via telemetry receiver Rx) from telemetry Tx 19a on platform A and telemetry Tx 19b on platform B. The received telemetry data is fed to navigation error processing unit 17, which is configured in turn to use the received telemetry data for determining the navigation error. According to some examples, unit 17 is further configured to use the calculated navigation error for correcting the navigation data received from the positioning devices and determining a more accurate miss-distance.

Computers 20a, 20b and 20c, on platform A, platform B and the control station, respectively, schematically represent one or more computer processing devices operating on the respective platform, which are configured to execute various operations, such as operations executed by the positioning devices, telemetry, camera and navigation error correction unit.
Notably, according to the illustrated example, only platform A comprises camera 15; however, in some examples, platform A and platform B may both comprise the same components, enabling the navigation error to be calculated based on data received from both platforms.
According to another example (illustrated in Fig. 1b), navigation error processing unit 17 can be located onboard platform B. According to this example, platform B is configured to receive telemetry data (by telemetry Rx 21) from platform A, and to feed to unit 17 data generated onboard platform B as well as telemetry data generated by platform A. As in the previous example, unit 17 is configured to use the generated data and the received telemetry data for determining the navigation error, use the calculated navigation error for correcting the navigation data received from the positioning devices, and determine a more accurate miss-distance. According to other examples, unit 17 is located on platform A. According to some examples, both platform A and platform B can be similarly equipped (also with a camera) and configured to determine the navigation error and correct the measured miss-distance.
It is noted that Figs. 1a and 1b are functional block diagrams showing only functional components which are relevant to the presently disclosed subject matter. In reality, platform A, platform B and control station 110 comprise many other components related to other functionalities of the platforms (e.g. structural subsystem, propelling system, flight control unit, power source, various sensors, etc.), which are not described for the sake of simplicity and brevity.
Fig. 2 is a functional block diagram of various functional components of navigation error processing unit 17, in accordance with an example of the presently disclosed subject matter. A more detailed description of the components in Fig. 2 is provided below.
Fig. 3a is a flowchart illustrating operations related to the navigation errors calculation process, in accordance with an example of the presently disclosed subject matter. Operations described with reference to Fig. 3a can be executed, for example, with the help of system 100 described above with reference to Figs. 1a and 1b. It is noted, however, that any description of operations in Fig. 3a (as well as Figs. 3b, 4 and 5) which is made with reference to elements in Figs. 1a, 1b and 2 is done by way of example and for the purpose of illustration only, and should not be construed as limiting in any way.
Operations described with reference to blocks 301 to 305 are executed onboard platform A, and operations described with reference to blocks 311 to 313 are executed onboard platform B. Operations described with reference to blocks 307 to 309 can be executed onboard platform A (the platform operating the camera), or, alternatively, by some other device located elsewhere (e.g. navigation error processing unit 17 located on control station 110), where the required data from platforms A and B is available.
In general, platform A is directed either to intercept platform B, or to fly by platform B at a certain distance. Platform A may have a propelling system and onboard navigation and flight control systems, and accordingly may be capable, either autonomously or responsive to instructions received in real-time, of directing itself towards a designated target. In other examples, platform A may be a projectile thrown into space, primarily propelled by the initial thrust gained during launch, with little or no ability to control its flight path once in the air. In such cases, accuracy of hitting the target mainly depends on the trajectory calculations and aiming at the target made before launch.
In any case, unit 17 (configured for example as part of control station 110, platform B or platform A) is configured to determine a miss-distance between platform A and platform B. The miss-distance is calculated based on GNSS and INS navigation data and the calculated navigation error. Where platform A is directed to hit platform B, the miss-distance can be calculated and provided in case platform A misses the target (in case of a direct hit on the target, the calculated miss-distance is expected to be zero).
During flight of platform A towards platform B, navigation data including the position and velocity of platform A is determined (blocks 301, 303). Navigation data can be determined with respect to the Earth-Centered Earth-Fixed (ECEF) or any other appropriate reference frame. Determination of position and velocity can be accomplished with the help of positioning devices onboard platform A. Navigation data further comprises the attitude of platform A, which is also determined during flight of the platform (block 305). Attitude can be determined with respect to the relevant reference frame (e.g. the Direction Cosine Matrix (DCM) of Body relative to ECEF).
Determination of attitude can be accomplished, for example, with the help of INS 13a onboard platform A.
As mentioned above, platform A also comprises an onboard camera 15. The camera can be attached to platform A at a fixed angle, or may be attached by a gimbal system. Camera 15 can be positioned, for example, around the front end of platform A, and can be operated during flight of platform A for capturing images of platform B.
During flight of platform A towards platform B, the attitude (in the defined reference frame) of camera 15 with respect to platform A is also determined (block 307). The attitude of the camera with respect to the predefined reference frame can be calculated based on the attitude of the platform calculated above and the attitude of the camera with respect to the platform.
In case camera 15 is attached to platform A at a fixed angle, the camera attitude with respect to the platform can be predefined. In case camera 15 is attached to platform A by gimbals, the camera attitude can be calculated based on the gimbal system data (including, for example, gimbal angles).
Camera attitude data with respect to platform A is transformed to camera attitude with respect to the reference frame. This can be performed, based on the platform’s attitude in the reference frame and the camera attitude with respect to the platform, for example by the multiplication of DCM values. See for example the mathematical representation in section A in annex 1 below, which is considered part of this description. The camera DCM can be computed, for example, by camera DCM calculator 23 in navigation error processing unit 17.
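The DCM multiplication can be illustrated as follows. This is a sketch under the common convention that chained DCMs compose by matrix product; the exact convention of section A in annex 1 is not reproduced on this page and may differ:

```python
import numpy as np

def camera_dcm_in_reference_frame(dcm_body_to_ref, dcm_camera_to_body):
    """Compose the camera attitude in the reference frame (e.g. ECEF) from the
    platform (body) attitude and the camera mounting/gimbal attitude.
    The multiplication order is an assumption based on standard DCM chaining."""
    return dcm_body_to_ref @ dcm_camera_to_body

# With an identity platform attitude, the camera DCM equals the mount DCM:
mount = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])  # rotation of 90 degrees about Z
print(np.allclose(camera_dcm_in_reference_frame(np.eye(3), mount), mount))
```

For a gimballed camera, `dcm_camera_to_body` would itself be built from the gimbal angles mentioned above.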
At block 309 the camera line of sight (CLOS) vector from the camera to platform B is determined. CLOS can be calculated, for example, based on the pixels in the camera sensor which are exposed to platform B (pixels receiving light reflected from platform B). As platform A approaches platform B, camera 15 captures images. Image processing is applied to the images captured by the camera for identifying platform B in the frame. The pixels exposed to platform B are identified within the camera field of view (CFOV), and unit vector components (uY, uZ, defined below) from the camera in the direction of platform B in the camera focal-plane array (FPA) plane are obtained.
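A minimal sketch of obtaining such unit-vector components under a pinhole-camera assumption (the pixel offsets, focal length in pixels and axis convention below are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def clos_unit_vector(dy_px, dz_px, focal_length_px):
    """Unit line-of-sight vector in camera axes for a target detected at pixel
    offsets (dy_px, dz_px) from the image centre.

    Assumed convention: X is perpendicular to the image plane, Y and Z lie in
    the FPA plane, and the focal length is expressed in pixels.
    """
    v = np.array([focal_length_px, dy_px, dz_px], dtype=float)
    return v / np.linalg.norm(v)

u = clos_unit_vector(100.0, -50.0, 1000.0)
uY, uZ = u[1], u[2]  # the (uY, uZ) components used in the error calculation
```

In practice the pixel offsets would come from the image-processing step that locates platform B in the frame.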
According to one example, CLOS can be determined onboard platform A (e.g. by CLOS determination module 25 onboard platform A, which can include an image processor and be configured to execute the relevant image processing). According to another example, the captured images can be transmitted (e.g. via telemetry unit 19a) to control station 110, where the CLOS can be calculated (e.g. by CLOS determination module 25 in navigation error processing unit 17).
In addition, on platform B, during flight of platform A towards platform B, navigation data including the position and velocity of platform B is determined (blocks 311, 313). Similar to platform A, navigation data can be determined with respect to the Earth-Centered Earth-Fixed (ECEF) or any other predefined reference frame.
Determination of position and velocity can be accomplished, for example, with the help of positioning devices (e.g. GNSS 11b and/or INS 13b). Of course, if platform B is stationary, its velocity is 0, and its position can be determined and/or provided to unit 17 once. In such cases, positioning devices onboard platform B may not be necessary.
As explained below, depending on the specific implementation of the process, the calculations described above with reference to Fig. 3a can be performed once or repeated multiple times. In any case, these calculations, as well as those described below with reference to Fig. 3b, are synchronized with a certain image captured by the camera. Accordingly, the position and velocity of the platforms, as well as the camera attitude and camera LOS or its components, are tagged with the time stamp of a respective captured image (tFi, where i = 1, 2, …, N). If the calculations described with reference to Figs. 3a and 3b are repeated, each repetition of the calculations is executed with respect to a corresponding captured image. Values calculated at the time of a certain captured image tFi can be tagged with a time stamp or tag associating each value with that image.
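One way to organize the per-image tagging is a record keyed by the image time stamp; the layout and field names below are purely illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class FrameRecord:
    """Values synchronized to one captured image tFi (illustrative layout)."""
    t_frame: float   # image time stamp tFi
    pos_a: tuple     # platform A position (ECEF)
    vel_a: tuple     # platform A velocity (ECEF)
    pos_b: tuple     # platform B position (ECEF)
    vel_b: tuple     # platform B velocity (ECEF)
    u_yz: tuple      # CLOS unit-vector components (uY, uZ)

records = {}

def tag(rec):
    """Store a record keyed by its image time stamp."""
    records[rec.t_frame] = rec

tag(FrameRecord(10.0, (0, 0, 0), (1, 0, 0), (5, 0, 0), (0, 0, 0), (0.0, 0.001)))
assert records[10.0].u_yz == (0.0, 0.001)
```

Keying by tFi lets a later (possibly offline) pass retrieve all values belonging to one frame together.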
Notably, time between the two platforms is synchronized. This can be accomplished, for example, by synchronizing between clocks onboard the two platforms (e.g. GNSS clocks onboard platform A and platform B).
Turning to Fig. 3b, this shows a flowchart illustrating further operations related to the navigation errors calculation process, according to examples of the presently disclosed subject matter. Operations described with reference to Fig. 3b can be executed, for example, with the help of navigation error processing unit 17 described above.
For a given captured image tFi, the distance between platform A and platform B is calculated based on the difference between the navigation data determined in the two platforms in the relevant time frame of the image (block 315). See for example the mathematical representation in section A in annex 1 below. According to one example, the distance can be calculated, for example, by relative position calculator 27 in unit 17.
At block 317 the distance between platform A and platform B is transformed to the camera field of view (CFOV) reference frame. This can be accomplished by multiplying the camera DCM(tFi) (the camera DCM calculated for the frame captured at time tFi) by the vector distance between the platforms at the same time frame. The result of this multiplication is the vector distance in the CFOV reference frame. This calculation can be mathematically expressed for example as shown in section C in annex 1 below.
The distance between platform A and platform B in the CFOV frame can be calculated, for example, by unit 17 (e.g. by relative position transformation module 29).
It is assumed that there is an error in the result of the previous calculation due to the GNSS and/or INS navigation errors and errors in the gimbal angles. At block 319 rectified distance (relative position) values in the camera Y and Z axes are calculated. To this end, the absolute value of the relative position in the CFOV reference frame is multiplied by the CLOS unit vector components (uY, uZ). This calculation can be mathematically expressed for example as shown in section D in annex 1 below. This calculation is performed for each one of the Y and Z axes. It can be performed for example by unit 17 (e.g. by rectified distance calculator 33).
Notably, in some examples it may be desired to correct the error of the calculated attitude values (e.g. camera DCM with respect to ECEF). In such cases the operations described above with reference to blocks 315 to 319 can be performed as part of the attitude error determination process, as explained below with reference to Fig. 5.
At block 321 relative position error values in the Y and Z axes in the FPA of the camera are calculated. This is done by calculating the difference between the relative position obtained from positioning devices (block 317) and the rectified relative position components calculated in the previous stage (block 319). The errors in the Y and Z axes can be calculated for example by unit 17, e.g. by navigation errors calculator 35. This calculation can be mathematically expressed as shown in section E in annex 1 below.
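Blocks 315 to 321 can be sketched together as follows; this is an illustrative fragment only, with hypothetical inputs, not the disclosed implementation:

```python
import numpy as np

def relative_position_errors(pos_a, pos_b, C_cfov_ecef, u_yz):
    """Sketch of blocks 315-321: relative position in ECEF, transform to
    the CFOV frame, rectified Y/Z components from the CLOS unit vector,
    and the resulting Y/Z error values."""
    r_ecef = np.asarray(pos_b) - np.asarray(pos_a)   # block 315
    r_cfov = C_cfov_ecef @ r_ecef                    # block 317
    uY, uZ = u_yz
    rng = np.linalg.norm(r_cfov)
    rect_y, rect_z = rng * uY, rng * uZ              # block 319
    err_y = r_cfov[1] - rect_y                       # block 321
    err_z = r_cfov[2] - rect_z
    return err_y, err_z

# With a perfect DCM and a CLOS exactly matching the geometry,
# the error values vanish (hypothetical positions):
pos_a, pos_b = np.zeros(3), np.array([1000.0, 30.0, -20.0])
r = pos_b - pos_a
u = r / np.linalg.norm(r)
ey, ez = relative_position_errors(pos_a, pos_b, np.eye(3), (u[1], u[2]))
assert abs(ey) < 1e-9 and abs(ez) < 1e-9
```

In practice the navigation data and DCM carry errors, so err_y and err_z are non-zero; those are exactly the values used later for the miss-distance correction.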
Notably, the Y and Z distance components are more accurate than the corresponding components calculated based on navigation data obtained from the positioning devices, and can therefore be used to improve the miss-distance accuracy where CLOS data is unavailable, as shown below.
As explained above, error values are calculated in the Y and Z axes with respect to a certain image tFi. Calculated errors for a respective captured image tFi can be stored in the computer storage 5.
According to some examples, calculation of the navigation error values as described above can be performed in real-time while platform A is approaching platform B. According to other examples, the navigation error values are not calculated in real-time. Rather, various data components gathered during the preceding operations (including those described above with reference to Fig. 3a) are stored in data-storage (with reference to a respective frame) and can be used later on for calculating the navigation error and corrected miss-distance.
According to one example, the process of calculating a navigation error is initiated (e.g. by navigation error processing unit 17) when platform A is located at a certain distance (which can be predefined) from platform B. The number of times the calculation of a navigation error value is performed can vary depending on the specific implementation and preferences. According to one example, calculation of the navigation error can be done only once, to obtain a single value. According to other examples, calculation of the navigation error can be performed several times (in some examples, for each captured frame, in which case the rate of calculation depends in general on the camera frame rate), providing navigation error values for each time. The final relative position error values can be calculated based on all or part of the calculated values (e.g. using some type of statistical calculation, such as average or median values).
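The statistical aggregation of per-frame error values mentioned above can be sketched with the standard library; the error values themselves are hypothetical:

```python
import statistics

# Per-frame (err_y, err_z) pairs, one per captured image (hypothetical):
errors = [(1.9, -0.8), (2.1, -1.1), (2.0, -0.9), (2.4, -1.2)]

# Final error values via average (Y) and median (Z), as two of the
# statistical options mentioned in the description:
final_err_y = statistics.mean(e[0] for e in errors)
final_err_z = statistics.median(e[1] for e in errors)

assert abs(final_err_y - 2.1) < 1e-9
assert final_err_z == -1.0
```

Either statistic (or another robust estimate) can be chosen per axis depending on the noise characteristics.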
The process of calculating navigation errors can be terminated (e.g. by navigation error processing unit 17) when platform A is located at a certain distance (which can be predefined) from platform B. In any case, the process of calculating navigation errors can continue as long as the camera provides imaging data of platform B of sufficient quality. As platform A moves closer to platform B, the camera field of view shifts away from platform B until platform B is completely out of frame or until the obtained imaging data is no longer of sufficient quality. Likewise, image quality degradation can result from smearing of the target image as a result of gimbal limitations. At this point the errors in the Y and Z axes can no longer be calculated because imaging data from the camera is no longer available (or is available in insufficient quality).
At block 323 it is determined whether or not one or more conditions for terminating the process of calculating navigation errors are fulfilled. According to one example, a termination condition is defined, based on the number of frames for which a navigation error is to be calculated, and the calculation is terminated once the relevant number of calculations has been achieved. According to another example, a termination condition can be defined as a certain distance between platform A and platform B.
According to yet another example, a termination condition can be an image quality threshold (e.g. for the image of platform B, determined by an image processor), where the process is terminated if the quality of the captured frames is below that threshold.
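The three example termination conditions of block 323 can be sketched as one predicate; every threshold value below is a hypothetical placeholder:

```python
def should_terminate(n_frames_done, distance, image_quality,
                     max_frames=50, min_distance=200.0,
                     quality_threshold=0.4):
    """Sketch of block 323: any one of the example termination
    conditions ends the navigation-error calculation loop."""
    return (n_frames_done >= max_frames      # frame budget reached
            or distance <= min_distance      # platforms too close
            or image_quality < quality_threshold)  # image too poor

assert not should_terminate(10, 1500.0, 0.9)
assert should_terminate(50, 1500.0, 0.9)   # frame budget reached
assert should_terminate(10, 150.0, 0.9)    # distance condition
assert should_terminate(10, 1500.0, 0.3)   # image-quality condition
```

A real implementation would pick whichever subset of conditions matches the chosen termination policy.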
In the event that the one or more predefined termination conditions are not fulfilled, the operations described with respect to Figs. 3a and 3b are performed with respect to another frame (the process is repeated according to Fig. 3a with respect to the next frame). Otherwise, in the event that the one or more predefined termination conditions are fulfilled, the process proceeds to block 325, where the final navigation errors are determined.
Notably, in the event that the error determination process is a real-time process, the operations described with respect to block 323 can be executed after execution of the operations described with reference to Fig. 3a. As explained above, further calculation can be performed offline.
The final navigation error can be determined in various ways, including for example one of: selecting specific navigation error values from a collection of calculated navigation error values; calculating average navigation error values from a collection of calculated navigation error values; based on the angle between the velocity vector at the miss-distance time and the X axis of the CFOV; or simply taking the calculated navigation error values, if only one set of values is available.
For example, navigation error processing unit 17 can be configured to determine whether the one or more predefined termination conditions are fulfilled or not.
According to one non-limiting example, a certain image quality threshold can be defined, and navigation error processing unit 17 can be configured to use the captured images for calculating a navigation error until the image quality of a captured image falls below that threshold. To this end, navigation error processing unit 17 can comprise an image processor configured to determine whether or not platform B is visible in the CFOV and/or whether the image of platform B has sufficient image quality (e.g. if the image of platform B is located near the FOV margin, the image is not used).
The resulting navigation error values (see block 321 above) can be used for correcting the miss-distance values calculated, based on navigational data obtained from positioning devices onboard the platforms.
At some point the navigation error values calculation process terminates, and platform A continues to advance towards platform B until the miss-distance occurs (time tI). It is assumed that the navigation error values do not change significantly during the time period starting from the time the navigation error calculations terminate (tF) until the miss-distance time (tI) occurs. This is so since the positioning devices onboard platform A and platform B normally do not develop a significant error during time periods similar to that which is expected to elapse between tF and tI.
Fig. 4 is a flowchart illustrating operations related to the miss-distance correction process, in accordance with an example of the presently disclosed subject matter. As with the calculation of the errors, correction of the miss-distance can be calculated in real-time while platform A is approaching platform B, or at some other time, using stored information recorded during the interception process.
Starting from the time the calculation of the navigation error values terminates (tF), the relative position (distance) of the two platforms is continuously calculated as explained above with reference to blocks 301, 303, 311, 313 and block 315 (block 401).
This calculation can be repeated until, for example, the relative position between the platforms is determined to be constantly increasing. The miss-distance event is determined (block 405), for example, by identifying the shortest distance from among the calculated distances between the platforms. This calculation can be mathematically expressed for example as shown in section G in annex 1 below. The miss-distance can be determined for example by unit 17 (e.g. by miss-distance event determination module 37).
At block 407 the miss-distance between platform A and platform B is corrected using the calculated navigation error values. As explained above, since the navigation and gimbal angle errors during the time interval (between tF and tI) are approximately constant, the navigation error values calculated at the time tF may be used at the time tI.
This calculation can be mathematically expressed for example as shown in section H and I in annex 1 below. This can be accomplished for example by unit 17 (e.g. by miss-distance correction module 39).
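For illustration, blocks 401 to 407 can be sketched as follows; the trajectory samples and error values are hypothetical, and a real system would work on time-tagged CFOV-frame vectors:

```python
import numpy as np

def corrected_miss_distance(rel_positions, err_y, err_z):
    """Sketch of blocks 401-407: find the minimum relative distance over
    the trajectory samples, then subtract the navigation error values
    (assumed constant between tF and tI) from the Y/Z components."""
    rel = np.asarray(rel_positions)                  # CFOV-frame vectors
    i = int(np.argmin(np.linalg.norm(rel, axis=1)))  # miss-distance event
    y_cor = rel[i, 1] - err_y                        # corrected components
    z_cor = rel[i, 2] - err_z
    return i, y_cor, z_cor

# Hypothetical samples: the platforms close in and then recede.
samples = [(300.0, 4.0, -3.0), (100.0, 4.0, -3.0),
           (5.0, 4.0, -3.0), (120.0, 4.0, -3.0)]
i, y_cor, z_cor = corrected_miss_distance(samples, err_y=1.5, err_z=-1.0)
assert i == 2
assert y_cor == 2.5 and z_cor == -2.0
```

The minimum-norm search corresponds to the shortest-distance criterion of block 405; the subtraction corresponds to the correction of block 407.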
As mentioned above, the system and method disclosed herein can alternatively be used for maintaining a minimal distance between platform A and platform B in order to avoid impact. To this end, once the navigation error values are available (see block 321 above; in some examples, after termination of the error calculation process is determined (block 323)), the calculated errors can be used for correcting the miss-distance values based on navigational data obtained from positioning devices onboard the platforms. The resulting distance between the platforms can then be compared to a predefined minimal distance between the platforms. If the calculated distance is smaller than (or equal to) the allowed minimal distance, maneuvering instructions can be generated (e.g. by computer 20) for controlling a flight guiding sub-system of at least one of the platforms, directing that platform away from the other platform in order to avoid impact. This process occurs in real-time during flight of the platform. Such a process can be used, for example, for allowing platforms to fly by other platforms in close proximity while reducing the risk of impact, due to the accurate determination of relative position.
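The fly-by safety check described above can be sketched as follows; the component values and the minimal-distance threshold are hypothetical:

```python
import math

def fly_by_check(y_cor, z_cor, x, minimal_distance=50.0):
    """Sketch: corrected relative distance from the corrected Y/Z
    components and the along-boresight X component, compared to a
    predefined minimal separation. Returns (distance, maneuver_needed)."""
    d = math.sqrt(x * x + y_cor * y_cor + z_cor * z_cor)
    return d, d <= minimal_distance

d, maneuver = fly_by_check(y_cor=30.0, z_cor=-40.0, x=0.0)
assert d == 50.0 and maneuver          # exactly at the allowed minimum
d, maneuver = fly_by_check(y_cor=30.0, z_cor=-40.0, x=120.0)
assert d == 130.0 and not maneuver     # safely separated
```

When the predicate fires, the flight guiding sub-system would receive maneuvering instructions, as described above.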
As mentioned above, in some systems it may be desired to correct the error of the calculated attitude values (e.g. camera DCM with respect to ECEF). Errors in attitude values result from navigation system errors and gimbal angle errors. Such operations may not be necessary if the onboard navigation system and gimbals system provide sufficiently accurate data. In general, as the attitude calculated by some systems is not as accurate as may be desired, attitude correction can be used during the navigation error correction process. The attitude correction process is executed before the operations described with reference to Fig. 4. Once the attitude errors are available, they can be used for correction of the miss-distance calculated from the navigation data.
Turning to Fig. 5, this is a flowchart illustrating operations related to the attitude error determination process, in accordance with an example of the presently disclosed subject matter. During this process the attitude error, resulting from the navigation attitude error of platform A with respect to ECEF and from the gimbal angle errors, is calculated. Operations described with reference to Fig. 5 can be executed for example by unit 17 (e.g. by attitude error calculator 41).
At block 501, two frames from the sequence of frames captured by the camera (15) are obtained. A first frame has a first time stamp tF1 and the second has a second time stamp tF2.
At block 503, for each one of the two frames, the operations described above with reference to Fig. 3b, blocks 315 and 317, are performed to determine the relative position in the CFOV reference frame for each of the first frame and the second frame.
At block 505 the absolute value of the relative position in the CFOV reference frame for each frame is multiplied by the camera LOS vector components measured by the camera (see block 319 above).
The respective camera LOS components uY and uZ calculated for each one of the first and second frames, i.e. having the time stamps tF1 and tF2, respectively, are used to compute the attitude angle errors.
At block 507 the angle error values can be calculated, based on the difference between the distance values calculated for each of the first frame and the second frame.
This calculation can be mathematically expressed for example as shown in section F in annex 1 below. The angle error values (δψ, δθ) can be used for calculation of the distance component errors (see for example section H in annex 1 below).
Optionally, the operations described with reference to Fig. 5 can be repeated multiple times, each time with a different pair of images. The final attitude error values can be calculated based on the collection of attitude error values determined for each pair of images (e.g. average or median values).
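The two-frame angle-error computation can be sketched numerically as follows; this is an illustrative implementation whose sign conventions are an assumption consistent with the small-angle analysis in annex 1, and all data are synthetic:

```python
import numpy as np

def attitude_angle_errors(r1, r2, u1, u2):
    """Tilt errors (delta-psi, delta-theta) from two frames.
    r1, r2: CFOV relative positions at tF1, tF2 (from navigation data);
    u1, u2: measured CLOS unit vectors at the same times."""
    r1, r2, u1, u2 = map(np.asarray, (r1, r2, u1, u2))
    d_err_y = ((r2[1] - np.linalg.norm(r2) * u2[1])
               - (r1[1] - np.linalg.norm(r1) * u1[1]))
    d_err_z = ((r2[2] - np.linalg.norm(r2) * u2[2])
               - (r1[2] - np.linalg.norm(r1) * u1[2]))
    d_rx = r2[0] - r1[0]
    return -d_err_y / d_rx, d_err_z / d_rx

def tilt(r, dpsi, dtheta):
    """Apply a small-angle attitude error to a true CFOV vector."""
    E = np.array([[0, dpsi, -dtheta], [-dpsi, 0, 0], [dtheta, 0, 0]])
    return r + E @ r

# Synthetic check: corrupt two true along-boresight positions with known
# tilts and verify the estimator recovers them.
true1, true2 = np.array([2000.0, 0.0, 0.0]), np.array([500.0, 0.0, 0.0])
u1, u2 = true1 / np.linalg.norm(true1), true2 / np.linalg.norm(true2)
dpsi, dtheta = 1e-3, -2e-3
dp, dt = attitude_angle_errors(tilt(true1, dpsi, dtheta),
                               tilt(true2, dpsi, dtheta), u1, u2)
assert abs(dp - dpsi) < 1e-6 and abs(dt - dtheta) < 1e-6
```

The synthetic round-trip shows the estimator inverting the same small-angle model used to corrupt the data; averaging over several image pairs, as described above, would suppress measurement noise.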
It will also be understood that the system according to the presently disclosed subject matter may be a suitably programmed computer. Likewise, the presently disclosed subject matter contemplates a computer program being readable by a computer for executing the method of the presently disclosed subject matter. The presently disclosed subject matter further contemplates a machine-readable non-transitory memory tangibly embodying a program of instructions executable by the machine for executing the method of the presently disclosed subject matter.
It is to be understood that the presently disclosed subject matter is not limited in its application to the details set forth in the description contained herein or illustrated in the drawings. The presently disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Hence, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for designing other structures, methods, and systems for carrying out the several purposes of the presently disclosed subject matter.
Herein below there is provided in annex 1 a mathematical representation of various operations described above.

ANNEX 1

1. Definitions

Target and interceptor navigation data definitions

Target data (from target navigation system):
$\vec{Pos}^{ECEF}_{Target}$ — position in ECEF frame
$\vec{Vel}^{ECEF}_{Target}$ — velocity in ECEF frame

Interceptor navigation data (from interceptor navigation system):
$\vec{Pos}^{ECEF}_{Interceptor}$ — position in ECEF frame
$\vec{Vel}^{ECEF}_{Interceptor}$ — velocity in ECEF frame
$C^{Body}_{ECEF}$ — DCM of Body frame relative to ECEF frame

Interceptor CAMERA data definitions

Interceptor gimbal data:
$C^{CFOV}_{Body}$ — DCM of CFOV frame relative to Body frame (camera orientation relative to Body)
$\vec{u} = [u_X, u_Y, u_Z]$ — interceptor CAMERA LOS unit vector
$\hat{C}^{CFOV}_{ECEF}$ — estimated ECEF to CFOV direction cosine matrix (notice that ECEF navigation data is also involved in the DCM calculation)

Relative data definitions

Target-interceptor relative position and velocity:
$\vec{R}^{ECEF}_{Nav}$ — relative position in ECEF frame
$\vec{V}^{ECEF}_{Nav}$ — relative velocity in ECEF frame
$\vec{R}^{CFOV}_{Nav}$ — relative position in CFOV frame
$\vec{V}^{CFOV}_{Nav}$ — relative velocity in CFOV frame

Notice that, in accordance with the above definitions:
$$\vec{R}^{ECEF}_{Nav} = \left[R^{ECEF}_{Nav\,X},\; R^{ECEF}_{Nav\,Y},\; R^{ECEF}_{Nav\,Z}\right], \qquad \vec{R}^{CFOV}_{Nav} = \left[R^{CFOV}_{Nav\,X},\; R^{CFOV}_{Nav\,Y},\; R^{CFOV}_{Nav\,Z}\right]$$

Notice that the distance vector $\vec{R}^{CFOV}_{Nav}$ and the unit vector $\vec{u}$ are collinear in the absence of navigation errors.
Time definitions

Image time tag (time of the LOS data measurement):
$t_{F_1}$ — first image time tag; $t_{F_n}$ — n-th image time tag

Virtual interception time:
$t_I$ — time of the virtual interception

Target-interceptor relative position propagation (distance increment) definitions

Distance increment in the ECEF frame from time $t_1$ to time $t_2$:
$$\vec{R}^{ECEF}_{Nav}\Big|_{t_1}^{t_2} = \vec{R}^{ECEF}_{Nav}(t_2) - \vec{R}^{ECEF}_{Nav}(t_1) = \left[R^{ECEF}_{Nav\,X}\Big|_{t_1}^{t_2},\; R^{ECEF}_{Nav\,Y}\Big|_{t_1}^{t_2},\; R^{ECEF}_{Nav\,Z}\Big|_{t_1}^{t_2}\right]$$
$$= \left[R^{ECEF}_{Nav\,X}(t_2) - R^{ECEF}_{Nav\,X}(t_1),\; R^{ECEF}_{Nav\,Y}(t_2) - R^{ECEF}_{Nav\,Y}(t_1),\; R^{ECEF}_{Nav\,Z}(t_2) - R^{ECEF}_{Nav\,Z}(t_1)\right]$$

Distance increment in the CFOV frame from time $t_1$ to time $t_2$:
$$\vec{R}^{CFOV}_{Nav}\Big|_{t_1}^{t_2} = \vec{R}^{CFOV}_{Nav}(t_2) - \vec{R}^{CFOV}_{Nav}(t_1) = \left[R^{CFOV}_{Nav\,X}\Big|_{t_1}^{t_2},\; R^{CFOV}_{Nav\,Y}\Big|_{t_1}^{t_2},\; R^{CFOV}_{Nav\,Z}\Big|_{t_1}^{t_2}\right]$$

Notice that the distance increment is computed from navigation systems data, corrupted by navigation errors.
FPA measurements definitions

Distance components in the FPA at time $t_F$, calculated from the relative position data in the CFOV frame and the CAMERA data:
$R^{CFOV}_{NavCam\,Y}(t_F)$ — Y-component
$R^{CFOV}_{NavCam\,Z}(t_F)$ — Z-component

These components are more accurate than the relative range components obtained from the navigation system, and are therefore used to estimate the miss-distance. Throughout the rest of the document, these components will alternatively be called rectified distance components.
Corrected distance components in the FPA of the CFOV at the time $t_I$:
$Y_{cor}(t_I)$ — Y-component
$Z_{cor}(t_I)$ — Z-component

Navigation and CAMERA error definitions

The estimate of the navigation relative position error in the ECEF frame: $\delta\vec{R}_{Nav}$

Errors of the Y- and Z-components of the distance vector in the FPA plane at the time of frame $t_F$:
$$\delta R^{CFOV}_{Nav\,Y}(t_F) = R^{CFOV}_{Nav\,Y}(t_F) - R^{CFOV}_{NavCam\,Y}(t_F)$$
$$\delta R^{CFOV}_{Nav\,Z}(t_F) = R^{CFOV}_{Nav\,Z}(t_F) - R^{CFOV}_{NavCam\,Z}(t_F)$$

$\delta C^{CFOV}_{ECEF}$ — estimated ECEF to CFOV direction cosine matrix error
$\delta\psi$ — Z CFOV angular error
$\delta\theta$ — Y CFOV angular error

The relative position in the CFOV frame is illustrated in Fig. 6 (Figure 6: Illustration of relative position in CFOV frame).

2. Detailed Description of the Algorithm

A. Calculation of the relative position vector between interceptor and target and the relative velocity vector in the ECEF frame at the time of image $t$:
$$\vec{R}^{ECEF}_{Nav}(t) = \vec{Pos}^{ECEF}_{Target}(t) - \vec{Pos}^{ECEF}_{Interceptor}(t) \quad (1)$$
$$\vec{V}^{ECEF}_{Nav}(t) = \vec{Vel}^{ECEF}_{Target}(t) - \vec{Vel}^{ECEF}_{Interceptor}(t) \quad (1a)$$

B. Calculation of the direction cosine matrix ECEF to CFOV at the time of image $t$:
$$C^{CFOV}_{ECEF}(t) = C^{CFOV}_{Body}(t) \cdot C^{Body}_{ECEF}(t)$$

C. Calculation of the distance vector between interceptor and target and the relative velocity in the CFOV frame at the time of image $t$:
$$\vec{R}^{CFOV}_{Nav}(t) = C^{CFOV}_{ECEF}(t) \cdot \vec{R}^{ECEF}_{Nav}(t) \quad (2)$$
$$\vec{V}^{CFOV}_{Nav}(t) = C^{CFOV}_{ECEF}(t) \cdot \vec{V}^{ECEF}_{Nav}(t) \quad (2a)$$

These calculations are inaccurate due to the erroneous position, velocity and angles. Once again, the latter is affected by both the navigation errors and the camera and gimbal errors. Therefore the errors need to be rectified.
D. Calculation of the rectified distance components at the time of image $t$.

From the following vector definition
$$\vec{R}^{CFOV}_{NavCam}(t) = \left|\vec{R}^{CFOV}_{Nav}(t)\right| \cdot \vec{u}(t)$$
the rectified Y- and Z-components of the distance, using navigation and CAMERA data, are calculated as follows:
$$R^{CFOV}_{NavCam\,Y}(t) = \left|\vec{R}^{CFOV}_{Nav}(t)\right| \cdot u_Y(t) \quad (3)$$
$$R^{CFOV}_{NavCam\,Z}(t) = \left|\vec{R}^{CFOV}_{Nav}(t)\right| \cdot u_Z(t)$$

Notice that the LOS angles are very accurate and their values are small. Thus, the Y and Z distance components are more accurate than the corresponding components calculated based on navigation data (see (2)) and may be used to improve the distance accuracy at the interception point, where LOS data is unavailable.

E. Calculation of the estimated distance component errors in the FPA plane at the time of image $t$:
$$\delta R^{CFOV}_{Nav\,Y}(t) = R^{CFOV}_{Nav\,Y}(t) - R^{CFOV}_{NavCam\,Y}(t) = R^{CFOV}_{Nav\,Y}(t) - \left|\vec{R}^{CFOV}_{Nav}(t)\right| \cdot u_Y(t) \quad (4)$$
$$\delta R^{CFOV}_{Nav\,Z}(t) = R^{CFOV}_{Nav\,Z}(t) - R^{CFOV}_{NavCam\,Z}(t) = R^{CFOV}_{Nav\,Z}(t) - \left|\vec{R}^{CFOV}_{Nav}(t)\right| \cdot u_Z(t)$$

The calculated errors can be used for calculation of the interception point relative coordinates in the FPA plane at the time $t = t_I$, where the CAMERA data is unavailable, as mentioned above. Recall that the navigation errors during the time interval (between time $t = t_F$ and time $t = t_I$) are approximately constant. Thus, the correction terms which are calculated at the time $t = t_F$ may be used at the time $t = t_I$. It is assumed that the angle between the relative velocity vector at the time $t = t_F$ and the CFOV X-axis is small.
Notice that the navigation and gimbal angular errors may reduce the accuracy of the rectified distance components, and, as a result, the accuracy of the correction terms.
The calculation and compensation algorithm of the errors is described in the following paragraphs. The above formulas are used for two image time tags $t = t_{F_1}$ and $t = t_{F_2}$.
F. Attitude error estimation

The attitude error of the navigation solution can be estimated using two successive images before interception. An analysis of the dependence of the distance error in the CFOV frame on the attitude errors is performed below. The analysis results are used in the following paragraph to compensate the corresponding distance errors.

Calculation of the relative target-interceptor distance propagation (increment) from the time $t = t_{F_1}$ to the time $t = t_{F_2}$:
$$\vec{R}^{CFOV}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} = \vec{R}^{CFOV}_{Nav}(t_{F_2}) - \vec{R}^{CFOV}_{Nav}(t_{F_1}) = C^{CFOV}_{ECEF}(t_{F_2})\vec{R}^{ECEF}_{Nav}(t_{F_2}) - C^{CFOV}_{ECEF}(t_{F_1})\vec{R}^{ECEF}_{Nav}(t_{F_1})$$
$$= C^{CFOV}_{ECEF}(t_{F_1})\,C^{ECEF}_{CFOV}(t_{F_1})\,C^{CFOV}_{ECEF}(t_{F_2})\vec{R}^{ECEF}_{Nav}(t_{F_2}) - C^{CFOV}_{ECEF}(t_{F_1})\vec{R}^{ECEF}_{Nav}(t_{F_1})$$

Notice that:
$$C^{ECEF}_{CFOV}(t_{F_1})\,C^{CFOV}_{ECEF}(t_{F_2}) \approx I$$

Thus:
$$\vec{R}^{CFOV}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} \approx C^{CFOV}_{ECEF}(t_{F_1})\left[\vec{R}^{ECEF}_{Nav}(t_{F_2}) - \vec{R}^{ECEF}_{Nav}(t_{F_1})\right] = C^{CFOV}_{ECEF}(t_{F_1})\,\vec{R}^{ECEF}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} \quad (5)$$

The error of the relative target-interceptor distance propagation (increment) during the time interval between $t_{F_1}$ and $t_{F_2}$ may be represented as follows:
$$\delta\left(\vec{R}^{CFOV}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}}\right) = \delta\left(C^{CFOV}_{ECEF}(t_{F_1})\,\vec{R}^{ECEF}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}}\right) = \delta C^{CFOV}_{ECEF}(t_{F_1}) \cdot \vec{R}^{ECEF}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} + C^{CFOV}_{ECEF}(t_{F_1}) \cdot \delta\vec{R}^{ECEF}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} \quad (6)$$

The first term $\delta C^{CFOV}_{ECEF}(t_{F_1}) \cdot \vec{R}^{ECEF}_{Nav}\big|_{t_{F_1}}^{t_{F_2}}$ is the DCM error contribution to the error, while the second term $C^{CFOV}_{ECEF}(t_{F_1}) \cdot \delta\vec{R}^{ECEF}_{Nav}\big|_{t_{F_1}}^{t_{F_2}}$ is the contribution of the interceptor navigation relative target position error.

The second term can be neglected. Indeed, the time interval between $t_{F_1}$ and $t_{F_2}$ is expected to be no more than a few seconds. Assuming that the inertial system is accurate and that during that interval GPS position updates are prevented, the navigation error $\delta\vec{R}^{ECEF}_{Nav}(t)$ is almost constant and its change can be disregarded. Thus:
$$C^{CFOV}_{ECEF}(t_{F_1}) \cdot \delta\vec{R}^{ECEF}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} \approx 0$$

Analyze the first term:
$$\delta C^{CFOV}_{ECEF}(t_{F_1}) \cdot \vec{R}^{ECEF}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} = \left[\hat{C}^{CFOV}_{ECEF}(t_{F_1}) - C^{CFOV}_{ECEF}(t_{F_1})\right] \cdot \vec{R}^{ECEF}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}}$$
$$= \left[\hat{C}^{CFOV}_{ECEF}(t_{F_1})\,C^{ECEF}_{CFOV}(t_{F_1}) - I\right] \cdot C^{CFOV}_{ECEF}(t_{F_1})\,\vec{R}^{ECEF}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} \approx \begin{pmatrix} 0 & \delta\psi & -\delta\theta \\ -\delta\psi & 0 & \delta\varphi \\ \delta\theta & -\delta\varphi & 0 \end{pmatrix} \cdot \vec{R}^{CFOV}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} \quad (7)$$

Rewriting the result of the above analysis:
$$\delta\vec{R}^{CFOV}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} \approx \begin{pmatrix} 0 & \delta\psi & -\delta\theta \\ -\delta\psi & 0 & \delta\varphi \\ \delta\theta & -\delta\varphi & 0 \end{pmatrix} \cdot \vec{R}^{CFOV}_{Nav}\Big|_{t_{F_1}}^{t_{F_2}} \quad (8)$$

Alternatively:
$$\delta R^{CFOV}_{Nav\,x}\Big|_{t_{F_1}}^{t_{F_2}} = \delta\psi \cdot R^{CFOV}_{Nav\,y}\Big|_{t_{F_1}}^{t_{F_2}} - \delta\theta \cdot R^{CFOV}_{Nav\,z}\Big|_{t_{F_1}}^{t_{F_2}}$$
$$\delta R^{CFOV}_{Nav\,y}\Big|_{t_{F_1}}^{t_{F_2}} = -\delta\psi \cdot R^{CFOV}_{Nav\,x}\Big|_{t_{F_1}}^{t_{F_2}} + \delta\varphi \cdot R^{CFOV}_{Nav\,z}\Big|_{t_{F_1}}^{t_{F_2}}$$
$$\delta R^{CFOV}_{Nav\,z}\Big|_{t_{F_1}}^{t_{F_2}} = \delta\theta \cdot R^{CFOV}_{Nav\,x}\Big|_{t_{F_1}}^{t_{F_2}} - \delta\varphi \cdot R^{CFOV}_{Nav\,y}\Big|_{t_{F_1}}^{t_{F_2}} \quad (9)$$

Recall that $\left|R^{CFOV}_{Nav\,y}\big|_{t_{F_1}}^{t_{F_2}}\right| \ll \left|R^{CFOV}_{Nav\,x}\big|_{t_{F_1}}^{t_{F_2}}\right|$ and $\left|R^{CFOV}_{Nav\,z}\big|_{t_{F_1}}^{t_{F_2}}\right| \ll \left|R^{CFOV}_{Nav\,x}\big|_{t_{F_1}}^{t_{F_2}}\right|$, so finally:
$$\delta R^{CFOV}_{Nav\,y}\Big|_{t_{F_1}}^{t_{F_2}} \approx -\delta\psi \cdot R^{CFOV}_{Nav\,x}\Big|_{t_{F_1}}^{t_{F_2}}, \qquad \delta R^{CFOV}_{Nav\,z}\Big|_{t_{F_1}}^{t_{F_2}} \approx \delta\theta \cdot R^{CFOV}_{Nav\,x}\Big|_{t_{F_1}}^{t_{F_2}}$$

Explicitly (using the sign "=" instead of "$\approx$"):
$$\delta R^{CFOV}_{Nav\,y}(t_{F_2}) - \delta R^{CFOV}_{Nav\,y}(t_{F_1}) = -\delta\psi \cdot \left(R^{CFOV}_{Nav\,x}(t_{F_2}) - R^{CFOV}_{Nav\,x}(t_{F_1})\right)$$
$$\delta R^{CFOV}_{Nav\,z}(t_{F_2}) - \delta R^{CFOV}_{Nav\,z}(t_{F_1}) = \delta\theta \cdot \left(R^{CFOV}_{Nav\,x}(t_{F_2}) - R^{CFOV}_{Nav\,x}(t_{F_1})\right) \quad (10)$$

The tilt components $\delta\psi$, $\delta\theta$ are dependent upon the gimbal angles and the navigation attitude errors, respectively. These errors are assumed to be relatively constant. Indeed, the evolution of the encoder (also known as "rotary encoder") errors is negligible if the CAMERA rotation is small. At the times $t_{F_1}$, $t_{F_2}$ and $t_F$ the CAMERA rotation is indeed small.

Notice that the times $t_{F_1}$ and $t_{F_2}$ must be different, but one of them may be equal to the time $t_F$.

Rewriting the equations (4) for the times $t_{F_1}$, $t_{F_2}$:
$$\delta R^{CFOV}_{Nav\,y}(t_{F_2}) = R^{CFOV}_{Nav\,y}(t_{F_2}) - \left|\vec{R}^{CFOV}_{Nav}(t_{F_2})\right| \cdot u_y(t_{F_2})$$
$$\delta R^{CFOV}_{Nav\,y}(t_{F_1}) = R^{CFOV}_{Nav\,y}(t_{F_1}) - \left|\vec{R}^{CFOV}_{Nav}(t_{F_1})\right| \cdot u_y(t_{F_1})$$
$$\delta R^{CFOV}_{Nav\,z}(t_{F_2}) = R^{CFOV}_{Nav\,z}(t_{F_2}) - \left|\vec{R}^{CFOV}_{Nav}(t_{F_2})\right| \cdot u_z(t_{F_2})$$
$$\delta R^{CFOV}_{Nav\,z}(t_{F_1}) = R^{CFOV}_{Nav\,z}(t_{F_1}) - \left|\vec{R}^{CFOV}_{Nav}(t_{F_1})\right| \cdot u_z(t_{F_1}) \quad (11)$$

Applying the errors from (11) in the equation (10) yields:
$$\left(R^{CFOV}_{Nav\,y}(t) - \left|\vec{R}^{CFOV}_{Nav}(t)\right| \cdot u_y(t)\right)\Big|_{t_{F_1}}^{t_{F_2}} = -\delta\psi \cdot R^{CFOV}_{Nav\,x}(t)\Big|_{t_{F_1}}^{t_{F_2}}$$
$$\left(R^{CFOV}_{Nav\,z}(t) - \left|\vec{R}^{CFOV}_{Nav}(t)\right| \cdot u_z(t)\right)\Big|_{t_{F_1}}^{t_{F_2}} = \delta\theta \cdot R^{CFOV}_{Nav\,x}(t)\Big|_{t_{F_1}}^{t_{F_2}} \quad (12)$$

Now $\delta\psi$ and $\delta\theta$ may be calculated as follows:
$$\delta\psi = -\frac{\left(R^{CFOV}_{Nav\,y}(t) - \left|\vec{R}^{CFOV}_{Nav}(t)\right| \cdot u_y(t)\right)\Big|_{t_{F_1}}^{t_{F_2}}}{R^{CFOV}_{Nav\,x}(t)\Big|_{t_{F_1}}^{t_{F_2}}}, \qquad \delta\theta = \frac{\left(R^{CFOV}_{Nav\,z}(t) - \left|\vec{R}^{CFOV}_{Nav}(t)\right| \cdot u_z(t)\right)\Big|_{t_{F_1}}^{t_{F_2}}}{R^{CFOV}_{Nav\,x}(t)\Big|_{t_{F_1}}^{t_{F_2}}} \quad (13)$$

Notice that the attitude errors may be calculated for several images and a sanity test may be performed. In one example, an average value of the attitude errors may be used to diminish the contribution of the noise. In order to apply the aforementioned corrections at the interception point, the time $t_I$ is to be calculated.
G. Detection of the interception time $t_I$.

At the interception point the relative distance is minimal and the scalar product of the relative velocity vector and the relative position vector is zero. Thus, the value of $\left|\vec{R}^{CFOV}_{Nav}(t)\right|$ is minimal and the value of $\vec{R}^{CFOV}_{Nav}(t) \cdot \vec{V}^{CFOV}_{Nav}(t)$ changes its sign at the time $t_I$.

The time of interception may be determined based on the above conditions.
H. Calculation of the FPA distance component errors at the virtual interception time $t_I$.

Recall that the image for the distance correction is taken at the time $t_F$. Applying the equation (10) for the times $t_F$ and $t_I$ gives:
$$\delta R^{CFOV}_{Nav\,Y}(t_I) = \delta R^{CFOV}_{Nav\,Y}(t_F) - \delta\psi \cdot \left(R^{CFOV}_{Nav\,X}(t_I) - R^{CFOV}_{Nav\,X}(t_F)\right)$$
$$\delta R^{CFOV}_{Nav\,Z}(t_I) = \delta R^{CFOV}_{Nav\,Z}(t_F) + \delta\theta \cdot \left(R^{CFOV}_{Nav\,X}(t_I) - R^{CFOV}_{Nav\,X}(t_F)\right) \quad (14)$$

Substitution of (4) in (14) gives:
$$\delta R^{CFOV}_{Nav\,Y}(t_I) = R^{CFOV}_{Nav\,Y}(t_F) - \left|\vec{R}^{CFOV}_{Nav}(t_F)\right| \cdot u_Y(t_F) - \delta\psi \cdot \left(R^{CFOV}_{Nav\,X}(t_I) - R^{CFOV}_{Nav\,X}(t_F)\right)$$
$$\delta R^{CFOV}_{Nav\,Z}(t_I) = R^{CFOV}_{Nav\,Z}(t_F) - \left|\vec{R}^{CFOV}_{Nav}(t_F)\right| \cdot u_Z(t_F) + \delta\theta \cdot \left(R^{CFOV}_{Nav\,X}(t_I) - R^{CFOV}_{Nav\,X}(t_F)\right) \quad (15)$$

I. Calculation of the corrected FPA distance components at the interception time $t_I$.

From (15) the corrected FPA distance components $Y_{cor}$ and $Z_{cor}$ are calculated:
$$Y_{cor}(t_I) = R^{CFOV}_{Nav\,Y}(t_I) - \delta R^{CFOV}_{Nav\,Y}(t_I)$$
$$Z_{cor}(t_I) = R^{CFOV}_{Nav\,Z}(t_I) - \delta R^{CFOV}_{Nav\,Z}(t_I) \quad (16)$$

[Drawing sheets: FIG. 1a and FIG. 1b — system configurations with platform A and platform B (camera 15, INS, GNSS, telemetry units, computers, control station 110, navigation error processing unit 17); FIG. 2 — modules of unit 17 (camera DCM calculator 23, CLOS determination module 25, relative position calculator 27, relative position transformation module 29, rectified distance calculator 33, navigation errors calculator 35, miss-distance event determination module 37, miss-distance correction module 39, attitude error calculator 41); FIG. 3a and FIG. 3b — flowcharts of the navigation errors calculation process (blocks 301–325); FIG. 4 — flowchart of the miss-distance correction process (blocks 401–407); FIG. 5 — flowchart of the attitude error determination process (blocks 501–507).]

Claims (35)

CLAIMS:
1. A computer implemented method of determining miss-distance between an intercepting platform and a target platform; the intercepting platform is launched towards the target platform and comprises at least one positioning device; the method comprising using at least one processor for: i. executing a navigation errors calculation process, comprising: for at least one image captured at a time t_Fi during flight of the intercepting platform towards the target platform, by a camera fixed to the target platform: calculating relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transforming the relative position to camera field of view (CFOV) reference frame; calculating rectified relative position values in camera plane, based on camera data; and calculating relative position error values in the camera plane, based on a difference between the relative position and rectified relative position values; ii. executing a navigation miss-distance correction process, comprising: determining relative position of the intercepting platform and target platform at miss-distance time; and correcting the relative position using the relative position error values.
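The navigation errors calculation process of claim 1 can be sketched as follows. It is a minimal illustration, not the claimed implementation: the DCM used for the CFOV transformation, the measured LOS unit vector `u_los`, and the choice to form the rectified components as that unit vector scaled by the navigation-derived range are all assumptions introduced here for the sketch.

```python
import math

def relative_position(pos_intercepting, pos_target):
    """Relative position vector from the navigation data of both platforms."""
    return [a - b for a, b in zip(pos_intercepting, pos_target)]

def to_cfov(dcm, vec):
    """Rotate a vector into the camera-field-of-view frame via a 3x3 DCM."""
    return [sum(dcm[i][j] * vec[j] for j in range(3)) for i in range(3)]

def rectified_yz(r_cfov, u_los):
    """Rectified Y/Z components in the camera plane: here assumed to be the
    image-derived LOS unit vector scaled by the navigation-derived range."""
    rng = math.sqrt(sum(c * c for c in r_cfov))
    return u_los[1] * rng, u_los[2] * rng

def position_error_yz(r_cfov, u_los):
    """Relative position error values: navigation-derived minus rectified."""
    y_rect, z_rect = rectified_yz(r_cfov, u_los)
    return r_cfov[1] - y_rect, r_cfov[2] - z_rect
```

With an identity DCM and an LOS measurement that agrees with the navigation solution, the error values come out (near) zero, as expected.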
2. The method according to claim 1, wherein the camera data used for calculating the rectified relative position values includes the relative position of the platform in camera field of view reference frame and camera line of sight (CLOS) unit vector components.
3. The method according to any one of the preceding claims, wherein the navigation errors calculation process further comprises: for each captured image at time t_Fi: determining navigation data of the intercepting platform; determining attitude of the camera fixed to the target platform, in the reference frame of the target platform; applying image processing on the captured image for determining CLOS; and transforming relative position to the CFOV reference frame.
4. The method according to any one of the preceding claims, wherein the navigation errors calculation process further comprises: calculating respective relative position error values for each one of multiple images captured during flight of the intercepting platform towards the target platform, wherein each respective relative position error value is associated with a certain captured image.
5. The method according to claim 4 further comprising: calculating final position error values based on the respective relative position error values.
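Claim 5 leaves the estimator for the final error values open; a plain per-axis mean over the per-image error pairs is one simple possibility, sketched here as an assumption:

```python
def final_error_values(errors):
    """Combine per-image (eps_y, eps_z) error pairs into final values.

    A plain mean per axis is used; the patent does not specify the
    estimator, so this is only one illustrative choice.
    """
    n = len(errors)
    eps_y = sum(e[0] for e in errors) / n
    eps_z = sum(e[1] for e in errors) / n
    return eps_y, eps_z
```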
6. The method according to any one of claims 1 and 2 further comprising an attitude errors determining process, comprising: selecting a first image at time t_F1 and a second image at time t_F2; for each one of the first image and second image: determining relative position in the CFOV reference frame; calculating rectified relative position values in Y and Z axes of the camera plane; and using LOS components u_Y and u_Z at t_F1 and t_F2 to compute angle errors.
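One way to read the two-frame attitude error determination of claim 6 is a small-angle model in which a fixed attitude bias theta displaces a camera-plane component by theta times the range, so errors observed at two different ranges separate the angular bias from a constant position bias. This decomposition is an assumption; the claim only states that LOS components u_Y and u_Z at t_F1 and t_F2 are used to compute angle errors.

```python
def angle_error_from_two_frames(err1, rng1, err2, rng2):
    """Solve err_i = b + theta * rng_i for (theta, b) from two frames.

    Small-angle model (an assumption, not the patented method): an
    attitude bias theta shifts the rectified camera-plane component by
    theta * range, while b is a range-independent position bias.
    """
    theta = (err2 - err1) / (rng2 - rng1)  # angular bias [rad]
    b = err1 - theta * rng1                # constant position bias
    return theta, b
```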
7. The method according to any one of the preceding claims further comprising synchronizing clocks onboard the intercepting platform and the target platform.
8. The method according to any one of the preceding claims further comprising determining whether one or more termination conditions have been fulfilled, and terminating the navigation errors calculation process if they have.
9. The method according to any one of the preceding claims further comprising identifying miss-distance time.
10. The method according to any one of the preceding claims, wherein calculating attitude of the camera and calculating CLOS are executed on any one of: the intercepting platform; and a control station.
11. The method according to any one of the preceding claims, wherein attitude is measured by DCM in ESEF.
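Claim 11 measures attitude as a direction cosine matrix (DCM). A 3-2-1 (yaw-pitch-roll) DCM builder is sketched below; the rotation order and the axes convention of the ESEF frame are assumptions for illustration only.

```python
import math

def dcm_from_euler(yaw, pitch, roll):
    """3-2-1 (yaw-pitch-roll) direction cosine matrix, reference-to-body.

    The Euler sequence and frame convention are illustrative assumptions;
    the patent only states that attitude is measured by a DCM in ESEF.
    """
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cp * cy,                 cp * sy,                -sp],
        [sr * sp * cy - cr * sy,  sr * sp * sy + cr * cy,  sr * cp],
        [cr * sp * cy + sr * sy,  cr * sp * sy - sr * cy,  cr * cp],
    ]
```

A DCM built this way is orthonormal, so transposing it inverts the rotation.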
12. The method according to any one of the preceding claims further comprising capturing the at least one image using the camera fixed to the target platform.
13. The method according to claim 1 further comprising: executing a second navigation errors calculation process, comprising: for at least one image captured at a time t_Fi during flight of the intercepting platform towards the target platform, by a second camera fixed to the intercepting platform: transforming the relative position to second camera field of view (CFOV) reference frame; calculating second rectified relative position values in second camera plane, based on second camera data; and calculating second relative position error values in the camera plane, based on a difference between the second relative position and second rectified relative position values; correcting relative position using the second relative position error values instead of or in addition to the relative position error values.
14. The method according to claim 13, wherein the second camera data used for calculating the second rectified relative position values includes the relative position of the platform in second camera field of view reference frame and second camera line of sight (CLOS) unit vector components.
15. The method according to any one of claims 13 to 14, wherein the navigation errors calculation process further comprises: for each captured image at time t_Fi by the second camera: determining second navigation data of the intercepting platform; calculating second attitude of the second camera in the reference frame of the intercepting platform; applying image processing on the captured image for determining second CLOS; and transforming second relative position to the second camera CFOV reference frame.
16. The method according to any one of claims 12 to 15, wherein the navigation errors calculation process further comprises: calculating second respective relative position error values for each one of multiple images captured during flight of the intercepting platform towards the target platform, wherein each second respective relative position error value is associated with a certain captured image.
17. The method according to claim 16 further comprising: calculating second final position error values based on the second respective relative position error values.
18. The method according to any one of claims 12 and 13 further comprising a second attitude errors determining process, comprising: selecting a first image at time t_F1 and a second image at time t_F2 captured by the second camera; for each one of the first image and second image: determining relative position in the CFOV reference frame; calculating rectified relative position values in Y and Z axes of the camera plane; and using LOS components u_Y and u_Z at t_F1 and t_F2 to compute angle errors.
19. A computerized device configured for determining miss-distance between an intercepting platform and a target platform; the intercepting platform is capable of being launched towards the target platform and comprises at least one positioning device; the computerized device comprising at least one processor configured to: i. execute a navigation errors calculation process, comprising: for at least one image captured at a time t_Fi during flight of the intercepting platform towards the target platform, by a camera fixed to the target platform: calculate relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transform the relative position to camera field of view (CFOV) reference frame; calculate rectified relative position values in a camera plane of the camera, based on camera data; and calculate relative position error values in the camera plane, based on a difference between the relative position and rectified relative position values; ii. execute navigation miss-distance correction process, comprising: determine relative position of the intercepting platform and target platform at miss-distance time t_I; and correct the relative position using the relative position error values.
20. The computerized device according to claim 19, wherein the camera data used for calculating the rectified relative position values includes the relative position of the platform in camera field of view reference frame and camera line of sight (CLOS) unit vector components.
21. The computerized device according to any one of claims 19 to 20, wherein the at least one processor is further configured, for executing the navigation errors calculation process, to: for each captured image at time t_Fi: determine navigation data of the intercepting platform; calculate attitude of the camera, in the reference frame of the intercepting platform; operate an image processor to process the captured image for determining CLOS; and transform relative position to the CFOV reference frame.
22. The computerized device according to any one of claims 19 to 21, wherein the at least one processor is further configured, for executing the navigation errors calculation process, to: receive multiple images captured during flight of the intercepting platform toward the target platform, wherein each image is associated with a respective time t_Fi; and calculate respective relative position error values for each one of the multiple images.
23. The computerized device according to claim 22, wherein the at least one processor is further configured to: calculate final position error values based on the respective relative position error values.
24. The computerized device according to any one of claims 19 to 20, wherein the at least one processor is further configured to execute an attitude errors determining process, comprising: selecting a first image at time t_F1 and a second image at time t_F2; for each one of the first image and second image: determining relative position in the CFOV reference frame; calculating rectified relative position values in Y and Z axes of the camera plane; and using LOS components u_Y and u_Z at t_F1 and t_F2 to compute angle errors.
25. The computerized device according to claim 19, wherein the at least one processor is further configured to synchronize clocks onboard the intercepting platform and the target platform.
26. The computerized device according to any one of claims 19 to 25, wherein the at least one processor is further configured to: determine whether one or more termination conditions have been fulfilled, and terminate the navigation errors calculation process if they have.
27. The computerized device according to any one of claims 19 to 26, wherein the at least one processor is further configured to identify miss-distance time.
28. The computerized device according to any one of claims 19 to 27 being installed on the intercepting platform.
29. The computerized device according to any one of claims 19 to 28 being installed and operating on a control station configured for monitoring and controlling the operation of the intercepting platform.
30. The computerized device according to any one of claims 19 to 29 installed on the target platform.
31. A system for determining miss-distance between an intercepting platform and a target platform; the system comprising a control station, an intercepting platform and a target platform; the intercepting platform is capable of being launched towards the target platform and comprises at least one positioning device; the system further comprises a camera fixed to the intercepting platform or the target platform and a computerized device configured to: i. execute a navigation errors calculation process, comprising: operate the camera for capturing at least one image while the intercepting platform is flying in the direction of the target platform; wherein each image is associated with a respective time t_Fi; the computerized device is configured, for each captured image at time t_Fi, to: utilize the at least one positioning device for determining navigation data of the intercepting platform; calculate attitude of the camera, in a reference frame of the intercepting platform; apply image processing on the captured image for determining CLOS; transform relative position to CFOV reference frame; calculate relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transform the relative position to camera field of view (CFOV) reference frame; calculate rectified relative position values in camera plane, based on camera data; and calculate relative position error values in the camera plane, based on a difference between the relative position and rectified relative position values; ii. execute a navigation miss-distance correction process, comprising: determine relative position of the intercepting platform and target platform at miss-distance time t_I; and correct the relative position at t_I using the relative position error values.
32. A system for determining miss-distance between an intercepting platform and a target platform; the system comprising an intercepting platform and a target platform, the intercepting platform is capable of being launched towards the target platform and comprises at least one positioning device and a first camera, the target platform comprises a second camera; the system further comprises at least one computerized device configured to: execute a first navigation errors calculation process based on images captured by the first camera giving rise to first relative position error values, comprising: for at least one image captured by the first camera at a time t_Fi during flight of the intercepting platform towards the target platform: calculate relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transform the relative position to a first camera field of view (CFOV) reference frame of the first camera; calculate rectified relative position values in a first camera plane of the first camera, based on first camera data; and calculate the first relative position error values in the first camera plane, based on a difference between the relative position and rectified relative position values; execute a second navigation errors calculation process based on images captured by the second camera giving rise to second relative position error values, comprising: for at least one image captured by the second camera at a time t_Fi during flight of the intercepting platform towards the target platform: calculate relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transform the relative position to a second camera field of view (CFOV) reference frame of the second camera; calculate rectified relative position values in a second camera plane of the second camera, based on second camera data; and calculate the second relative position error values in the second camera plane, based on a difference between the relative position and rectified relative position values; determine relative position of the intercepting platform and target platform at miss-distance time t_I; and correct the relative position at t_I using the first relative position error values and the second relative position error values.
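Claim 32 corrects the relative position using both cameras' error values but does not specify a fusion rule; a weighted average is one plausible option, sketched here with illustrative names.

```python
def fuse_error_values(eps_first, eps_second, w_first=0.5):
    """Combine (eps_y, eps_z) error values from the first and second cameras.

    The claim does not specify how the two sets of error values are
    combined; a weighted average is shown as one illustrative option.
    """
    return tuple(w_first * a + (1.0 - w_first) * b
                 for a, b in zip(eps_first, eps_second))
```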
33. The system of claim 32, wherein the first camera data includes the relative position of the platform in first camera field of view reference frame, and first camera line of sight (CLOS) unit vector components; and the second camera data includes the relative position of the platform in second camera field of view reference frame, and second camera line of sight (CLOS) unit vector components.
34. A non-transitory program storage device readable by a computer, tangibly embodying a program of instructions executable by the computer to perform a method of determining miss-distance between an intercepting platform and a target platform; the method comprising: i. executing a navigation errors calculation process, comprising: for at least one image captured by a camera fixed to the intercepting platform or the target platform at a time t_Fi during flight of the intercepting platform towards the target platform: calculating relative position of the intercepting platform and the target platform, based on navigation data of the intercepting platform determined by the at least one positioning device and navigation data of the target platform; transforming the relative position to camera field of view (CFOV) reference frame; calculating rectified relative position values in camera plane, based on camera data; and calculating relative position error values in the camera plane, based on a difference between the relative position and rectified relative position values; ii. executing a navigation miss-distance correction process, comprising: determining relative position of the intercepting platform and target platform at miss-distance time t_I; and correcting the relative position at t_I using the relative position error values.
35. The non-transitory program storage device of claim 34, wherein the method further comprises: executing a second navigation errors calculation process, comprising: for at least one image captured at a time t_Fi during flight of the intercepting platform towards the target platform, by a second camera fixed to the intercepting platform: transforming the relative position to second camera field of view (CFOV) reference frame; calculating second rectified relative position values in second camera plane, based on second camera data; and calculating second relative position error values in the camera plane, based on a difference between the second relative position and second rectified relative position values; wherein the navigation miss-distance correction process further comprises: correcting relative position using the second relative position error values instead of or in addition to the relative position error values.
IL282979A 2021-05-05 2021-05-05 Method and system of determining miss-distance IL282979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
IL282979A IL282979B (en) 2021-05-05 2021-05-05 Method and system of determining miss-distance


Publications (2)

Publication Number Publication Date
IL282979A 2021-06-30
IL282979B IL282979B (en) 2022-06-01

Family

ID=76820101


