CN114964248A - Target position calculation and indication method for motion trail out of view field

Target position calculation and indication method for motion trail out of view field

Info

Publication number
CN114964248A
CN114964248A
Authority
CN
China
Prior art keywords
target
data
carrier
calculating
Prior art date
Legal status
Pending
Application number
CN202210361298.3A
Other languages
Chinese (zh)
Inventor
高强
刘兆沛
冯笑宇
陶忠
安学智
陆红强
范浩硕
吴志军
张璟玥
王宏浩
Current Assignee
Xian institute of Applied Optics
Original Assignee
Xian institute of Applied Optics
Priority date
Filing date
Publication date
Application filed by Xian institute of Applied Optics
Priority to CN202210361298.3A
Publication of CN114964248A
Status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the field of airborne photoelectric reconnaissance and situation awareness, and discloses a method for calculating and indicating the position of a target whose motion trajectory has left the field of view, comprising the following steps: generating a three-dimensional geographic scene; geolocating the target within the field of view; geolocating the line of sight of the photoelectric system; configuring the virtual camera pose; and generating an indication arrow. The method provides continuous indication of the out-of-view target position: a virtual arrow pointing toward the out-of-view target can be superimposed on the photoelectric image, thereby providing the ability to quickly search for targets whose position is known or that have moved out of the field of view.

Description

Target position calculation and indication method for motion trail out of view field
Technical Field
The invention belongs to the field of airborne photoelectric reconnaissance and situation awareness, and relates to a method for calculating and indicating the position of a target whose motion trajectory has left the field of view.
Background
In missions such as target detection, search and tracking, military photoelectric systems often encounter the situation in which the motion trajectory of a tracked target leaves the photoelectric detection field of view. The reasons a target leaves the field of view generally include several factors: translation and attitude change of the carrier, pitch and azimuth motion of the photoelectric system, field-of-view switching, switching between multiple targets, and the randomness of the moving target's motion.
Once the tracked target leaves the field of view, the pilot must operate manually and search the relevant area again to re-detect the target. Under most mission conditions this re-search is time consuming, labor intensive and inefficient, and the relevant target may never be found again.
For a target that has moved beyond the photoelectric field of view, correctly indicating its relative position is crucial for searching for it quickly.
Disclosure of Invention
(I) Objects of the invention
The invention aims to solve the problems that a target which has left the field of view is difficult and inefficient to search for again. To this end it provides a method for calculating and indicating the position of an out-of-view target. On the photoelectric image, the relative position of the target beyond the field of view is indicated by an arrow graphic symbol superimposed on the image; the photoelectric operator can operate the control handle and the photoelectric system according to the indicated relative position and thereby quickly re-acquire the target.
(II) Technical scheme
In order to solve the above technical problem, the invention provides a method for calculating and indicating the position of a target whose motion trajectory has left the field of view, which mainly comprises the following steps. First, a three-dimensional geographic scene is generated from the position and attitude data of the carrier and the terrain data; this includes acquiring real-time carrier position and attitude sensor data and photoelectric line-of-sight attitude data, calculating the spatial position transformation matrix and spatial attitude transformation matrix of the carrier, and completing the three-dimensional scene reconstruction from the terrain data. Second, the line of sight of the photoelectric system is geolocated; at the same time, the target within the photoelectric image field of view is geolocated. Then the pose of the virtual camera is configured. Finally, the indication arrow is generated.
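For orientation, the overall flow can be summarized in a short sketch. The following Python outline is illustrative only; the function names are placeholders for the steps S1 to S5 described below, not an implementation taken from the patent.

```python
def indicate_out_of_view_target(carrier_pose, los_attitude, terrain, target_pixel):
    """Illustrative outline of the five steps (S1-S5); all helper names are hypothetical."""
    scene = build_3d_geographic_scene(carrier_pose, terrain)              # S1: 3D geographic scene
    p_los = geolocate_line_of_sight(scene, carrier_pose, los_attitude)    # S2: line-of-sight geolocation
    p_target = geolocate_target(scene, target_pixel, los_attitude)        # S3: target geolocation
    svs_image = render_from_virtual_camera(scene, carrier_pose)           # S4: virtual camera pose/FOV
    return generate_indication_arrow(svs_image, p_target, p_los)          # S5: arrow toward the target
```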
(III) Advantageous effects
The method of the invention can be implemented as a software function and integrated into an existing photoelectric system. It helps pilots quickly search for out-of-view targets while executing various target detection tasks, improves target engagement efficiency and helps shorten the OODA loop time.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Detailed Description
In order to make the objects, contents, and advantages of the present invention more apparent, the following detailed description of the present invention will be made in conjunction with the accompanying drawings and examples.
As shown in FIG. 1, the method for calculating and indicating the position of an out-of-view target according to the embodiment of the present invention includes the following steps: first, a three-dimensional geographic scene is generated from the position and attitude data of the carrier and the terrain data; second, the intersection of the line of sight with the three-dimensional terrain is calculated from the carrier position and attitude data and the line-of-sight data, i.e., the line of sight is geolocated; third, the geographic position of the target is calculated from the target image and the line-of-sight attitude data, i.e., the target is geolocated; fourth, the position and attitude parameters of the virtual camera are configured according to the carrier pose parameters and the photoelectric imaging parameters; fifth, the indication arrow is generated.
Each step in the above process is described in detail below:
S1: Generating a three-dimensional geographic scene based on the position and attitude data and terrain data of the carrier
The position and attitude parameters of the carrier comprise position parameters and attitude parameters. The position parameters comprise longitude, latitude and altitude, denoted L, B and H respectively; the position data are referenced to a geographic coordinate system, and longitude and latitude are in degrees. The attitude parameters comprise the heading angle, pitch angle and roll angle, denoted a, p and r respectively, in degrees, referenced to the east-north-up coordinate system. The attitude data of the photoelectric line of sight comprise the pitch angle and azimuth angle of the line of sight, denoted a_los and p_los respectively; these angles are referenced to the carrier coordinate system.
Eight data items in total are acquired: the carrier position (L, B, H), the carrier attitude (a, p, r) and the line-of-sight attitude (a_los, p_los); these are the input to the subsequent dynamic continuous synthetic vision image generation step.
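A minimal container for these eight inputs might look as follows (an illustrative sketch, not part of the patent; field names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class CarrierState:
    """The eight per-frame inputs: carrier position, carrier attitude, line-of-sight attitude."""
    L: float      # longitude, degrees
    B: float      # latitude, degrees
    H: float      # altitude, metres
    a: float      # heading angle, degrees
    p: float      # pitch angle, degrees
    r: float      # roll angle, degrees
    a_los: float  # line-of-sight angle a_los, degrees (relative to the carrier)
    p_los: float  # line-of-sight angle p_los, degrees (relative to the carrier)
```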
The spatial position transformation matrix is denoted M_pos and is calculated as follows:
(equation image: 4x4 spatial position transformation matrix M_pos assembled from the basis vectors n, u, v of the local transformation coordinate system and the carrier position vp in geocentric coordinates)
where n, u and v are the basis vectors of the local transformation coordinate system; nx, ny, nz are the x, y, z components of the vector n; ux, uy, uz are the x, y, z components of the vector u; and vx, vy, vz are the x, y, z components of the vector v. They are calculated with the following formula:
n = (cos L cos B, sin L cos B, sin B)
vpx is the x-component of the carrier position vp in geocentric coordinates, vpy is the y-component of the carrier position vp in geocentric coordinates, vpz is the z-component of the carrier position vp in geocentric coordinates, and the calculation is given by the following formula:
vpx = (N + H) cos B cos L
vpy = (N + H) cos B sin L
vpz = [N(1 - e²) + H] sin B
where L and B are the longitude and latitude in the current frame of carrier position data acquired above, N is the radius of curvature in the prime vertical, and e² is the squared first eccentricity; they are calculated with the following formulas:
N = a / √(1 - e² sin²B)
e² = (a² - c²) / a²
where a and c are the semi-major axis and semi-minor axis of the Earth ellipsoid model:
a = 6378137.0 m
c = 6356752.3142 m.
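The position formulas above can be combined into a short sketch. The following Python illustration is not from the patent: the ellipsoid constants and the geodetic-to-geocentric conversion come from the text, while the row layout of M_pos and the choice of the u and v (east/north) basis vectors are assumptions, since the patent does not spell them out.

```python
import numpy as np

A = 6378137.0                 # semi-major axis a (m)
C = 6356752.3142              # semi-minor axis c (m)
E2 = (A**2 - C**2) / A**2     # squared first eccentricity e²

def geodetic_to_ecef(L_deg, B_deg, H):
    """Carrier position (longitude L, latitude B in degrees, altitude H in m) -> geocentric vp."""
    L, B = np.radians(L_deg), np.radians(B_deg)
    N = A / np.sqrt(1.0 - E2 * np.sin(B)**2)       # prime-vertical radius of curvature
    vpx = (N + H) * np.cos(B) * np.cos(L)
    vpy = (N + H) * np.cos(B) * np.sin(L)
    vpz = (N * (1.0 - E2) + H) * np.sin(B)
    return np.array([vpx, vpy, vpz])

def position_matrix(L_deg, B_deg, H):
    """Assumed 4x4 M_pos: rows hold the local east (u), north (v), up (n) axes plus translation vp."""
    L, B = np.radians(L_deg), np.radians(B_deg)
    n = np.array([np.cos(L) * np.cos(B), np.sin(L) * np.cos(B), np.sin(B)])  # up, as in the text
    u = np.array([-np.sin(L), np.cos(L), 0.0])                               # assumed east axis
    v = np.cross(n, u)                                                       # assumed north axis
    vp = geodetic_to_ecef(L_deg, B_deg, H)
    M_pos = np.eye(4)
    M_pos[0, :3], M_pos[1, :3], M_pos[2, :3] = u, v, n
    M_pos[3, :3] = vp                                                        # row-vector convention
    return M_pos
```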
the spatial attitude transformation matrix is recorded as M atti
Attitude matrix M atti Firstly, constructing a quaternion according to attitude data of a carrier by adopting the following calculation process, and recording the quaternion as q:
Figure BDA0003583866980000041
wherein a, p and r are respectively a course angle, a pitch angle and a roll angle of the carrier acquired in the step;
Figure BDA0003583866980000042
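The quaternion and attitude-matrix formulas themselves are contained in the equation images and are not reproduced here. The following sketch shows one common convention (heading-pitch-roll applied as a Z-Y-X Euler sequence); this convention is an assumption, not necessarily the patent's exact formula.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def attitude_matrix(a_deg, p_deg, r_deg):
    """Assumed M_atti: heading a, pitch p, roll r applied as a yaw-pitch-roll (Z-Y-X) sequence."""
    rot = Rotation.from_euler("ZYX", [a_deg, p_deg, r_deg], degrees=True)
    q = rot.as_quat()                 # quaternion q (x, y, z, w); the patent constructs q first
    M_atti = np.eye(4)
    M_atti[:3, :3] = rot.as_matrix()  # 4x4 homogeneous attitude matrix
    return M_atti, q
```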
generating three-dimensional static geographic SCENE SCENE of geographic area based on terrain data of geographic area where aircraft is located stategraph Wherein the terrain data comprises elevation data and satellite texture image data; m calculated according to acquired position and attitude data of carrier pos 、M atti Line of sight attitude M los Re-calculating to obtain a composite conversion matrix M composite Wherein M is los The sight line space transformation matrix constructed for the sight line attitude data has the following calculation formula
M composite =M los *M atti *M pos
From a composite matrix M composite Driving generated three-dimensional static geographic SCENE SCENE stategraph That is, a dynamic continuous composite visual image can be generated, in which the image of a certain frame is marked as f svs (x,y,z,t)
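A corresponding sketch of M_los and the composite transform, under the same assumed conventions (the azimuth/pitch rotation order for the line of sight is an assumption, and the scene-driving call is left implicit):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def line_of_sight_matrix(a_los_deg, p_los_deg):
    """Assumed M_los: line-of-sight azimuth and pitch relative to the carrier (Z then Y rotation)."""
    M_los = np.eye(4)
    M_los[:3, :3] = Rotation.from_euler("ZY", [a_los_deg, p_los_deg], degrees=True).as_matrix()
    return M_los

def composite_matrix(M_los, M_atti, M_pos):
    """M_composite = M_los * M_atti * M_pos; drives SCENE_stategraph to render each frame f_svs."""
    return M_los @ M_atti @ M_pos
```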
S2: line-of-sight geolocation
The synthetic vision image f_svs(x, y, t) output in step S1 is taken as input. Within the image content, a target is selected and its pixel position value is obtained, denoted P_tar(x_tar, y_tar); alternatively, the pixel position of any object is computed by another program (for example by an intelligent target recognition or retrieval algorithm). Given the input pixel P_tar(x_tar, y_tar), together with the model-view transformation matrix, perspective projection matrix and viewport transformation matrix used to generate the synthetic vision image and the terrain data of the area, the geospatial position corresponding to that pixel can be rapidly calculated:
2.1 Obtain the local-to-world transformation matrix of the virtual camera in the synthetic vision system, denoted M_camera_l2w; this matrix is a known fixed value;
2.2 Obtain the view (observation) matrix of the virtual camera in the synthetic vision system, denoted M_camera_view; this matrix is a known fixed value;
2.3 Obtain the projection transformation matrix of the virtual camera in the synthetic vision system, denoted M_camera_projection; this matrix is a known fixed value. Obtain the far and near clipping planes of the projection transformation matrix, denoted (z_far, z_near), where z_far is the Z value of the far clipping plane and z_near is the Z value of the near clipping plane;
2.4 Obtain the viewport transformation matrix of the virtual camera in the synthetic vision system, denoted M_camera_viewport; this matrix is a known fixed value;
2.5 Convert the pixel position P_tar(x_tar, y_tar) into a normalized position in the virtual camera system, denoted P_normalize_tar(x_tar, y_tar);
2.6 Form the composite transformation matrix and compute M_composit = (M_camera_l2w * M_camera_view * M_camera_projection * M_camera_viewport)^(-1);
2.7 From the selected pixel position, set the start point P_normalize_tar_start and end point P_normalize_tar_end in the normalized space of the virtual camera, and calculate the corresponding start point P_geocentric_tar_start and end point P_geocentric_tar_end in geocentric space:
P_geocentric_tar_start = P_normalize_tar_start(x_tar, y_tar, z_near) * M_composit
P_geocentric_tar_end = P_normalize_tar_end(x_tar, y_tar, z_far) * M_composit
2.8 Take the line segment with endpoints P_geocentric_tar_start and P_geocentric_tar_end and run an iterative collision-detection algorithm between the segment and the terrain; the intersection point of the segment with the terrain surface is the final geographic position of the target.
Setting the pixel position P_tar(x_tar, y_tar) to the center of the line-of-sight cross and applying the above procedure gives the geographic position of the line of sight, denoted P_los(x_los, y_los, z_los).
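A minimal sketch of steps 2.1 to 2.8 follows, assuming 4x4 row-vector matrices and a homogeneous divide; the matrix names follow the text, while the collision-detection routine terrain_hit is a hypothetical placeholder supplied by the caller.

```python
import numpy as np

def pixel_to_geographic(p_tar, z_near, z_far, M_l2w, M_view, M_proj, M_viewport, terrain_hit):
    """Sketch of steps 2.1-2.8 (row-vector convention assumed).
    p_tar       : normalized pixel position P_normalize_tar = (x_tar, y_tar)
    terrain_hit : user-supplied iterative collision-detection routine (step 2.8)."""
    # 2.6 composite transformation matrix
    M_composit = np.linalg.inv(M_l2w @ M_view @ M_proj @ M_viewport)
    x_tar, y_tar = p_tar
    # 2.7 start/end points of the view ray in geocentric space (homogeneous coordinates)
    p_start = np.array([x_tar, y_tar, z_near, 1.0]) @ M_composit
    p_end   = np.array([x_tar, y_tar, z_far,  1.0]) @ M_composit
    p_start, p_end = p_start[:3] / p_start[3], p_end[:3] / p_end[3]
    # 2.8 intersection of the segment with the terrain surface = geographic position
    return terrain_hit(p_start, p_end)
```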
S3: target geolocation
Following the procedure of S2, the target center pixel position is substituted for P_tar(x_tar, y_tar), i.e., the pixel position is set to the target center; the geographic position of the target is then calculated by the same procedure and denoted P_target(x_target, y_target, z_target).
A straight line is then generated with the target geographic position point and the line-of-sight geographic position point obtained in steps S3 and S2 as its start point and end point respectively, and is denoted L(P_target, P_los).
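Continuing the sketch above (same assumptions, all names hypothetical), the two geolocation calls of S2 and S3 and the resulting line might read:

```python
# Hypothetical usage of the sketch above: geolocate the line-of-sight cross and the target pixel,
# then form the line L(P_target, P_los) used later by the indication arrow.
P_los = pixel_to_geographic(cross_center_pixel, z_near, z_far,
                            M_l2w, M_view, M_proj, M_viewport, terrain_hit)
P_target = pixel_to_geographic(target_center_pixel, z_near, z_far,
                               M_l2w, M_view, M_proj, M_viewport, terrain_hit)
L_target_los = (P_target, P_los)   # start point: target position, end point: line-of-sight position
```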
S4: virtual camera configuration
The viewpoint of the virtual camera is configured from the real-time position data of the carrier aircraft acquired in stage S1, the attitude of the virtual camera from the attitude parameters of the carrier aircraft, and the field of view of the virtual camera from the field of view of the airborne photoelectric system. The synthetic vision image generated in step S1 is then corrected accordingly and denoted f_svs_correct(x, y, z, t).
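As a rough illustration (the camera API and parameter names below are hypothetical, not from the patent), the S4 configuration amounts to matching the virtual camera to the carrier pose and the photoelectric sensor's field of view:

```python
# Illustrative S4 configuration; camera object and method names are hypothetical.
def configure_virtual_camera(camera, carrier_pos, carrier_atti, eo_fov_deg):
    camera.set_viewpoint(*carrier_pos)       # longitude L, latitude B, altitude H
    camera.set_attitude(*carrier_atti)       # heading a, pitch p, roll r
    camera.set_field_of_view(eo_fov_deg)     # matched to the photoelectric sensor's field of view
    return camera.render_frame()             # corrected synthetic vision image f_svs_correct
```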
S5: indicating arrow generation
Within the synthetic vision image, the straight line L(P_target, P_los) generated in step S3 appears as a partial segment. Taking the start point of L(P_target, P_los) as the start point and the intersection of the line with the boundary of f_svs_correct(x, y, z, t) as the end point, the indication arrow is plotted, completing the generation and drawing of the target indication arrow.
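As a rough illustration of S5 (not from the patent), the arrow endpoint can be obtained by clipping the image-plane segment from the line-of-sight cross toward the target against the image rectangle; the pixel-space inputs are assumed to have been projected into the image beforehand, and the sight cross is assumed to lie inside the image.

```python
import numpy as np

def arrow_in_image(p_target_px, p_los_px, width, height):
    """Clip the image-plane segment P_los -> P_target against the image rectangle and
    return (tail, tip) pixel coordinates for the indication arrow."""
    d = np.asarray(p_target_px, float) - np.asarray(p_los_px, float)
    t_max = 1.0
    for axis, limit in ((0, width - 1), (1, height - 1)):
        if d[axis] > 0:
            t_max = min(t_max, (limit - p_los_px[axis]) / d[axis])
        elif d[axis] < 0:
            t_max = min(t_max, (0 - p_los_px[axis]) / d[axis])
    tip = np.asarray(p_los_px, float) + max(0.0, t_max) * d
    return np.asarray(p_los_px, float), tip   # arrow points from the sight cross toward the target
```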
In the above technical scheme, after the target geographic position has been located and calculated, the geographic position of the line-of-sight cross is computed in real time, the connecting line between the line-of-sight geographic position and the target geographic position is generated in real time, and the target position is continuously indicated in real time by drawing an indication arrow in the region where this target/line-of-sight connecting line intersects the area rendered by the calibrated virtual camera. The method combines results from the surveying-and-mapping and information-fusion fields to realize, in software, a new way of calculating and indicating the position of a target whose motion trajectory has left the field of view. It has strong engineering value for airborne avionics systems, can improve a helicopter's target reconnaissance and navigation-assistance capabilities, and has tactical significance worth further exploitation for improving the helicopter's battlefield survivability.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A method for calculating and indicating the position of a target with a motion trail out of a visual field is characterized by comprising the following steps:
S1: generating a three-dimensional geographic scene based on the position and attitude data of the carrier;
this process comprises acquiring real-time carrier pose sensor data and photoelectric line-of-sight attitude data, calculating the spatial position transformation matrix and the spatial attitude transformation matrix of the carrier, and completing the three-dimensional geographic scene reconstruction from the terrain data.
S2: geographic positioning of a sight line of the photoelectric system;
S3: geographic positioning of targets in the photoelectric image field of view;
S4: configuring a virtual camera pose;
S5: indicating arrow generation.
2. The method for calculating and indicating the target position of a motion trail out of the field of view according to claim 1, wherein in step S1 the position and attitude data of the carrier comprise position parameters and attitude parameters of the carrier; the position parameters comprise longitude, latitude and altitude, denoted L, B and H respectively, the position data being referenced to a geographic coordinate system with longitude and latitude in degrees; the attitude parameters comprise a heading angle, a pitch angle and a roll angle, denoted a, p and r respectively, in degrees and referenced to the east-north-up coordinate system; the attitude data of the photoelectric line of sight comprise a pitch angle and an azimuth angle of the line of sight, denoted a_los and p_los respectively, these angles being referenced to the carrier coordinate system.
3. The method for calculating and indicating the target position of the motion trajectory out of the field of view according to claim 2, wherein in step S1, the calculation process of the spatial position transformation matrix is:
the spatial position transformation matrix is denoted as M pos
Figure FDA0003583866970000011
Wherein, n, u, v is a base vector under a transformation coordinate system, nx, ny, nz are respectively x, y, z components of the vector n, ux, uy, uz are respectively x, y, z components of the vector u, vx, vy, vz are respectively x, y, z components of the vector v, and the calculation adopts the following formula:
n=(cosLcosB,sinLcosB,sinB)
vpx is the x-component of the carrier position vp in geocentric coordinates, vpy is the y-component of the carrier position vp in geocentric coordinates, vpz is the z-component of the carrier position vp in geocentric coordinates, and the calculation is given by the following formula:
vpx=(N+H)cosBcosL
vpy=(N+H)cosBsinL
vpz=[(N(1-e 2 )+H]sinB
wherein, L and B are respectively the longitude and latitude of each frame in the position data of the carrier acquired in the above steps, N is the radius of the prime and unitary circle, e 2 For the first eccentricity, the following calculation formulas are respectively adopted:
Figure FDA0003583866970000021
Figure FDA0003583866970000022
in the above formula, a and c are respectively the long radius and the short radius of the earth ellipsoid model,
a=6378137.0m
c=6356752.3142m。
4. The method for calculating and indicating the target position of a motion trail out of the field of view according to claim 3, wherein in step S1 the spatial attitude transformation matrix is denoted M_atti and its calculation process is as follows:
first, a quaternion, denoted q, is constructed from the attitude data of the carrier:
(equation image: quaternion q constructed from the heading angle a, pitch angle p and roll angle r)
where a, p and r are the heading angle, pitch angle and roll angle of the carrier acquired above;
(equation image: attitude matrix M_atti derived from the quaternion q).
5. The method for calculating and indicating the position of a target with a motion trail out of the field of view according to claim 4, wherein in step S1 the reconstruction of the three-dimensional scene of the area is completed based on the terrain data of the geographic area where the carrier is located, including elevation data and satellite texture image data, and the steps include:
2.1 visualization of a single block of regular elevation terrain data:
the elevation data take the form of regular-grid elevation data files; a regular-grid elevation data file is parsed, model-view transformation, perspective projection transformation and viewport transformation are carried out on the elevation data, and a gridded three-dimensional model of the single piece of regular elevation terrain data is generated;
2.2 organization of massive data:
the massive terrain data are composed of single pieces of regular elevation terrain data; the multiple pieces of regular elevation terrain data are organized by a quadtree multi-resolution method to generate a large-scale three-dimensional terrain scene model;
2.3 texture mapping:
taking the satellite image as a texture, the satellite texture is mapped onto the surface of the large-scale three-dimensional terrain scene to generate a very-large-scale, realistic three-dimensional terrain scene, which is denoted SCENE_stategraph.
6. The method for calculating and indicating the position of a target with a motion trail out of the field of view according to claim 5, wherein in step S1 a dynamic continuous synthetic vision image is generated by driving the generated static three-dimensional scene with the acquired carrier pose data and line-of-sight attitude data, and the steps comprise:
4.1 constructing the spatial transformation matrices from the pose data of the carrier, including the position spatial transformation matrix M_pos and the attitude spatial transformation matrix M_atti;
4.2 constructing the line-of-sight spatial transformation matrix M_los from the line-of-sight attitude data;
4.3 constructing the composite spatial transformation matrix M_composite from the above matrices: M_composite = M_los * M_atti * M_pos;
4.4 taking the scene node tree SCENE_stategraph generated for the static three-dimensional scene as the object, and using the composite spatial transformation matrix M_composite to generate a dynamic continuous synthetic vision image sequence, denoted SVS_sequence, in which the image of a given frame is denoted f_svs(x, y, z, t).
7. The method for calculating and indicating the position of a target with a motion trajectory out of the field of view according to claim 6, wherein in step S2 the geolocation of the line of sight is performed based on the synthetic vision image f_svs(x, y, t) output in step S1: within the image content a target is selected and its pixel position value is obtained, denoted P_tar(x_tar, y_tar); with the input pixel P_tar(x_tar, y_tar), the geospatial position corresponding to the target is calculated by combining the model-view transformation matrix, perspective projection matrix and viewport transformation matrix used in generating the synthetic vision image with the terrain data of the area, and the calculation process comprises the following steps:
2.1 obtaining the local-to-world transformation matrix of the virtual camera in the synthetic vision system, denoted M_camera_l2w, this matrix being a known fixed value;
2.2 obtaining the view (observation) matrix of the virtual camera in the synthetic vision system, denoted M_camera_view, this matrix being a known fixed value;
2.3 obtaining the projection transformation matrix of the virtual camera in the synthetic vision system, denoted M_camera_projection, this matrix being a known fixed value, and obtaining the far and near clipping planes of the projection transformation matrix, denoted (z_far, z_near), where z_far is the Z value of the far clipping plane and z_near is the Z value of the near clipping plane;
2.4 obtaining the viewport transformation matrix of the virtual camera in the synthetic vision system, denoted M_camera_viewport, this matrix being a known fixed value;
2.5 converting the pixel position P_tar(x_tar, y_tar) into a normalized position in the virtual camera system, denoted P_normalize_tar(x_tar, y_tar);
2.6 forming the composite transformation matrix and computing M_composit = (M_camera_l2w * M_camera_view * M_camera_projection * M_camera_viewport)^(-1);
2.7 from the selected pixel position, setting the start point P_normalize_tar_start and end point P_normalize_tar_end in the normalized space of the virtual camera, and calculating the corresponding start point P_geocentric_tar_start and end point P_geocentric_tar_end in geocentric space:
P_geocentric_tar_start = P_normalize_tar_start(x_tar, y_tar, z_near) * M_composit
P_geocentric_tar_end = P_normalize_tar_end(x_tar, y_tar, z_far) * M_composit
2.8 taking the line segment with endpoints P_geocentric_tar_start and P_geocentric_tar_end and running an iterative collision-detection algorithm between the segment and the terrain, the intersection point of the segment with the terrain surface being the final geographic position of the target;
setting the pixel position P_tar(x_tar, y_tar) to the center of the line-of-sight cross and applying the above procedure gives the geographic position of the line of sight, denoted P_los(x_los, y_los, z_los).
8. The method for calculating and indicating the position of the target with the movement trace out of the field of view as claimed in claim 7, wherein in step S3, the process of locating the target is:
the real-time photoelectric image of the airborne photoelectric system is sent by the photoelectric turret, and each frame of image data is received according to the frame rate of the sensor and is recorded as f eo (x, y, t); bringing the target center pixel position into P tar (x tar Ytar), i.e. set toThe target center pixel position, the geographic position of the aiming line is calculated and is marked as P target (x target ,y target ,z target );
Generating a straight line, marked as L (P) from the target geographical position point and the sight line geographical position point generated in the steps S2 and S3 as a starting point and an end point respectively target ,P los )。
9. The method for calculating and indicating the position of the target with the motion trajectory out of the field of view according to claim 8, wherein in the step S4, the process of configuring the virtual camera is as follows:
configuring a viewpoint of a virtual camera according to the real-time position data of the airborne machine, configuring the posture of the virtual camera according to the posture parameters of the airborne machine, configuring the visual field of the virtual camera according to the visual field of the airborne optoelectronic system, and then correcting the synthesized visual image generated in the step S1 and recording the synthesized visual image as f svs_correct (x,y,z,t)。
10. The method for calculating and indicating the target position of the motion trajectory out of the field of view according to claim 9, wherein in step S5, the generation process of the indication arrow is:
inside the synthesized visual image, the straight line L (P) generated in step S3 appears target ,P los ) As a part of a straight line L (P) target ,P los ) Starting from the starting point of (c), using the straight line and f svs_correct And (5) taking the intersection point of the (x, y, z, t) boundary as an end point, replating the indicating arrow, and completing the generation and plotting of the target indicating arrow.
CN202210361298.3A 2022-04-07 2022-04-07 Target position calculation and indication method for motion trail out of view field Pending CN114964248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210361298.3A CN114964248A (en) 2022-04-07 2022-04-07 Target position calculation and indication method for motion trail out of view field

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210361298.3A CN114964248A (en) 2022-04-07 2022-04-07 Target position calculation and indication method for motion trail out of view field

Publications (1)

Publication Number Publication Date
CN114964248A true CN114964248A (en) 2022-08-30

Family

ID=82971139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210361298.3A Pending CN114964248A (en) 2022-04-07 2022-04-07 Target position calculation and indication method for motion trail out of view field

Country Status (1)

Country Link
CN (1) CN114964248A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114459461A (en) * 2022-01-26 2022-05-10 西安应用光学研究所 Navigation positioning method based on GIS and real-time photoelectric video
CN114459461B (en) * 2022-01-26 2023-11-28 西安应用光学研究所 Navigation positioning method based on GIS and real-time photoelectric video

Similar Documents

Publication Publication Date Title
AU2007355942B2 (en) Arrangement and method for providing a three dimensional map representation of an area
CN112184786B (en) Target positioning method based on synthetic vision
EP3228984B1 (en) Surveying system
CN107247458A (en) UAV Video image object alignment system, localization method and cloud platform control method
CN109579843A (en) Multirobot co-located and fusion under a kind of vacant lot multi-angle of view build drawing method
US8139111B2 (en) Height measurement in a perspective image
WO2015096806A1 (en) Attitude determination, panoramic image generation and target recognition methods for intelligent machine
CN109709801A (en) A kind of indoor unmanned plane positioning system and method based on laser radar
CN106780729A (en) A kind of unmanned plane sequential images batch processing three-dimensional rebuilding method
CN113850126A (en) Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle
CN102506867B (en) SINS (strap-down inertia navigation system)/SMANS (scene matching auxiliary navigation system) combined navigation method based on Harris comer matching and combined navigation system
CN110930508A (en) Two-dimensional photoelectric video and three-dimensional scene fusion method
CN111681315B (en) High altitude and profile plotting positioning method based on three-dimensional GIS map
CN112381935A (en) Synthetic vision generation and multi-element fusion device
CN113409400A (en) Automatic tracking-based airborne photoelectric system target geographic positioning method
CN116883604A (en) Three-dimensional modeling technical method based on space, air and ground images
Burkard et al. User-aided global registration method using geospatial 3D data for large-scale mobile outdoor augmented reality
CN114964248A (en) Target position calculation and indication method for motion trail out of view field
CN112927356B (en) Three-dimensional display method for unmanned aerial vehicle image
CN114964249A (en) Synchronous association method of three-dimensional digital map and real-time photoelectric video
CN112985398A (en) Target positioning method and system
CN109341685B (en) Fixed wing aircraft vision auxiliary landing navigation method based on homography transformation
CN114459461B (en) Navigation positioning method based on GIS and real-time photoelectric video
Hashimov et al. GIS technology and terrain orthophotomap making for military application
Wu et al. Simulation of two-satellite reconnaissance system with intelligent decision based on object detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination