CN109708627B - Method for rapidly detecting space dynamic point target under moving platform - Google Patents

Method for rapidly detecting space dynamic point target under moving platform

Info

Publication number
CN109708627B
Authority
CN
China
Prior art keywords
target
track
space
image
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811357398.9A
Other languages
Chinese (zh)
Other versions
CN109708627A (en
Inventor
韩飞
王兆龙
谭龙玉
孙俊
徐波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Aerospace Control Technology Institute
Original Assignee
Shanghai Aerospace Control Technology Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Aerospace Control Technology Institute filed Critical Shanghai Aerospace Control Technology Institute
Priority to CN201811357398.9A priority Critical patent/CN109708627B/en
Publication of CN109708627A publication Critical patent/CN109708627A/en
Application granted granted Critical
Publication of CN109708627B publication Critical patent/CN109708627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Navigation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a method for rapidly detecting a space dynamic point target under a moving platform, which comprises the following steps: S1, windowing the N sequence star maps by using the known absolute position information of the target satellite, with M as the allowable range, to form N sequence windows of size M; S2, performing time-domain compression on the three-dimensional image information in the windows and forming a target star map with a projection algorithm after taking the maximum envelope; S3, completing background suppression of the target star map with a binarization algorithm; S4, completing track extraction in the target star map with a straight-line extraction algorithm; and S5, interpreting the one or more tracks in the target star map to complete identification of the target track. The invention solves the problem of rapidly detecting a space dynamic point target when the space camera is itself in a dynamic environment, greatly broadens the applicable scenarios of space dynamic point target detection, and at the same time improves the detection speed.

Description

Method for rapidly detecting space dynamic point target under moving platform
Technical Field
The invention belongs to the technical field of spacecraft navigation, relates in particular to the discovery and extraction of navigation targets, and more specifically to a technique for rapidly detecting a space dynamic point target under a moving platform.
Background
With the continuous progress of space technology, the missions of spacecraft in various countries are becoming increasingly complex: the operating range has expanded from near-Earth space to deep space, the operating mode has evolved from single satellites to constellations, and mission content has developed from single to composite tasks, making it more and more difficult for a spacecraft to maintain a single attitude for a single task. Meanwhile, constellation autonomous navigation and deep-space autonomous navigation using on-orbit optical cameras have become current research hotspots. However, because the targets are extremely far away, no morphological information is available and detection must rely on only a few gray-scale image points, so space dynamic point target detection under a dynamic platform has become one of the key problems to be solved urgently in constellation autonomous navigation and deep-space autonomous navigation.
At present there is considerable research on space dynamic point target detection, but most of it assumes either that the camera boresight is fixed relative to the satellite platform or that the boresight is stabilized in inertial space. The observation images generated under these conditions treat the motion of either the background stars or the target in an idealized way, and exhibit the obvious characteristic of a moving target against a static background or a static target against a moving background. However, an actual on-orbit vehicle experiences satellite platform jitter, boresight offset relative to the platform, and abandonment of inertial pointing due to mission requirements, which give rise to the more general "moving target against a moving background" problem, namely the problem of space dynamic point target detection under a moving platform addressed by the present invention.
Disclosure of Invention
The invention aims to provide a method for rapidly detecting a space dynamic point target under a moving platform. The method can complete rapid detection of the space dynamic point target while the spacecraft platform is maneuvering, i.e. when the attitude can be stably pointed neither at the target nor at an inertial direction, requiring only that the on-orbit camera image the deep-space region where the target is located. It greatly improves the autonomous navigation capability of constellations and deep-space probes, reduces the burden on ground stations, lowers the on-orbit computational load of the on-board computer, and relaxes the attitude constraints imposed by navigation.
In order to achieve the purpose, the invention is realized by the following technical scheme: a method for quickly detecting a space dynamic point target under a moving platform comprises the following steps:
s1, windowing N sequence star maps by using known absolute position information of a target satellite and taking M as an allowable range to form N sequence windows with the size of M;
s2, performing time domain compression on the three-dimensional image information in the window, and forming a target star map by adopting a projection algorithm after taking the maximum envelope;
s3, completing background suppression of the target star map by adopting a binarization algorithm;
s4, completing the track extraction in the target star map by adopting a straight line extraction algorithm;
and S5, interpreting one or more tracks in the target star map to finish the identification of the target track.
Further, in step S1, the approximate position of the target satellite is known in advance with an error range corresponding to M; each frame of image in the sequence star map is windowed according to the error range M, irrelevant image information and tracks are removed, and the N sequence images are processed into N sequence windows of size M.
Further, in step S2, time domain compression is performed on the N sequence window images, three-dimensional image information is compressed into two dimensions, and a projection target star map with a maximum envelope is obtained through a projection algorithm.
Further, in step S3, a directional six-neighborhood binarization algorithm is adopted to extract foreground information and suppress background information, completing the preprocessing of the target star map and preparing for the subsequent target interpretation and extraction.
Further, in step S5, the extracted tracks are interpreted. If the target star map contains only one track, it is directly determined to be the target track; if there are several tracks, statistics over the track parameters are used to determine the motion track parameters of the background fixed stars, the straight-line tracks falling in the same slope range are eliminated, and the remaining groups of tracks are velocity-matched against the known target relative motion velocity v_rel, completing the rapid detection of the space dynamic point target.
The windowing of the sequence images in step S1 is performed as follows:
The absolute position of the target in the geocentric inertial frame, r_T = (X_T, Y_T, Z_T), is known. According to the collinearity equation:
[Equations: the collinearity expressions giving the image-plane coordinates (x_T, y_T) of the target from its inertial position, the camera position and the camera attitude matrix.]
wherein f is the focal length of the optical camera, C is the attitude matrix of the optical camera in inertial space, and r_C is the absolute position of the optical camera in inertial space. Since all of these are known quantities, the position coordinates (x_T, y_T) of the target in the image-plane coordinate system of each frame can be obtained.
Denote the sequence of images acquired by the on-orbit optical camera as P(x, y, t), t = 1, 2, ..., N. Taking (x_T, y_T) as the center and M as the side length, only the image information within the M × M region is retained, denoted P'(x, y, t), t = 1, 2, ..., N.
The allowable range M in step S1 is determined as follows:
The absolute position r_T of the target in the geocentric inertial frame contains a positioning error of known magnitude. Therefore, when windowing on the image plane around the target coordinates, to prevent the target from being mistakenly excluded, the window dimension M must be set according to the absolute position error Δr:
[Equation: relation determining the window dimension M from the absolute position error Δr.]
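To make the windowing of step S1 concrete, the following Python sketch (a minimal illustration, not the patented implementation) projects the known inertial target position into each frame through a standard pinhole/collinearity model and crops an M × M window around it; the function name, the pixel-conversion parameters and the principal-point-at-center convention are assumptions introduced only for this example.

```python
import numpy as np

def window_sequence(images, r_target, r_cam, C_ci, f, pixel_size, M):
    """Project the known inertial target position into each frame and keep
    only an M x M window around it (step S1).

    images     : list of N frames, each an H x W array
    r_target   : target absolute position in the geocentric inertial frame, shape (3,)
    r_cam      : camera absolute position in the inertial frame per frame, shape (N, 3)
    C_ci       : camera attitude matrix (inertial -> camera) per frame, shape (N, 3, 3)
    f          : focal length, in the same length unit as pixel_size
    pixel_size : physical size of one pixel, used to convert to pixel coordinates
    M          : window side length in pixels (chosen from the positioning error)
    """
    windows = []
    half = M // 2
    for img, rc, C in zip(images, r_cam, C_ci):
        # Line of sight to the target expressed in the camera frame.
        los_cam = C @ (r_target - rc)
        # Pinhole / collinearity projection onto the image plane.
        x = f * los_cam[0] / los_cam[2]
        y = f * los_cam[1] / los_cam[2]
        # Convert to pixel coordinates, assuming the principal point at the image center.
        u = int(round(x / pixel_size + img.shape[1] / 2))
        v = int(round(y / pixel_size + img.shape[0] / 2))
        # Clip the window to the image and keep only the M x M region.
        u0, u1 = max(u - half, 0), min(u + half, img.shape[1])
        v0, v1 = max(v - half, 0), min(v + half, img.shape[0])
        windows.append(img[v0:v1, u0:u1].copy())
    return windows
```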
the projection algorithm included in the step S2 is:
p '(x, y, t), t =1, 2.., N, is lossy-compressed along the time axis, the three-dimensional spatio-temporal information is compressed into two-dimensional spatial information, the image sequence P' (x, y, t) becomes a single sample P '(x, y), and P' (x, y) includes all image information within the maximum envelope after compression.
The projection algorithm adopts time sequence multiframe maximum value projection, namely, the sequential images are projected to form a standardized image zeta (x, y),
Figure GDA00037878665900000310
and then, carrying out binary quantization on zeta (x, y) by adopting a binarization algorithm.
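The time-sequence multi-frame maximum projection can be sketched in a single NumPy call, assuming the N windowed frames have been stacked into an array of shape (N, M, M):

```python
import numpy as np

def max_projection(window_stack):
    """Compress the N windowed frames along the time axis by taking, at each
    pixel, the maximum value over all frames (step S2).

    window_stack : array of shape (N, M, M)
    returns      : the projected target star map zeta(x, y), shape (M, M)
    """
    return window_stack.max(axis=0)
```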
The binarization algorithm in step S3 is as follows:
A directional six-neighborhood background suppression algorithm is proposed by combining the direction-based idea of the MTI algorithm with the target-energy-integrity advantage of the four-neighborhood mean algorithm. As shown in FIG. 3, taking the directional characteristic into account, the energy means of the six-pixel neighborhoods around the current position (x, y) are computed along each direction:
[Equations: the four directional six-neighborhood energy means around (x, y), taken over the neighborhoods shown in FIG. 3.]
The directional six-neighborhood binary quantization is then defined:
[Equation: directional six-neighborhood binary quantization.]
As defined above, when any of the directional six-neighborhood means is above the threshold T, the point (x, y) is assigned the value 1, otherwise 0. This method retains the trailing energy produced by the exposure time while the platform is maneuvering, which benefits the discovery of weak space targets; at the same time, by incorporating the directional characteristic of the traditional MTI algorithm, it avoids unnecessary analysis and computation, improves the speed of the algorithm, and completes background suppression well, thereby ensuring rapid detection of the target.
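The exact six-pixel neighborhoods are defined by FIG. 3, which is not reproduced in the text; the sketch below therefore assumes, for illustration only, that each directional neighborhood consists of the three pixels on either side of (x, y) along the horizontal, vertical and two diagonal directions, and it assigns 1 whenever any directional mean exceeds the threshold T.

```python
import numpy as np

# Assumed directional six-pixel neighborhoods (row/column offsets relative to (x, y)):
# horizontal, vertical and the two diagonals, three pixels on each side.
DIRECTIONS = [
    [(0, -3), (0, -2), (0, -1), (0, 1), (0, 2), (0, 3)],      # horizontal
    [(-3, 0), (-2, 0), (-1, 0), (1, 0), (2, 0), (3, 0)],      # vertical
    [(-3, -3), (-2, -2), (-1, -1), (1, 1), (2, 2), (3, 3)],   # main diagonal
    [(-3, 3), (-2, 2), (-1, 1), (1, -1), (2, -2), (3, -3)],   # anti-diagonal
]

def directional_six_neighborhood_binarize(zeta, T):
    """Directional six-neighborhood binary quantization (step S3): a pixel is
    set to 1 if the mean energy of any one directional neighborhood exceeds T."""
    h, w = zeta.shape
    out = np.zeros((h, w), dtype=np.uint8)
    pad = 3
    padded = np.pad(zeta.astype(float), pad, mode="edge")
    for offsets in DIRECTIONS:
        mean = np.zeros((h, w))
        for dy, dx in offsets:
            # Shifted view of the padded image corresponding to this offset.
            mean += padded[pad + dy: pad + dy + h, pad + dx: pad + dx + w]
        mean /= len(offsets)
        out |= (mean > T).astype(np.uint8)
    return out
```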
The straight-line extraction algorithm in step S4 is as follows:
The classical Hough transform is adopted: points in image space are mapped into the parameter space and, using the principle that a straight line in image space maps to a single point in parameter space, the accumulator value of each point in the parameter space is checked against a threshold, thereby determining the straight-line tracks.
The track interpretation in step S5 is as follows:
In order to remove the tracks of non-target motion, such as fixed-star tracks, from all the extracted straight-line tracks, all tracks need to be interpreted, specifically:
(1) If only one straight-line track is contained, that line is judged to be the target track;
(2) If j tracks are contained, the slope k_i (i = 1, 2, ..., j) and intercept b_i (i = 1, 2, ..., j) of each track l_i (i = 1, 2, ..., j) in the image coordinate system are obtained;
(3) The straight lines whose slope satisfies k_L < k_i < k_H are eliminated, thereby removing the fixed-star tracks, where k_L and k_H are statistically derived thresholds;
(4) The d groups of tracks l_i (i = 1, 2, ..., d) remaining after elimination are velocity-matched and filtered against the known, error-containing relative motion velocity v_rel, and the redundant tracks are further removed to obtain the target track.
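The interpretation logic above can be sketched as follows, assuming the tracks are given as endpoint pairs from the previous step, the star-track slope band (k_L, k_H) has been estimated statistically, and the expected image-plane speed of the target (in pixels per frame) has been derived beforehand from the known relative motion velocity; v_expected, v_tol and n_frames are illustrative parameters.

```python
import math

def interpret_tracks(tracks, k_low, k_high, v_expected, v_tol, n_frames):
    """Identify the target track (step S5).

    tracks        : list of (x1, y1, x2, y2) line segments from step S4
    k_low, k_high : slope band of the background star tracks (statistically derived)
    v_expected    : expected target image-plane speed, pixels per frame
    v_tol         : allowed deviation from v_expected
    n_frames      : number N of frames compressed into the projected star map
    """
    # (1) A single track is taken directly as the target track.
    if len(tracks) == 1:
        return tracks[0]

    candidates = []
    for (x1, y1, x2, y2) in tracks:
        # (2) Slope in the image coordinate system.
        k = math.inf if x2 == x1 else (y2 - y1) / (x2 - x1)
        # (3) Discard tracks whose slope falls inside the star-track band.
        if k_low < k < k_high:
            continue
        candidates.append((x1, y1, x2, y2))

    # (4) Velocity matching: track length over the observation span should
    #     agree with the expected image-plane speed.
    best, best_err = None, None
    for (x1, y1, x2, y2) in candidates:
        speed = math.hypot(x2 - x1, y2 - y1) / max(n_frames - 1, 1)
        err = abs(speed - v_expected)
        if err <= v_tol and (best_err is None or err < best_err):
            best, best_err = (x1, y1, x2, y2), err
    return best
```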
Compared with the prior art, the method for rapidly detecting a space dynamic point target under a moving platform has the following advantages: it is applicable during spacecraft platform maneuvers and does not require the attitude to point stably at the target or at an inertial direction; through the windowing operation, useless information is removed first and the total amount of image data to be processed is reduced; through the directional six-neighborhood binarization algorithm, the energy trailing caused by the maneuvering of the moving platform is retained while the amount of computation is reduced, ensuring rapid detection during the maneuver; and rapid target detection can be completed directly from the deep-space sequence images of the region where the target is located, without on-orbit matching against a large, complex star catalogue. The invention greatly improves the target discovery capability during navigation of constellations and deep-space probes, reduces the dependence on ground stations and the computational demands on the on-board computer, relaxes the attitude constraints of the navigation system, and greatly expands and improves the applicability of the target detection method.
Drawings
FIG. 1 is a flow chart of a method for rapidly detecting a space dynamic point target under a moving platform;
FIG. 2 is a schematic view of windowing a sequence of images;
FIG. 3 is a schematic diagram of the six directional neighborhoods.
Detailed Description
The present invention will now be further described by way of the following detailed description of a preferred embodiment thereof, taken in conjunction with the accompanying drawings.
As shown in FIG. 1, a method for rapidly detecting a space dynamic point target under a moving platform includes the following steps:
s1, windowing N sequence star maps by using known absolute position information of a target satellite and taking M as an allowable range to form N sequence windows with the size of M;
the windowing of the sequence images included in the step S1 is:
as shown in FIG. 2, the absolute position information of the target point T in the centroid inertia system is known
Figure GDA0003787866590000061
According to the collinear equation of the imaging of the optical camera, the following can be obtained:
Figure GDA0003787866590000062
meanwhile, according to the attitude measurement result of the star sensor carried by the satellite platform where the camera is located, the following steps are obtained:
Figure GDA0003787866590000063
wherein the content of the first and second substances,
Figure GDA0003787866590000064
attitude quaternion, q, obtained by the star sensor 4 Is a scalar, f is the optical camera focal length;
Figure GDA0003787866590000065
is the attitude matrix of the optical camera in the inertial space;
Figure GDA0003787866590000066
is the absolute position of the optical center O of the optical camera in the inertial space; the above is obviously a known quantity, so that the position coordinates of the target T in the image plane coordinate system in one frame image can be obtained
Figure GDA0003787866590000067
The sequence of images acquired by the on-track optical camera is P (x, y, t), t =1,2
Figure GDA0003787866590000068
And keeping the image information in the M multiplied by M area, namely P' (x, y, t), wherein t =1, 2.
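For the attitude measurement step, a sketch of converting the star-sensor quaternion (vector part q_1, q_2, q_3 and scalar q_4) into a 3 × 3 attitude matrix is given below; the scalar-last convention and the inertial-to-camera rotation direction are assumptions made for this example and need not match the exact convention used in the patent.

```python
import numpy as np

def quaternion_to_attitude_matrix(q1, q2, q3, q4):
    """Convert a scalar-last attitude quaternion (q1, q2, q3, q4), as measured
    by the star sensor, into the 3x3 attitude (direction cosine) matrix."""
    return np.array([
        [q1*q1 - q2*q2 - q3*q3 + q4*q4, 2*(q1*q2 + q3*q4),              2*(q1*q3 - q2*q4)],
        [2*(q1*q2 - q3*q4),             -q1*q1 + q2*q2 - q3*q3 + q4*q4, 2*(q2*q3 + q1*q4)],
        [2*(q1*q3 + q2*q4),             2*(q2*q3 - q1*q4),              -q1*q1 - q2*q2 + q3*q3 + q4*q4],
    ])
```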
The allowable range M in step S1 is determined as follows:
The absolute position r_T of the target in the geocentric inertial frame contains a positioning error of known magnitude. Therefore, when windowing on the image plane around the target coordinates, to prevent false rejection of the target, the window dimension M must be set according to the absolute position error Δr:
[Equation: relation determining the window dimension M from the absolute position error Δr.]
s2, performing time domain compression on the three-dimensional image information in the window, and forming a target star map by adopting a projection algorithm after taking the maximum envelope;
the projection algorithm included in the step S2 is:
lossy compression is performed on P '(x, y, t), t =1, 2.., N, along a time axis, three-dimensional space-time information is compressed into two-dimensional space information, and an image sequence P' (x, y, t) becomes a single sample P '(x, y), where P' (x, y) includes all image information within a maximum envelope after compression.
The projection algorithm adopts time sequence multiframe maximum value projection, namely, the sequence images are projected to form a standardized image zeta (x, y),
Figure GDA0003787866590000071
and then, carrying out binary quantization on zeta (x, y) by adopting a binarization algorithm.
S3, completing background suppression of the target star map by adopting a binarization algorithm;
the binarization algorithm included in the step S3 is as follows:
and (3) providing a directional six-neighborhood background suppression algorithm by combining the idea that the MTI algorithm is based on directional characteristics and the target energy integrity advantage of the four-neighborhood mean algorithm. As shown in fig. 3, considering the directional characteristic, the energy mean of 6 neighborhoods around the current position (x, y),
Figure GDA0003787866590000072
Figure GDA0003787866590000073
Figure GDA0003787866590000074
Figure GDA0003787866590000075
defining directional six neighborhood binary quantization
Figure GDA0003787866590000076
As described above, when any of the directional six neighborhood means is above the threshold T, the (x, y) point is assigned a value of 1, otherwise 0. The method not only retains the trailing energy of the moving platform due to the exposure time in the moving process, but also is beneficial to the discovery of the weak space target; and the direction characteristic of the traditional MTI algorithm is combined, unnecessary analysis and calculation are reduced, the rapidity of the algorithm is improved, and meanwhile background suppression is well finished, so that the rapid detection of the target is ensured.
S4, adopting a straight line extraction algorithm to complete the track extraction in the target star map;
the straight line extraction algorithm included in the step S4 is:
the method comprises the steps of mapping points in an image space into a parameter space by adopting traditional Hough transformation, and counting whether the accumulated value of a counter of each point in the parameter space is greater than a threshold value or not by utilizing the Hough transformation principle that a straight line in the image space is mapped into one point in the parameter space, thereby determining a straight line track.
And S5, interpreting one or more tracks in the target star map to finish the identification of the target track.
The track interpretation in step S5 is as follows:
In order to remove the tracks of non-target motion, such as fixed-star tracks, from all the extracted straight-line tracks, all tracks need to be interpreted, specifically:
(1) If only one straight-line track is contained, that line is judged to be the target track;
(2) If j tracks are contained, the slope k_i (i = 1, 2, ..., j) and intercept b_i (i = 1, 2, ..., j) of each track l_i (i = 1, 2, ..., j) in the image coordinate system are obtained;
(3) The straight lines whose slope satisfies k_L < k_i < k_H are eliminated, thereby removing the fixed-star tracks, where k_L and k_H are statistically derived thresholds;
(4) The d groups of tracks l_i (i = 1, 2, ..., d) remaining after elimination are velocity-matched and filtered against the known, error-containing relative motion velocity v_rel, and the redundant tracks are further removed to obtain the target track.
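For orientation only, the illustrative helpers sketched in the preceding sections (window_sequence, max_projection, directional_six_neighborhood_binarize, extract_line_tracks, interpret_tracks) can be chained into an end-to-end S1 to S5 pipeline; all names and parameter values are hypothetical, and the sketch assumes every window has the same M × M size so the frames can be stacked.

```python
import numpy as np

def detect_point_target(images, r_target, r_cam, C_ci, f, pixel_size, M,
                        T, k_low, k_high, v_expected, v_tol):
    """Illustrative end-to-end chain of the S1 to S5 sketches shown above."""
    windows = window_sequence(images, r_target, r_cam, C_ci, f, pixel_size, M)       # S1: windowing
    zeta = max_projection(np.stack(windows))                                         # S2: max projection
    binary = directional_six_neighborhood_binarize(zeta, T)                          # S3: background suppression
    tracks = extract_line_tracks(binary)                                             # S4: Hough line extraction
    return interpret_tracks(tracks, k_low, k_high, v_expected, v_tol, len(windows))  # S5: track interpretation
```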
In conclusion, starting from the objective fact that a large number of motion tracks appear and the energy of the target image points is dragged and dispersed while the spacecraft platform is maneuvering or slewing its attitude, the invention provides a method for rapidly detecting a space dynamic point target under a moving platform. The effective application and implementation of this technique are of important theoretical and practical significance for improving the target discovery capability during navigation of constellations and deep-space probes, reducing the dependence on ground stations and the computational demands on the on-board computer, and relaxing the attitude constraints of the navigation system.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be limited only by the attached claims.

Claims (1)

1. A method for quickly detecting a space dynamic point target under a moving platform is characterized by comprising the following steps:
s1, windowing N sequence star maps by using known absolute position information of a target satellite and taking M as an allowable range to form N sequence windows with the size of M;
the windowing of the sequence image included in the step S1 is:
knowing the absolute position information of the target point T in the centroid inertial system
Figure FDA0003787866580000011
According to the collinear equation of the imaging of the optical camera, the following can be obtained:
Figure FDA0003787866580000012
meanwhile, according to the attitude measurement result of the star sensor carried by the satellite platform where the camera is located, the following steps are obtained:
Figure FDA0003787866580000013
wherein, the first and the second end of the pipe are connected with each other,
Figure FDA0003787866580000014
attitude quaternion, q, obtained by the star sensor 4 Is a scalar; f is the focal length of the optical camera;
Figure FDA0003787866580000015
is the attitude matrix of the optical camera in the inertial space;
Figure FDA0003787866580000016
is the absolute position of the optical center O of the optical camera in the inertial space; the above is known quantity, and the position coordinate of the target T in the image plane coordinate system in one frame image is obtained
Figure FDA0003787866580000017
The sequence of images acquired by the on-track optical camera is P (x, y, t) t =1,2
Figure FDA0003787866580000018
As a center, M is a side length, and image information in an M × M region is reserved, and is denoted as P' (x, y, t), t =1, 2.
The allowable range M in step S1 is:
the absolute position r_T of the target in the geocentric inertial frame contains a positioning error of known magnitude, so when windowing on the image plane around the target coordinates, to prevent the target from being mistakenly excluded, the window dimension M is set according to the absolute position error Δr:
[Equation: relation determining the window dimension M from the absolute position error Δr.]
s2, performing time domain compression on the three-dimensional image information in the window, and forming a target star map by adopting a projection algorithm after taking the maximum envelope;
the projection algorithm included in the step S2 is:
lossy compression is carried out on P '(x, y, t), t =1,2, N along a time axis, three-dimensional space-time information is compressed into two-dimensional space information, an image sequence P' (x, y, t) becomes a single sample P '(x, y), and the P' (x, y) contains all image information in a maximum envelope after compression;
the projection algorithm adopts time sequence multiframe maximum value projection, namely, the sequential images are projected to form a standardized image zeta (x, y),
Figure FDA0003787866580000023
carrying out binary quantization on zeta (x, y) by adopting a binarization algorithm;
s3, completing background suppression of the target star map by adopting a binarization algorithm;
the binarization algorithm included in the step S3 is as follows:
on the premise of considering the direction characteristic, the energy mean value of 6 neighborhoods around the current position (x, y),
Figure FDA0003787866580000024
Figure FDA0003787866580000025
Figure FDA0003787866580000026
Figure FDA0003787866580000027
defining directional six neighborhood binary quantization
Figure FDA0003787866580000031
When any one of the six-direction neighborhood mean values is higher than a threshold value T, assigning 1 to a point (x, y), and otherwise, assigning 0;
s4, completing the track extraction in the target star map by adopting a straight line extraction algorithm;
the straight line extraction algorithm included in the step S4 is:
mapping points in an image space into a parameter space by adopting traditional Hough transformation, and counting whether the accumulated value of a counter of each point in the parameter space is greater than a threshold value by utilizing the Hough transformation principle that a straight line in the image space is mapped into one point in the parameter space, thereby determining a straight line track;
s5, interpreting one or more tracks in the target star map to finish the identification of the target track;
the track interpretation included in step S5 is:
in order to remove the trajectories of non-target motions such as the permanent star from all extracted linear trajectories, all trajectories need to be interpreted, and the method specifically comprises the following steps:
(1) If only one straight line track is included, judging the straight line as a target track;
(2) If j tracks are included, each track l is obtained i (i =1, 2.. Times.j) slope k in the image coordinate system i (i =1,2.. J) and intercept b i (i=1,2,...,j);
(3) The slope is at k L <k i <k H Eliminating straight lines in the range so as to eliminate the fixed star track; wherein k is L 、k H Is a statistically derived threshold;
(4) D groups of tracks l left after elimination i (i =1, 2.. D.) according to a known relative movement speed containing errors
Figure FDA0003787866580000032
And carrying out speed matching filtering, and further removing redundant tracks to obtain a target track.
CN201811357398.9A 2018-11-15 2018-11-15 Method for rapidly detecting space dynamic point target under moving platform Active CN109708627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811357398.9A CN109708627B (en) 2018-11-15 2018-11-15 Method for rapidly detecting space dynamic point target under moving platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811357398.9A CN109708627B (en) 2018-11-15 2018-11-15 Method for rapidly detecting space dynamic point target under moving platform

Publications (2)

Publication Number Publication Date
CN109708627A CN109708627A (en) 2019-05-03
CN109708627B true CN109708627B (en) 2022-10-18

Family

ID=66254900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811357398.9A Active CN109708627B (en) 2018-11-15 2018-11-15 Method for rapidly detecting space dynamic point target under moving platform

Country Status (1)

Country Link
CN (1) CN109708627B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827315B (en) * 2019-10-23 2022-08-02 上海航天控制技术研究所 Target spacecraft trajectory identification method based on time series information
CN110826252B (en) * 2019-11-26 2022-07-19 武汉理工大学 Enveloping mold design method for improving space enveloping forming precision under linear track
CN110956141B (en) * 2019-12-02 2023-02-28 郑州大学 Human body continuous action rapid analysis method based on local recognition
CN113409082B (en) * 2021-06-18 2023-08-01 湖南快乐阳光互动娱乐传媒有限公司 Interactive advertisement putting method, system, server and client

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7688999B2 (en) * 2004-12-08 2010-03-30 Electronics And Telecommunications Research Institute Target detecting system and method
CN102116633B (en) * 2009-12-31 2012-11-21 北京控制工程研究所 Simulation checking method for deep-space optical navigation image processing algorithm
CN101937565B (en) * 2010-09-16 2013-04-24 上海交通大学 Dynamic image registration method based on moving target track
CN106296726A (en) * 2016-07-22 2017-01-04 中国人民解放军空军预警学院 A kind of extraterrestrial target detecting and tracking method in space-based optical series image
CN106651904B (en) * 2016-12-02 2019-08-09 北京空间机电研究所 A kind of more extraterrestrial target method for capturing and tracing of width size range
CN108734103B (en) * 2018-04-20 2021-08-20 复旦大学 Method for detecting and tracking moving target in satellite video

Also Published As

Publication number Publication date
CN109708627A (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN109708627B (en) Method for rapidly detecting space dynamic point target under moving platform
CN111862126B (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
CN108428255B (en) Real-time three-dimensional reconstruction method based on unmanned aerial vehicle
CN105021184B (en) It is a kind of to be used for pose estimating system and method that vision under mobile platform warship navigation
EP2166375B1 (en) System and method of extracting plane features
CN106595659A (en) Map merging method of unmanned aerial vehicle visual SLAM under city complex environment
EP2575104A1 (en) Enhancing video using super-resolution
CN111829532B (en) Aircraft repositioning system and method
CN113781562B (en) Lane line virtual-real registration and self-vehicle positioning method based on road model
CN107677274A (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
CN115406447B (en) Autonomous positioning method of quad-rotor unmanned aerial vehicle based on visual inertia in rejection environment
Van Pham et al. Vision‐based absolute navigation for descent and landing
CN110617802A (en) Satellite-borne moving target detection and speed estimation method
CN117523461B (en) Moving target tracking and positioning method based on airborne monocular camera
Xiang et al. Hybrid bird’s-eye edge based semantic visual SLAM for automated valet parking
Liu et al. A new approach for the estimation of non-cooperative satellites based on circular feature extraction
Oreifej et al. Horizon constraint for unambiguous uav navigation in planar scenes
CN106767841A (en) Vision navigation method based on self adaptation volume Kalman filtering and single-point random sampling
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
Bao et al. Towards micro air vehicle flight autonomy research on the method of horizon extraction
CN112837374B (en) Space positioning method and system
EP3816938A1 (en) Region clipping method and recording medium storing region clipping program
KR102381013B1 (en) Method, apparatus and computer program for multi-matching based realtime vision-aided navigation
Wang et al. Research on UAV Obstacle Detection based on Data Fusion of Millimeter Wave Radar and Monocular Camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant