CN116518981B - Aircraft visual navigation method based on deep learning matching and Kalman filtering - Google Patents

Aircraft visual navigation method based on deep learning matching and Kalman filtering

Info

Publication number: CN116518981B (application CN202310779012.8A); other version: CN116518981A
Authority: CN (China)
Legal status: Active
Inventors: 滕锡超, 刘学聪, 李璋, 苏昂, 王靖皓
Current and original assignee: National University of Defense Technology
Application filed by National University of Defense Technology

Classifications

    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 21/1656 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06V 10/757 Matching configurations of points or features
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06V 20/13 Satellite images
    • Y02T 10/40 Engine management systems


Abstract

The application relates to an aircraft visual navigation method based on deep learning matching and Kalman filtering. The method acquires an aerial real-time image and real-time-updated aircraft auxiliary parameters, first performs coarse matching, and inputs the candidate matching image pairs obtained after coarse matching into a precise matching network to extract high-dimensional features. According to the high-dimensional features, a fast matching algorithm finds the locations on the satellite reference image that precisely match positions in the aerial real-time image, yielding the position coordinates of a plurality of homonymous points in the aerial real-time image, from which the current position and attitude of the aircraft, i.e., the visual navigation result, are computed. Inertial navigation performed with the aircraft auxiliary parameters accumulates navigation error; when the navigation error is greater than a preset threshold, the aircraft auxiliary parameters are corrected according to the visual navigation result before inertial navigation continues.

Description

Aircraft visual navigation method based on deep learning matching and Kalman filtering
Technical Field
The application relates to the technical field of visual navigation, in particular to an aircraft visual navigation method based on deep learning matching and Kalman filtering.
Background
Aircraft visual navigation refers to acquiring information about the aircraft's surroundings through computer vision and using it to determine the aircraft's current position and heading, thereby enabling accurate navigation control. However, owing to the aircraft's motion state and the complexity of the external environment, conventional visual navigation methods have significant limitations and often cannot meet high-precision navigation requirements. Inertial navigation positioning correction based on heterogeneous image matching can free aircraft positioning from dependence on navigation satellites and enable autonomous positioning that is all-day, all-weather, real-time, and high-precision, and that meets strategic security requirements. Because of differences in imaging mechanisms, heterogeneous images differ in characteristics such as nonlinear gray-scale distortion and noise. In addition, heterogeneous images are often acquired at different viewing angles, so imaging results of the same region exhibit large geometric distortion. These factors pose great challenges to heterogeneous image matching, and designing and extracting consistent similarity features between heterogeneous images remains a difficult and active research problem.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an aircraft visual navigation method, apparatus, and device based on deep learning matching and Kalman filtering that can achieve high-precision navigation.
An aircraft visual navigation method based on deep learning matching and Kalman filtering, the method comprising:
acquiring an aerial real-time image and real-time-updated aircraft auxiliary parameters;
performing coarse matching on the aerial real-time image and a pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same positions;
inputting each candidate matching image pair into a trained precise matching network to extract high-dimensional features of the aerial real-time image and the satellite reference image in each candidate matching image pair; according to the high-dimensional features, precisely matching the aerial real-time image and the satellite reference image in each candidate matching image pair using a fast KNN matching algorithm to obtain a plurality of precise homonymous point pairs between the aerial real-time image and the satellite reference image and the correspondence between the precise homonymous point pairs;
obtaining precise coordinates of the homonymous points in the aerial real-time image according to the correspondence between the precise homonymous point pairs, computing the current position and attitude of the aircraft from the precise coordinates of the plurality of homonymous points, and taking the aircraft position and attitude obtained from the aerial real-time image as the visual navigation result;
and evaluating the accumulated error of inertial navigation against the visual navigation result, correcting the aircraft auxiliary parameters with Kalman filtering according to the visual navigation result when the navigation error is greater than a preset threshold, and performing inertial navigation with the corrected aircraft auxiliary parameters.
In one embodiment, the precise matching network is a MatchNet network.
In one embodiment, performing coarse matching on the aerial real-time image and the pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same positions includes:
inputting the aerial real-time image and the pre-stored satellite reference image into a trained heterogeneous feature extraction network to extract consistency features of the aerial real-time image and the satellite reference image;
according to the consistency features, coarsely matching the aerial real-time image and the satellite reference image using a fast KNN matching algorithm to obtain a plurality of candidate homonymous point pairs between the aerial real-time image and the satellite reference image and the correspondence between the candidate homonymous point pairs;
and cropping the aerial real-time image and the satellite reference image to a preset size according to the candidate homonymous point pairs and their correspondence to obtain a plurality of candidate matching image pairs corresponding to the candidate homonymous point pairs.
In one embodiment, the method further comprises training the heterogeneous feature extraction network:
acquiring a satellite image set and an optical image set captured over the same area in different time periods;
constructing a training data set from the satellite image set and the optical image set, the training data set comprising a plurality of matched image pairs, each matched image pair comprising a satellite image and an optical image whose imaging areas and imaging angles match;
inputting each matched image pair into the heterogeneous feature extraction network, which maps the satellite image and the optical image in the matched image pair into a low-dimensional space to obtain correlated consistency features for each;
inputting the consistency features into a metric network, which performs similarity measurement on the consistency features to obtain the consistency probability of the features extracted from the matched image pair;
and computing a loss function from the consistency probability and the consistency features of the matched image pair, and adjusting the parameters of the heterogeneous feature extraction network with the loss function until it converges, yielding the trained heterogeneous feature extraction network.
In one embodiment, the heterogeneous feature extraction network employs a MatchNet twin network.
In one embodiment, the metric network includes stacked fully connected layers and a softmax layer.
In one embodiment, the loss function is a distance-weighted contrastive loss, expressed as

$$L = \frac{1}{N}\sum_{i=1}^{N} \frac{R - r_i}{R}\Big[\, y_i\, D^2\!\big(f_A^{(i)}, f_B^{(i)}\big) + (1 - y_i)\,\max\!\big(m - D\big(f_A^{(i)}, f_B^{(i)}\big),\ 0\big)^{2} \Big]$$

where $R$ is a constant denoting the maximum distance from the center point to the image edge in the consistency feature image, $r$ is the distance of the current pixel from the image center point, $y$ denotes the consistency probability, $m$ is an integer representing the degree of difference of the matched image pair, $D(f_A, f_B)$ is the Euclidean distance between the consistency features $f_A$ and $f_B$ of the satellite image and the optical image in the matched image pair, and $N$ is the number of training samples.
In one embodiment, computing the current position and attitude of the aircraft from the precise coordinates of the plurality of homonymous points includes:
constructing a collinearity equation for each homonymous point, based on the central-projection imaging relation of the optoelectronic camera, from the precise coordinates of the homonymous point in the aerial real-time image and the aircraft auxiliary parameters at the current time;
obtaining a plurality of corresponding collinearity equations from the plurality of homonymous points and combining them into an equation system;
and solving the equation system to obtain the current position and attitude of the aircraft.
In one embodiment, correcting the aircraft auxiliary parameters with Kalman filtering according to the visual navigation result includes:
performing optimal state estimation at the current time with Kalman filtering, taking the position and attitude of the aircraft at the current time as the state quantities and the visual navigation result obtained at the most recent time together with the aircraft auxiliary parameters at the current time as the observations, and predicting the optimal state quantities at the current time from the observations, the optimal state being the optimal position and attitude of the aircraft at the current time.
An aircraft visual navigation device based on deep learning matching and Kalman filtering, the device comprising:
a data acquisition module for acquiring an aerial real-time image and real-time-updated aircraft auxiliary parameters;
a candidate matching image pair obtaining module for performing coarse matching on the aerial real-time image and a pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same positions;
a high-dimensional feature extraction module for inputting each candidate matching image pair into a trained precise matching network to extract high-dimensional features of the aerial real-time image and the satellite reference image in each candidate matching image pair;
an image precise matching module for precisely matching the aerial real-time image and the satellite reference image in each candidate matching image pair using a fast KNN matching algorithm according to the high-dimensional features, obtaining a plurality of precise homonymous point pairs between the aerial real-time image and the satellite reference image and the correspondence between the precise homonymous point pairs;
a visual navigation result obtaining module for obtaining precise coordinates of the homonymous points in the aerial real-time image according to the correspondence between the precise homonymous point pairs, computing the current position and attitude of the aircraft from the precise coordinates of the plurality of homonymous points, and taking the aircraft position and attitude obtained from the aerial real-time image as the visual navigation result;
and an inertial navigation correction module for evaluating the accumulated error of inertial navigation against the visual navigation result, correcting the aircraft auxiliary parameters with Kalman filtering according to the visual navigation result when the navigation error is greater than a preset threshold, and performing inertial navigation with the corrected aircraft auxiliary parameters.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, performs the steps of:
acquiring an aerial real-time image and real-time-updated aircraft auxiliary parameters;
performing coarse matching on the aerial real-time image and a pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same positions;
inputting each candidate matching image pair into a trained precise matching network to extract high-dimensional features of the aerial real-time image and the satellite reference image in each candidate matching image pair; according to the high-dimensional features, precisely matching the aerial real-time image and the satellite reference image in each candidate matching image pair using a fast KNN matching algorithm to obtain a plurality of precise homonymous point pairs between the aerial real-time image and the satellite reference image and the correspondence between the precise homonymous point pairs;
obtaining precise coordinates of the homonymous points in the aerial real-time image according to the correspondence between the precise homonymous point pairs, computing the current position and attitude of the aircraft from the precise coordinates of the plurality of homonymous points, and taking the aircraft position and attitude obtained from the aerial real-time image as the visual navigation result;
and evaluating the accumulated error of inertial navigation against the visual navigation result, correcting the aircraft auxiliary parameters with Kalman filtering according to the visual navigation result when the navigation error is greater than a preset threshold, and performing inertial navigation with the corrected aircraft auxiliary parameters.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an aerial real-time image and real-time-updated aircraft auxiliary parameters;
performing coarse matching on the aerial real-time image and a pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same positions;
inputting each candidate matching image pair into a trained precise matching network to extract high-dimensional features of the aerial real-time image and the satellite reference image in each candidate matching image pair; according to the high-dimensional features, precisely matching the aerial real-time image and the satellite reference image in each candidate matching image pair using a fast KNN matching algorithm to obtain a plurality of precise homonymous point pairs between the aerial real-time image and the satellite reference image and the correspondence between the precise homonymous point pairs;
obtaining precise coordinates of the homonymous points in the aerial real-time image according to the correspondence between the precise homonymous point pairs, computing the current position and attitude of the aircraft from the precise coordinates of the plurality of homonymous points, and taking the aircraft position and attitude obtained from the aerial real-time image as the visual navigation result;
and evaluating the accumulated error of inertial navigation against the visual navigation result, correcting the aircraft auxiliary parameters with Kalman filtering according to the visual navigation result when the navigation error is greater than a preset threshold, and performing inertial navigation with the corrected aircraft auxiliary parameters.
According to the aircraft visual navigation method based on deep learning matching and Kalman filtering, an aerial real-time image and real-time-updated aircraft auxiliary parameters are acquired; coarse matching is performed first, and the candidate matching image pairs obtained after coarse matching are input into the precise matching network to extract high-dimensional features; according to the high-dimensional features, a fast matching algorithm finds the locations on the satellite reference image that precisely match positions in the aerial real-time image, yielding the position coordinates of a plurality of homonymous points in the aerial real-time image; the current position and attitude of the aircraft, i.e., the visual navigation result, are computed from the homonymous point coordinates; inertial navigation performed with the aircraft auxiliary parameters accumulates navigation error, and when the navigation error is greater than a preset threshold, the aircraft auxiliary parameters are corrected with Kalman filtering according to the visual navigation result, and inertial navigation is performed with the corrected aircraft auxiliary parameters.
Drawings
FIG. 1 is a flow diagram of an aircraft visual navigation method based on deep learning matching and Kalman filtering in one embodiment;
FIG. 2 is a flow chart of obtaining a plurality of homonymous point pairs and the correspondence between them from heterogeneous images in one embodiment;
FIG. 3 is a flow diagram of a method of training a heterogeneous feature extraction network in one embodiment;
FIG. 4 is a schematic diagram of data flow during training of a heterogeneous feature extraction network in one embodiment;
FIG. 5 is a flow chart of an aircraft visual navigation method based on deep learning matching and Kalman filtering in another embodiment;
FIG. 6 is a block diagram of an aircraft visual navigation device based on deep learning matching and Kalman filtering in one embodiment;
fig. 7 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
As shown in fig. 1, there is provided an aircraft visual navigation method based on deep learning matching and kalman filtering, comprising the steps of:
Step S100, acquiring an aerial real-time image and real-time-updated aircraft auxiliary parameters;
step S110, performing coarse matching on the aerial real-time image and a pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same positions;
step S120, inputting each candidate matching image pair into a trained precise matching network to extract high-dimensional features of the aerial real-time image and the satellite reference image in each candidate matching image pair;
step S130, according to the high-dimensional features, precisely matching the aerial real-time image and the satellite reference image in each candidate matching image pair using a fast KNN matching algorithm to obtain a plurality of precise homonymous point pairs between the aerial real-time image and the satellite reference image and the correspondence between the precise homonymous point pairs;
step S140, obtaining precise coordinates of the homonymous points in the aerial real-time image according to the correspondence between the precise homonymous point pairs, computing the current position and attitude of the aircraft from the precise coordinates of the plurality of homonymous points, and taking the aircraft position and attitude obtained from the aerial real-time image as the visual navigation result;
and step S150, evaluating the accumulated error of inertial navigation against the visual navigation result, correcting the aircraft auxiliary parameters with Kalman filtering according to the visual navigation result when the navigation error is greater than a preset threshold, and performing inertial navigation with the corrected aircraft auxiliary parameters.
In the prior art, when onboard inertial navigation data alone is used for navigation, navigation errors accumulate over time and cause course deviation of the aircraft. To address this, an end-to-end deep learning network for extracting cross-modal consistency features is adopted: for optical aerial images and satellite reference images acquired at different times, it extracts stable consistency features between the two cross-modal images, including similar stable structural information and intensity information invariant to relative gray-scale distortion. During flight, the deep learning network locates the aircraft's current position from the aerial image obtained in real time; the positioning result is used to correct the accumulated error of the onboard inertial navigation data, and the corrected inertial navigation data is used for accurate navigation. In this method, the two matching stages, coarse matching followed by precise matching, further improve real-time positioning accuracy.
In step S100, while the aircraft is flying, the ground can be photographed with an onboard camera to obtain aerial real-time images, and navigation is performed with real-time-updated aircraft auxiliary parameters. The aircraft auxiliary parameters include the aircraft inertial navigation parameters and the aircraft altitude data. The aircraft inertial navigation parameters are the data output by the gyroscope and accelerometer mounted on the aircraft; when the visual navigation result is later used for correction, it is actually these inertial navigation parameters that are corrected.
In this embodiment, the inertial navigation data is not corrected with aerial real-time images continuously, but only when the error is greater than a preset threshold. An aerial real-time image can therefore be acquired, and the aircraft position at that moment computed, when the error exceeds the threshold; alternatively, aerial real-time images can be acquired at a set interval with the current position computed each time, and when the inertial navigation error exceeds the threshold, the aircraft position obtained at the nearest time is used to correct the inertial navigation error.
In step S110, coarse matching of the aerial real-time image against the pre-stored satellite reference image, which preliminarily yields a plurality of candidate matching image pairs corresponding to the same positions between the two, includes: inputting the aerial real-time image and the pre-stored satellite reference image into a trained heterogeneous feature extraction network to extract their consistency features; coarsely matching the two images using a fast KNN matching algorithm according to the consistency features, obtaining a plurality of candidate homonymous point pairs between them and the correspondence between the candidate homonymous point pairs; and cropping the aerial real-time image and the satellite reference image to a preset size according to the candidate homonymous point pairs and their correspondence, obtaining a plurality of candidate matching image pairs corresponding to the candidate homonymous point pairs.
During coarse matching, the aerial real-time image and the pre-stored satellite reference image are input into the trained heterogeneous feature extraction network to extract the consistency features between them. The aerial real-time image is an optical image acquired by a camera at the current flight time, whereas the satellite reference image was acquired by a satellite at an earlier time, so directly matching such heterogeneous images is inaccurate. The two images are therefore projected by the feature extraction network into the same low-dimensional space, where their consistency features are obtained, and subsequent processing proceeds on these features.
Then, a fast KNN matching algorithm finds, among the consistency features of the satellite reference image, the nearest neighbors of the consistency features of the aerial real-time image, thereby determining roughly matching regions in the satellite reference image and the aerial real-time image and yielding the correspondence of a plurality of homonymous point pairs between the two images.
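The patent does not fix an implementation for this fast KNN matching step. The following minimal Python sketch, offered only as an illustration under assumed per-point descriptors, performs a brute-force k = 2 nearest-neighbor search with a ratio test; the ratio value and all names are assumptions, and a KD-tree or FLANN index would replace the linear scan for speed:

import numpy as np

def knn_match(feats_rt, feats_ref, ratio=0.8):
    """Brute-force KNN (k = 2) with a ratio test.

    feats_rt:  (N, D) consistency-feature descriptors from the aerial real-time image
    feats_ref: (M, D) descriptors from the satellite reference image
    Returns index pairs (i, j) of candidate homonymous points.
    """
    matches = []
    for i, f in enumerate(feats_rt):
        d = np.linalg.norm(feats_ref - f, axis=1)  # distances to every reference feature
        j1, j2 = np.argsort(d)[:2]                 # two nearest neighbors
        if d[j1] < ratio * d[j2]:                  # keep only unambiguous matches
            matches.append((i, j1))
    return matches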
The process of inputting the aerial real-time image and the pre-stored satellite reference image into the trained heterogeneous feature extraction network to extract the consistency features between them, and then applying the fast KNN matching algorithm, is shown in fig. 2.
In this embodiment, to further improve positioning accuracy, after coarse matching the corresponding positions are cropped from the aerial real-time image and the pre-stored satellite reference image using the obtained homonymous point pairs and the correspondence between them, yielding image matching pairs, where each image matching pair comprises an aerial real-time image block and a satellite reference image block of the same size corresponding to the same position.
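As a hedged illustration of this cropping step, the sketch below cuts same-size patches centered on each candidate homonymous point; the 128-pixel patch size and integer pixel coordinates are assumed values, not specified by the patent:

def crop_candidate_pairs(img_rt, img_ref, point_pairs, size=128):
    """Cut same-size patches around each candidate homonymous point pair."""
    half = size // 2
    pairs = []
    for (x_rt, y_rt), (x_ref, y_ref) in point_pairs:
        p_rt = img_rt[y_rt - half:y_rt + half, x_rt - half:x_rt + half]
        p_ref = img_ref[y_ref - half:y_ref + half, x_ref - half:x_ref + half]
        if p_rt.shape[:2] == (size, size) and p_ref.shape[:2] == (size, size):
            pairs.append((p_rt, p_ref))            # drop points too close to the border
    return pairs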
In step S120, the image matching pairs are input into a trained precise matching network for a second round of feature extraction, where the precise matching network is a MatchNet network.
In step S130, the fast KNN algorithm is applied again: using the extracted high-dimensional features, the similarity between the two images in each image matching pair is compared to decide whether they belong to the same scene.
After these two rounds of position matching, precise homonymous point pairs and the precise correspondence between them are obtained; since every location in the satellite reference image has precise position coordinates, the precise position coordinates of the homonymous points in the aerial real-time image can be obtained.
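The patent does not detail how pixel positions in the georeferenced satellite reference image are converted to geographic coordinates; assuming a GDAL-style affine geotransform, a minimal sketch is:

def pixel_to_geo(col, row, gt):
    """Map a reference-image pixel (column, row) to geographic coordinates.

    gt is an assumed six-element affine geotransform:
    (x_origin, pixel_width, row_rotation, y_origin, col_rotation, pixel_height).
    """
    X = gt[0] + col * gt[1] + row * gt[2]
    Y = gt[3] + col * gt[4] + row * gt[5]
    return X, Y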
In step S140, computing the current position and attitude of the aircraft from the coordinates of the plurality of homonymous points includes: constructing a collinearity equation for each homonymous point, based on the central-projection imaging relation of the optoelectronic camera, from the coordinates of the homonymous point in the aerial real-time image and the aircraft auxiliary parameters at the current time; obtaining a plurality of corresponding collinearity equations from the plurality of homonymous points and combining them into an equation system; and solving the equation system to obtain the current position and attitude of the aircraft.
Specifically, the current pose of the aircraft is estimated based on the multi-point perspective relationship. Assume the focal length of the aircraft's high-definition camera is $f$. In the world coordinate system $O_w\text{-}X_wY_wZ_w$, the camera optical center $S$ has inertial navigation coordinates $(X_S, Y_S, Z_S)$, obtained from the aircraft's inertial parameters according to the installation relation between the camera and the inertial navigation module. The direction angles $(\varphi, \omega, \kappa)$ of the optical axis can be determined using the optoelectronic pod platform attitude data.
Assume a matching point (i.e., homonymous point) $P$ in the captured aerial real-time image has coordinates $(X, Y, Z)$, a known quantity obtained after matching from the geographic information of the satellite reference image and the ground elevation data; the algorithm needs to correct the inertial navigation data $(X_S, Y_S, Z_S)$ and $(\varphi, \omega, \kappa)$.
Assume the real-time image point corresponding to point $P$ has coordinates $(x, y)$ in the image coordinate system $O\text{-}XY$, a known quantity obtained by the image matching algorithm. From the central-projection imaging relation of the camera, the basic relationship in photogrammetry is represented by the collinearity equations:

$$x = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}, \qquad y = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)} \tag{1}$$

In equation (1), $a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, c_3$ are the elements of the rotation matrix $R$ determined by the direction angles $(\varphi, \omega, \kappa)$, namely:

$$R = \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}$$

Assume that $m$ image matching points and their corresponding spatial position coordinates $(X, Y, Z)$ can be obtained by image matching. Each matching point yields 2 equations according to the collinearity equations, and all the equations are combined into an equation system. With enough matching points, an overdetermined system in the six unknown parameters of the aircraft position and attitude is obtained, and its least-squares solution yields the estimated position and attitude of the aircraft.
In step S150, an error threshold T is formulated in combination with the inertial navigation device parameters. During aircraft navigation, the inertial navigation result is compared against the error threshold: while the error is smaller than the given threshold T, navigation uses inertial navigation data only; when the error is greater than the given threshold T, the visual navigation result is used to correct the carrier's inertial navigation data, and Kalman filtering performs optimal state estimation to achieve high-precision, long-endurance error correction.
Specifically, correcting the aircraft auxiliary parameters with Kalman filtering according to the visual navigation result includes: performing optimal state estimation at the current time with Kalman filtering, taking the position and attitude of the aircraft at the current time as the state quantities and the visual navigation result obtained at the most recent time together with the aircraft auxiliary parameters at the current time as the observations, and predicting the optimal state quantities at the current time from the observations, the optimal state being the optimal position and attitude of the aircraft at the current time.
Further, the Kalman filtering model is designed as follows.

Time update:

$$\hat{x}_k^- = A\,\hat{x}_{k-1} + B\,u_{k-1}, \qquad P_k^- = A\,P_{k-1}A^{\mathsf T} + Q$$

Measurement update:

$$K_k = P_k^- H^{\mathsf T}\big(H P_k^- H^{\mathsf T} + R\big)^{-1}, \qquad \hat{x}_k = \hat{x}_k^- + K_k\big(z_k - H\,\hat{x}_k^-\big), \qquad P_k = \big(I - K_k H\big)P_k^-$$

In the above Kalman filter model, $\hat{x}_{k-1}$ and $\hat{x}_k^-$ are the state predictions at times $k-1$ and $k$, respectively; $\hat{x}_k$ is the optimal state estimate; $P$ is the state estimate covariance; $K_k$ is the Kalman gain; $u$ is the control quantity; $z$ is the observation; $A$ is the state transition matrix; $B$ is the control input matrix; $H$ is the state observation matrix; $R$ is the instrument observation noise covariance; and $Q$ is the system process noise covariance.
Specifically, the parameter settings of the Kalman filter include the process noise covariance matrix $Q$, the observation noise covariance matrix $R$, the initial state vector $\hat{x}_0$, and the state covariance matrix $P_0$.
The $Q$ matrix is set by considering the aircraft's motion pattern and the noise characteristics of the inertial navigation device, and can be estimated from experimental data. The $R$ matrix is set by considering the noise characteristics of the visual measurement and the inertial navigation device, and can likewise be estimated from experimental data. The initial state vector $\hat{x}_0$ and state covariance matrix $P_0$ can be estimated from the visual measurements and inertial navigation data in the preceding steps.
In practical application, Kalman filtering can thus be used for optimal state estimation, fusing the visual navigation and inertial navigation data to achieve high-precision, long-endurance error correction.
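As a hedged illustration, the sketch below implements the time- and measurement-update equations above in NumPy and wraps them in a hypothetical navigation loop; the helper objects (ins, camera, visual_localize) are assumptions for illustration, not part of the patent:

import numpy as np

class KalmanFuser:
    """Linear Kalman filter implementing the update equations above."""
    def __init__(self, A, B, H, Q, R, x0, P0):
        self.A, self.B, self.H, self.Q, self.R = A, B, H, Q, R
        self.x, self.P = x0, P0

    def predict(self, u):
        # time update: propagate the state with the inertial control input u
        self.x = self.A @ self.x + self.B @ u
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x

    def correct(self, z):
        # measurement update with the visual navigation observation z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x

def navigation_step(fuser, ins, camera, T):
    """One cycle of the hybrid loop: inertial prediction plus a visual fix
    whenever the accumulated error estimate exceeds the threshold T."""
    pose = fuser.predict(ins.read_control())
    if ins.accumulated_error() > T:
        z = visual_localize(camera.capture())      # two-stage matching + pose solving
        pose = fuser.correct(z)
        ins.reset_to(pose)                         # corrected parameters feed inertial navigation
    return pose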
As shown in fig. 3, there is also provided a method of training a heterogeneous feature extraction network, comprising:
Step S200, acquiring a satellite image set and an optical image set captured over the same area in different time periods;
step S210, constructing a training data set from the satellite image set and the optical image set, the training data set comprising a plurality of matched image pairs, each matched image pair comprising a satellite image and an optical image whose imaging areas and imaging angles match;
step S220, inputting each matched image pair into the heterogeneous feature extraction network, which maps the satellite image and the optical image in the matched image pair into a low-dimensional space to obtain correlated consistency features for each;
step S230, inputting the consistency features into a metric network, which performs similarity measurement on the consistency features to obtain the consistency probability of the features extracted from the matched image pair;
and step S240, computing a loss function from the consistency probability and the consistency features of the matched image pair, and adjusting the parameters of the heterogeneous feature extraction network with the loss function until it converges, yielding the trained heterogeneous feature extraction network.
In this embodiment, the construction of the training data set accounts for the fact that the imaging angle of the aerial image is not fixed (for example, oblique imaging by the onboard optoelectronic pod makes consistency feature extraction difficult), so an image set obtained by photographing the ground from multiple aircraft viewpoints is established. Satellite reference images captured in different time periods and optical real-time images captured by the aircraft are acquired, and images of the same area, different imaging angles, and different modalities are fused, compared, and analyzed. An optical image and a satellite image of the same area at the same angle are constructed as a matched image pair.
In this embodiment, the heterogeneous feature extraction network adopts a MatchNet twin network, which comprises two deep convolutional neural networks (CNN feature extraction networks), each consisting of convolutional layers, max pooling layers, and a fully connected bottleneck layer.
Specifically, when training the MatchNet twin network, the optical image and the satellite image in each matched image pair are input into their corresponding CNN feature extraction networks, which map the heterogeneous images into a low-dimensional space. The two feature extraction CNNs, network A and network B, may or may not share parameters; when parameters are shared, the learned deep consistency feature representation is shared between the aircraft view and the satellite view.
Then, after the features of the two heterogeneous images in a matched image pair are obtained, they are input into a metric network, which comprises stacked fully connected layers and a softmax layer and outputs the consistency probability of the two features. During training, the metric network measures the similarity between the two extracted features to compute the consistency probability, and the parameters of the heterogeneous feature extraction network are adjusted by computing the loss function.
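A minimal PyTorch sketch of such a twin feature tower and stacked metric network follows; the layer sizes and feature dimension are illustrative assumptions, not the patent's architecture:

import torch
import torch.nn as nn

class FeatureTower(nn.Module):
    """One branch of the twin network: conv + max-pool layers and an FC bottleneck."""
    def __init__(self, dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 7, padding=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.bottleneck = nn.LazyLinear(dim)       # fully connected bottleneck layer

    def forward(self, x):
        return self.bottleneck(self.conv(x).flatten(1))

class MetricNet(nn.Module):
    """Stacked fully connected layers and a softmax yielding a consistency probability."""
    def __init__(self, dim=512):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, fa, fb):
        logits = self.fc(torch.cat([fa, fb], dim=1))
        return torch.softmax(logits, dim=1)[:, 1]  # probability that the pair is consistent

tower = FeatureTower()    # reusing one tower for both views corresponds to shared parameters
metric = MetricNet()
p = metric(tower(torch.randn(4, 1, 64, 64)), tower(torch.randn(4, 1, 64, 64)))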
In this embodiment, an improved distance-weighted contrastive loss function is used, expressed as:

$$L = \frac{1}{N}\sum_{i=1}^{N} \frac{R - r_i}{R}\Big[\, y_i\, D^2\!\big(f_A^{(i)}, f_B^{(i)}\big) + (1 - y_i)\,\max\!\big(m - D\big(f_A^{(i)}, f_B^{(i)}\big),\ 0\big)^{2} \Big] \tag{2}$$

In equation (2), $R$ is a constant denoting the maximum distance from the center point to the image edge in the consistency feature image, $r$ is the distance of the current pixel from the image center point, $y$ denotes the consistency probability, $m$ is an integer representing the degree of difference of the matched image pair, $D(f_A, f_B)$ is the Euclidean distance between the consistency features $f_A$ and $f_B$ of the satellite image and the optical image in the matched image pair, and $N$ is the number of training samples.
The loss computation adopts a weighting scheme in which pixels closer to the center of the feature image receive higher weight in the contrastive loss: when $r = 0$, the weight $(R - r)/R$ of the current pixel is highest, and when $r = R$ it is lowest, so that pixels around the center point of the feature image are emphasized during training.
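Interpreting equation (2) as a per-pixel weighted contrastive loss over spatial consistency feature maps, a hedged PyTorch sketch follows; treating y as a hard match/non-match label is one possible reading of the consistency probability:

import torch

def distance_weighted_contrastive_loss(fa, fb, y, m=1.0):
    """Equation (2): contrastive loss with center-weighted pixels.

    fa, fb: (N, C, H, W) consistency feature maps from the two branches
    y:      (N,) 1 for matching pairs, 0 for non-matching pairs
    m:      margin separating non-matching features
    """
    n, _, h, w = fa.shape
    ii, jj = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = torch.sqrt((ii - cy) ** 2 + (jj - cx) ** 2)   # pixel distance to the image center
    wgt = (r.max() - r) / r.max()                     # (R - r) / R: center pixels weigh most

    d = torch.norm(fa - fb, dim=1)                    # per-pixel Euclidean distance, (N, H, W)
    y = y.view(-1, 1, 1).float()
    pair_term = y * d ** 2 + (1 - y) * torch.clamp(m - d, min=0) ** 2
    return (wgt * pair_term).mean()

During training, this loss would be computed on the twin network's outputs for each matched image pair and back-propagated to adjust the heterogeneous feature extraction network's parameters.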
In this embodiment, the training of the heterogeneous feature extraction network can also be represented by fig. 4, in which the specific structure of the heterogeneous feature extraction network is given.
In this embodiment, the flow of the aircraft visual navigation method based on deep learning matching and Kalman filtering can also be represented as in fig. 5, where the heterogeneous matching module implements the contents of steps S100 to S120 and the position resolving module implements the contents of steps S130 to S140.
In the above aircraft visual navigation method based on deep learning matching and Kalman filtering, an aerial real-time image and real-time-updated aircraft auxiliary parameters are acquired, and the aerial real-time image is matched against the pre-stored satellite reference image in two stages. In the coarse matching stage, a trained heterogeneous feature extraction network extracts consistency features of the aerial real-time image and the satellite reference image, and according to these features a fast KNN matching algorithm finds the locations on the satellite reference image that match positions in the aerial real-time image, yielding a number of homonymous point pairs and the correspondence between them, from which image blocks at the same corresponding positions are cut from both images. The matched image blocks are then used for high-dimensional feature extraction and precise matching, giving precise coordinates of a plurality of homonymous points, from which the current position and attitude of the aircraft, i.e., the visual navigation result, are computed. When the accumulated inertial navigation error exceeds a preset threshold, the aircraft auxiliary parameters are corrected with Kalman filtering according to the visual navigation result, and inertial navigation proceeds with the corrected parameters, achieving all-day, all-weather, high-precision autonomous navigation.
It should be understood that, although the steps in the flowcharts of figs. 1 and 3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 1 and 3 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times, and which are not necessarily executed sequentially but may be executed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, an aircraft visual navigation device based on deep learning matching and Kalman filtering is provided, comprising: a data acquisition module 300, a candidate matching image pair obtaining module 310, a high-dimensional feature extraction module 320, an image precise matching module 330, a visual navigation result obtaining module 340, and an inertial navigation correction module 350, wherein:
the data acquisition module 300 is used for acquiring an aerial real-time image and real-time-updated aircraft auxiliary parameters;
the candidate matching image pair obtaining module 310 is used for performing coarse matching on the aerial real-time image and a pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same positions;
the high-dimensional feature extraction module 320 is used for inputting each candidate matching image pair into a trained precise matching network to extract high-dimensional features of the aerial real-time image and the satellite reference image in each candidate matching image pair;
the image precise matching module 330 is used for precisely matching the aerial real-time image and the satellite reference image in each candidate matching image pair using a fast KNN matching algorithm according to the high-dimensional features, obtaining a plurality of precise homonymous point pairs between the aerial real-time image and the satellite reference image and the correspondence between the precise homonymous point pairs;
the visual navigation result obtaining module 340 is used for obtaining precise coordinates of the homonymous points in the aerial real-time image according to the correspondence between the precise homonymous point pairs, computing the current position and attitude of the aircraft from the precise coordinates of the plurality of homonymous points, and taking the aircraft position and attitude obtained from the aerial real-time image as the visual navigation result;
and the inertial navigation correction module 350 is used for evaluating the accumulated error of inertial navigation against the visual navigation result, correcting the aircraft auxiliary parameters with Kalman filtering according to the visual navigation result when the navigation error is greater than a preset threshold, and performing inertial navigation with the corrected aircraft auxiliary parameters.
For specific limitations of the aircraft visual navigation device based on deep learning matching and Kalman filtering, reference may be made to the above description of the aircraft visual navigation method based on deep learning matching and Kalman filtering, which is not repeated here. Each module in the above device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, the processor of the computer device, or stored in software in the memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a method of visual navigation of an aircraft based on deep learning matching and kalman filtering. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, performing the steps of:
acquiring an aerial real-time image and real-time-updated aircraft auxiliary parameters;
performing coarse matching on the aerial real-time image and a pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same positions;
inputting each candidate matching image pair into a trained precise matching network to extract high-dimensional features of the aerial real-time image and the satellite reference image in each candidate matching image pair; according to the high-dimensional features, precisely matching the aerial real-time image and the satellite reference image in each candidate matching image pair using a fast KNN matching algorithm to obtain a plurality of precise homonymous point pairs between the aerial real-time image and the satellite reference image and the correspondence between the precise homonymous point pairs;
obtaining precise coordinates of the homonymous points in the aerial real-time image according to the correspondence between the precise homonymous point pairs, computing the current position and attitude of the aircraft from the precise coordinates of the plurality of homonymous points, and taking the aircraft position and attitude obtained from the aerial real-time image as the visual navigation result;
and evaluating the accumulated error of inertial navigation against the visual navigation result, correcting the aircraft auxiliary parameters with Kalman filtering according to the visual navigation result when the navigation error is greater than a preset threshold, and performing inertial navigation with the corrected aircraft auxiliary parameters.
In one embodiment, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an aerial photography real-time image and real-time updated aircraft auxiliary parameters;
performing coarse matching processing on the aerial photography real-time image and a pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same position;
inputting each candidate matching image pair into a trained accurate matching network to extract high-dimensional features of the aerial photography real-time image and the satellite reference image corresponding to each candidate matching image pair; performing, according to the high-dimensional features, accurate matching on the aerial photography real-time image and the satellite reference image in the corresponding candidate matching image pair with a fast KNN matching algorithm, to obtain a plurality of accurate homonymous point pairs between the aerial photography real-time image and the satellite reference image and the correspondence between the accurate homonymous point pairs;
obtaining the accurate coordinates of the homonymous points in the aerial photography real-time image according to the correspondence between the accurate homonymous point pairs, calculating the current position and attitude of the aircraft from the accurate coordinates of the plurality of homonymous points, and taking the aircraft position and attitude obtained from the aerial photography real-time image as the visual navigation result;
and evaluating the accumulated error of inertial navigation according to the visual navigation result; when the navigation error is greater than a preset threshold, correcting the aircraft auxiliary parameters by Kalman filtering according to the visual navigation result, and performing inertial navigation with the corrected aircraft auxiliary parameters.
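By way of illustration and not limitation, the correction step recited above can be sketched as a standard Kalman update in which the inertially propagated state is fused with the visual navigation observation. The following Python sketch is illustrative only; the state layout, matrices, and function names are assumptions rather than the patented implementation:

```python
import numpy as np

def kalman_correct(x_pred, P_pred, z_vis, H, R_vis):
    """One Kalman update: fuse the inertially propagated state
    (position + attitude) with the visual navigation observation."""
    y = z_vis - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R_vis             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_corr = x_pred + K @ y                  # corrected (optimal) state
    P_corr = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_corr, P_corr

# Illustrative use: a 6-dimensional state (3 position + 3 attitude angles)
# observed directly by the visual navigation result.
x, P = np.zeros(6), np.eye(6)
H, R_vis = np.eye(6), 0.01 * np.eye(6)
z = np.array([100.0, 200.0, 1500.0, 0.01, 0.02, 1.57])
x_new, P_new = kalman_correct(x, P, z, H, R_vis)
```

Consistent with the steps above, such a correction would be applied only when the accumulated inertial error exceeds the preset threshold; otherwise the onboard inertial solution is used unchanged.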
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by a computer program stored on a non-volatile computer-readable storage medium, which, when executed, may perform the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as no contradiction exists in a combination of these technical features, the combination should be considered within the scope of this specification.
The above examples illustrate only a few embodiments of the application; although they are described specifically and in detail, they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the application, all of which fall within its scope of protection. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (7)

1. An aircraft visual navigation method based on deep learning matching and Kalman filtering, which is characterized by comprising the following steps:
acquiring an aerial photography real-time image and real-time updated aircraft auxiliary parameters;
performing coarse matching processing on the aerial photography real-time image and a pre-stored satellite reference image to obtain a plurality of candidate matching image pairs corresponding to the same position, which specifically comprises: inputting the aerial photography real-time image and the pre-stored satellite reference image into a trained heterogeneous feature extraction network to extract consistency features of the aerial photography real-time image and the satellite reference image; performing coarse matching on the aerial photography real-time image and the satellite reference image with a fast KNN matching algorithm according to the consistency features, to obtain a plurality of candidate homonymous point pairs between the aerial photography real-time image and the satellite reference image, together with the correspondence between the candidate homonymous point pairs; and cropping the aerial photography real-time image and the satellite reference image to a preset size according to the candidate homonymous point pairs and the correspondence, to obtain a plurality of candidate matching image pairs corresponding to the candidate homonymous point pairs;
inputting each candidate matching image pair into a trained accurate matching network to extract high-dimensional features of the aerial photography real-time image and the satellite reference image corresponding to each candidate matching image pair;
performing, according to the high-dimensional features, accurate matching on the aerial photography real-time image and the satellite reference image in the corresponding candidate matching image pair with a fast KNN matching algorithm, to obtain a plurality of accurate homonymous point pairs between the aerial photography real-time image and the satellite reference image and the correspondence between the accurate homonymous point pairs;
obtaining the accurate coordinates of the homonymous points in the aerial photography real-time image according to the correspondence between the accurate homonymous point pairs, calculating the current position and attitude of the aircraft from the accurate coordinates of the plurality of homonymous points, and taking the aircraft position and attitude obtained from the aerial photography real-time image as the visual navigation result;
evaluating the accumulated error of inertial navigation according to the visual navigation result; navigating with the onboard inertial navigation device when the navigation error is less than or equal to a preset threshold; and, when the navigation error is greater than the preset threshold, correcting the aircraft auxiliary parameters by Kalman filtering according to the visual navigation result and performing inertial navigation with the corrected aircraft auxiliary parameters, which comprises: performing optimal state estimation at the current moment by Kalman filtering, taking the position and attitude of the aircraft at the current moment as the state quantity, taking the visual navigation result obtained at the latest moment and the aircraft auxiliary parameters at the current moment as the observations, and predicting the optimal state quantity at the current moment from the observations, wherein the optimal state is the optimal position and optimal attitude of the aircraft at the current moment.
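As a concrete illustration of the coarse matching step recited in claim 1, the sketch below matches consistency-feature descriptors with OpenCV's FLANN-based fast KNN matcher and crops preset-size candidate pairs around each candidate homonymous point pair. The descriptors are assumed to come from the trained heterogeneous feature extraction network; all function names and parameter values here are illustrative assumptions, not the patent's own implementation:

```python
import cv2
import numpy as np

def coarse_match(desc_air, pts_air, desc_sat, pts_sat, ratio=0.75):
    """Fast KNN (k=2) matching of consistency-feature descriptors with a
    Lowe ratio test; returns candidate homonymous point pairs as
    ((x, y) in the aerial image, (x, y) in the satellite image)."""
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),  # KD-tree index
                                  dict(checks=50))
    knn = flann.knnMatch(desc_air.astype(np.float32),
                         desc_sat.astype(np.float32), k=2)
    return [(pts_air[m.queryIdx], pts_sat[m.trainIdx])
            for m, n in knn if m.distance < ratio * n.distance]

def crop_candidate_pair(img_air, img_sat, pt_air, pt_sat, size=128):
    """Crop a preset-size candidate matching image pair centered on one
    candidate homonymous point pair (clamped to the image borders)."""
    def crop(img, pt):
        h, w = img.shape[:2]
        x0 = min(max(int(pt[0]) - size // 2, 0), max(w - size, 0))
        y0 = min(max(int(pt[1]) - size // 2, 0), max(h - size, 0))
        return img[y0:y0 + size, x0:x0 + size]
    return crop(img_air, pt_air), crop(img_sat, pt_sat)
```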
2. The aircraft visual navigation method of claim 1, wherein the accurate matching network is a MatchNet network.
3. The aircraft visual navigation method of claim 2, further comprising training the heterogeneous feature extraction network:
acquiring a satellite image set and an optical image set captured over the same area in different time periods;
constructing a training data set from the satellite image set and the optical image set, wherein the training data set comprises a plurality of matched image pairs, each matched image pair comprising a satellite image and an optical image whose imaging areas and imaging angles match;
inputting the matched image pair into the heterogeneous feature extraction network, which maps the satellite image and the optical image of the matched image pair into a low-dimensional space to obtain their respective, mutually correlated consistency features;
inputting the consistency features into a metric network, which performs similarity measurement on the consistency features to obtain the consistency probability of the matched image pair over the extracted features;
and calculating a loss function from the consistency probability and the consistency features of the matched image pair, and adjusting the parameters of the heterogeneous feature extraction network with the loss function until the loss function converges, thereby obtaining the trained heterogeneous feature extraction network.
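A minimal PyTorch sketch of this training procedure follows, assuming a shared-weight (siamese) feature tower and a stacked fully-connected + softmax metric head as in claim 5. The architecture, dimensions, and the stand-in binary cross-entropy loss are the sketch's assumptions; a sketch of the distance-weighted loss of claim 6 appears after that claim:

```python
import torch
import torch.nn as nn

class FeatureTower(nn.Module):
    """Shared-weight (siamese) tower mapping a grayscale satellite or
    optical patch into a low-dimensional consistency-feature space."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim))

    def forward(self, x):
        return self.net(x)

class MetricHead(nn.Module):
    """Stacked fully connected layers + softmax giving the consistency
    probability of a matched image pair (cf. claim 5)."""
    def __init__(self, dim=128):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(),
                                nn.Linear(64, 2))

    def forward(self, f_sat, f_opt):
        logits = self.fc(torch.cat([f_sat, f_opt], dim=1))
        return torch.softmax(logits, dim=1)[:, 1]   # P(consistent)

tower, head = FeatureTower(), MetricHead()
optimizer = torch.optim.Adam(
    list(tower.parameters()) + list(head.parameters()), lr=1e-4)

def train_step(sat, opt_img, label):
    """One optimization step on a batch of image pairs; `label` is 1.0
    for a matching pair, 0.0 otherwise. Binary cross-entropy is used
    here as a stand-in for the claim 6 loss."""
    f_sat, f_opt = tower(sat), tower(opt_img)
    prob = head(f_sat, f_opt)
    loss = nn.functional.binary_cross_entropy(prob, label)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```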
4. The aircraft visual navigation method of claim 3, wherein the heterogeneous feature extraction network employs a MatchNet siamese (twin) network.
5. The aircraft visual navigation method of claim 4, wherein the metric network comprises stacked fully connected layers and a softmax layer.
6. The aircraft visual navigation method of claim 5, wherein the loss function employs a distance-weighted contrastive loss, expressed as:

$$Loss = \frac{1}{N}\sum_{i=1}^{N}\frac{R - r_i}{R}\Big[\,p_i\, d\big(F_s^{(i)}, F_o^{(i)}\big)^2 + \big(1 - p_i\big)\max\!\big(m - d\big(F_s^{(i)}, F_o^{(i)}\big),\, 0\big)^2\Big]$$

where $R$ is a constant denoting the maximum distance from the center point to the image edge in the consistency feature map; $r_i$ denotes the distance of the current pixel point from the center point of the image; $p_i$ denotes said consistency probability; $m$ is an integer representing the degree of difference of said matched image pair; $d(F_s, F_o)$ denotes the Euclidean distance between the consistency features $F_s$ and $F_o$, which respectively denote the consistency features of the satellite image and the optical image in the matched image pair; and $N$ denotes the number of training samples.
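Assuming the distance-weighted contrastive reading given above, a minimal PyTorch sketch of the loss is:

```python
import torch

def distance_weighted_contrastive(f_sat, f_opt, prob, m=1.0, R=128.0, r=None):
    """Distance-weighted contrastive loss under the reading above.
    f_sat, f_opt: (B, D) consistency features; prob: (B,) consistency
    probability from the metric network; m: margin (the integer
    degree-of-difference); R: maximum distance from the feature-map
    center to its edge; r: (B,) distance of each pixel from the image
    center (defaults to 0, i.e. full weight)."""
    if r is None:
        r = torch.zeros(f_sat.size(0), device=f_sat.device)
    d = torch.norm(f_sat - f_opt, dim=1)                  # Euclidean distance
    w = (R - r) / R                                       # distance weighting
    pull = prob * d.pow(2)                                # draw matches together
    push = (1 - prob) * torch.clamp(m - d, min=0).pow(2)  # separate non-matches
    return (w * (pull + push)).mean()                     # average over N samples
```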
7. The aircraft visual navigation method of claim 6, wherein calculating the current position and attitude of the aircraft from the accurate coordinates of the plurality of homonymous points comprises:
constructing collinearity equations for the homonymous points based on the central-projection imaging relation of the electro-optical camera, according to the accurate coordinates of the homonymous points in the aerial photography real-time image and the aircraft auxiliary parameters at the current moment;
obtaining a plurality of corresponding collinearity equations from the plurality of homonymous points and combining these collinearity equations into an equation system;
and solving the equation system to obtain the current position and attitude of the aircraft.
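The stacked collinearity equations of claim 7 amount to the classical perspective-n-point problem; under that reading, a minimal sketch using OpenCV's equivalent solver (not the patent's own formulation) is:

```python
import cv2
import numpy as np

def solve_pose(world_pts, image_pts, K):
    """Recover aircraft position and attitude from homonymous points by
    solving the stacked central-projection (collinearity) constraints;
    needs several (>= 4) well-distributed points.
    world_pts: (N, 3) ground coordinates of the satellite-image points;
    image_pts: (N, 2) pixel coordinates in the aerial real-time image;
    K: 3x3 camera intrinsic matrix."""
    ok, rvec, tvec = cv2.solvePnP(
        world_pts.astype(np.float64), image_pts.astype(np.float64),
        K, None, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose solution did not converge")
    R_cam, _ = cv2.Rodrigues(rvec)          # attitude (rotation matrix)
    position = (-R_cam.T @ tvec).ravel()    # camera center in world frame
    return position, R_cam
```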
CN202310779012.8A 2023-06-29 2023-06-29 Aircraft visual navigation method based on deep learning matching and Kalman filtering Active CN116518981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310779012.8A CN116518981B (en) 2023-06-29 2023-06-29 Aircraft visual navigation method based on deep learning matching and Kalman filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310779012.8A CN116518981B (en) 2023-06-29 2023-06-29 Aircraft visual navigation method based on deep learning matching and Kalman filtering

Publications (2)

Publication Number Publication Date
CN116518981A CN116518981A (en) 2023-08-01
CN116518981B true CN116518981B (en) 2023-09-22

Family

ID=87403229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310779012.8A Active CN116518981B (en) 2023-06-29 2023-06-29 Aircraft visual navigation method based on deep learning matching and Kalman filtering

Country Status (1)

Country Link
CN (1) CN116518981B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103954283A (en) * 2014-04-01 2014-07-30 西北工业大学 Scene matching/visual odometry-based inertial integrated navigation method
CN106708066A (en) * 2015-12-20 2017-05-24 中国电子科技集团公司第二十研究所 Autonomous landing method of unmanned aerial vehicle based on vision/inertial navigation
CN106802149A (en) * 2016-11-29 2017-06-06 南京航空航天大学 Rapid serial images match air navigation aid based on higher-dimension assemblage characteristic
CN107063246A (en) * 2017-04-24 2017-08-18 齐鲁工业大学 A kind of Loosely coupled air navigation aid of vision guided navigation/inertial navigation
US10410328B1 (en) * 2016-08-29 2019-09-10 Perceptin Shenzhen Limited Visual-inertial positional awareness for autonomous and non-autonomous device
KR20200036405A (en) * 2018-09-28 2020-04-07 현대엠엔소프트 주식회사 Apparatus and method for correcting longitudinal position error of fine positioning system
CN111024072A (en) * 2019-12-27 2020-04-17 浙江大学 Satellite map aided navigation positioning method based on deep learning
CN111238488A (en) * 2020-03-18 2020-06-05 湖南云顶智能科技有限公司 Aircraft accurate positioning method based on heterogeneous image matching
CN111504323A (en) * 2020-04-23 2020-08-07 湖南云顶智能科技有限公司 Unmanned aerial vehicle autonomous positioning method based on heterogeneous image matching and inertial navigation fusion
CN113624231A (en) * 2021-07-12 2021-11-09 北京自动化控制设备研究所 Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft
CN114216454A (en) * 2021-10-27 2022-03-22 湖北航天飞行器研究所 Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS rejection environment
CN114689047A (en) * 2022-06-01 2022-07-01 鹏城实验室 Deep learning-based integrated navigation method, device, system and storage medium
CN114998773A (en) * 2022-08-08 2022-09-02 四川腾盾科技有限公司 Characteristic mismatching elimination method and system suitable for aerial image of unmanned aerial vehicle system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103954283A (en) * 2014-04-01 2014-07-30 西北工业大学 Scene matching/visual odometry-based inertial integrated navigation method
CN106708066A (en) * 2015-12-20 2017-05-24 中国电子科技集团公司第二十研究所 Autonomous landing method of unmanned aerial vehicle based on vision/inertial navigation
US10410328B1 (en) * 2016-08-29 2019-09-10 Perceptin Shenzhen Limited Visual-inertial positional awareness for autonomous and non-autonomous device
CN106802149A (en) * 2016-11-29 2017-06-06 南京航空航天大学 Rapid serial images match air navigation aid based on higher-dimension assemblage characteristic
CN107063246A (en) * 2017-04-24 2017-08-18 齐鲁工业大学 A kind of Loosely coupled air navigation aid of vision guided navigation/inertial navigation
KR20200036405A (en) * 2018-09-28 2020-04-07 현대엠엔소프트 주식회사 Apparatus and method for correcting longitudinal position error of fine positioning system
CN111024072A (en) * 2019-12-27 2020-04-17 浙江大学 Satellite map aided navigation positioning method based on deep learning
CN111238488A (en) * 2020-03-18 2020-06-05 湖南云顶智能科技有限公司 Aircraft accurate positioning method based on heterogeneous image matching
CN111504323A (en) * 2020-04-23 2020-08-07 湖南云顶智能科技有限公司 Unmanned aerial vehicle autonomous positioning method based on heterogeneous image matching and inertial navigation fusion
CN113624231A (en) * 2021-07-12 2021-11-09 北京自动化控制设备研究所 Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft
CN114216454A (en) * 2021-10-27 2022-03-22 湖北航天飞行器研究所 Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS rejection environment
CN114689047A (en) * 2022-06-01 2022-07-01 鹏城实验室 Deep learning-based integrated navigation method, device, system and storage medium
CN114998773A (en) * 2022-08-08 2022-09-02 四川腾盾科技有限公司 Characteristic mismatching elimination method and system suitable for aerial image of unmanned aerial vehicle system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on initial alignment technology of strapdown inertial navigation systems based on deep learning; Liu Zhe; Journal of National University of Defense Technology; Vol. 45, No. 2; pp. 15-26 *
A survey of scene matching based visual navigation technology for unmanned aerial vehicles; Zhao Chunhui; Zhou Yihui; Lin Zhao; Hu Jinwen; Pan Quan; Scientia Sinica Informationis (05); full text *
A survey of heterologous image matching aided inertial navigation positioning technology for unmanned aerial vehicles; Luo Shibin; Journal of National University of Defense Technology; Vol. 42, No. 6; pp. 1-10 *

Also Published As

Publication number Publication date
CN116518981A (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN109974693B (en) Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
CN111429574A (en) Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
CN110047108B (en) Unmanned aerial vehicle pose determination method and device, computer equipment and storage medium
CN110799921A (en) Shooting method and device and unmanned aerial vehicle
CN114216454B (en) Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS refusing environment
CN106529538A (en) Method and device for positioning aircraft
CN112184824B (en) Camera external parameter calibration method and device
CN115861860B (en) Target tracking and positioning method and system for unmanned aerial vehicle
CN111623773B (en) Target positioning method and device based on fisheye vision and inertial measurement
WO2021004416A1 (en) Method and apparatus for establishing beacon map on basis of visual beacons
CN111241224B (en) Method, system, computer device and storage medium for target distance estimation
WO2020198963A1 (en) Data processing method and apparatus related to photographing device, and image processing device
CN113551665A (en) High dynamic motion state sensing system and sensing method for motion carrier
CN108444452B (en) Method and device for detecting longitude and latitude of target and three-dimensional space attitude of shooting device
CN117036300A (en) Road surface crack identification method based on point cloud-RGB heterogeneous image multistage registration mapping
CN109883400B (en) Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL
CN114758011A (en) Zoom camera online calibration method fusing offline calibration results
JP2018173882A (en) Information processing device, method, and program
CN116576850B (en) Pose determining method and device, computer equipment and storage medium
CN111598930B (en) Color point cloud generation method and device and terminal equipment
CN113436267A (en) Visual inertial navigation calibration method and device, computer equipment and storage medium
CN111721283B (en) Precision detection method and device for positioning algorithm, computer equipment and storage medium
CN116518981B (en) Aircraft visual navigation method based on deep learning matching and Kalman filtering
CN113227711A (en) Navigation device, navigation parameter calculation method, and program
CN115930937A (en) Multi-sensor simultaneous positioning and mapping method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant