WO2022262164A1 - Strong real-time dual-structure continuous scene fusion matching navigation positioning method and system - Google Patents

Strong real-time dual-structure continuous scene fusion matching navigation positioning method and system

Info

Publication number
WO2022262164A1
WO2022262164A1 (PCT/CN2021/126136)
Authority: WO, WIPO (PCT)
Prior art keywords: real, matching, time, navigation, pose
Application number: PCT/CN2021/126136
Other languages: English (en), French (fr)
Inventors: 胡玉龙, 刘品, 陈皓, 唐艺菁
Original Assignee: 西安微电子技术研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 西安微电子技术研究所
Publication of WO2022262164A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations

Definitions

  • the invention belongs to the technical field of scene matching and navigation, and in particular relates to a strong real-time dual-structure continuous scene fusion matching navigation positioning method and system.
  • the existing scene matching assisted navigation method often adopts the landmark matching positioning mode.
  • the customized reference map is collected in advance through satellites or UAVs, and the navigation adaptability analysis is performed on the reference map to plan the flight path. This leads to the tight coupling between the existing scene-matching-assisted navigation method and track planning.
  • the navigation and positioning ability will be directly lost due to the deviation of the route, and the adaptability to complex confrontation environments is extremely poor.
  • at the same time, the embedded computing capability of current unmanned aerial vehicles is limited, and there are generally differences in time, illumination and shooting angle between the reference map and the real-time image.
  • the technical problem to be solved by the present invention is to provide a strong real-time dual-structure continuous scene fusion matching navigation positioning method and system that overcomes the high coupling between existing scene matching aided navigation and track planning, the inability of discrete landmarks to continuously output navigation data in real time, and the difficulty of obtaining reliable/accurate/timely wide-area reference map data, so as to realize roaming-style, strongly real-time autonomous navigation and positioning decoupled from the planned track in a GPS-denied environment.
  • the present invention adopts following technical scheme:
  • a strong real-time dual-structure continuous scene fusion matching navigation positioning method comprising the following steps:
  • step S2, using the improved method for matching the real-time image with the satellite remote sensing reference map, based on the incremental navigation pose estimated in step S1, optimizing the satellite remote sensing reference map, and performing absolute navigation pose estimation on the real-time image and the optimized satellite remote sensing reference map to obtain the absolute navigation pose;
  • step S3, fusing the incremental navigation pose obtained in step S1 with the absolute navigation pose obtained in step S2, optimizing and updating the navigation pose with the predicted value of the navigation pose at the current moment, obtaining and outputting the navigation pose corresponding to the current real-time image, and realizing strong real-time dual-structure continuous scene fusion matching navigation positioning.
  • step S1 is specifically:
  • S101 collect and transmit real-time maps, and prepare satellite remote sensing reference maps of flight areas;
  • step S102 Perform inter-frame feature matching on the real-time image collected in step S101 and the real-time image of the previous frame to obtain a pose perspective transformation matrix between the real-time image and the real-time image of the previous frame;
  • step S103 Based on the real-time image obtained in step S102 and the pose perspective transformation matrix of the real-time image of the previous frame, the relative pose estimation of the real-time image based on the real-time image of the previous frame is performed to obtain the relative pose of the real-time image;
  • step S104 based on the relative pose of the real-time map obtained in step S103 , combined with the navigation pose of the real-time map of the previous frame, perform incremental navigation pose estimation on the current real-time map to obtain the incremental navigation pose of the current real-time map.
  • initialization is performed, specifically:
  • step S101, the pixel longitude and latitude of the satellite remote sensing reference map are specifically:
  • Xgeo=GeoTransform[0]+Xpixel×GeoTransform[1]+Ypixel×GeoTransform[2]
  • Ygeo=GeoTransform[3]+Xpixel×GeoTransform[4]+Ypixel×GeoTransform[5]
  • Xpixel and Ypixel are respectively the abscissa and ordinate of the pixel whose longitude and latitude are to be solved,
  • Xgeo and Ygeo are respectively the real longitude and latitude of that pixel, and
  • GeoTransform is the conversion matrix from pixel coordinates to geographic coordinates.
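The formulas above are exactly the six-parameter geotransform that GDAL exposes for a raster dataset. The following is a minimal illustrative sketch of reading the reference map and applying it; the file name `reference_map.tif` and the sample pixel are placeholders, and for datasets in a projected CRS the result is a projected coordinate that still has to be converted to longitude/latitude, as the detailed description notes.

```python
from osgeo import gdal

def pixel_to_geo(geotransform, x_pixel, y_pixel):
    """Apply the six-parameter geotransform, i.e. the Xgeo/Ygeo formulas above."""
    x_geo = geotransform[0] + x_pixel * geotransform[1] + y_pixel * geotransform[2]
    y_geo = geotransform[3] + x_pixel * geotransform[4] + y_pixel * geotransform[5]
    return x_geo, y_geo

# usage sketch: open the satellite reference map and query one pixel
ds = gdal.Open("reference_map.tif")      # hypothetical reference-map file
gt = ds.GetGeoTransform()                # (x0, dx, rx, y0, ry, dy)
lon, lat = pixel_to_geo(gt, 1024, 2048)
```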
  • step S102 is specifically:
  • Match the currently read real-time image with the updated real-time image of the previous frame: use the Match method of feature matching to match the descriptors of the two real-time frames and obtain corresponding matching points, align the feature points and perform coordinate conversion, then compute the affine transformation matrix from the feature points; if the fault-tolerance strategy for matching the satellite reference map with the real-time image is satisfied, use the matching information of the most recent real-time frame, otherwise adopt the frame-skipping strategy and use the matching information of an earlier real-time frame.
  • for the fault-tolerance judgment during real-time image frame-to-frame matching, empirical values are set to check the affine transformation matrix. If the fault-tolerance rule is satisfied, a perspective transformation is applied to the inter-frame matching coordinates and the inter-frame real-time image matching information is updated; if the rule is not satisfied, the real-time image descriptors are matched against the descriptors of the inter-frame matching information last updated by a successful match, the affine transformation matrix is computed from the matched feature points, and two perspective transformations are then applied, propagating through the computed affine transformation matrix and the inter-frame matching information transformation matrix last updated by a successful match to obtain the post-matching row and column coordinates.
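As an illustration only, the inter-frame step can be sketched with OpenCV: detect features, match descriptors, estimate an affine matrix, and apply an empirical plausibility check before accepting the match. The detector choice (ORB) and the threshold values are assumptions made for the sketch, not values specified by the patent.

```python
import cv2
import numpy as np

def match_frames(prev_img, curr_img):
    """Frame-to-frame descriptor matching and affine estimation (illustrative)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(curr_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)                    # the "Match" step
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return H

def affine_is_plausible(H, k_scale=2.0, k_shift=200.0):
    """Empirical fault-tolerance check on the 2x3 affine matrix H.
    The thresholds stand in for the patent's Kh/Kn coefficients."""
    if H is None:
        return False
    scale = float(np.hypot(H[0, 0], H[0, 1]))              # rotation-scale part
    shift = float(np.hypot(H[0, 2], H[1, 2]))              # translation part
    return (1.0 / k_scale) < scale < k_scale and shift < k_shift
```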
  • step S2 is specifically:
  • step S201 Based on the complete satellite remote sensing reference map, combined with the incremental navigation pose of the real-time map obtained in step S1, calibrate through translation, rotation, cropping and scaling, and then construct an optimized reference map that overlaps with the real-time map;
  • step S202 Perform heterogeneous feature matching on the real-time image collected in step S1 and the optimized reference image constructed in step S201 to obtain a navigation pose perspective transformation matrix of the real-time image relative to the reference image;
  • step S203 based on the navigation pose perspective transformation matrix obtained in step S202 , perform absolute navigation pose estimation on the real-time image to obtain an absolute navigation pose.
  • step S202 performing heterologous feature matching on the real-time graph and the optimized reference graph is specifically as follows:
  • step S3 is specifically:
  • step S302 combining the incremental navigation pose obtained in step S1, the absolute navigation pose and navigation data obtained in step S2, to analyze the source of observation information;
  • step S303 Build a factor graph structure according to the observation source analyzed in step S302;
  • step S304 Combining the estimated value of the navigation pose at the current moment obtained in step S301 and the observed value of the navigation pose at the current moment, based on the factor graph structure in step S303, the navigation pose is optimized and updated at the current moment, and the navigation pose corresponding to the current real-time map is obtained. posture and output.
  • Another technical solution of the present invention is a strong real-time dual-structure continuous scene fusion and matching navigation and positioning system, including:
  • the estimation module uses an improved sequential real-time image frame matching method to incrementally estimate the navigation pose based on the inter-frame feature transformation of the real-time image;
  • the optimization module adopts the improved matching method between the real-time map and the satellite remote sensing reference map, optimizes the satellite remote sensing reference map based on the incremental navigation pose estimated by the estimation module, and performs absolute navigation pose estimation on the real-time map and the optimized satellite remote sensing reference map , to get the absolute navigation pose;
  • the positioning module uses a purely visual navigation pose fusion estimation method to fuse the incremental navigation pose of the estimation module with the absolute navigation pose of the optimization module, uses the predicted value of the navigation pose at the current moment to optimize and update the navigation pose, and obtains and outputs the navigation pose corresponding to the real-time image, realizing strong real-time dual-structure continuous scene fusion matching navigation positioning.
  • the present invention has at least the following beneficial effects:
  • the invention provides a strong real-time dual-structure continuous scene fusion matching navigation positioning method, which can effectively improve the registration efficiency, precision and success rate of scene matching through real-time preprocessing and optimization of the reference map, and can improve the accuracy without increasing the number of image sensors.
  • This method can reduce the timeliness requirements for the reference map, and use the recent satellite image as the reference map to meet the matching navigation and positioning requirements without special preprocessing of the satellite image.
  • it can independently perform scene matching navigation and positioning without the assistance of other navigation sensors, and realize the continuous and real-time calculation of the position and attitude of unmanned aerial vehicles. It can be applied to various types of continuous image matching real-time navigation and positioning requirements.
  • a frame-to-frame real-time image matching structure is implemented through step S1. Since the time interval between two adjacent frames of real-time images is short and the corresponding matching feature points are small, stable matching and positioning can be realized and the success rate of scene matching can be ensured.
  • the configuration of the image resources required by the scene matching algorithm can be realized, the relationship between the pixels of the reference image and the navigation coordinate system can be obtained, and the key parameters required for scene matching can be provided to ensure the normal operation of the scene matching process.
  • the direct and fast acquisition of the latitude and longitude in the navigation system corresponding to each pixel point on the satellite remote sensing reference map can be realized.
  • the matching and positioning of the inter-frame real-time image can be realized.
  • the implementation plan of the fault tolerance strategy is determined, and the row and column coordinates of the matching between frames are determined.
  • step S2 the position of the real-time image on the reference image is obtained through step S2, so as to realize absolute navigation and positioning and reduce the influence of accumulative errors of frame-to-frame positioning.
  • step S202 Furthermore, by performing heterogeneous feature matching settings on the real-time image and the optimized reference image in step S202, the workload of feature point extraction and feature alignment can be greatly reduced, and the matching success rate can be improved at the same time.
  • step S3 the fusion of relative positioning and absolute positioning data is realized through step S3, reducing navigation data jumps caused by image registration jitter, and stably outputting navigation pose data.
  • this method can effectively improve the registration efficiency, accuracy and success rate of scene matching, realize continuous and real-time calculation of UAV pose, and can be applied to various types of continuous image matching for real-time navigation and positioning requirements.
  • Fig. 1 is the framework flowchart of the present invention
  • Fig. 2 is a software algorithm work flow chart of the present invention
  • Fig. 3 is the flowchart of algorithm software configuration initialization of the present invention.
  • Fig. 4 is the flowchart of coordinate calculation of starting frame reference map of the present invention.
  • Fig. 5 is the flow chart of matching between real-time image frames of the present invention.
  • Fig. 6 is the flow chart of the matching of the real-time graph and the reference graph of the present invention.
  • Figure 7 is a sequence real-time diagram
  • Figure 8 is a satellite reference map
  • Fig. 9 is a real-time graph and an optimized benchmark graph, wherein (a) is a real-time graph feature, and (b) is an optimized benchmark graph;
  • Fig. 10 is a schematic diagram of the fused scene matching and positioning results, where (a) is the real-time image, and (b) is the registration position of the real-time image on the reference image.
  • the invention provides a strong real-time dual-structure continuous scene fusion matching navigation positioning method, which includes two image matching structures: one is frame-to-frame matching of sequential real-time images, and the other is matching of the real-time image against the satellite remote sensing reference map.
  • by fusing the "navigation pose obtained by incremental estimation" with the "navigation pose obtained by absolute matching", the final navigation and positioning result is output, realizing roaming-style, strongly real-time autonomous navigation and positioning decoupled from the planned track in a GPS-denied environment; and because of its independence, it can effectively guarantee system reliability when combined with other navigation sensors, and can be widely used in autonomous navigation and positioning systems of unmanned aerial vehicles.
  • a kind of strong real-time dual-structure continuous scene fusion matching navigation positioning method of the present invention comprises the following steps:
  • the real-time image at the current time T(n) collected in step S101 is feature-matched against the real-time image of the previous frame at time T(n-1), to obtain the pose perspective transformation matrix between the real-time image at time T(n) and the real-time image at time T(n-1);
  • based on that pose perspective transformation matrix, the relative pose of the real-time image at time T(n) with respect to the real-time image at time T(n-1) is estimated, giving the relative pose of the real-time image at the current time T(n);
  • step S104, based on the relative pose of the real-time image at time T(n) obtained in step S103, combined with the navigation pose of the real-time image at time T(n-1), incremental navigation pose estimation is performed on the real-time image at the current time T(n) to obtain its incremental navigation pose.
  • step S2 Using the improved matching method between the real-time map and the satellite remote sensing reference map, based on the incremental navigation pose obtained in step S1, optimize the reference map, perform absolute navigation pose estimation on the real-time map and the optimized reference map, and obtain absolute navigation pose;
  • step S201 Based on the complete satellite remote sensing reference map, combined with the incremental navigation pose of the real-time map at the current T(n) time in step S1, calibrate through translation, rotation, cropping and scaling, and construct an optimized reference map that overlaps with the real-time map;
  • step S202, perform heterogeneous feature matching between the real-time image at the current time T(n) collected in step S1 and the satellite remote sensing reference map optimized for time T(n) in step S201, to obtain the navigation pose perspective transformation matrix of the real-time image at time T(n) relative to the reference map;
  • step S203 Based on the navigation pose perspective transformation matrix obtained in step S202, perform absolute navigation pose estimation on the real-time image at the current T(n) time, to obtain absolute navigation poses.
  • the matching efficiency, accuracy and success rate of the satellite map and the down-view real-time map can be effectively improved through fusion and matching positioning.
  • step S302. Combining the incremental navigation pose obtained in step S1, the absolute navigation pose obtained in step S2, and other navigation data, analyze the sources of observation information;
  • step S303 According to the observation source in step S302, build a factor graph structure
  • step S304 in conjunction with the estimated value of the navigation pose at T(n) time obtained in step S301 and the observed value of the navigation pose at T(n) time, based on the factor graph structure of step S303, the optimal update of the navigation pose at T(n) time is performed, Obtain and output the navigation pose corresponding to the current real-time image.
  • according to the factor graph structure, the estimated value of the navigation pose at time T(n) and the observed value of the navigation pose at time T(n) are brought in, multiplied by the corresponding estimation/observation matrices respectively, and a linear calculation is performed according to the output sequence structure of the factor graph.
  • a strong real-time dual-structure continuous scene fusion matching navigation positioning system which can be used to realize the above-mentioned strong real-time dual-structure continuous scene fusion matching navigation positioning method, specifically, the strong real-time dual-structure
  • the continuous scene fusion and matching navigation and positioning system includes an estimation module, an optimization module and a positioning module.
  • the estimation module adopts the improved sequential real-time image frame matching method, and incrementally estimates the navigation pose based on the inter-frame feature transformation of the real-time image;
  • the optimization module adopts the improved matching method between the real-time map and the satellite remote sensing reference map, optimizes the satellite remote sensing reference map based on the incremental navigation pose estimated by the estimation module, and performs absolute navigation pose estimation on the real-time map and the optimized satellite remote sensing reference map , to get the absolute navigation pose;
  • the positioning module adopts the pure vision navigation pose fusion estimation method, uses the incremental navigation pose of the estimation module and the absolute navigation pose of the optimization module to analyze the source of observation information, and uses the predicted value of the navigation pose at the current moment to analyze the navigation position.
  • the pose is optimized and updated, and the navigation pose corresponding to the current real-time map is obtained and output.
  • in yet another embodiment, a terminal device includes a processor and a memory, the memory being used to store a computer program comprising program instructions, and the processor being used to execute the program instructions stored in the computer storage medium.
  • the processor may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; it is the computing and control core of the terminal and is adapted to implement one or more instructions, in particular to load and execute one or more instructions so as to realize the corresponding method flow or functions; the processor described in the embodiment of the present invention can be used for the operation of the strong real-time dual-structure continuous scene fusion matching navigation positioning method, including:
  • the incremental estimated navigation pose is based on the inter-frame feature transformation of real-time images; using the improved matching method between real-time images and satellite remote sensing reference images, based on the estimated incremental navigation pose , optimize the satellite remote sensing reference map, and perform absolute navigation pose estimation on the real-time map and the optimized satellite remote sensing reference map to obtain the absolute navigation pose; use the pure vision navigation pose fusion estimation method to combine the incremental navigation pose and The absolute navigation pose is fused, and the navigation pose prediction value at the current moment is used to optimize and update the navigation pose, and the navigation pose corresponding to the current real-time map is obtained and output.
  • the present invention also provides a storage medium, specifically a computer-readable storage medium (Memory).
  • the computer-readable storage medium is a memory device in a terminal device for storing programs and data.
  • the computer-readable storage medium here may include a built-in storage medium in the terminal device, and certainly may include an extended storage medium supported by the terminal device.
  • the computer-readable storage medium provides storage space, and the storage space stores the operating system of the terminal.
  • one or more instructions suitable for being loaded and executed by the processor are also stored in the storage space, and these instructions may be one or more computer programs (including program codes).
  • the computer-readable storage medium here may be a high-speed RAM memory, or a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory.
  • One or more instructions stored in the computer-readable storage medium can be loaded and executed by the processor, so as to realize the corresponding steps of the strong real-time dual-structure continuous scene fusion matching navigation positioning method in the above-mentioned embodiments; one of the computer-readable storage medium or one or more instructions are loaded by the processor and executed as follows:
  • the incremental estimated navigation pose is based on the inter-frame feature transformation of real-time images; using the improved matching method between real-time images and satellite remote sensing reference images, based on the estimated incremental navigation pose , optimize the satellite remote sensing reference map, and perform absolute navigation pose estimation on the real-time map and the optimized satellite remote sensing reference map to obtain the absolute navigation pose; use the pure vision navigation pose fusion estimation method to combine the incremental navigation pose and The absolute navigation pose is fused, and the navigation pose prediction value at the current moment is used to optimize and update the navigation pose, and the navigation pose corresponding to the current real-time map is obtained and output.
  • the workflow of a strong real-time dual-structure continuous scene fusion matching navigation positioning method of the present invention is as follows:
  • the main loop contains two image matching processes, inter-frame matching and matching of the reference map with the real-time image, and each cycle outputs the navigation and pose information obtained after matching the current real-time image with the reference map;
  • each cycle begins by reading the real-time image data stream, processing the real-time image, fixing it to the matching size, obtaining the corresponding feature points and descriptors, and judging whether it is the start frame;
  • if it is the start frame, the real-time image cannot be matched against a previous frame, so the corresponding row and column coordinates on the reference map are calculated: the latitude and longitude coordinates of the four vertices of the real-time image are obtained from the navigation and pose information of the starting point, coordinate conversion is performed with the GDAL dataset of the reference map, and the row and column coordinates on the reference map are calculated;
  • otherwise, the matching method adopts the inter-frame matching fault-tolerance strategy to obtain the row and column coordinates of the perspective transformation, and a perspective transformation transfer is then performed: the coordinates of the current real-time image are transferred through the affine transformation matrix updated by the reference/real-time match into the coordinates corresponding to the reference map, and the coordinates are then converted into row and column coordinates on the whole reference map; the row and column coordinates at this point are not standard rectangular coordinates, and a specific calculation is used to turn them into standard rectangular coordinates, which are the row and column coordinates of the current reference patch;
  • the matching method then adopts the fault-tolerance strategy for matching the reference map with the real-time image, obtains the row and column coordinates of the current real-time image on the whole reference map, and updates the corresponding matching information;
  • the formula for calculating tolerance range is:
  • the real-time image at the start frame does not have the previous frame real-time image for frame-to-frame matching to calculate the row and column coordinates of the crop reference image, so it is calculated using the initialization configuration information; according to the navigation and pose information of the real-time image at the start point, and the field of view of the shooting camera Calculate the latitude and longitude coordinates of the four vertices of the real-time image using the angle and the ratio of the captured image, and then use the GDAL dataset of the reference image to perform coordinate conversion to obtain the row and column information corresponding to the latitude and longitude of the four vertices on the entire reference image, which is the clipping reference image. row and column coordinates.
  • the real-time image camera takes a rectangular picture, uses the camera's field of view angle ⁇ and the flight height value h to calculate the actual distance c between the four vertices and the center, uses the camera's shooting aspect ratio a and b to calculate the fixed angle ⁇ , and combines the deflection angle when shooting ⁇ , get the angle A[4] of the four vertices A[0], A[1], A[2], A[3] relative to the center.
  • the latitude and longitude of the four vertices can be calculated according to the latitude and longitude lon1 and lat1 of one point .
  • R is the radius of the earth, 6371393 m.
  • the GDAL open source database After obtaining the latitude and longitude coordinates of the four vertices of the real-time graph, use the GDAL open source database to complete the conversion between the row and column coordinate system and the geographic coordinate system by means of the transfer of the projected coordinate system, specifically:
  • Xpixel and Ypixel are the abscissa and ordinate of the latitude and longitude pixel to be solved respectively
  • Xgeo and Ygeo are the real longitude and latitude of the pixel respectively
  • GeoTransform is the conversion matrix from pixel coordinates to geographic coordinates.
  • the currently read real-time image is matched with the updated real-time image of the previous frame, and the descriptors of the two frames of real-time images are matched using the Match method of feature matching to obtain matching points corresponding to each other. Align the feature points and perform coordinate transformation, and then use the feature points to calculate the affine transformation matrix.
  • the strategy uses the currently updated matching information for the information selection of the previous frame real-time image, that is, if the fault-tolerant strategy for matching the reference image and the real-time image is satisfied, the matching information of the latest frame real-time image is used, otherwise The frame skipping strategy is adopted, and the matching information of the real-time image of the previous frame is used.
  • the fault-tolerant judgment rule is:
  • Kh and Kn are error tolerance coefficients.
  • the current real-time image descriptors are matched against the descriptors of the inter-frame matching information last updated by a successful match for feature point matching, the affine transformation matrix is then computed from the matched feature points, and two perspective transformations are then applied:
  • the post-matching row and column coordinates are calculated by propagating through the newly computed affine transformation matrix and the inter-frame matching information transformation matrix last updated by a successful match.
  • the fault-tolerant judgment rule is:
  • M is the error tolerance range of the initialization configuration
  • Km is the scale difference error tolerance coefficient
  • the error flag is set to 0.
  • Use the perspective transformation to convert the source image coordinates into target image coordinates through the affine transformation matrix, then undo the size, rotation and cropping transformations to convert the coordinates into row and column coordinates on the whole reference map, and update the matching information of the current real-time image.
  • the error flag is set to 1, and the row and column coordinates obtained during frame-to-frame matching are used as the row and column coordinates obtained by matching the reference image and the real-time image.
  • Figure 7 shows the continuous sequence features of real-time images. It can be seen from Figure 7 that any two adjacent real-time images have feature overlapping parts, which can achieve feature matching between frames.
  • Figure 8 is the complete satellite reference image.
  • Figure 8 shows that the scale difference between the complete satellite reference image and the real-time image is too large, and it is difficult to achieve direct registration between the real-time image and it.
  • Figure 9(a) is the feature of the real-time image
  • Figure 9(b) is the optimized reference image.
  • there is essentially no scale difference between the optimized reference map and the real-time image, and the spatial areas are close, so heterogeneous image registration can be achieved effectively.
  • Figure 10(a) is the real-time map
  • Figure 10(b) is the registration position of the real-time map on the reference map, which shows that this method can effectively realize the matching and positioning of the real-time map on the satellite reference map, and output navigation result.
  • the present invention provides a strong real-time dual-structure continuous scene fusion matching navigation positioning method and system, which, by preprocessing and optimizing the reference map in real time, can effectively improve the registration efficiency, accuracy and success rate of scene matching, improve the real-time performance, continuity and adaptability of scene matching navigation and positioning in autonomous flight environments, realize continuous, real-time computation of the position and attitude of unmanned aerial vehicles, and be applied to all kinds of continuous image matching real-time navigation and positioning requirements.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, the instruction means implementing the function specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A strong real-time dual-structure continuous scene fusion matching navigation positioning method and system. An improved sequential real-time image frame-to-frame matching method is used to incrementally estimate the navigation pose from inter-frame feature transformations of the real-time images; an improved method for matching real-time images against a satellite remote sensing reference map is used to optimize the satellite remote sensing reference map on the basis of the estimated incremental navigation pose, and absolute navigation pose estimation is performed on the real-time image and the optimized satellite remote sensing reference map to obtain an absolute navigation pose; a purely visual navigation pose fusion estimation method fuses the incremental navigation pose with the absolute navigation pose, the navigation pose is optimized and updated using the predicted navigation pose at the current moment, and the navigation pose corresponding to the current real-time image is obtained and output. The method can effectively improve the registration efficiency, accuracy and success rate of scene matching, achieves continuous, real-time computation of the position and attitude of an unmanned aerial vehicle, and is applicable to all kinds of continuous image matching real-time navigation and positioning requirements.

Description

Strong real-time dual-structure continuous scene fusion matching navigation positioning method and system

TECHNICAL FIELD
The invention belongs to the technical field of scene matching navigation, and in particular relates to a strong real-time dual-structure continuous scene fusion matching navigation positioning method and system.

BACKGROUND
Facing increasingly common GPS-denied environments, the demand for autonomous navigation of unmanned aerial vehicles is growing rapidly, and high-precision, self-contained scene matching navigation has become one of the important options for realizing autonomous navigation.
Existing scene matching aided navigation methods usually adopt a landmark-style matching and positioning mode: according to mission requirements, a customized reference map is collected in advance by satellite or UAV, navigation adaptability analysis is performed on the reference map, and the flight track is planned accordingly. This tightly couples existing scene matching aided navigation with track planning; when the mission changes or a sudden air situation is encountered, deviation from the planned route directly destroys the navigation and positioning capability, and adaptability to complex adversarial environments is extremely poor. At the same time, the embedded computing capability of existing UAVs is limited, and there are generally differences in time, illumination and shooting angle between the reference map and the real-time image; scene matching navigation and positioning is constrained by the discretization of satellite-map adaptation areas and by the matching algorithm's limited resistance to scale change and rotation, so it is difficult to meet the real-time, continuous navigation and positioning requirements of autonomous flight. Moreover, to guarantee reliability, existing matching and positioning algorithms require the reference map of the mission area to be collected as recently as possible, yet obtaining reliable, accurate and timely wide-area reference map data is often very difficult.
These problems are difficult to truly solve merely by improving computing capability or algorithm efficiency and accuracy.

SUMMARY OF THE INVENTION
The technical problem to be solved by the present invention is to address the above deficiencies of the prior art and provide a strong real-time dual-structure continuous scene fusion matching navigation positioning method and system, overcoming the problems that existing scene matching aided navigation is highly coupled with track planning, that discrete landmarks cannot continuously output navigation data in real time, and that reliable/accurate/timely wide-area reference map data is difficult to obtain, thereby realizing roaming-style, strongly real-time autonomous navigation and positioning decoupled from the planned track in a GPS-denied environment.
The present invention adopts the following technical solution:
A strong real-time dual-structure continuous scene fusion matching navigation positioning method comprises the following steps:
S1. Using an improved sequential real-time image frame-to-frame matching method, incrementally estimate the navigation pose from inter-frame feature transformations of the real-time images;
S2. Using an improved method for matching real-time images against the satellite remote sensing reference map, optimize the satellite remote sensing reference map on the basis of the incremental navigation pose estimated in step S1, and perform absolute navigation pose estimation on the real-time image and the optimized satellite remote sensing reference map to obtain an absolute navigation pose;
S3. Using a purely visual navigation pose fusion estimation method, fuse the incremental navigation pose obtained in step S1 with the absolute navigation pose obtained in step S2, optimize and update the navigation pose with the predicted navigation pose at the current moment, and obtain and output the navigation pose corresponding to the current real-time image, realizing strong real-time dual-structure continuous scene fusion matching navigation positioning.
Specifically, step S1 is:
S101. Collect and transmit real-time images and prepare the satellite remote sensing reference map of the flight area;
S102. Perform inter-frame feature matching between the real-time image collected in step S101 and the real-time image of the previous frame, to obtain the pose perspective transformation matrix between the two;
S103. Based on the pose perspective transformation matrix obtained in step S102, estimate the relative pose of the real-time image with respect to the real-time image of the previous frame, to obtain the relative pose of the real-time image;
S104. Based on the relative pose obtained in step S103, combined with the navigation pose of the real-time image of the previous frame, perform incremental navigation pose estimation on the current real-time image to obtain its incremental navigation pose.
Further, before the current real-time image is collected in step S101, initialization is performed, specifically:
First configure the real-time image data stream, the reference map file path and the information output path, initialize GDAL and read the reference map GDAL dataset, read the reference map and prepare it for cropping, initialize and create the feature matching method, initialize the navigation information and pose information of the starting point, initialize the error flag, initialize and configure the image matching size, and configure the error tolerance Fr.
Further, in step S101, the pixel longitude and latitude of the satellite remote sensing reference map are specifically:
Xgeo=GeoTransform[0]+Xpixel×GeoTransform[1]+Ypixel×GeoTransform[2]
Ygeo=GeoTransform[3]+Xpixel×GeoTransform[4]+Ypixel×GeoTransform[5]
where Xpixel and Ypixel are respectively the abscissa and ordinate of the pixel whose longitude and latitude are to be solved, Xgeo and Ygeo are respectively the true longitude and latitude of that pixel, and GeoTransform is the conversion matrix from pixel coordinates to geographic coordinates.
Further, step S102 is specifically:
Match the currently read real-time image with the updated real-time image of the previous frame: use the Match method of feature matching to match the descriptors of the two real-time frames and obtain corresponding matching points, align the feature points and perform coordinate conversion, then compute the affine transformation matrix from the feature points; if the fault-tolerance strategy for matching the satellite reference map with the real-time image is satisfied, use the matching information of the most recent real-time frame, otherwise adopt a frame-skipping strategy and use the matching information of an earlier real-time frame.
Still further, during real-time image frame-to-frame matching, empirical values are set to perform a fault-tolerance judgment on the affine transformation matrix. If the fault-tolerance rule is satisfied, apply a perspective transformation to the inter-frame matching coordinates and update the inter-frame real-time image matching information. If the rule is not satisfied, match the descriptors of the real-time image against the descriptors of the inter-frame matching information last updated by a successful match, compute the affine transformation matrix from the matched feature points, and then apply two perspective transformations, propagating through the newly computed affine transformation matrix and the inter-frame matching information transformation matrix last updated by a successful match to obtain the post-matching row and column coordinates.
Specifically, step S2 is:
S201. Based on the complete satellite remote sensing reference map, combined with the incremental navigation pose of the real-time image obtained in step S1, calibrate by translation, rotation, cropping and scaling, and construct an optimized reference map that coincides with the real-time image;
S202. Perform heterogeneous feature matching between the real-time image collected in step S1 and the optimized reference map constructed in step S201, to obtain the navigation pose perspective transformation matrix of the real-time image relative to the reference map;
S203. Based on the navigation pose perspective transformation matrix obtained in step S202, perform absolute navigation pose estimation on the real-time image to obtain an absolute navigation pose.
Further, in step S202, the heterogeneous feature matching between the real-time image and the optimized reference map is specifically:
Use the cropping row and column coordinates to crop, from the whole reference map, the reference patch used for matching, so that the reference matching area is essentially consistent with the real-time image area; then rotate the reference patch so that the satellite remote sensing reference map and the real-time image are kept at the same angle; then process the rotated patch and fix it to the matching size; use feature matching to obtain the corresponding feature points and descriptors; use the Match method to match the descriptors of the satellite remote sensing reference map with those of the real-time image and obtain corresponding matching points; align the feature points and perform coordinate conversion; then compute the affine transformation matrix from the feature points and perform a fault-tolerance judgment on it. When the fault-tolerance rule is satisfied, i.e. the satellite remote sensing reference map and the real-time image are matched successfully, use a perspective transformation to convert the source image coordinates into target image coordinates via the affine transformation matrix, then undo the size, rotation and cropping transformations to convert the coordinates into row and column coordinates on the whole reference map, and update the matching information of the real-time image. When the fault-tolerance rule is not satisfied, i.e. the match fails, use the row and column coordinates obtained during inter-frame matching as the row and column coordinates of the reference/real-time match.
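A sketch of the reference-map optimization in steps S201-S202 (crop to the predicted footprint, rotate to the real-time image's heading, fix to the matching size), assuming OpenCV and illustrative parameter names:

```python
import cv2

def build_optimized_reference(full_ref, row_col_box, heading_deg, match_size=(512, 512)):
    """Crop, rotate and resize the full satellite reference map so that it
    roughly coincides with the current real-time image (illustrative sketch)."""
    r0, r1, c0, c1 = row_col_box                  # predicted row/column bounds
    patch = full_ref[r0:r1, c0:c1]
    h, w = patch.shape[:2]
    # rotate about the patch centre so both images share the same angle
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), heading_deg, 1.0)
    rotated = cv2.warpAffine(patch, M, (w, h))
    return cv2.resize(rotated, match_size)        # fixed matching size
```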
Specifically, step S3 is:
S301. Based on the navigation pose of the previous frame, predict the navigation pose at the current moment through the navigation pose state equation, to obtain the estimated navigation pose at the current moment;
S302. Combine the incremental navigation pose obtained in step S1, the absolute navigation pose obtained in step S2 and the navigation data, and analyze the sources of observation information;
S303. Build the factor graph structure according to the observation sources analyzed in step S302;
S304. Combining the estimated navigation pose at the current moment obtained in step S301 with the observed navigation pose at the current moment, optimize and update the navigation pose at the current moment based on the factor graph structure of step S303, and obtain and output the navigation pose corresponding to the current real-time image.
Another technical solution of the present invention is a strong real-time dual-structure continuous scene fusion matching navigation positioning system, comprising:
an estimation module, which uses an improved sequential real-time image frame-to-frame matching method to incrementally estimate the navigation pose from inter-frame feature transformations of the real-time images;
an optimization module, which uses an improved method for matching real-time images against the satellite remote sensing reference map, optimizes the satellite remote sensing reference map on the basis of the incremental navigation pose estimated by the estimation module, and performs absolute navigation pose estimation on the real-time image and the optimized satellite remote sensing reference map to obtain an absolute navigation pose; and
a positioning module, which uses a purely visual navigation pose fusion estimation method to fuse the incremental navigation pose of the estimation module with the absolute navigation pose of the optimization module, optimizes and updates the navigation pose with the predicted navigation pose at the current moment, and obtains and outputs the navigation pose corresponding to the real-time image, realizing strong real-time dual-structure continuous scene fusion matching navigation positioning.
Compared with the prior art, the present invention has at least the following beneficial effects:
The strong real-time dual-structure continuous scene fusion matching navigation positioning method of the present invention, by preprocessing and optimizing the reference map in real time, can effectively improve the registration efficiency, accuracy and success rate of scene matching, and can improve the real-time performance, continuity and adaptability of scene matching navigation and positioning in autonomous flight environments without adding image sensors. The method lowers the timeliness requirement on the reference map: a recent satellite image used as the reference map is sufficient to meet the matching navigation and positioning requirements, without any special preprocessing of the satellite image. At the same time, scene matching navigation and positioning can be performed independently, without the assistance of other navigation sensors, realizing continuous, real-time computation of the UAV's position and attitude, and the method is applicable to all kinds of continuous image matching real-time navigation and positioning requirements.
Further, step S1 implements a frame-to-frame real-time image matching structure; since the time interval between two adjacent real-time frames is short and the differences between corresponding matching feature points are small, stable matching and positioning can be achieved and the success rate of scene matching is ensured.
Further, through the initialization settings, the image resources required by the scene matching algorithm are configured, the relationship between reference map pixels and the navigation coordinate system is obtained, and the key parameters required for scene matching are provided, ensuring that the scene matching process runs normally.
Further, through the method for solving the pixel longitude and latitude of the satellite remote sensing reference map, the longitude and latitude in the navigation frame corresponding to every pixel on the satellite remote sensing reference map can be obtained directly and quickly.
Further, by performing inter-frame feature matching between the real-time image and the real-time image of the previous frame, combined with the matching fault-tolerance strategy, matching and positioning of inter-frame real-time images can be realized.
Further, by analyzing the normal-state interval of the matched affine transformation matrix, it is judged whether the match is normal, which decides how the fault-tolerance strategy is executed and determines the row and column coordinates of the inter-frame match.
Further, step S2 obtains the position of the real-time image on the reference map, realizing absolute navigation and positioning and reducing the influence of accumulated frame-to-frame positioning errors.
Further, the heterogeneous feature matching setting of step S202 on the real-time image and the optimized reference map can greatly reduce the workload of feature point extraction and feature alignment while improving the matching success rate.
Further, step S3 realizes the fusion of relative and absolute positioning data, reducing navigation data jumps caused by image registration jitter and stably outputting navigation pose data.
In summary, the method can effectively improve the registration efficiency, accuracy and success rate of scene matching, realizes continuous, real-time computation of the UAV pose, and is applicable to all kinds of continuous image matching real-time navigation and positioning requirements.
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is the framework flowchart of the present invention;
Fig. 2 is the software algorithm workflow chart of the present invention;
Fig. 3 is the flowchart of algorithm software configuration initialization of the present invention;
Fig. 4 is the flowchart of reference map coordinate calculation for the start frame of the present invention;
Fig. 5 is the flowchart of real-time image frame-to-frame matching of the present invention;
Fig. 6 is the flowchart of matching the real-time image with the reference map of the present invention;
Fig. 7 shows the sequential real-time images;
Fig. 8 shows the satellite reference map;
Fig. 9 shows the real-time image and the optimized reference map, where (a) shows the real-time image features and (b) shows the optimized reference map;
Fig. 10 is a schematic diagram of the fused scene matching and positioning result, where (a) is the real-time image and (b) is the registration position of the real-time image on the reference map.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
It should be understood that, when used in this specification and the appended claims, the terms "comprise" and "include" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof.
It should also be understood that the terminology used in the specification is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
The drawings show various structural schematic diagrams according to the disclosed embodiments. The figures are not drawn to scale; some details are enlarged for clarity and some details may be omitted. The shapes of the regions and layers shown and their relative sizes and positions are only exemplary; in practice there may be deviations due to manufacturing tolerances or technical limitations, and a person skilled in the art may design regions/layers with different shapes, sizes and relative positions according to actual needs.
The present invention provides a strong real-time dual-structure continuous scene fusion matching navigation positioning method, which includes two image matching structures: one is frame-to-frame matching of sequential real-time images, and the other is matching of the real-time image against the satellite remote sensing reference map. By fusing the "navigation pose obtained by incremental estimation" with the "navigation pose obtained by absolute matching", the final navigation and positioning result is output, realizing roaming-style, strongly real-time autonomous navigation and positioning decoupled from the planned track in a GPS-denied environment; and because of its independence, it can effectively guarantee system reliability when combined with other navigation sensors, and can be widely used in autonomous navigation and positioning systems of unmanned aerial vehicles.
Referring to Fig. 1, a strong real-time dual-structure continuous scene fusion matching navigation positioning method of the present invention comprises the following steps:
S1. Using an improved sequential real-time image frame-to-frame matching method, incrementally estimate the navigation pose from inter-frame feature transformations of the real-time images;
S101. Collect and transmit the current real-time image and prepare the satellite remote sensing reference map of the flight area;
S102. Perform inter-frame feature matching between the real-time image at the current time T(n) collected in step S101 and the real-time image of the previous frame at time T(n-1), to obtain the pose perspective transformation matrix between the real-time image at time T(n) and the real-time image at time T(n-1);
S103. Based on the pose perspective transformation matrix obtained in step S102, estimate the relative pose of the real-time image at time T(n) with respect to the real-time image at time T(n-1), to obtain the relative pose of the real-time image at the current time T(n);
S104. Based on the relative pose obtained in step S103, combined with the navigation pose of the real-time image at time T(n-1), perform incremental navigation pose estimation on the real-time image at the current time T(n), to obtain its incremental navigation pose.
S2. Using an improved method for matching real-time images against the satellite remote sensing reference map, optimize the reference map on the basis of the incremental navigation pose obtained in step S1, and perform absolute navigation pose estimation on the real-time image and the optimized reference map to obtain an absolute navigation pose;
S201. Based on the complete satellite remote sensing reference map, combined with the incremental navigation pose of the real-time image at the current time T(n) from step S1, calibrate by translation, rotation, cropping and scaling, and construct an optimized reference map that coincides with the real-time image;
S202. Perform heterogeneous feature matching between the real-time image at the current time T(n) collected in step S1 and the satellite remote sensing reference map optimized for time T(n) in step S201, to obtain the navigation pose perspective transformation matrix of the real-time image at time T(n) relative to the reference map;
S203. Based on the navigation pose perspective transformation matrix obtained in step S202, perform absolute navigation pose estimation on the real-time image at the current time T(n) to obtain an absolute navigation pose.
S3. Use a purely visual navigation pose fusion estimation method; fused matching and positioning can effectively improve the matching efficiency, accuracy and success rate between the satellite map and the downward-looking real-time image.
S301. Based on the navigation pose at time T(n-1), predict the navigation pose at time T(n) through the navigation pose state equation, to obtain the estimated navigation pose at time T(n);
S302. Combine the incremental navigation pose obtained in step S1, the absolute navigation pose obtained in step S2 and other navigation data, and analyze the sources of observation information;
S303. Build the factor graph structure according to the observation sources of step S302;
S304. Combining the estimated navigation pose at time T(n) obtained in step S301 with the observed navigation pose at time T(n), optimize and update the navigation pose at time T(n) based on the factor graph structure of step S303, and obtain and output the navigation pose corresponding to the current real-time image.
The optimization and update of the navigation pose at time T(n) is specifically:
According to the factor graph structure, bring in the estimated navigation pose at time T(n) and the observed navigation pose at time T(n), multiply them by the corresponding estimation/observation matrices respectively, and perform the linear calculation according to the output sequence structure of the factor graph.
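The patent does not spell out the factor graph matrices, so the sketch below shows only a generic information-weighted linear fusion of the predicted pose with the observed pose as an illustration of the kind of linear update described above; the state layout (longitude, latitude, height, heading) and the covariance values are assumptions, not values from the patent.

```python
import numpy as np

def fuse_pose(pred, pred_cov, obs, obs_cov):
    """Minimal linear fusion of the predicted pose (state equation) with the
    observed pose (incremental + absolute matching); illustrative only."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    w_pred = np.linalg.inv(pred_cov)
    w_obs = np.linalg.inv(obs_cov)
    cov = np.linalg.inv(w_pred + w_obs)
    fused = cov @ (w_pred @ pred + w_obs @ obs)
    return fused, cov

# usage sketch: 4-state pose [lon_deg, lat_deg, height_m, heading_deg]
fused, cov = fuse_pose(
    [108.90, 34.20, 500.0, 45.0], np.diag([1e-6, 1e-6, 25.0, 4.0]),
    [108.90, 34.20, 505.0, 44.0], np.diag([1e-6, 1e-6, 16.0, 1.0]))
```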
In yet another embodiment of the present invention, a strong real-time dual-structure continuous scene fusion matching navigation positioning system is provided, which can be used to implement the above method. Specifically, the system includes an estimation module, an optimization module and a positioning module.
The estimation module uses an improved sequential real-time image frame-to-frame matching method to incrementally estimate the navigation pose from inter-frame feature transformations of the real-time images.
The optimization module uses an improved method for matching real-time images against the satellite remote sensing reference map, optimizes the satellite remote sensing reference map on the basis of the incremental navigation pose estimated by the estimation module, and performs absolute navigation pose estimation on the real-time image and the optimized satellite remote sensing reference map to obtain an absolute navigation pose.
The positioning module uses a purely visual navigation pose fusion estimation method, analyzes the sources of observation information from the incremental navigation pose of the estimation module and the absolute navigation pose of the optimization module, optimizes and updates the navigation pose with the predicted navigation pose at the current moment, and obtains and outputs the navigation pose corresponding to the current real-time image.
In yet another embodiment of the present invention, a terminal device is provided, which includes a processor and a memory, the memory being used to store a computer program comprising program instructions, and the processor being used to execute the program instructions stored in the computer storage medium. The processor may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like; it is the computing and control core of the terminal and is adapted to implement one or more instructions, in particular to load and execute one or more instructions so as to realize the corresponding method flow or function. The processor described in this embodiment can be used for the operation of the strong real-time dual-structure continuous scene fusion matching navigation positioning method, including:
Using an improved sequential real-time image frame-to-frame matching method, incrementally estimating the navigation pose from inter-frame feature transformations of the real-time images; using an improved method for matching real-time images against the satellite remote sensing reference map, optimizing the satellite remote sensing reference map on the basis of the estimated incremental navigation pose, and performing absolute navigation pose estimation on the real-time image and the optimized satellite remote sensing reference map to obtain an absolute navigation pose; using a purely visual navigation pose fusion estimation method, fusing the incremental navigation pose with the absolute navigation pose, optimizing and updating the navigation pose with the predicted navigation pose at the current moment, and obtaining and outputting the navigation pose corresponding to the current real-time image.
In yet another embodiment, the present invention also provides a storage medium, specifically a computer-readable storage medium (Memory), which is a memory device in the terminal device for storing programs and data. It can be understood that the computer-readable storage medium here may include a built-in storage medium in the terminal device, and may also include an extended storage medium supported by the terminal device. The computer-readable storage medium provides storage space, which stores the operating system of the terminal. In addition, one or more instructions suitable for being loaded and executed by the processor are stored in the storage space, and these instructions may be one or more computer programs (including program code). It should be noted that the computer-readable storage medium here may be a high-speed RAM memory, or a non-volatile memory, such as at least one magnetic disk memory.
One or more instructions stored in the computer-readable storage medium can be loaded and executed by the processor to realize the corresponding steps of the strong real-time dual-structure continuous scene fusion matching navigation positioning method in the above embodiments; the one or more instructions in the computer-readable storage medium are loaded by the processor and execute the following steps:
Using an improved sequential real-time image frame-to-frame matching method, incrementally estimating the navigation pose from inter-frame feature transformations of the real-time images; using an improved method for matching real-time images against the satellite remote sensing reference map, optimizing the satellite remote sensing reference map on the basis of the estimated incremental navigation pose, and performing absolute navigation pose estimation on the real-time image and the optimized satellite remote sensing reference map to obtain an absolute navigation pose; using a purely visual navigation pose fusion estimation method, fusing the incremental navigation pose with the absolute navigation pose, optimizing and updating the navigation pose with the predicted navigation pose at the current moment, and obtaining and outputting the navigation pose corresponding to the current real-time image.
In order to make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. The components of the embodiments generally described and illustrated in the drawings herein may be arranged and designed in various configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Referring to Fig. 2, the workflow of the strong real-time dual-structure continuous scene fusion matching navigation positioning method of the present invention is as follows:
Initialize the algorithm configuration, determine the path information, the resource library initialization and the basic configuration information, and start the main loop. The main loop contains two image matching processes, inter-frame matching and matching of the reference map with the real-time image, and each cycle outputs the navigation and pose information obtained after matching the current real-time image with the reference map.
Each cycle begins by reading the real-time image data stream, processing the real-time image, fixing it to the matching size, obtaining the corresponding feature points and descriptors, and judging whether it is the start frame.
If it is the start frame, the real-time image cannot be matched against a previous frame, so the corresponding row and column coordinates on the reference map are computed: the longitude and latitude coordinates of the four vertices of the real-time image are obtained from the navigation and pose information of the starting point, coordinate conversion is performed with the GDAL dataset of the reference map, and the row and column coordinates on the reference map are computed.
If it is not the start frame, inter-frame feature matching is performed between the real-time image and the updated previous real-time frame; the matching method adopts the inter-frame matching fault-tolerance strategy to obtain the row and column coordinates of the perspective transformation, and a perspective transformation transfer is then performed: the coordinates of the current real-time image are transferred through the affine transformation matrix updated by the reference/real-time match into the coordinates corresponding to the reference map, and the coordinates are then converted into row and column coordinates on the whole reference map. The row and column coordinates at this point are not standard rectangular coordinates, and a specific calculation is needed to turn them into standard rectangular coordinates, which are the row and column coordinates of the current reference patch.
After the reference map row and column coordinates are obtained, the incremental longitude, latitude, height and heading information and flags are calculated and output, the whole reference map is cropped, and feature matching is performed against the current real-time image; the matching method adopts the fault-tolerance strategy for matching the reference map with the real-time image, obtains the row and column coordinates of the current real-time image on the whole reference map, and updates the corresponding matching information.
The absolute longitude, latitude and height data and the heading information corresponding to the row and column coordinates of the current real-time image are then calculated: the row and column coordinates of the four vertices are averaged to obtain the row and column coordinates of the center point, the GDAL dataset of the reference map is used to convert the row and column coordinates into the corresponding longitude and latitude, and the result is output for subsequent navigation and positioning processing after factor graph fusion navigation estimation.
Finally, judge whether to terminate the loop; if so, end the loop flow, otherwise continue with a new cycle.
Referring to Fig. 3, the algorithm configuration initialization is specifically:
First configure the real-time image data stream, the reference map file path, the information output path and other information; initialize GDAL and read the reference map GDAL dataset; read the reference map and prepare it for cropping; initialize and create the feature matching algorithm software; initialize the navigation information and pose information of the starting point (longitude/latitude, flight height and deflection angle); initialize the error flag; initialize and configure the image matching size; and, according to the magnification ratio m and the floating error tolerance range q, configure the error tolerance Fr, which can be adjusted as needed. The formula for calculating the tolerance range is given in the equation image below:
[Equation image PCTCN2021126136-appb-000001]
Referring to Fig. 4, the start frame reference map coordinate calculation is specifically:
For the start frame, there is no previous real-time frame with which to compute the cropping row and column coordinates of the reference map through inter-frame matching, so they are computed from the initialization configuration information: according to the navigation and pose information of the real-time image at the starting point, the field of view of the shooting camera and the aspect ratio of the captured image, the longitude and latitude coordinates of the four vertices of the real-time image are computed, and coordinate conversion is then performed with the GDAL dataset of the reference map to obtain the row and column information corresponding to the longitude and latitude of the four vertices on the whole reference map, which are the cropping row and column coordinates of the reference map.
The real-time camera captures a rectangular picture. The camera field of view θ and the flight height h are used to compute the actual distance c from each of the four vertices to the center, the camera aspect ratio a and b is used to compute the fixed angle α, and, combined with the deflection angle β at the time of shooting, the angles A[4] of the four vertices A[0], A[1], A[2], A[3] relative to the center are obtained:
[Equation image PCTCN2021126136-appb-000002]
[Equation image PCTCN2021126136-appb-000003]
A[0]=-α+β
A[1]=-180°+α+β
A[2]=-180°-α+β
A[3]=α+β
According to the longitude and latitude of the center point, the distance and the angle, the longitude and latitude of each of the four vertices are calculated: given the distance d between two points and the bearing α of the connecting line relative to north-east, the longitude and latitude of the other point are calculated from the longitude and latitude lon1 and lat1 of one point:
[Equation image PCTCN2021126136-appb-000004]
[Equation image PCTCN2021126136-appb-000005]
where R is the Earth radius, 6371393 m.
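The expressions for c, α and the vertex longitude/latitude are available here only as equation images, so the sketch below uses commonly assumed stand-ins (c = h·tan(θ/2) toward the corner, α = arctan(a/b), and the standard spherical destination formula) together with the A[0..3] bearing relations quoted above; it is illustrative rather than the patent's exact computation.

```python
import math

R_EARTH = 6371393.0          # Earth radius in metres, as used in the description

def vertex_bearings(alpha_deg, beta_deg):
    """Bearings A[0..3] of the four image corners relative to the centre."""
    return [-alpha_deg + beta_deg,
            -180.0 + alpha_deg + beta_deg,
            -180.0 - alpha_deg + beta_deg,
            alpha_deg + beta_deg]

def destination(lon1_deg, lat1_deg, d, bearing_deg):
    """Point at distance d (m) and bearing from (lon1, lat1): standard
    spherical destination formula, used here as a stand-in."""
    lat1, lon1 = math.radians(lat1_deg), math.radians(lon1_deg)
    brg, ang = math.radians(bearing_deg), d / R_EARTH
    lat2 = math.asin(math.sin(lat1) * math.cos(ang) +
                     math.cos(lat1) * math.sin(ang) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(ang) * math.cos(lat1),
                             math.cos(ang) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lon2), math.degrees(lat2)

def corner_coords(lon_c, lat_c, h, theta_deg, a, b, beta_deg):
    """Longitude/latitude of the four corners (assumed relations for c and alpha)."""
    c = h * math.tan(math.radians(theta_deg) / 2.0)   # centre-to-corner distance
    alpha = math.degrees(math.atan2(a, b))            # fixed angle from aspect ratio
    return [destination(lon_c, lat_c, c, brg)
            for brg in vertex_bearings(alpha, beta_deg)]
```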
After the longitude and latitude coordinates of the four vertices of the real-time image are obtained, the GDAL open-source library is used to complete the conversion between the row/column coordinate system and the geographic coordinate system by way of the projected coordinate system, specifically:
First, the description information of the GDAL dataset is imported into the projected coordinate system and its corresponding geographic coordinate system is obtained; the mutual conversion relationship between the two coordinate systems is constructed, and the geographic longitude/latitude coordinates are converted into projected coordinates.
Then, relying on the coordinate transformation coefficients of the GDAL dataset, the conversion from projected coordinate values to pixel coordinate values is completed through the inverse solution of a system of two linear equations, converting the projected coordinates into row and column coordinates and obtaining the row and column coordinates of the start frame on the reference map.
Xgeo=GeoTransform[0]+Xpixel×GeoTransform[1]+Ypixel×GeoTransform[2]
Ygeo=GeoTransform[3]+Xpixel×GeoTransform[4]+Ypixel×GeoTransform[5]
where Xpixel and Ypixel are respectively the abscissa and ordinate of the pixel whose longitude and latitude are to be solved, Xgeo and Ygeo are respectively the true longitude and latitude of that pixel, and GeoTransform is the conversion matrix from pixel coordinates to geographic coordinates.
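For the inverse direction (projected or geographic coordinates back to reference-map row and column), the two linear equations above can be solved directly; osgeo.gdal also ships InvGeoTransform/ApplyGeoTransform helpers for the same purpose. A minimal sketch, assuming the conversion from longitude/latitude to the dataset's projected coordinates has already been done:

```python
import numpy as np

def geo_to_pixel(geotransform, x_geo, y_geo):
    """Invert the six-parameter geotransform by solving the 2x2 linear system."""
    a = np.array([[geotransform[1], geotransform[2]],
                  [geotransform[4], geotransform[5]]], dtype=float)
    rhs = np.array([x_geo - geotransform[0], y_geo - geotransform[3]], dtype=float)
    x_pixel, y_pixel = np.linalg.solve(a, rhs)
    return int(round(x_pixel)), int(round(y_pixel))
```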
Referring to Fig. 5, the real-time image frame-to-frame matching is specifically:
During frame-to-frame matching, the currently read real-time image is matched against the updated real-time image of the previous frame: the Match method of feature matching is used to match the descriptors of the two real-time frames and obtain corresponding matching points, the feature points are aligned and coordinate conversion is performed, and the affine transformation matrix is then computed from the feature points.
In selecting the information of the previous real-time frame, this strategy uses the currently updated matching information: if the fault-tolerance strategy for matching the reference map with the real-time image is satisfied, the matching information of the most recent real-time frame is used; otherwise the frame-skipping strategy is adopted and the matching information of an earlier real-time frame is used.
During frame-to-frame matching, the quality of the image match is reflected in the values of the affine transformation matrix. Empirical values are set for the affine transformation matrix H to perform a fault-tolerance judgment; the fault-tolerance rule is given in the equation image below:
[Equation image PCTCN2021126136-appb-000006]
where Kh and Kn are the error tolerance coefficients.
They are adjusted according to the matching scene. If the fault-tolerance rule is satisfied, a perspective transformation is applied to the inter-frame matching coordinates and the inter-frame real-time image matching information is updated.
If the fault-tolerance rule is not satisfied, the descriptors of the current real-time image are matched against the descriptors of the inter-frame matching information last updated by a successful match, the affine transformation matrix is computed from the matched feature points, and two perspective transformations are then applied: the post-matching row and column coordinates are computed by propagating through the newly computed affine transformation matrix and the inter-frame matching information transformation matrix last updated by a successful match.
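The fallback branch above transfers coordinates through two transforms in sequence. A sketch of that coordinate transfer with OpenCV, where the matrices and point list are placeholders:

```python
import cv2
import numpy as np

def to_3x3(H):
    """Promote a 2x3 affine matrix to a 3x3 homography for perspectiveTransform."""
    H = np.asarray(H, dtype=np.float64)
    return H if H.shape == (3, 3) else np.vstack([H, [0.0, 0.0, 1.0]])

def chain_transfer(points, H_frame, H_ref):
    """Transfer current-frame coordinates through the newly computed frame-to-frame
    matrix and then the last successfully updated frame-to-reference matrix."""
    pts = np.float32(points).reshape(-1, 1, 2)
    step1 = cv2.perspectiveTransform(pts, to_3x3(H_frame))
    step2 = cv2.perspectiveTransform(step1, to_3x3(H_ref))
    return step2.reshape(-1, 2)
```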
Referring to Fig. 6, the matching of the real-time image with the reference map is specifically:
When the real-time image is feature-matched against the reference map, the cropping row and column coordinates are first used to crop, from the whole reference map, the reference patch used for matching, so that the reference matching area is essentially consistent with the real-time image area; the reference patch is then rotated so that the two images are kept at the same angle, reducing the influence of rotation on image matching; the rotated patch is then processed and fixed to the matching size; feature matching is used to obtain the corresponding feature points and descriptors; the descriptors of the reference patch and of the current real-time image are matched with a suitable Match method to obtain corresponding matching points; the feature points are aligned and coordinate conversion is performed; and the affine transformation matrix is then computed from the feature points.
After the affine transformation matrix is obtained from the reference/real-time match, empirical values are set for the affine transformation matrix H according to the influence of size and position on the matrix values, and a fault-tolerance judgment is performed so that different measures can be taken; the fault-tolerance rule is given in the equation image below:
[Equation image PCTCN2021126136-appb-000007]
where M is the error tolerance range of the initialization configuration, and Km is the scale difference error tolerance coefficient.
When the fault-tolerance rule is satisfied, the reference map and the real-time image are considered successfully matched and the error flag is set to 0: a perspective transformation is used to convert the source image coordinates into target image coordinates via the affine transformation matrix, the size, rotation and cropping transformations are then undone to convert the coordinates into row and column coordinates on the whole reference map, and the matching information of the current real-time image is updated.
When the fault-tolerance rule is not satisfied, the error flag is set to 1, and the row and column coordinates obtained during inter-frame matching are used as the row and column coordinates of the reference/real-time match.
In a flight test at 500 m above ground, strongly real-time fused navigation and positioning was achieved through dual-structure scene matching, with a mean horizontal positioning error of 13.45 m, a mean height positioning error of 27.99 m, and a navigation positioning period of 50 ms.
Referring to Fig. 7, which shows the continuous sequence features of the real-time images, any two adjacent real-time images have overlapping features, so inter-frame feature matching can be achieved.
Referring to Fig. 8, which shows the complete satellite reference map, the scale difference between the complete satellite reference map and the real-time image is too large for direct registration of the real-time image against it.
Referring to Fig. 9, Fig. 9(a) shows the real-time image features and Fig. 9(b) shows the optimized reference map; there is essentially no scale difference between the optimized reference map and the real-time image, and the spatial areas are close, so heterogeneous image registration can be achieved effectively.
Referring to Fig. 10, Fig. 10(a) is the real-time image and Fig. 10(b) is the registration position of the real-time image on the reference map, showing that the method can effectively realize matching and positioning of the real-time image on the satellite reference map and output the navigation result.
In summary, the strong real-time dual-structure continuous scene fusion matching navigation positioning method and system of the present invention, by preprocessing and optimizing the reference map in real time, can effectively improve the registration efficiency, accuracy and success rate of scene matching, improve the real-time performance, continuity and adaptability of scene matching navigation and positioning in autonomous flight environments, and realize continuous, real-time computation of the UAV's position and attitude, and is applicable to all kinds of continuous image matching real-time navigation and positioning requirements.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, the instruction means implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The above content is only intended to illustrate the technical idea of the present invention and cannot be used to limit its protection scope; any change made on the basis of the technical solution in accordance with the technical idea proposed by the present invention falls within the protection scope of the claims of the present invention.

Claims (10)

  1. A strong real-time dual-structure continuous scene fusion matching navigation positioning method, characterized by comprising the following steps:
    S1. Using an improved sequential real-time image frame-to-frame matching method, incrementally estimating the navigation pose from inter-frame feature transformations of the real-time images;
    S2. Using an improved method for matching real-time images against the satellite remote sensing reference map, optimizing the satellite remote sensing reference map on the basis of the incremental navigation pose estimated in step S1, and performing absolute navigation pose estimation on the real-time image and the optimized satellite remote sensing reference map to obtain an absolute navigation pose;
    S3. Using a purely visual navigation pose fusion estimation method, fusing the incremental navigation pose obtained in step S1 with the absolute navigation pose obtained in step S2, optimizing and updating the navigation pose with the predicted navigation pose at the current moment, and obtaining and outputting the navigation pose corresponding to the current real-time image, realizing strong real-time dual-structure continuous scene fusion matching navigation positioning.
  2. The method according to claim 1, characterized in that step S1 is specifically:
    S101. Collecting and transmitting real-time images and preparing the satellite remote sensing reference map of the flight area;
    S102. Performing inter-frame feature matching between the real-time image collected in step S101 and the real-time image of the previous frame, to obtain the pose perspective transformation matrix between the two;
    S103. Based on the pose perspective transformation matrix obtained in step S102, estimating the relative pose of the real-time image with respect to the real-time image of the previous frame, to obtain the relative pose of the real-time image;
    S104. Based on the relative pose obtained in step S103, combined with the navigation pose of the real-time image of the previous frame, performing incremental navigation pose estimation on the current real-time image to obtain its incremental navigation pose.
  3. The method according to claim 2, characterized in that, before the current real-time image is collected in step S101, initialization is performed, specifically:
    First configuring the real-time image data stream, the reference map file path and the information output path, initializing GDAL and reading the reference map GDAL dataset, reading the reference map and preparing it for cropping, initializing and creating the feature matching method, initializing the navigation information and pose information of the starting point, initializing the error flag, initializing and configuring the image matching size, and configuring the error tolerance Fr.
  4. The method according to claim 2, characterized in that, in step S101, the pixel longitude and latitude of the satellite remote sensing reference map are specifically:
    Xgeo=GeoTransform[0]+Xpixel×GeoTransform[1]+Ypixel×GeoTransform[2]
    Ygeo=GeoTransform[3]+Xpixel×GeoTransform[4]+Ypixel×GeoTransform[5]
    where Xpixel and Ypixel are respectively the abscissa and ordinate of the pixel whose longitude and latitude are to be solved, Xgeo and Ygeo are respectively the true longitude and latitude of that pixel, and GeoTransform is the conversion matrix from pixel coordinates to geographic coordinates.
  5. The method according to claim 2, characterized in that step S102 is specifically:
    Matching the currently read real-time image with the updated real-time image of the previous frame, using the Match method of feature matching to match the descriptors of the two real-time images to obtain corresponding matching points, aligning the feature points and performing coordinate conversion, and then computing the affine transformation matrix from the feature points; if the fault-tolerance strategy for matching the satellite reference map with the real-time image is satisfied, using the matching information of the most recent real-time frame, otherwise adopting a frame-skipping strategy and using the matching information of an earlier real-time frame.
  6. The method according to claim 5, characterized in that, during real-time image frame-to-frame matching, empirical values are set to perform a fault-tolerance judgment on the affine transformation matrix; if the fault-tolerance rule is satisfied, a perspective transformation is applied to the inter-frame matching coordinates and the inter-frame real-time image matching information is updated; if the fault-tolerance rule is not satisfied, the descriptors of the real-time image are matched against the descriptors of the inter-frame matching information last updated by a successful match, the affine transformation matrix is computed from the matched feature points, and two perspective transformations are then applied, propagating through the computed affine transformation matrix and the inter-frame matching information transformation matrix last updated by a successful match to obtain the post-matching row and column coordinates.
  7. The method according to claim 1, characterized in that step S2 is specifically:
    S201. Based on the complete satellite remote sensing reference map, combined with the incremental navigation pose of the real-time image obtained in step S1, calibrating by translation, rotation, cropping and scaling, and then constructing an optimized reference map that coincides with the real-time image;
    S202. Performing heterogeneous feature matching between the real-time image collected in step S1 and the optimized reference map constructed in step S201, to obtain the navigation pose perspective transformation matrix of the real-time image relative to the reference map;
    S203. Based on the navigation pose perspective transformation matrix obtained in step S202, performing absolute navigation pose estimation on the real-time image to obtain an absolute navigation pose.
  8. The method according to claim 7, characterized in that, in step S202, the heterogeneous feature matching between the real-time image and the optimized reference map is specifically:
    Using the cropping row and column coordinates to crop, from the whole reference map, the reference patch used for matching, so that the reference matching area is essentially consistent with the real-time image area; then rotating the reference patch so that the satellite remote sensing reference map and the real-time image are kept at the same angle; then processing the rotated patch and fixing it to the matching size; using feature matching to obtain the corresponding feature points and descriptors; using the Match method to match the descriptors of the satellite remote sensing reference map with those of the real-time image to obtain corresponding matching points; aligning the feature points and performing coordinate conversion; then computing the affine transformation matrix from the feature points and performing a fault-tolerance judgment on it; when the fault-tolerance rule is satisfied, i.e. the satellite remote sensing reference map and the real-time image are matched successfully, using a perspective transformation to convert the source image coordinates into target image coordinates via the affine transformation matrix, then undoing the size, rotation and cropping transformations to convert the coordinates into row and column coordinates on the whole reference map, and updating the matching information of the real-time image; when the fault-tolerance rule is not satisfied, i.e. the match fails, using the row and column coordinates obtained during inter-frame matching as the row and column coordinates obtained by matching the satellite remote sensing reference map with the real-time image.
  9. The method according to claim 1, characterized in that step S3 is specifically:
    S301. Based on the navigation pose of the previous frame, predicting the navigation pose at the current moment through the navigation pose state equation, to obtain the estimated navigation pose at the current moment;
    S302. Combining the incremental navigation pose obtained in step S1, the absolute navigation pose of step S2 and the navigation data, and analyzing the sources of observation information;
    S303. Building the factor graph structure according to the observation sources analyzed in step S302;
    S304. Combining the estimated navigation pose at the current moment obtained in step S301 with the observed navigation pose at the current moment, optimizing and updating the navigation pose at the current moment based on the factor graph structure of step S303, and obtaining and outputting the navigation pose corresponding to the current real-time image.
  10. A strong real-time dual-structure continuous scene fusion matching navigation positioning system, characterized by comprising:
    an estimation module, which uses an improved sequential real-time image frame-to-frame matching method to incrementally estimate the navigation pose from inter-frame feature transformations of the real-time images;
    an optimization module, which uses an improved method for matching real-time images against the satellite remote sensing reference map, optimizes the satellite remote sensing reference map on the basis of the incremental navigation pose estimated by the estimation module, and performs absolute navigation pose estimation on the real-time image and the optimized satellite remote sensing reference map to obtain an absolute navigation pose; and
    a positioning module, which uses a purely visual navigation pose fusion estimation method to fuse the incremental navigation pose of the estimation module with the absolute navigation pose of the optimization module, optimizes and updates the navigation pose with the predicted navigation pose at the current moment, and obtains and outputs the navigation pose corresponding to the real-time image, realizing strong real-time dual-structure continuous scene fusion matching navigation positioning.
PCT/CN2021/126136 2021-06-15 2021-10-25 Strong real-time dual-structure continuous scene fusion matching navigation positioning method and system WO2022262164A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110663125.2A CN113418527B (zh) 2021-06-15 2021-06-15 Strong real-time dual-structure continuous scene fusion matching navigation positioning method and system
CN202110663125.2 2021-06-15

Publications (1)

Publication Number Publication Date
WO2022262164A1 true WO2022262164A1 (zh) 2022-12-22

Family

ID=77788611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/126136 WO2022262164A1 (zh) 2021-06-15 2021-10-25 Strong real-time dual-structure continuous scene fusion matching navigation positioning method and system

Country Status (2)

Country Link
CN (1) CN113418527B (zh)
WO (1) WO2022262164A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113418527B (zh) 2021-06-15 2022-11-29 西安微电子技术研究所 Strong real-time dual-structure continuous scene fusion matching navigation positioning method and system
CN114199250A (zh) * 2021-12-03 2022-03-18 清华大学 Scene matching navigation method and apparatus based on convolutional neural network


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111256696B (zh) * 2020-02-24 2021-11-26 武汉大学 Aircraft autonomous navigation method based on multi-feature multi-level scene matching
CN111915651B (zh) * 2020-07-31 2023-09-12 西安电子科技大学 Real-time visual pose estimation method based on digital image maps and feature point tracking

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102353377A (zh) * 2011-07-12 2012-02-15 北京航空航天大学 High-altitude long-endurance unmanned aerial vehicle integrated navigation system and navigation positioning method thereof
CN107067415A (zh) * 2017-03-21 2017-08-18 南京航空航天大学 Fast and accurate target positioning method based on image matching
CN109974693A (zh) * 2019-01-31 2019-07-05 中国科学院深圳先进技术研究院 Unmanned aerial vehicle positioning method and apparatus, computer device and storage medium
CN111504323A (zh) * 2020-04-23 2020-08-07 湖南云顶智能科技有限公司 UAV autonomous positioning method based on heterogeneous image matching and inertial navigation fusion
CN112102403A (zh) * 2020-08-11 2020-12-18 国网安徽省电力有限公司淮南供电公司 High-precision positioning method and system for autonomous inspection UAVs in transmission tower scenarios
CN113418527A (zh) * 2021-06-15 2021-09-21 西安微电子技术研究所 Strong real-time dual-structure continuous scene fusion matching navigation positioning method and system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DONG-GYU SIM, IHN-CHEOL KIM, RAE-HONG PARK, RIN-CHUL KIM, SANG UK LEE: "Integrated position estimation using aerial image sequences", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE COMPUTER SOCIETY., USA, vol. 24, no. 1, 1 January 2002 (2002-01-01), USA , pages 1 - 18, XP011094131, ISSN: 0162-8828, DOI: 10.1109/34.982881 *
LI TIEJUN, CHEN ZHE, LIU ZHUN, WANG RENXIANG: "Estimation of navigation parameters from real-time aerial scene", SPIE SMART STRUCTURES AND MATERIALS + NONDESTRUCTIVE EVALUATION AND HEALTH MONITORING, 2005, SAN DIEGO, CALIFORNIA, UNITED STATES, SPIE, US, vol. 4222, 10 October 2000 (2000-10-10), US, pages 230 - 235, XP093015891, ISSN: 0277-786X, ISBN: 978-1-5106-4548-6, DOI: 10.1117/12.403880 *
LI YAOJUN, PAN QUAN, ZHAO CHUN-HUI, YU YING, FENG YANG: "Natural-landmark Scene Matching Vision Navigation Based on Dynamic Keyframe", OPTO-ELECTRONIC ENGINEERING, ZHONGGUO KEXUEYUAN GUANGDIAN JISHU YANJIUSUO QINGBAOSHI / CHINESE ACADEMY OF SCIENCES, INSTITUTE OF OPTICS AND ELECTRONICS, CN, vol. 37, no. 9, 15 September 2010 (2010-09-15), CN , pages 32 - 38, XP093015893, ISSN: 1003-501X, DOI: 10.3969/j.issn.1003-501X.2010.09.006 *
TIEJUN LI, ZHE CHEN, RENXIANG WANG: "Study of estimation of helicopter navigation parameters with digital camera", PROCEEDINGS OF SPIE, VISUAL COMMUNICATIONS AND IMAGE PROCESSING 2005, SPIE, VISUAL COMMUNICATIONS AND IMAGE PROCESSING 2005, 2005, BEIJING, CHINA, vol. 4602, 17 April 2003 (2003-04-17), Visual Communications and Image Processing 2005, 2005, Beijing, China, XP055110106, ISSN: 0277786X, DOI: 10.1117/12.445741 *
WENG LUBIN, TIAN YUAN, WANG YANQING, TAI XIANQING, YANG YIPING: "Novel Scene Matching Method for Complex Terrain Based on Three-view Constraints", ACTA AERONAUTICA ET ASTRONAUTICA SINICA, vol. 29, no. 5, 25 September 2008 (2008-09-25), pages 1288 - 1296, XP093015897, ISSN: 1000-6893 *

Also Published As

Publication number Publication date
CN113418527B (zh) 2022-11-29
CN113418527A (zh) 2021-09-21

Similar Documents

Publication Publication Date Title
CN109579843B (zh) 一种空地多视角下的多机器人协同定位及融合建图方法
WO2022262164A1 (zh) 一种强实时双构连续景象融合匹配导航定位方法及系统
US20230394756A1 (en) Three-dimensional reconstruction method, three-dimensional reconstruction apparatus and storage medium
CN110675450B (zh) 基于slam技术的正射影像实时生成方法及系统
CN105352509B (zh) 地理信息时空约束下的无人机运动目标跟踪与定位方法
CN113406682B (zh) 一种定位方法、装置、电子设备及存储介质
US20210082137A1 (en) Method and device of simultaneous localization and mapping
US11238643B1 (en) High-definition city mapping
CN112129281A (zh) 一种基于局部邻域地图的高精度图像导航定位方法
AU2012296659A1 (en) Systems and methods for navigating camera
CN111829532B (zh) 一种飞行器重定位系统和重定位方法
CN110275179A (zh) 一种基于激光雷达以及视觉融合的构建地图方法
CN112419501A (zh) 一种地空异构协同地图构建方法
AU2018449839A1 (en) Surveying and mapping method and device
US20220300745A1 (en) Contextualization and refinement of simultaneous localization and mapping
Zhao et al. RTSfM: Real-time structure from motion for mosaicing and DSM mapping of sequential aerial images with low overlap
CN111508072B (zh) 地图构建方法、装置、电子设备及存储介质
CN114396944B (zh) 一种基于数字孪生的自主定位误差矫正方法
Hamieh et al. Construction of autonomous driving maps employing lidar odometry
CN114022561A (zh) 一种基于gps约束和动态校正的城区单目测图方法和系统
Ke et al. 3D scene localization and mapping based on omnidirectional SLAM
Lourakis et al. Pose estimation of a moving camera with low-cost, multi-GNSS devices
CN111091622B (zh) 一种无人机巡检航线构建方法
Patel et al. Collaborative mapping of archaeological sites using multiple uavs
CN111868656A (zh) 一种作业控制系统、作业控制方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21945744

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE