CN112906573B - Planet surface navigation landmark matching method based on contour point set - Google Patents

Info

Publication number: CN112906573B (granted publication of application CN202110194541.2A; earlier publication CN112906573A)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: navigation, landmark, matching, image, point set
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: Zhu Shengying (朱圣英), Xiu Yi (修义), Cui Pingyuan (崔平远), Xu Rui (徐瑞), Liang Zixuan (梁子璇)
Current and original assignee: Beijing Institute of Technology (BIT)
Application filed by Beijing Institute of Technology (BIT); priority to CN202110194541.2A
Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/50 — Context or environment of the image
    • G06V 20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 — Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 — Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/24 — Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00, specially adapted for cosmonautical navigation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/22 — Matching criteria, e.g. proximity measures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention relates to a method for matching planet surface navigation landmarks based on contour point sets, and belongs to the field of deep space exploration. The probe detects and fits navigation landmark edges in both the planet surface image A and the database map B, and discretizes the fitted contours to obtain contour point set coordinates. The similarity distance between the resulting discrete point sets is calculated. The three pairs of navigation landmarks with the minimum values in the point-set similarity matrix are selected as effective matches, and a homography transformation matrix is calculated from them. The homography matrix yields affine-transformed coordinates of the landmark centers of the image to be matched; the distance deviation from the landmark center pixel coordinates of the map is calculated, a matching distance matrix is constructed, and the landmark with the minimum matching distance is found as the match. The method applies to a wider range of matching objects, can simultaneously match terrain such as craters, rocks and gullies in an optical image, and is suitable for matching and tracking between sequence images and database maps.

Description

Planet surface navigation landmark matching method based on contour point set
Technical Field
The invention relates to a method for matching planet surface navigation landmarks based on contour point sets. It is particularly suitable for matching sequence images, and for matching landing images against a database, during deep space exploration navigation, and belongs to the field of deep space exploration.
Background
Planetary exploration is one of the core tasks of future deep space exploration, and matching of planet surface terrain features is one of the key technologies of optical autonomous navigation. Deep space missions cover long distances with long communication times, so the traditional ground-based measurement-and-control mode suffers large communication latency. In addition, the deep space dynamical environment is complex, and ground remote control cannot meet the requirements of high-precision navigation. With breakthroughs in computer hardware, terrain feature matching based on the onboard computer has become a research hotspot. Craters, rocks, gullies, steep slopes and similar natural terrain features are widespread on planetary surfaces; under illumination they show distinct bright-dark contrast, can be detected and extracted at scale using only an optical sensor and the onboard computer, are simple to process and easy to track, and supply abundant data for image matching, giving them broad application prospects. Methods that match planet surface terrain as navigation landmarks have therefore been widely studied and applied in planetary exploration. Whether the navigation landmarks on planetary surfaces in optical images can be matched correctly has become one of the key technologies determining mission success or failure.
Before a mission starts, the deep space probe builds a global map of the target planet through long-term observation, including data such as navigation landmark shape contours and three-dimensional positions. In subsequent phases such as landing, the probe images the target area in real time and must acquire the positions of navigation landmarks in the descent image for navigation, in order to estimate its own state. Image matching is the bridge between the database map and the descent image: it links navigation landmark detection to pose estimation and is an important source of the landmarks' three-dimensional positions. Navigation landmark matching provides correct landmark information to the pose estimation algorithm and thereby guarantees the probe's navigation accuracy.
Because shooting environments differ, the imaging conditions of planet surface navigation landmarks vary in illumination angle, camera attitude, shooting distance and so on, so the landmarks differ between the captured image and the database map. The linear transformation between the two-dimensional data of the captured image and that of the map is an affine transformation: the navigation landmarks undergo an affine transformation between the two images, but the straightness and parallelism of the landmark figures are preserved before and after the transformation, which provides an important basis for landmark matching.
Among existing planet surface navigation landmark matching methods, the main matching objects are crater features; common approaches include image-based cross-correlation matching, crater-correlation matching, and matching based on crater area ratios as affine invariants.
Prior art [1] (Xutian Lai, Ruizhu billow, Tianyang, et al., A crater matching method based on area ratio: China, CN102999915A [P], 2013-03-27) proposes a matching algorithm based on ratios of different crater areas. Crater detection is first performed on the target image using the maximally stable extremal regions method, and the crater areas in the two images are computed with an ellipse-fitting algorithm. Finally, the craters in the two images are traversed and compared for similarity using the Hausdorff distance, and the two craters with the highest similarity are considered a successful match. When the image contains many craters, traversing all of them for similarity makes the computation excessive; the matching rate based on the Hausdorff distance is also not high, so the algorithm is unsuited to the strong real-time requirements of landing descent.
Prior art [2] (M. Yu, H. Cui, Y. Tian, A new approach based on crater detection and matching for visual navigation in planetary landing [J]. Advances in Space Research 53 (2014) 1810-1821) proposes a matching method that uses crater area ratios as affine invariants with winner-take-all (WTA) voting. The crater area ratios in the database map can be computed in advance; during the mission, the area ratios in the image to be matched are computed, and by comparing the number of matching relations between the detected image's and the database's crater area ratios, the crater whose area ratio matches best is taken as a successful match. The algorithm does not rely on the probe's pose information, but matching performance is poor when the image contains fewer than 5 craters.
Prior art [3] (T. Lu, W. Hu, C. Liu, et al., Relative pose estimation of a lander using crater detection and matching [J]. Optical Engineering, 2016, 55(2): 023102) proposes a crater matching algorithm that describes the geometric pattern among craters. The algorithm computes the positions and radii of the craters in the image, forms triangles from the centers of every three craters, builds a matching vector from the triangle's three side lengths and the three crater radii, and matches craters by traversing all possible triangles in the two images based on this vector. The algorithm is inapplicable when fewer than three craters are detected in the image, and to terrain feature images without craters.
Disclosure of Invention
The invention aims to provide a contour point set-based planet surface navigation landmark matching method that matches the navigation landmarks of an image against a database map even when the landmarks in the image to be matched have undergone rotation, translation and scale transformations.
The purpose of the invention is realized by the following technical scheme.
In the disclosed contour point set-based planet surface navigation landmark matching method, the probe points its onboard optical camera at the target area; the onboard computer reads the terrain image A of the target body's surface captured by the camera and detects the navigation landmark edges in image A. Likewise, navigation landmark edge detection is performed on the database map B. The detected landmark edges are fitted, and the fitted contours are discretized to obtain contour point set coordinates. The similarity distance F between the resulting discrete point sets P_i^A and Q_j^B is calculated. The three pairs of navigation landmarks with the minimum values in the point-set similarity matrix F are selected as effective matches, and the homography transformation matrix T is calculated. The homography matrix yields affine-transformed coordinates of the landmark centers of the image to be matched; the distance deviation D is calculated using the landmark center pixel coordinates of image B, the matching distance matrix H is constructed, and the landmark with the minimum matching distance is found as the match.
The disclosed contour point set-based planet surface navigation landmark matching method comprises the following steps:
Step one: the probe points its onboard optical camera at the target area; after the onboard computer reads the terrain image A of the target body's surface captured by the camera, navigation landmark edge detection is performed on image A. Likewise, navigation landmark edge detection is performed on the database map B.
Step two: fit the detected navigation landmark edges and discretize the fitted contours to obtain the contour point set coordinates.
Fit the detected edge of the i-th (i = 1, 2, 3, …, m) of the m navigation landmarks detected in image A, discretize the fitted contour, and compute and store the discrete point set coordinates of the i-th landmark's fitted contour as P_i^A. Similarly, the contour point set coordinates of the j-th (j = 1, 2, 3, …, n) of the n navigation landmarks detected in database map B are Q_j^B.
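As an illustration of the discretization in step two, the sketch below samples a fitted elliptical contour (a common fit for crater rims) into a fixed-size point set. The function name, the ellipse-parameter convention, and the sample count n = 64 are illustrative assumptions, not part of the patent.

```python
import numpy as np

def discretize_ellipse(cx, cy, a, b, theta, n=64):
    """Sample n points along a fitted elliptical contour.

    (cx, cy): center pixel coordinates; a, b: semi-axes in pixels;
    theta: rotation angle in radians.  Returns an (n, 2) array of
    (u, v) pixel coordinates, i.e. one landmark's contour point set.
    """
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = a * np.cos(t), b * np.sin(t)
    # rotate by theta and translate to the landmark center
    u = cx + x * np.cos(theta) - y * np.sin(theta)
    v = cy + x * np.sin(theta) + y * np.cos(theta)
    return np.stack([u, v], axis=1)

# Contour point set for one hypothetical landmark in image A
P = discretize_ellipse(120.0, 80.0, 30.0, 18.0, 0.4)
print(P.shape)  # (64, 2)
```

Any other fitted contour model (polygonal rock outlines, spline-fitted gullies) can be discretized the same way, as long as the points are ordered along the contour so a path-based similarity distance is meaningful.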
Step three: calculate the similarity distance F between the discrete point sets P_i^A and Q_j^B obtained in step two.
The similarity distance judges the similarity of two point sets' data; in this matching method, the similarity distance between contour point sets of different navigation landmarks in image A and map B is calculated as the criterion for landmark shape similarity. For the i-th navigation landmark contour point set P_i in the detected image A and the j-th navigation landmark contour point set Q_j in database map B, the similarity distance is expressed as

\[ F_{ij} = F(P_i, Q_j) \tag{1} \]

If m navigation landmarks are extracted from the detected image A and n navigation landmarks are stored in database map B, the similarity distances between all landmarks of image A and all landmarks of map B are stored in the m × n point-set similarity matrix F:

\[ F = \begin{bmatrix} F_{11} & \cdots & F_{1n} \\ \vdots & \ddots & \vdots \\ F_{m1} & \cdots & F_{mn} \end{bmatrix} \tag{2} \]
Preferably, step three is implemented as follows:
After edge detection of the navigation landmarks in both images and acquisition of the contour discrete point sets, the Fréchet distance is adopted to describe the path similarity of the two sets of contour points. The Fréchet distance is defined between any two curves in a metric space; by emphasizing the spatial distance along the path, it effectively describes the degree of similarity between two point sets. For the i-th navigation landmark contour point set P_i in the detected image A and the j-th navigation landmark contour point set Q_j in database map B, the Fréchet distance is defined as:

\[ F(P_i, Q_j) = \inf_{\alpha, \beta} \; \max_{t \in [0,1]} \; d\big( P_i(\alpha(t)),\; Q_j(\beta(t)) \big) \tag{3} \]

where inf(·) denotes the infimum, max(·) the maximum, α and β range over continuous non-decreasing reparameterizations of [0, 1], and

\[ d(p_i^A, q_j^B) = \left\| p_i^A - q_j^B \right\| \]

is the Euclidean distance between a point p_i^A of the i-th navigation landmark contour point set in image A and a point q_j^B of the j-th navigation landmark contour point set in database map B.
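For sampled contour point sets, the continuous definition above is commonly replaced by the discrete Fréchet (coupling) distance, computed by the Eiter-Mannila dynamic program. The sketch below is a minimal stand-in for the patent's similarity distance; the function names, the NumPy implementation, and the toy contours are assumptions for illustration.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet (coupling) distance between contour point
    sets P (k x 2) and Q (l x 2): a standard discrete stand-in for
    the continuous Frechet distance, via dynamic programming."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise Euclidean
    k, l = d.shape
    ca = np.zeros((k, l))
    ca[0, 0] = d[0, 0]
    for i in range(1, k):                      # first column
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, l):                      # first row
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, k):
        for j in range(1, l):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]

# Toy contour sets: one landmark in "image A", two in "map B"
A_sets = [np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])]
B_sets = [A_sets[0].copy(), A_sets[0] + np.array([3.0, 4.0])]

# m x n point-set similarity matrix, as in the matrix F of the text
F = np.array([[discrete_frechet(P, Q) for Q in B_sets] for P in A_sets])
print(F)  # [[0. 5.]]
```

Note the second entry is exactly 5.0, the translation magnitude, because the Fréchet distance between a curve and its rigid translate equals the translation distance; the identical curves give 0.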
Step four: select the three pairs of navigation landmarks with the minimum values in the point-set similarity matrix F as effective matching landmarks, and calculate the homography transformation matrix T.
Let the row-column indices of the 3 smallest values in the point-set similarity matrix F be (i_1, j_1), (i_2, j_2) and (i_3, j_3). The landmark center pixel coordinates in images A and B for these three effectively matched pairs are (c_{i_1}^A, c_{j_1}^B), (c_{i_2}^A, c_{j_2}^B) and (c_{i_3}^A, c_{j_3}^B). The homography matrix of the affine transformation is computed from the three coordinate pairs as T:

\[ \begin{bmatrix} c_{j_k}^B \\ 1 \end{bmatrix} = T \begin{bmatrix} c_{i_k}^A \\ 1 \end{bmatrix}, \qquad k = 1, 2, 3 \tag{4} \]

where c_{i_1}^A and c_{j_1}^B are the center pixel coordinates of the i_1-th landmark in image A and the j_1-th landmark in map B corresponding to the smallest value of F; likewise, c_{i_2}^A and c_{i_3}^A are the center pixel coordinates of the landmarks in image A corresponding to the second and third smallest values of F, and c_{j_2}^B and c_{j_3}^B are those of the landmarks in map B.
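Three center-coordinate pairs determine the six unknowns of an affine transformation, so T here is an affine matrix whose last row is [0, 0, 1]. The sketch below recovers it by solving the resulting linear system; the function name and the sample coordinates are illustrative assumptions.

```python
import numpy as np

def affine_from_three_pairs(src, dst):
    """Estimate the 3x3 matrix T (last row [0, 0, 1]) mapping three
    landmark-center pairs src -> dst, each a 3x2 array of (u, v)
    pixel coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((3, 1))])  # rows [u, v, 1]
    X = np.linalg.solve(A, dst)            # A @ X = dst; X columns hold [a,b,c] and [d,e,f]
    T = np.eye(3)
    T[:2, :] = X.T                         # top two rows [a b c; d e f]
    return T

# Hypothetical matched centers: dst = diag(2, 3) @ src + (2, 3)
src = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
dst = [[2.0, 3.0], [4.0, 3.0], [2.0, 6.0]]
T = affine_from_three_pairs(src, dst)
print(T @ [1.0, 1.0, 1.0])  # [4. 6. 1.]
```

The solve fails (singular matrix) when the three source centers are collinear, which mirrors the geometric requirement that the three matched landmarks not lie on a line.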
Step five: use the homography matrix T obtained in step four to compute the affine-transformed coordinates of the navigation landmark centers of the image to be matched, calculate the distance deviation D using the landmark center pixel coordinates of image B, construct the matching distance matrix H, and find the landmark with the minimum matching distance as the match.
Applying the homography matrix T of step four to the center pixel coordinates C_A of all navigation landmarks of image A gives the transformed landmark centers C_{A'}:

\[ C_{A'} = \left\{ c_i^{A'} \;\middle|\; \begin{bmatrix} c_i^{A'} \\ 1 \end{bmatrix} = T \begin{bmatrix} c_i^{A} \\ 1 \end{bmatrix},\; i = 1, \dots, m \right\} \tag{5} \]

The navigation landmark centers of map B are C_B:

\[ C_B = \left\{ c_j^{B} \;\middle|\; j = 1, \dots, n \right\} \tag{6} \]

where c_i^A is the center pixel coordinate of the i-th navigation landmark in image A, c_i^{A'} is its pixel coordinate after the homography transformation, and c_j^B is the center pixel coordinate of the j-th navigation landmark in map B.
Because of errors, the transformed landmark centers C_{A'} of image A cannot coincide exactly with the corresponding landmark centers in map B, so the distance deviation between the transformed centers and the landmark centers of map B is calculated as D:

\[ D_{ij} = \left\| c_i^{A'} - c_j^{B} \right\| \tag{7} \]

where D_{ij} is the Euclidean distance between the center coordinate of the i-th landmark of image A after the homography transformation and the center coordinate of the j-th landmark in database map B.
The matching distance is calculated as the element-wise product of corresponding entries of the similarity distance matrix F and the deviation distance matrix D and stored in the m × n matching distance matrix H:

\[ H_{ij} = F_{ij} \, D_{ij} \tag{8} \]
the matching problem is mathematically structured to construct a matching matrix W as follows:
Figure BDA0002941673620000056
element W in the matching matrix WijFor decision variables, values of 0 and 1, w ij0 denotes the ith navigation signpost sum in graph aThe jth navigation signpost in graph B does not match, and wij1 indicates that the ith navigation landmark in graph A and the jth navigation landmark in graph B are matched with each other
Since the number of detected navigation landmarks is less than the number of landmarks in the database, the matching problem is set as a one-way search from image a to map B, and the matching search problem for different navigation landmarks of image a and map B is as follows:
Figure BDA0002941673620000057
Figure BDA0002941673620000058
wij={0,1} (11)
searching w by calculating the matching distance of the similarity and the central coordinate deviation of the contour point sets of different navigation signpostsijMinimizing the performance index J to determine a matching matrix W, W in the W matrixijThe position of the ith row and the jth column of the element 1 indicates that the ith (i ═ 1,2,3, …, m) navigation landmark in the image a to be matched and the jth (j ═ 1,2,3, …, n) navigation landmark in the map B are matched with each other.
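Under the one-way constraint that each landmark of image A matches exactly one landmark of map B, minimizing the performance index J decomposes into an independent minimum over each row of H, which the sketch below exploits. The function name and the toy F and D matrices are assumptions for illustration.

```python
import numpy as np

def match_landmarks(F, D):
    """Matching distance H as the elementwise product of the
    similarity matrix F and the deviation matrix D, then a one-way
    search: each row (landmark of image A) takes the column
    (landmark of map B) minimizing its matching distance."""
    H = F * D                              # Hadamard (elementwise) product
    j_star = np.argmin(H, axis=1)          # best map-B landmark per image-A landmark
    W = np.zeros(H.shape, dtype=int)
    W[np.arange(H.shape[0]), j_star] = 1   # w_ij = 1 marks a match
    return W, H

# Toy 3 x 2 matrices (3 landmarks in image A, 2 in map B)
F = np.array([[1.0, 5.0], [4.0, 2.0], [3.0, 3.0]])
D = np.array([[2.0, 1.0], [1.0, 1.0], [5.0, 1.0]])
W, H = match_landmarks(F, D)
print(W)
```

The row-wise minimum satisfies the one-row-one-match constraint but does not forbid two image-A landmarks sharing the same map-B landmark; a stricter one-to-one assignment (e.g. the Hungarian algorithm) would add that guarantee at extra cost.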
The method further comprises: determining the position and attitude of the deep space probe based on the navigation landmarks matched in step five, thereby improving the navigation accuracy of the probe's pose determination.
Advantageous effects
1. The disclosed contour point set-based planet surface navigation landmark matching method performs matching by comparing the similarity of navigation landmark contour point sets from two images. It needs no parameters such as fitted area ratios or inter-landmark correlations; the detected landmark edges alone serve as input, and since landmark edge fitting is already a necessary step of image detection, the computational complexity and difficulty are greatly reduced. The algorithm is simple and easy to implement.
2. Compared with area-ratio matching, the disclosed method judges similarity using the point sets of landmark contours in different images. A contour point set carries more shape information and evaluates curve similarity better, so the matching rate of landmarks across images is higher. In the subsequent pose estimation of the probe, more matched landmarks mean richer landmark information for pose calculation, higher pose-determination accuracy, and a better navigation result.
3. The disclosed method applies to a wider range of matching objects: it can simultaneously match terrain such as craters, rocks and gullies in an optical image, and is suitable for matching and tracking between sequence images and database maps.
Drawings
FIG. 1 is a flow chart of the disclosed contour point set-based planet surface navigation landmark matching method;
FIG. 2 is the original optical image A to be matched, taken by the simulated optical camera in the embodiment;
FIG. 3 is the original optical image B of the simulated database map in the embodiment;
FIG. 4 is the contour point set image from navigation landmark edge detection on the image A to be matched in the embodiment's simulation;
FIG. 5 is the contour point set image from navigation landmark edge detection on the database map B in the embodiment's simulation;
FIG. 6 shows the matching result of the navigation landmark contour point sets of image A and image B in the embodiment's simulation;
FIG. 7 shows the area-based matching result of the navigation landmarks of image A and image B according to the prior art.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
To verify the feasibility of the method, a planetary-surface sandbox in a Ministry of Industry and Information Technology key laboratory for deep space autonomous navigation and control was photographed by a camera from different angles, simulating a database map (FIG. 3) and an affine-transformed image to be matched (FIG. 2).
The contour point set-based planet surface navigation landmark matching method disclosed in this embodiment proceeds as shown in FIG. 1, with the following concrete implementation steps:
Step one: the probe points its onboard optical camera at the sandbox target area. After the onboard computer reads the target surface terrain image A (FIG. 2) captured by the camera, navigation landmark edges are detected and extracted from image A using computer graphics processing techniques such as image segmentation and morphological processing. Likewise, navigation landmark edge detection is performed on database map B (FIG. 3) with the same processing techniques.
Step two: fit the detected navigation landmark edges and discretize the fitted contours to obtain the contour point set coordinates.
The detected edge of the i-th (i = 1, 2, 3, …, 12) of the 12 navigation landmarks detected in image A is fitted and the fitted contour is discretized. The resulting discrete point set images of the landmark edge contours of images A and B are shown in FIGS. 4 and 5 respectively: 12 navigation landmarks comprising 9 craters and 3 rocks are detected in FIG. 4, and 16 navigation landmarks comprising 10 craters and 6 rocks are detected in FIG. 5. The discrete point set coordinates of the fitted contour of the i-th landmark in image A are computed and stored as P_i^A. Similarly, the contour point set coordinates of the j-th (j = 1, 2, 3, …, 16) of the 16 navigation landmarks detected in database map B are Q_j^B.
Step three: calculate the similarity distance F between the discrete point sets P_i^A and Q_j^B obtained in step two.
The similarity distance judges the similarity of two point sets' data; the similarity distance between contour point sets of different navigation landmarks in image A and map B is calculated as the criterion for landmark shape similarity in this matching method. For the i-th navigation landmark contour point set P_i in the detected image A and the j-th navigation landmark contour point set Q_j in database map B, the similarity distance can be expressed as F_{ij} = F(P_i, Q_j).
With 12 navigation landmarks extracted from the detected image A and 16 navigation landmarks stored in database map B, the similarity distances between all landmarks of image A and all landmarks of map B are stored in the 12 × 16 point-set similarity matrix F.
Preferably, step three is implemented as follows:
After edge detection of the navigation landmarks in both images and acquisition of the contour discrete point sets, the Fréchet distance is adopted to describe the path similarity of the two sets of contour points. The Fréchet distance is defined between any two curves in a metric space; by emphasizing the spatial distance along the path, it effectively describes the degree of similarity between two point sets. For the i-th navigation landmark contour point set P_i in the detected image A and the j-th navigation landmark contour point set Q_j in database map B, the Fréchet distance is defined as:

\[ F(P_i, Q_j) = \inf_{\alpha, \beta} \; \max_{t \in [0,1]} \; d\big( P_i(\alpha(t)),\; Q_j(\beta(t)) \big) \]

where inf(·) denotes the infimum, max(·) the maximum, α and β range over continuous non-decreasing reparameterizations of [0, 1], and d(p_i^A, q_j^B) = ||p_i^A − q_j^B|| is the Euclidean distance between a point p_i^A of the i-th navigation landmark contour point set in image A and a point q_j^B of the j-th navigation landmark contour point set in database map B.
Step four: select the three pairs of navigation landmarks with the minimum values in the point-set similarity matrix F as effective matching landmarks, and calculate the homography transformation matrix T.
By searching, the row-column indices of the 3 smallest values in the point-set similarity matrix F are (6, 8), (11, 15) and (10, 12). The landmark center pixel coordinates in images A and B for these effectively matched pairs are (c_6^A, c_8^B), (c_11^A, c_15^B) and (c_10^A, c_12^B), and the homography matrix of the affine transformation is computed from the three coordinate pairs as T, where c_6^A and c_8^B are the center pixel coordinates of the 6th landmark in image A and the 8th landmark in map B corresponding to the smallest value of F; likewise, c_11^A and c_10^A are the center pixel coordinates of the 11th and 10th landmarks in image A corresponding to the second and third smallest values of F, and c_15^B and c_12^B are those of the 15th and 12th landmarks in map B.
The homography transformation establishes a one-to-one correspondence between the pixel points of the navigation landmarks in the two images, so the homography matrix T obtained from the landmark centers of the 3 most similar point-set pairs can be used to compute the transformed center pixel coordinates of all landmarks of image A.
Step five: use the homography transformation matrix obtained in step four to compute the affine-transformed coordinates of the navigation landmark centers of the image to be matched, calculate the distance deviation D from the navigation landmark center pixel coordinates of map B, construct the matching distance matrix H, and search for the navigation landmark with the minimum matching distance as the matching landmark.
Through the homography transformation matrix T of step four, the center pixel coordinates C_A of all navigation landmarks of image A are homography-transformed, and the transformed landmark centers of image A are C_A':

C_A' = T · C_A, C_A = {c_A^1, c_A^2, …, c_A^m}

The navigation landmark centers of map B are C_B:

C_B = {c_B^1, c_B^2, …, c_B^n}
Wherein c_A^i is the center pixel coordinate of the ith navigation landmark in image A, c_A'^i is the pixel coordinate of c_A^i after homography transformation, and c_B^j is the center pixel coordinate of the jth navigation landmark in map B.
Because of errors, the transformed landmark centers C_A' of image A cannot coincide exactly with the corresponding navigation landmark centers in map B, so the distance deviation D between the transformed landmark centers and the navigation landmark centers of map B is calculated:

D = [d_ij]_{m×n}, d_ij = ||c_A'^i − c_B^j||

where d_ij is the Euclidean distance between the center coordinate of the ith navigation landmark of image A after homography transformation and the center coordinate of the jth navigation landmark in the database map B.
Based on the similarity distance F and the deviation distance D, the matching distance is calculated as the element-wise (Hadamard) product of the corresponding elements of the matrices F and D and stored in an m×n matching distance matrix H:

H = F ∘ D, h_ij = f_ij · d_ij
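The deviation matrix D and the element-wise matching distance H described above can be sketched as follows (NumPy; the similarity and center values are hypothetical, for illustration only):

```python
import numpy as np

def matching_distance(F, CA_t, CB):
    """Combine shape similarity and center deviation into the matching
    distance H (element-wise product of F and D).
    F:    (m, n) point set similarity matrix
    CA_t: (m, 2) homography-transformed landmark centers of image A
    CB:   (n, 2) landmark centers of map B"""
    # d_ij = Euclidean distance between transformed center i and center j.
    diff = CA_t[:, None, :] - CB[None, :, :]      # (m, n, 2)
    D = np.linalg.norm(diff, axis=2)              # (m, n)
    H = F * D                                     # Hadamard (element-wise) product
    return D, H

# Hypothetical example: two detected landmarks vs. two database landmarks.
F = np.array([[0.8, 4.0], [3.5, 0.6]])
CA_t = np.array([[100.0, 100.0], [200.0, 150.0]])
CB = np.array([[101.0, 100.0], [198.0, 151.0]])
D, H = matching_distance(F, CA_t, CB)
```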
The matching problem is formulated mathematically by constructing a matching matrix W:

W = [w_ij]_{m×n}

The element w_ij of the matching matrix W is a decision variable taking the values 0 and 1: w_ij = 0 indicates that the ith navigation landmark in image A and the jth navigation landmark in map B do not match, and w_ij = 1 indicates that the ith navigation landmark in image A and the jth navigation landmark in map B match each other.
Since the number of detected navigation landmarks is usually smaller than the number of landmarks in the database, the matching problem is posed as a one-way search from image A to map B; the matching search problem for the navigation landmarks of image A and map B is:

min J = Σ_{i=1}^{m} Σ_{j=1}^{n} w_ij · h_ij (20)

s.t. Σ_{j=1}^{n} w_ij = 1, i = 1, 2, …, m (21)

w_ij ∈ {0, 1} (22)
By calculating the matching distance that combines the Frechet distance between the different navigation landmark contour point sets and the center coordinate deviation, the w_ij minimizing the performance index J are searched to determine the matching matrix W. A position (i, j) in W where the element w_ij = 1 indicates that the ith (i = 1, 2, 3, …, 12) navigation landmark in the image A to be matched and the jth (j = 1, 2, 3, …, 16) navigation landmark in map B match each other. The final matching image is shown in fig. 6, and the matching rates of the two images used in this example are given in table 1.
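Because each detected landmark must match exactly one database landmark while the columns of W are unconstrained, minimizing J decomposes into an independent argmin over each row of H. A sketch of this one-way search:

```python
import numpy as np

def match_landmarks(H):
    """One-way matching search from image A to map B: pick, for each
    detected landmark (row of H), the database landmark (column) with
    the smallest matching distance. Returns the 0/1 matching matrix W
    and the performance index J = sum_ij w_ij * h_ij."""
    m, n = H.shape
    W = np.zeros((m, n), dtype=int)
    cols = np.argmin(H, axis=1)        # best map-B landmark per detected landmark
    W[np.arange(m), cols] = 1
    J = float(np.sum(W * H))
    return W, J
```

Note this row-wise argmin permits two detected landmarks to claim the same database landmark; a one-to-one assignment would need an additional column constraint (e.g. the Hungarian algorithm), which the one-way formulation above does not impose.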
Table 1 Simulation results (matching rates of the two images)
The method further comprises step six: determining the position and attitude of the deep space probe based on the navigation landmarks matched in step five, thereby improving the navigation accuracy of the position and attitude of the deep space probe.
To verify the reliability of the matching result, edge detection is performed on images of a photographed planetary surface sand table; on this basis, navigation landmark matching between the two images is carried out using both the contour point set similarity method of this invention and the prior-art area similarity method, with the matching results shown in figs. 6 and 7 respectively. The three-dimensional coordinates of the 16 navigation landmarks in map B are defined in the target celestial body fixed coordinate frame; the initial position of the probe in the small celestial body fixed coordinate frame is set to [510; −320; 4000] m and the initial attitude to [24; −6; 23] deg. The field of view is 30 deg and the focal length of the navigation camera is 8 mm; pose estimation verification is performed for the two matching results.
Pose estimation of the probe is simulated using the least squares method. The inputs to the pose estimation algorithm are, respectively, the two-dimensional center pixel coordinates of the navigation landmarks matched by the point set method in fig. 6 with their corresponding three-dimensional coordinates, and the center pixel coordinates of the navigation landmarks matched by the area matching method in fig. 7 with their corresponding three-dimensional coordinates. The pose estimation result based on contour point set matching is as follows:
Table 2 Pose determination results of the method of this embodiment
The matching result based on area similarity, by contrast, assigns wrong three-dimensional positions to the navigation landmarks in the detected image because of mismatches, so its pose estimation result diverges severely. The simulation results show that the contour point set based planetary surface navigation landmark matching method can effectively match navigation landmarks between images and improve the navigation accuracy of the position and attitude of the deep space probe.
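The least-squares pose estimation above minimizes the reprojection residual between projected three-dimensional landmark centers and observed pixel centers. A minimal sketch of such a residual under an assumed pinhole model (the pixel size and the small-angle rotation approximation are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def reprojection_residual(pose, P3d, p2d, f=8e-3, pix=1e-5):
    """Reprojection residual for least-squares pose estimation.
    pose = (rx, ry, rz, tx, ty, tz): small rotation angles [rad] and
    translation [m]; P3d: (k, 3) matched landmark 3D coordinates in the
    camera-aligned frame; p2d: (k, 2) observed center pixel coordinates.
    f is the 8 mm focal length from the example; pix is an assumed
    pixel size."""
    rx, ry, rz, tx, ty, tz = pose
    # Small-angle approximation of the rotation matrix (illustrative).
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    Pc = P3d @ R.T + np.array([tx, ty, tz])    # camera-frame coordinates
    uv = (f / pix) * Pc[:, :2] / Pc[:, 2:3]    # pinhole projection to pixels
    return (uv - p2d).ravel()                  # stacked (du, dv) residuals
```

A solver such as Gauss-Newton (or a library least-squares routine) would iterate on `pose` to drive this residual toward zero over all matched landmarks.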
The method is based on the matching distance between navigation landmark contour point sets before and after affine transformation. By minimizing the similarity distance and the center deviation distance between the contour point sets of the image to be matched and of the database map, corresponding navigation landmarks in different images can be searched autonomously; this provides a new approach to the landmark matching problem in optical-image-based deep space exploration missions and is simple to implement and easy to operate. As can be seen from fig. 6, the navigation landmarks detected in image A are all correctly matched to the database navigation landmarks of map B, verifying the correctness and effectiveness of the contour point set based planetary surface navigation landmark matching method.
This completes the contour point set based planetary surface navigation landmark matching method required by the deep space probe.
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (1)

1. The method for matching planet surface navigation road signs based on a contour point set, characterized by comprising the following steps:
step one: the detector uses an onboard optical camera to directionally photograph the target area; after the spaceborne computer reads the terrain image A of the target celestial body surface captured by the optical camera, navigation landmark edge detection is performed on image A; similarly, navigation landmark edge detection is performed on map B in the database;
step two: fitting the detection edges of the navigation road signs respectively, and obtaining a contour point set coordinate through discretization of the fitted contour of the navigation road signs;
fitting the detected edges of the ith (i = 1, 2, 3, …, m) of the m navigation landmarks detected in image A, discretizing the fitted contour, and calculating and storing the discrete point set coordinates of the fitted contour of the ith navigation landmark as P_i^A; similarly, the edge point set coordinates of the jth (j = 1, 2, 3, …, n) of the n navigation landmarks detected in the database map B are Q_j^B;
step three: calculating the similarity distance F between the discrete point sets P_i^A and Q_j^B obtained in step two;

the similarity distance is used to judge the similarity of the two groups of point set data; the similarity distance between different navigation landmark contour point sets in image A and map B is calculated and used as the criterion for navigation landmark shape similarity in the matching method; for the ith navigation landmark contour point set P_i in the detected image A and the jth navigation landmark contour point set Q_j in the database map B, the similarity distance is expressed as f_ij = δ_F(P_i, Q_j);

if m navigation landmarks are extracted in total from the detected image A and n navigation landmarks are stored in the database map B, the similarity distances between all navigation landmarks in image A and all navigation landmarks in the database map B are stored in an m×n point set similarity matrix F = [f_ij]_{m×n};
the third step is implemented as follows:

after completing edge detection of the navigation landmarks of the two images and acquisition of the contour discrete point sets, the path similarity of the two groups of contour point sets is described by the Frechet distance; the Frechet distance is a distance defined between any two sets in a metric space, and by emphasizing the spatial distance of paths it can effectively describe the similarity between two groups of point sets; for the ith navigation landmark contour point set P_i in the detected image A and the jth navigation landmark contour point set Q_j in the database map B, the Frechet distance is defined as:

δ_F(P_i, Q_j) = inf_{α,β} max_{t} d(P_i(α(t)), Q_j(β(t)))

where α and β range over continuous non-decreasing reparameterizations of the two contours, inf(·) represents the infimum (greatest lower bound) of the data, max(·) represents the maximum in the data, d(p_i^k, q_j^l) = ||p_i^k − q_j^l|| is the Euclidean distance between a point of the ith navigation landmark contour point set in the detected image and a point of the jth navigation landmark contour point set in the database map, and p_i^k and q_j^l respectively represent the contour point set coordinates of the ith navigation landmark in image A and of the jth navigation landmark in map B;
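For discrete contour point sets, a distance of this kind is typically computed with the Eiter-Mannila discrete Frechet dynamic programme, a standard approximation of the continuous definition above; a sketch (NumPy, illustrative):

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two contour point sets
    (Eiter-Mannila dynamic programme). ca[i, j] holds the coupling
    distance of the prefixes P[:i+1], Q[:j+1]."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    p, q = len(P), len(Q)
    ca = np.zeros((p, q))
    ca[0, 0] = np.linalg.norm(P[0] - Q[0])
    for i in range(1, p):                 # first column: advance along P only
        ca[i, 0] = max(ca[i - 1, 0], np.linalg.norm(P[i] - Q[0]))
    for j in range(1, q):                 # first row: advance along Q only
        ca[0, j] = max(ca[0, j - 1], np.linalg.norm(P[0] - Q[j]))
    for i in range(1, p):
        for j in range(1, q):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           np.linalg.norm(P[i] - Q[j]))
    return ca[-1, -1]
```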
step four: selecting three pairs of navigation signposts with the minimum value in the point set similarity matrix F as effective matching signposts, and calculating a homography transformation matrix T;
let the row and column indices corresponding to the 3 smallest values in the point set similarity matrix F be (i_1, j_1), (i_2, j_2) and (i_3, j_3); the navigation landmark center pixel coordinates of image A and map B corresponding to the three pairs of effectively matched landmarks are then (c_A^{i_1}, c_B^{j_1}), (c_A^{i_2}, c_B^{j_2}) and (c_A^{i_3}, c_B^{j_3});
the homography transformation matrix T of the affine transformation is calculated from the three pairs of coordinates:

c_B^{j_k} = T · c_A^{i_k}, k = 1, 2, 3

where the center coordinates are written as homogeneous pixel coordinates and, the transformation being affine, T = [t_11, t_12, t_13; t_21, t_22, t_23; 0, 0, 1];
wherein c_A^{i_1} and c_B^{j_1} are the center pixel coordinates of the i_1th navigation landmark in image A and of the j_1th navigation landmark in map B, corresponding to the minimum value in the point set similarity matrix F; likewise, c_A^{i_2} and c_A^{i_3} are the center pixel coordinates of the i_2th and i_3th navigation landmarks in image A corresponding to the second and third smallest values in F, and c_B^{j_2} and c_B^{j_3} are the center pixel coordinates of the j_2th and j_3th navigation landmarks in map B corresponding to the second and third smallest values in F;
step five: using the homography transformation matrix obtained in step four, computing the affine-transformed coordinates of the navigation landmark centers of the image to be matched, calculating the distance deviation D from the navigation landmark center pixel coordinates of image B, constructing a matching distance matrix H, and searching for the navigation landmark with the minimum matching distance as the matching landmark;
through the homography transformation matrix T of step four, the center pixel coordinates C_A of all navigation landmarks of image A are homography-transformed, and the transformed landmark centers of image A are C_A':

C_A' = T · C_A, C_A = {c_A^1, c_A^2, …, c_A^m}

the navigation landmark centers of map B are C_B:

C_B = {c_B^1, c_B^2, …, c_B^n}
wherein c_A^i is the center pixel coordinate of the ith navigation landmark in image A, c_A'^i is the pixel coordinate of c_A^i after homography transformation, and c_B^j is the center pixel coordinate of the jth navigation landmark in map B;
all the transformed road sign centers C of the image A are in the presence of errorsA'The center of the corresponding navigation landmark in the map B can not be completely overlapped, so the distance deviation between the center of the transformation landmark and the center of the navigation landmark in the map B is calculated as D:
Figure FDA0003618284760000037
Figure FDA0003618284760000038
Figure FDA0003618284760000039
the Euclidean distance between the center coordinate of the ith navigation landmark after homography transformation of the image A and the center coordinate of the jth navigation landmark in the database map B;
the matching distance is calculated as the element-wise product of the corresponding elements of the similarity distance matrix F and the deviation distance matrix D and stored in the m×n matching distance matrix H:

H = F ∘ D, h_ij = f_ij · d_ij
the matching problem is formulated mathematically by constructing a matching matrix W:

W = [w_ij]_{m×n}

the element w_ij of the matching matrix W is a decision variable taking the values 0 and 1: w_ij = 0 indicates that the ith navigation landmark in image A and the jth navigation landmark in map B do not match, and w_ij = 1 indicates that the ith navigation landmark in image A and the jth navigation landmark in map B match each other;
since the number of detected navigation landmarks is smaller than the number of landmarks in the database, the matching problem is posed as a one-way search from image A to map B; the matching search problem for the navigation landmarks of image A and map B is:

min J = Σ_{i=1}^{m} Σ_{j=1}^{n} w_ij · h_ij (9)

s.t. Σ_{j=1}^{n} w_ij = 1, i = 1, 2, …, m (10)

w_ij ∈ {0, 1} (11)
by calculating the matching distance that combines the similarity of the different navigation landmark contour point sets and the center coordinate deviation, the w_ij minimizing the performance index J are searched to determine the matching matrix W; a position (i, j) in W where the element w_ij = 1 indicates that the ith (i = 1, 2, 3, …, m) navigation landmark in the image A to be matched and the jth (j = 1, 2, 3, …, n) navigation landmark in map B match each other;
the method further comprises step six: determining the position and attitude of the deep space probe based on the navigation landmarks matched in step five, thereby improving the navigation accuracy of the position and attitude of the deep space probe.
CN202110194541.2A 2021-02-10 2021-02-10 Planet surface navigation road sign matching method based on contour point set Active CN112906573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110194541.2A CN112906573B (en) 2021-02-10 2021-02-10 Planet surface navigation road sign matching method based on contour point set

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110194541.2A CN112906573B (en) 2021-02-10 2021-02-10 Planet surface navigation road sign matching method based on contour point set

Publications (2)

Publication Number Publication Date
CN112906573A CN112906573A (en) 2021-06-04
CN112906573B true CN112906573B (en) 2022-06-28

Family

ID=76124173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110194541.2A Active CN112906573B (en) 2021-02-10 2021-02-10 Planet surface navigation road sign matching method based on contour point set

Country Status (1)

Country Link
CN (1) CN112906573B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435495B (en) * 2021-06-23 2022-06-17 北京理工大学 Planet landing collaborative navigation feature matching method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968795A (en) * 2012-12-03 2013-03-13 哈尔滨工业大学 Meteor crater mismatching determination method based on ratio of shaded area to external-contour area
CN102999915A (en) * 2012-12-03 2013-03-27 哈尔滨工业大学 Meteorite crater matching method based on area ratio
CN103512574A (en) * 2013-09-13 2014-01-15 北京航天飞行控制中心 Optical guidance method for deep space probe based on minor planet sequence image
CN108871349A (en) * 2018-07-13 2018-11-23 北京理工大学 A kind of deep space probe optical guidance pose weight determination method
CN110619368A (en) * 2019-09-23 2019-12-27 北京理工大学 Planet surface navigation feature imaging matching detection method
AU2020103576A4 (en) * 2019-12-27 2021-02-04 Wuhan University Autonomous orbit and attitude determination method of low-orbit satellite based on non-navigation satellite signal

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968795A (en) * 2012-12-03 2013-03-13 哈尔滨工业大学 Meteor crater mismatching determination method based on ratio of shaded area to external-contour area
CN102999915A (en) * 2012-12-03 2013-03-27 哈尔滨工业大学 Meteorite crater matching method based on area ratio
CN103512574A (en) * 2013-09-13 2014-01-15 北京航天飞行控制中心 Optical guidance method for deep space probe based on minor planet sequence image
CN108871349A (en) * 2018-07-13 2018-11-23 北京理工大学 A kind of deep space probe optical guidance pose weight determination method
CN110619368A (en) * 2019-09-23 2019-12-27 北京理工大学 Planet surface navigation feature imaging matching detection method
AU2020103576A4 (en) * 2019-12-27 2021-02-04 Wuhan University Autonomous orbit and attitude determination method of low-orbit satellite based on non-navigation satellite signal

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Crater-based attitude and position estimation for planetary exploration with weighted measurement uncertainty;Shengying Zhu,et al.;《Acta Astronautica》;20200625;full text *
Observability-based visual navigation using landmarks measuring angle for pinpoint landing;Shengying Zhu,et al.;《Acta Astronautica》;20190201;full text *
Planetary surface crater detection and matching method;Feng Junhua et al.;《Acta Aeronautica et Astronautica Sinica》;20100925(No. 09);full text *

Also Published As

Publication number Publication date
CN112906573A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN102722697B (en) Unmanned aerial vehicle autonomous navigation landing visual target tracking method
CN108871349A (en) A kind of deep space probe optical guidance pose weight determination method
WO2015096508A1 (en) Attitude estimation method and system for on-orbit three-dimensional space object under model constraint
CN102750537B (en) Automatic registering method of high accuracy images
CN112767490B (en) Outdoor three-dimensional synchronous positioning and mapping method based on laser radar
CN111652896B (en) Method for detecting coarse-fine meteorite crater by inertial navigation assistance
CN104281148A (en) Mobile robot autonomous navigation method based on binocular stereoscopic vision
CN110095123B (en) Method for evaluating and optimizing observation information of road signs on surface of irregular small celestial body
Xia et al. Globally consistent alignment for planar mosaicking via topology analysis
Simard Bilodeau et al. Pinpoint lunar landing navigation using crater detection and matching: design and laboratory validation
Mariottini et al. An accurate and robust visual-compass algorithm for robot-mounted omnidirectional cameras
CN109871024A (en) A kind of UAV position and orientation estimation method based on lightweight visual odometry
CN112906573B (en) Planet surface navigation road sign matching method based on contour point set
Fu et al. An efficient scan-to-map matching approach for autonomous driving
Shipitko et al. Linear features observation model for autonomous vehicle localization
Brockers et al. On-board absolute localization based on orbital imagery for a future mars science helicopter
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
Jiang et al. Icp stereo visual odometry for wheeled vehicles based on a 1dof motion prior
Sheikh et al. Geodetic alignment of aerial video frames
CN113435495B (en) Planet landing collaborative navigation feature matching method
Brink Stereo vision for simultaneous localization and mapping
Jia et al. DispNet based stereo matching for planetary scene depth estimation using remote sensing images
Aggarwal Machine vision based SelfPosition estimation of mobile robots
Kim et al. Automatic multiple lidar calibration based on the plane features of structured environments
Villa et al. Autonomous navigation and dense shape reconstruction using stereophotogrammetry at small celestial bodies

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant