CN108171732A - A kind of detector lunar surface landing absolute fix method based on multi-source image fusion - Google Patents

A kind of detector lunar surface landing absolute fix method based on multi-source image fusion

Info

Publication number
CN108171732A
CN108171732A CN201711191047.0A
Authority
CN
China
Prior art keywords
image
detector
point
landing
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711191047.0A
Other languages
Chinese (zh)
Other versions
CN108171732B (en
Inventor
王镓
王保丰
周立
谢剑锋
崔晓峰
陈明
刘传凯
王晓雪
李立春
王俊魁
师明
戴堃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
63920 Troops Of Pla
Original Assignee
63920 Troops Of Pla
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 63920 Troops Of Pla filed Critical 63920 Troops Of Pla
Priority to CN201711191047.0A priority Critical patent/CN108171732B/en
Publication of CN108171732A publication Critical patent/CN108171732A/en
Application granted granted Critical
Publication of CN108171732B publication Critical patent/CN108171732B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a detector lunar surface landing absolute positioning method based on multi-source image fusion, comprising: acquiring the sequence images shot by a landing camera, and performing feature extraction and matching on the overlapping regions of adjacent sequence images respectively to obtain a first matching feature point set; calculating, from the first matching feature point set, the coordinate transformation relationship between the initial frame image and the final frame image of the sequence images; calculating the image coordinates of the drop point in the initial frame image according to the coordinate transformation relationship; projecting the initial frame image into an approximate orthoimage; matching the approximate orthoimage with the detector landing zone pre-selected base map DOM image to obtain a second matching feature point set; and calculating the first position information of the detector drop point according to the drop point image coordinates and the second matching feature point set. Because adjacent sequence images are used, the matched overlapping region between images is large, which avoids the matching failures caused by an overly small overlapping region between two images and improves the matching success rate.

Description

Detector lunar landing absolute positioning method based on multi-source image fusion
Technical Field
The invention belongs to the field of spacecraft deep space navigation positioning, and particularly relates to a detector lunar surface landing absolute positioning method based on multi-source image fusion.
Background
High-precision positioning of the detector landing site is an important prerequisite for an extraterrestrial-body detector to successfully carry out its various tasks. The existing detector landing site positioning method directly extracts and matches features of the overlapping region between a single landing camera image and the detector landing zone pre-selected base map DOM image. The overlapping region contained in the two images is often too small, so matching frequently fails and the detector landing site cannot be positioned.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a detector lunar surface landing absolute positioning method based on multi-source image fusion.
The technical scheme for solving the technical problems is as follows: a detector lunar landing absolute positioning method based on multi-source image fusion comprises the following steps:
step 1, acquiring sequence images shot by a landing camera on a detector in the landing process of the detector, and respectively carrying out feature extraction and matching on overlapping areas of adjacent sequence images to obtain a corresponding first matching feature point set;
step 2, calculating the coordinate transformation relation between the initial frame image and the final frame image in the sequence image according to the first matching feature point set;
step 3, calculating the image coordinates of the detector drop point in the initial frame image according to the coordinate transformation relation between the initial frame image and the final frame image;
step 4, projecting the initial frame image into an approximate orthoimage;
step 5, matching the approximate orthographic image with a detector landing area pre-selected base map DOM image to obtain a second matching feature point set;
and 6, calculating first position information of the detector falling point according to the image coordinate of the detector falling point in the initial frame image and the second matching feature point set.
The invention has the beneficial effects that: due to the adoption of the adjacent sequence images, the overlapping area of matching between the images is large, the problem of matching failure caused by the fact that the overlapping area contained in the two images is too small can be solved, and the matching success rate is improved.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, after the step 6, the method further includes:
step 7, retrieving all LRO NAC images covering the first position information;
step 8, matching the approximate orthographic image with each frame of LRO NAC image obtained by retrieval to obtain a corresponding fourth matching feature point set;
step 9, calculating corresponding image coordinates of the detector landing point in each frame of LRO NAC image according to the fourth matching feature point set;
and step 10, calculating second position information of the detector landing point according to the corresponding image coordinates of the detector landing point in each frame of LRO NAC image.
The beneficial effects of adopting this further scheme are as follows. First, the solution uses LRO NAC satellite images, which have the highest imaging resolution of the lunar surface among in-orbit missions, so feature point extraction and matching with these images is more accurate than with other camera images. Second, historical LRO NAC images are used, lunar elevation auxiliary information is introduced into the collinearity equation, and the detector landing point position is solved by repeated iterative calculation; this effectively avoids the poor real-time performance that results from having to wait, after the detector has landed, for the LRO to fly over the landing point and acquire a corresponding image before positioning can be performed.
Drawings
Fig. 1 is a flowchart of a detector lunar surface landing absolute positioning method based on multi-source image fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of detector landing point positioning based on multi-source image matching;
FIG. 3 is a schematic diagram of the landing camera observation field-of-view geometry;
fig. 4 is a flowchart of a detector lunar surface landing absolute positioning method based on multi-source image fusion according to another embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a flowchart of a detector lunar surface landing absolute positioning method based on multi-source image fusion according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
101. sequence images shot by a landing camera on the detector in the detector landing process are obtained, and feature extraction and matching are respectively carried out on the overlapping areas of the adjacent sequence images to obtain a corresponding first matching feature point set.
Specifically, as shown in Fig. 2, when the probe begins preparing for lunar landing, the sequence images are captured by the landing camera carried by the probe. The resolution of a landing camera image is related to the corresponding orbital height: the closer the probe is to the lunar surface, the higher the image resolution. Images in which the landing camera field of view is as close to perpendicular to the landing zone as possible are therefore selected. The landing camera observation field-of-view geometry is shown in Fig. 3: let the transverse (horizontal) field angle of the landing camera be θ, the vertical height of the detector be h, the lunar-surface field-of-view distance be L, the number of pixels be s, and the landing camera image resolution be r. With the transverse field angle θ, the detector vertical height h and the pixel count s known, the landing camera image resolution r is calculated as formula (1):
r = L / s = 2·h·tan(θ/2) / s        (1)
Meanwhile, the image resolution (denoted f_DOM) of the detector lunar landing zone pre-selected base map DOM image (denoted I_DOM) is also a known parameter.
In formula (1), setting r = f_DOM determines the detector height h_m at that moment. The landing camera image obtained at this height is denoted I_m; its image resolution can then be regarded as approximately the same as that of the detector lunar landing zone pre-selected base map DOM image I_DOM.
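As a concrete illustration of this height selection, the following sketch (Python, with hypothetical example values that are not taken from the patent) computes the resolution r from the field-of-view geometry of Fig. 3 and the height h_m at which r equals f_DOM:

import math

def landing_camera_resolution(h, theta_deg, s):
    # Resolution r (formula (1)): the lunar-surface footprint 2*h*tan(theta/2) of the
    # transverse field angle theta, divided by the s pixels that sample it.
    return 2.0 * h * math.tan(math.radians(theta_deg) / 2.0) / s

def height_matching_dom(f_dom, theta_deg, s):
    # Height h_m obtained by setting r = f_DOM in the formula above.
    return f_dom * s / (2.0 * math.tan(math.radians(theta_deg) / 2.0))

# Hypothetical parameter values, for illustration only:
h_m = height_matching_dom(f_dom=1.5, theta_deg=45.0, s=1024)
print(h_m, landing_camera_resolution(h_m, 45.0, 1024))  # the second value equals f_DOM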
When the detector is in the hovering stage of the lunar descent, its horizontal velocity is close to zero and the landing camera images vertically. The landing camera image obtained at this time is denoted I_0.
The method for solving the feature point sets of the overlapping regions of adjacent landing camera sequence images I_n and I_{n+1} (n = 0, 1, …, m) is as follows:
influenced by the installation position of the landing camera, a part of the fixed area in each landing camera image is shielded by the detector bracket. If the original image is directly used for matching, the occlusion area generates a certain number of invalid feature points. Therefore, before the matching of adjacent sequence images is carried out, each image is firstly subjected to mask processing, and the images in the mask area do not participate in the matching operation. The mask may be obtained by manually selecting the image area. After the processing, the matching efficiency can be improved, and the mismatching rate can be reduced.
Because two adjacent landing camera sequence images exhibit certain deformations such as scaling, rotation and tilt, matching between the images requires an algorithm that is insensitive to such deformations. The SIFT algorithm is invariant to image scale and rotation when extracting feature points and is relatively robust to illumination changes and noise, so the SIFT algorithm is adopted in this step. The specific steps are as follows:
step a: acquiring sequence images shot by a landing camera in the landing process of a detector, extracting feature point sets of overlapping areas of adjacent sequence images respectively by adopting an SIFT algorithm, and recording the feature point sets as the feature point sets in sequence
Step b: for a feature point of the previous frame image in an adjacent image pair, calculate the Euclidean distances to the feature points of the next frame image; when the ratio of the minimum Euclidean distance to the second-smallest Euclidean distance does not exceed a threshold, the feature point of the next frame image corresponding to the minimum Euclidean distance and that feature point of the previous frame image form a corresponding matching point pair.
Specifically, let the feature vector of the i-th feature point of set N_n (n = 0, 1, …, m) be denoted x_ni. From set N_{n+1}, find the two feature points with the smallest Euclidean distances to it, with corresponding feature vectors x_{n+1,j} and x'_{n+1,j}; the corresponding distances are d_ij and d'_ij respectively, i.e. formula (2):
d_ij = ||x_ni − x_{n+1,j}||,  d'_ij = ||x_ni − x'_{n+1,j}||        (2)
if it is satisfied withWherein ε is a threshold value, generally 0.5-0.6, then it is considered thatAndis the corresponding matching point pair.
Step c: traverse all feature points of the previous frame image I_n according to the method of step b, obtaining the first matching feature point set of the adjacent sequence images, recorded in sequence as matched coordinate pairs:
wherein (x_n^k, y_n^k) and (x_{n+1}^k, y_{n+1}^k) respectively denote the image coordinates of the k-th matched feature point in images I_n and I_{n+1}.
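A minimal sketch of steps a to c for one adjacent image pair, assuming OpenCV's SIFT implementation and a brute-force matcher; the ratio threshold follows the ε range given above:

import cv2
import numpy as np

def match_adjacent(img_n, img_n1, mask_n=None, mask_n1=None, eps=0.6):
    # Returns the matched image coordinates (first matching feature point set)
    # between adjacent landing camera frames I_n and I_{n+1}.
    sift = cv2.SIFT_create()
    kp_n, des_n = sift.detectAndCompute(img_n, mask_n)
    kp_n1, des_n1 = sift.detectAndCompute(img_n1, mask_n1)

    # For each feature of I_n, find the two nearest descriptors in I_{n+1} (Euclidean distance).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_n, des_n1, k=2)

    pts_n, pts_n1 = [], []
    for pair in knn:
        if len(pair) < 2:
            continue
        best, second = pair
        if best.distance <= eps * second.distance:   # ratio test d_ij / d'_ij <= eps
            pts_n.append(kp_n[best.queryIdx].pt)
            pts_n1.append(kp_n1[best.trainIdx].pt)
    return np.float32(pts_n), np.float32(pts_n1)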
102. And calculating the coordinate transformation relation between the initial frame image and the final frame image in the sequence image according to the first matching feature point set.
This step can be broken down into three stages. First, gross errors are removed from the first matching feature point sets of the adjacent images obtained in step 101 by the RANSAC method, yielding a third matching point set between adjacent images, and an initial coordinate transformation relationship between the images is solved by the least squares method. Then a threshold is set and the points whose residuals exceed the threshold are removed. Finally, the remaining matching point pairs are used to recompute the coordinate transformation relationship between the images. The specific steps are as follows:
step a: and eliminating gross errors of the first matching feature point set by using an RANSAC method, and after eliminating wrong feature points, requiring the number of the remaining feature points to be not less than 4 groups to obtain a third matching point set which is recorded as:
step b: and calculating the coordinate transformation relation of the adjacent sequence images by adopting an affine transformation model according to the third matching feature point set.
The calculation formula is as follows:
wherein a_n, b_n, c_n, a_{n+1}, b_{n+1}, c_{n+1} are the affine coefficients, and the remaining terms are the image coordinates of the homonymous (same-name) points of the adjacent images. Theoretically, formula (3) has a solution when the number of homonymous points is greater than 3. Because adjacent sequence images are selected, the overlapping area of the two images is large, and the number of homonymous feature point pairs (k_1) is in general much greater than 3, so the overdetermined system of formula (4) can be formed:
then, the least square method is adopted to solve the optimal solution of the affine coefficient, namely the coordinate transformation relation between the adjacent sequence images is recorded asThe following were used:
in the formulae (5) and (6),
step c: and calculating the first image coordinates of the corresponding matching points of the feature points in the previous frame image in the next frame image according to the coordinate transformation relation of the adjacent sequence images and the image coordinates of the feature points in the previous frame image.
Specifically, the affine transformation relationship of formula (5) is used to calculate, for a feature point with given image coordinates in image I_n, the first image coordinates of its corresponding matching point in image I_{n+1}, namely:
step d: calculating a residual error according to the first image coordinate and a second image coordinate of a corresponding matching point in a next frame image of the feature point in a previous frame image in the third matching feature point set according to the following formula:
wherein the two coordinate pairs are, respectively, the first image coordinate and the second image coordinate of the j-th matching point of the (n+1)-th frame sequence image, j is a positive integer, and n ∈ [0, m].
Step e: remove from the third matching feature point set the matching point pairs whose residuals are greater than the threshold, obtaining a new matching point set.
Step f: repeat steps b to e: fit new affine transformation coefficients with the new matching point pairs, re-establish the polynomial relationship, and eliminate the matching points whose residuals are greater than the threshold T_1. The iterative elimination continues until the remaining feature points of the third matching feature point set reach a preset proportion or the residuals are smaller than the threshold. The preset proportion may be set to 1/3; the points finally retained constitute the third matching feature point set used in the following step.
Step g: using the third matching feature point set obtained in step f, solve the affine coefficients by the least squares method; these coefficients represent the coordinate transformation relationship of the adjacent sequence images.
Step h: step g determines the coordinate transformation between every pair of adjacent landing camera sequence images I_{n+1} and I_n (n = 0, 1, …, m). The coordinate transformation relationship between landing camera images I_m and I_0 can then be calculated by composing the adjacent transformations, as given in formula (8).
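Steps a to h can be sketched as follows, assuming OpenCV's estimateAffine2D for the RANSAC gross-error removal of step a and reusing fit_affine/apply_affine from the previous sketch; the threshold T_1, the retained proportion and the stopping logic are illustrative assumptions:

import cv2
import numpy as np

def robust_affine(pts_n, pts_n1, t1=1.0, keep_ratio=1.0 / 3.0, max_iter=20):
    # Steps a-g: RANSAC gross-error removal, then iterative affine re-fitting with
    # rejection of point pairs whose residual exceeds the threshold T1.
    M, inliers = cv2.estimateAffine2D(pts_n, pts_n1, method=cv2.RANSAC)
    keep = inliers.ravel().astype(bool)
    p, q = pts_n[keep], pts_n1[keep]
    n0 = len(p)
    for _ in range(max_iter):
        M = fit_affine(p, q)
        res = np.linalg.norm(apply_affine(M, p) - q, axis=1)
        if res.max() < t1 or len(p) <= max(4, int(keep_ratio * n0)):
            break
        good = res <= t1
        p, q = p[good], q[good]
    return M

def compose_affines(affines):
    # Step h: chain the adjacent 2x3 transforms into a single 2x3 transform between the
    # first and last frames of the sequence (invert it to map in the opposite direction).
    T = np.eye(3)
    for M in affines:
        T = np.vstack([M, [0.0, 0.0, 1.0]]) @ T
    return T[:2, :]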
103. and calculating the image coordinates of the detector falling point in the initial frame image according to the coordinate transformation relation of the initial frame image and the final frame image.
Optionally, in this embodiment, step 103 specifically includes:
and taking the image point in the center of the last frame image as a detector drop point, and calculating the image coordinate of the corresponding matching point of the image point in the center of the last frame image in the first frame image according to the image coordinate of the image point in the center of the last frame image and the coordinate transformation relation between the first frame image and the last frame image.
Specifically, after the detector enters the obstacle-avoidance phase, the engine is shut down, the horizontal velocity of the detector is close to zero, and the detector then descends to the lunar surface approximately in free fall. At this time the landing camera attitude is stable, the camera is close to the lunar surface and the optical path is not occluded, so the exact center of landing camera image I_0 (denoted point p_0) can be taken as the detector landing position, with its image coordinates recorded accordingly, as shown in Fig. 2. From formulas (4) and (8), the corresponding image coordinates of the drop point p_0 in landing camera image I_m can then be calculated, as follows:
104. the initial frame image is projected as an approximate orthographic image.
Specifically, in step 104 the landing camera image I_m, which has a certain tilt angle, is projected into an approximate orthoimage I'_m according to the central projection transformation method. The specific method is as follows:
step a: solving of approximate ortho-image I'mImage coordinates of the corresponding pixel
The approximate position of the detector obtained from telemetry is taken as the position of the landing camera at this moment, denoted [X_s, Y_s, Z_s]^T. The detector attitude information is taken as the attitude of the landing camera at this moment and can be expressed as a rotation matrix R. Any image point coordinate p_m(x, y) of I_m can then be substituted into the collinearity equation (11) to calculate the corresponding ground coordinates, denoted P_m(X, Y, Z); setting Z = 0 yields formula (12).
In formula (12), f denotes the focal length of the camera; Z_s, R_33 and f are all constants. Dividing each term of formula (12) by −R_33·f yields formula (13), in which the coefficients are represented by new symbols, as follows:
the left side of equation (13) can be viewed as the scaled after translation of the coordinate system, such thatThen (X ', Y ') is orthophoto image I 'mIs like coordinate P'm
Step b: gray value assignment for image I'_m.
Inverse transformation of equation (13) yields equation (14), as follows:
Thus, for any point of image I'_m, its image coordinates (X', Y') are substituted into formula (14) to determine the corresponding image coordinates p'_m(x', y') in image I_m. Because the determined image coordinates p'_m do not necessarily fall exactly on a pixel center of the original image I_m, its gray value g'_m(x', y') is obtained by bilinear interpolation from the gray values g_1, g_2, g_3, g_4 of the 4 surrounding pixels, as follows:
In formula (15), Δ is the sampling interval of image I_m, x″ = x′ − floor(x′), and y″ = y′ − floor(y′). Finally, the gray value g'_m(x', y') of image point p'_m is assigned to pixel P'_m of the orthoimage I'_m.
Carrying out the above calculation for every pixel of image I'_m yields the approximate orthoimage I'_m.
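A rough sketch of steps a and b, assuming the standard photogrammetric collinearity model with a particular rotation-matrix convention; it may differ in detail from the patent's formulas (11) to (15):

import numpy as np

def ground_to_image(P, C, R, f):
    # Collinearity projection of ground points P (N x 3) into the landing camera image,
    # assuming R rotates camera-frame vectors into the ground frame and the camera looks
    # along its -z axis (a common photogrammetric convention, assumed here).
    q = (P - C) @ R                      # camera-frame coordinates, R^T (P - C) per point
    return -f * q[:, :2] / q[:, 2:3]

def bilinear(img, x, y):
    # Bilinear gray-value interpolation at non-integer image coordinates (x, y);
    # out-of-bounds points are simply clamped to the border in this sketch.
    h, w = img.shape
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = x - x0, y - y0
    g1, g2 = img[y0, x0], img[y0, x0 + 1]
    g3, g4 = img[y0 + 1, x0], img[y0 + 1, x0 + 1]
    return (1 - dx) * (1 - dy) * g1 + dx * (1 - dy) * g2 + (1 - dx) * dy * g3 + dx * dy * g4

def approximate_ortho(img, C, R, f, x_range, y_range, gsd):
    # Resample image I_m onto a regular ground grid (Z = 0) to form the approximate orthoimage I'_m.
    Xg, Yg = np.meshgrid(np.arange(*x_range, gsd), np.arange(*y_range, gsd))
    P = np.column_stack([Xg.ravel(), Yg.ravel(), np.zeros(Xg.size)])
    xy = ground_to_image(P, C, R, f)
    return bilinear(img.astype(float), xy[:, 0], xy[:, 1]).reshape(Xg.shape)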
Step c: convert the image coordinates of the drop point p_0 in image I_m into its image coordinates in image I'_m.
105. And matching the approximate orthographic image with the detector landing zone pre-selected base map DOM image to obtain a second matching feature point set.
Specifically, in step 105 the Affine-SIFT algorithm is used to match image I'_m with the landing zone base map DOM image I_DOM and obtain their matching feature point set. The specific steps are as follows:
first, extracting respectivelyImage I'mAnd IDOMThe Affinine-SIFT feature point sets of the overlapping regions are sequentially recorded asThen traverse image I'mFrom picture IDOMFinding out the feature points matched with the feature points, thereby obtaining a feature point set of the feature points, and recording the feature points as:andwherein,andrespectively represent picture I'mAnd IDOMThe image coordinates of the kth feature point of (1).
Because images I_m and I'_m have approximately the same image resolution, image I'_m and the detector landing zone pre-selected base map image I_DOM also have approximately the same image resolution; the local registration relationship with the landing camera image can therefore be found within the very large area of the pre-selected landing zone base map DOM image, which improves the matching success rate.
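A reduced sketch of the affine-simulation idea behind Affine-SIFT, using plain SIFT on a few simulated tilts and rotations and mapping the keypoints back to the original frame (the tilt and angle sets and the one-axis squeeze are simplifying assumptions; the full ASIFT algorithm samples the viewpoint hemisphere more densely):

import cv2
import numpy as np

def affine_simulated_sift(img, tilts=(1.0, 1.5, 2.0), angles=range(0, 180, 45)):
    # Detect SIFT keypoints on several affine-simulated views of the image and
    # return their coordinates expressed in the original image frame, plus descriptors.
    sift = cv2.SIFT_create()
    h, w = img.shape
    all_pts, all_des = [], []
    for t in tilts:
        for ang in (angles if t > 1.0 else [0]):
            # Simulated viewpoint change: in-plane rotation followed by a tilt (x-axis squeeze).
            Mrot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), ang, 1.0)
            M = np.diag([1.0 / t, 1.0]) @ Mrot            # 2 x 3 simulated affine
            warped = cv2.warpAffine(img, M, (w, h))
            kp, des = sift.detectAndCompute(warped, None)
            if des is None:
                continue
            # Map keypoint coordinates back to the original image with the inverse affine.
            Minv = cv2.invertAffineTransform(M)
            pts = cv2.transform(np.float32([k.pt for k in kp]).reshape(-1, 1, 2), Minv)
            all_pts.append(pts.reshape(-1, 2))
            all_des.append(des)
    return np.vstack(all_pts), np.vstack(all_des)

The descriptors obtained for I'_m and I_DOM can then be matched with the same ratio test used for the landing camera sequence images.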
106. And calculating first position information of the detector falling point according to the image coordinate of the detector falling point in the initial frame image and the second matching feature point set.
Specifically, step 105 yields the image coordinates of the drop point p_0 in image I'_m and the set of matched feature points between images I'_m and I_DOM. As the landing zone base map DOM image has longitude and latitude coordinates, the same method as in step 102 can be used to determine the initial position information of the drop point, denoted pos_init(X_init, Y_init). The specific steps are as follows:
step a: from near-orthophoto image I'mAnd a DOM image I of the floor map of the landing zoneDOMFeature point set ofAndcalculating the coordinate transformation relation of the two images and recording asThe method comprises the following specific steps:
wherein,
step b: calculating at image I 'according to formula (4)'mThe coordinates of the central image areIs in image IDOMIs expressed as formula (17)
Step c: the landing zone base map DOM image has longitude and latitude coordinates, so its initial geographic coordinate (X_0, Y_0), x-direction resolution Res_x and y-direction resolution Res_y are known parameters. The initial geographic coordinates of the drop point, pos_init(X_init, Y_init), can then be calculated according to formula (18).
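A minimal sketch of this geo-referencing step (formula (18)); the sign convention for the y direction (geographic Y decreasing as the row index grows, as in typical north-up rasters) is an assumption not confirmed by the patent text:

def pixel_to_geo(x_dom, y_dom, x0, y0, res_x, res_y):
    # Convert drop-point pixel coordinates in the landing-zone base map DOM image into
    # geographic coordinates, given the DOM origin (X0, Y0) and per-pixel resolutions.
    X = x0 + x_dom * res_x
    Y = y0 - y_dom * res_y   # assumed north-up raster convention
    return X, Y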
optionally, as an embodiment of the present invention, as shown in fig. 4, the method further includes:
107. all LRO NAC images covering the first position information are searched and named as i in sequence1,…,ik
108. And matching the approximate orthographic image with each frame of LRO NAC image obtained by retrieval to obtain a corresponding fourth matching feature point set.
Specifically, the LRO NAC images can be regarded as approximately vertically captured, and image I'_m is an approximate orthoimage projected from the landing camera image. In this step, therefore, the SIFT algorithm, which is invariant to image scale and rotation, is selected to match image I'_m with i_1 and obtain the feature point sets of the two images, in which the paired coordinates respectively represent the image coordinates of the k-th matched feature point in images I'_m and i_1.
109. And calculating the corresponding image coordinates of the detector falling point in each frame of LRO NAC image according to the fourth matching feature point set.
Specifically, the coordinate transformation relationship between the two images is calculated from their feature point sets. Finally, according to formula (4), the image coordinates in LRO NAC image i_1 of the detector landing point (i.e. the point in image I'_m whose coordinates are the drop point image coordinates) are calculated, as shown in formula (19):
110. and calculating second position information of the detector landing point according to the corresponding image coordinates of the detector landing point in each frame of LRO NAC image.
Specifically, given the terrain elevation information, the exterior orientation elements of the image and the image point coordinates, the ground coordinates of the point can be solved by an iterative method based on the collinearity equation. The exterior orientation elements of LRO NAC image i_1 are known parameters, where the line elements are the camera position, (a_1~a_3, b_1~b_3, c_1~c_3) are the angle elements, and f_LRO is the focal length; the ground coordinates corresponding to the drop point image coordinates, denoted (X_1, Y_1, Z_1), can then be determined from the collinearity equation as follows:
since two equations cannot solve for X1、Y1And Z1Three unknown quantities, so that the average elevation Z of the landing zone is first determined0(Z0Calculating the average value of the detector falling area through a prepared full-moon DEM picture) to be substituted into the formula (20), calculating and calculating the initial value of the ground coordinate, and recording the initial value as the initial valueThe following were used:
finding and from full-moon DEM pictureThe point with the minimum difference value is taken as the height value of the pointThe ground coordinate is taken into a formula (21), and a new group of initial ground coordinate values can be obtained through calculation and are recorded asAnd finding out the elevation value corresponding to the coordinate from the DEMRepeating the steps until the height difference is twiceLess than threshold T2(generally, T is taken to be20.01) stops the iteration. The ground point coordinates calculated at this time are taken as the landing point position of the detector, i.e.
Because the errors introduced by error sources such as random orbit tracking-and-control errors, inter-image matching errors and lunar elevation errors differ from one observation to the next, the positioning error of a single image is inconsistent and contingent. Therefore, averaging over multiple observations is used to ensure the positioning accuracy.
The above flow is applied in turn to images i_2 through i_k to solve for the drop point positions pos_2, …, pos_k. Their mean is then taken as the detector landing point position pos, i.e. formula (22):
pos = (pos_1 + pos_2 + … + pos_k) / k        (22)
The invention first uses sequence images to extract and match feature points of the overlapping regions of adjacent frames and transfers the detector drop point position in the hovering image to image I_m, which is then matched with the detector landing zone pre-selected base map DOM image to solve the initial position of the drop point. Because image I_m has approximately the same resolution as the pre-selected base map DOM image and a relatively large field-of-view range, the matched overlapping region between the images is larger, which helps find the local registration relationship with the landing camera image within the very large area of the pre-selected landing zone base map DOM image, thereby improving the matching success rate and guaranteeing the absolute positioning accuracy of the method. In addition, the LRO NAC historical images are used, lunar elevation auxiliary information is introduced into the collinearity equation, and the detector landing point position is solved by a repeated iterative calculation method, which effectively avoids the poor real-time performance that would result from having to wait, after the detector has landed, for the LRO to fly over the landing point and acquire a corresponding image before positioning could be performed.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A detector lunar landing absolute positioning method based on multi-source image fusion is characterized by comprising the following steps:
step 1, acquiring sequence images shot by a landing camera on a detector in the landing process of the detector, and respectively carrying out feature extraction and matching on overlapping areas of adjacent sequence images to obtain a corresponding first matching feature point set;
step 2, calculating the coordinate transformation relation between the initial frame image and the final frame image in the sequence image according to the first matching feature point set;
step 3, calculating the image coordinates of the detector drop point in the initial frame image according to the coordinate transformation relation between the initial frame image and the final frame image;
step 4, projecting the initial frame image into an approximate orthoimage;
step 5, matching the approximate orthographic image with a detector landing area pre-selected base map DOM image to obtain a second matching feature point set;
and 6, calculating first position information of the detector falling point according to the image coordinate of the detector falling point in the initial frame image and the second matching feature point set.
2. The method according to claim 1, wherein the initial frame image is an image of a landing camera at a preset height, and the preset height is calculated by the following formula:
wherein f_DOM is the image resolution of the detector lunar landing zone pre-selected base map DOM image, s is the number of pixels, and θ is the transverse field angle of the landing camera;
and the last frame image is an image shot by the landing camera when the detector hovers in a falling moon mode.
3. The method according to claim 2, characterized in that said step 1 comprises in particular:
step 1.1, acquiring sequence images shot by a landing camera in the landing process of a detector, and respectively extracting feature point sets of overlapping areas of adjacent sequence images by adopting an SIFT algorithm;
step 1.2, respectively calculating Euclidean distances between each feature point of a next frame image and a feature point of a previous frame image in the adjacent sequence images, wherein when the ratio of the minimum Euclidean distance to the next minimum Euclidean distance does not exceed a threshold value, the feature point of the next frame image corresponding to the minimum Euclidean distance and the feature point of the previous frame image are corresponding matching point pairs;
and step 1.3, traversing all feature points in the previous frame of image according to the method in the step 1.2 to obtain a first matching feature point set of the adjacent sequence images.
4. The method according to claim 3, wherein the step 2 specifically comprises:
step 2.1, removing gross errors from the first matching feature point set by using a RANSAC method to obtain a third matching feature point set;
2.2, calculating a coordinate transformation relation of adjacent sequence images by adopting an affine transformation model according to the third matching feature point set;
2.3, calculating a first image coordinate of a corresponding matching point of the feature point in the previous frame image in the next frame image according to the coordinate transformation relation of the adjacent sequence images and the image coordinate of the feature point in the previous frame image;
step 2.4, calculating a residual error according to the first image coordinate and a second image coordinate of a corresponding matching point in a next frame image of the feature point in a previous frame image in the third matching feature point set according to the following formula:
wherein the two coordinate pairs are, respectively, the first image coordinate and the second image coordinate of the j-th matching point of the (n+1)-th frame sequence image, j is a positive integer, and n ∈ [0, m];
Step 2.5, removing the matching point pairs with the residual errors larger than a threshold value from the third matching feature point set;
step 2.6, calculating the coordinate transformation relation of the adjacent sequence images by adopting an affine transformation model according to the eliminated third matching point set;
step 2.7, repeating the step 2.3 to the step 2.6 until the residual characteristic points of the third matched characteristic point set reach a preset proportion or the residual error is smaller than a threshold value;
step 2.8, calculating a coordinate transformation relation of adjacent sequence images by using the third matching feature point set obtained in the step 2.7;
and 2.9, calculating the coordinate transformation relation between the initial frame image and the final frame image in the sequence image according to the coordinate transformation relation obtained in the step 2.8.
5. The method according to claim 4, wherein the step 4 specifically comprises:
step 4.1, acquiring position information and posture information of the detector when the initial frame image is shot through remote measurement;
step 4.2, calculating the ground coordinates corresponding to the image coordinates of the feature points in the initial frame image according to the following formula:
Z=0
wherein (x, y) is the image coordinate of the feature point in the initial frame image, (X, Y, Z) is the ground coordinate, [X_s, Y_s, Z_s]^T is the position information acquired by telemetry, the rotation matrix is constructed from the attitude information acquired by telemetry, and f is the focal length of the camera;
step 4.3, calculating the image coordinates of the image points in the approximate orthoimage from the ground coordinates obtained in step 4.2 according to the following formula:
wherein,
4.4, assigning a value to the gray level of each image point in the approximate ortho-image to obtain the approximate ortho-image;
the step 4.4 specifically comprises:
step 4.4.1, calculating, for any image point in the approximate orthoimage, the image coordinates of its corresponding image point in the initial frame image according to the following formula:
wherein (X', Y') is the image coordinate of the image point in the approximate orthoimage;
step 4.4.2, obtaining the gray values of 4 image points around the image coordinate obtained by calculation in the step 4.4.1 in the initial frame image, and calculating the gray value of each pixel in the approximate ortho-image according to the following formula:
wherein g_1, g_2, g_3, g_4 are the gray values of the 4 image points, Δ is the sampling interval, x″ = x′ − floor(x′), and y″ = y′ − floor(y′).
6. The method according to claim 5, wherein the step 5 specifically comprises: and respectively extracting the feature points of the approximate orthographic image and the overlapped area of the DOM images of the detector landing area pre-selected base map by utilizing the Affine-SIFT algorithm to obtain a second matching feature point set.
7. The method according to claim 6, wherein the step 6 specifically comprises:
6.1, calculating the image coordinates of the detector drop point in the approximate orthoimage according to the image coordinates of the detector drop point in the initial frame image and the formulas of step 4.2 and step 4.3;
6.2, calculating a coordinate transformation relation between the approximate orthographic image and the detector landing area pre-selection base map DOM image by adopting an affine transformation model according to the second matching feature point set;
6.3, calculating the image coordinate of the detector falling point in the detector landing area pre-selection base map DOM image according to the coordinate transformation relation between the approximate orthographic image and the detector landing area pre-selection base map DOM image and the image coordinate of the detector falling point in the approximate orthographic image;
6.4, calculating first position information of the detector drop point according to the following formula:
wherein (X_0, Y_0), Res_x and Res_y are respectively the initial geographic coordinate, the x-direction resolution and the y-direction resolution of the detector landing zone pre-selected base map DOM image.
8. The method according to any one of claims 1-7, wherein after step 6, further comprising:
step 7, retrieving all LRO NAC images covering the first position information;
step 8, matching the approximate orthographic image with each frame of LRO NAC image obtained by retrieval to obtain a corresponding fourth matching feature point set;
step 9, calculating corresponding image coordinates of the detector landing point in each frame of LRO NAC image according to the fourth matching feature point set;
and step 10, calculating second position information of the detector landing point according to the corresponding image coordinates of the detector landing point in each frame of LRO NAC image.
9. The method according to claim 8, wherein step 8 specifically comprises:
8.1, respectively calculating the coordinate transformation relation between the approximate orthographic image and each frame of LRO NAC image obtained by retrieval by adopting an affine transformation model according to the fourth matching feature point set;
and 8.2, respectively calculating the corresponding image coordinates of the detector falling point in each frame of LRO NAC image according to the coordinate transformation relation between the approximate orthographic image and each frame of LRO NAC image obtained by retrieval and the image coordinates of the detector falling point in the approximate orthographic image.
10. The method according to claim 9, wherein the step 10 specifically comprises:
step 9.1, acquiring a terrain elevation and line elements, angle elements and focal lengths of each frame of LRO NAC image;
and 9.2, calculating an initial value of the ground coordinate of the landing point of the detector according to the average value of the terrain elevation, the line element, the angle element and the focal length of each frame of LRO NAC image and the following formula:
wherein Z_0 is the average value of the terrain elevation, and the line elements, the angle elements (a_1~a_3, b_1~b_3, c_1~c_3) and the focal length f_LRO are those of each frame of LRO NAC image respectively;
9.3, calculating to obtain the ground coordinates of the landing point of the detector according to the elevation value of the point with the minimum absolute value of the elevation difference between the full-moon DEM image and the initial value of the ground coordinates, and finding out the corresponding elevation value from the full-moon DEM image; repeating the step 9.3 until the absolute value of the height difference is smaller than the threshold value;
step 9.4, calculating to obtain second position information of the detector landing point according to the ground coordinates of the detector landing point in each frame of LRO NAC image obtained in the step 9.3 and the following formula:
wherein pos_i is the ground coordinate of the detector landing point obtained from the i-th frame of LRO NAC image, and k is the number of LRO NAC images obtained by retrieval.
CN201711191047.0A 2017-11-24 2017-11-24 Detector lunar landing absolute positioning method based on multi-source image fusion Active CN108171732B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711191047.0A CN108171732B (en) 2017-11-24 2017-11-24 Detector lunar landing absolute positioning method based on multi-source image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711191047.0A CN108171732B (en) 2017-11-24 2017-11-24 Detector lunar landing absolute positioning method based on multi-source image fusion

Publications (2)

Publication Number Publication Date
CN108171732A true CN108171732A (en) 2018-06-15
CN108171732B CN108171732B (en) 2020-11-06

Family

ID=62527644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711191047.0A Active CN108171732B (en) 2017-11-24 2017-11-24 Detector lunar landing absolute positioning method based on multi-source image fusion

Country Status (1)

Country Link
CN (1) CN108171732B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109631876A (en) * 2019-01-18 2019-04-16 辽宁工程技术大学 A kind of inspection prober localization method based on one camera navigation image
CN111488702A (en) * 2020-06-28 2020-08-04 航天宏图信息技术股份有限公司 Drop point prediction method and device and electronic equipment
CN111899303A (en) * 2020-07-14 2020-11-06 中国人民解放军63920部队 Novel feature matching and relative positioning method considering space inverse projection constraint
CN112651277A (en) * 2020-09-16 2021-04-13 武昌理工学院 Remote sensing target analysis method based on multi-source image
CN115540878A (en) * 2022-09-27 2022-12-30 北京航天飞行控制中心 Lunar surface driving navigation method and device, electronic equipment and storage medium
CN115861393A (en) * 2023-02-16 2023-03-28 中国科学技术大学 Image matching method, spacecraft landing point positioning method and related device
CN115933652A (en) * 2022-11-29 2023-04-07 北京航天飞行控制中心 Lunar vehicle direct-drive teleoperation driving method based on sequence image splicing and fusion

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927738A (en) * 2014-01-10 2014-07-16 北京航天飞行控制中心 Planet vehicle positioning method based on binocular vision images in large-distance mode

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927738A (en) * 2014-01-10 2014-07-16 北京航天飞行控制中心 Planet vehicle positioning method based on binocular vision images in large-distance mode

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wan Wenhui, et al.: "Chang'E-3 landing point position evaluation based on descent image matching", Spacecraft Engineering *
Liu Bin, et al.: "High-precision positioning of the Chang'E-3 landing site based on LRO NAC images", Chinese Science Bulletin *
Liu Bin, et al.: "Chang'E-3 landing trajectory recovery based on descent camera images", Journal of Remote Sensing *
Geng Leilei, et al.: "Research on image feature matching methods for ZY-3 satellite imagery", Spacecraft Recovery & Remote Sensing *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109631876A (en) * 2019-01-18 2019-04-16 辽宁工程技术大学 A kind of inspection prober localization method based on one camera navigation image
CN109631876B (en) * 2019-01-18 2022-04-12 辽宁工程技术大学 Inspection detector positioning method based on single-camera navigation image
CN111488702A (en) * 2020-06-28 2020-08-04 航天宏图信息技术股份有限公司 Drop point prediction method and device and electronic equipment
CN111899303A (en) * 2020-07-14 2020-11-06 中国人民解放军63920部队 Novel feature matching and relative positioning method considering space inverse projection constraint
CN111899303B (en) * 2020-07-14 2021-07-13 中国人民解放军63920部队 Novel feature matching and relative positioning method considering space inverse projection constraint
CN112651277A (en) * 2020-09-16 2021-04-13 武昌理工学院 Remote sensing target analysis method based on multi-source image
CN115540878A (en) * 2022-09-27 2022-12-30 北京航天飞行控制中心 Lunar surface driving navigation method and device, electronic equipment and storage medium
CN115540878B (en) * 2022-09-27 2024-08-23 北京航天飞行控制中心 Moon surface driving navigation method and device, electronic equipment and storage medium
CN115933652A (en) * 2022-11-29 2023-04-07 北京航天飞行控制中心 Lunar vehicle direct-drive teleoperation driving method based on sequence image splicing and fusion
CN115861393A (en) * 2023-02-16 2023-03-28 中国科学技术大学 Image matching method, spacecraft landing point positioning method and related device
CN115861393B (en) * 2023-02-16 2023-06-16 中国科学技术大学 Image matching method, spacecraft landing point positioning method and related device

Also Published As

Publication number Publication date
CN108171732B (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN108171732B (en) Detector lunar landing absolute positioning method based on multi-source image fusion
CN110966991B (en) Single unmanned aerial vehicle image positioning method without control point
Hu et al. Understanding the rational function model: methods and applications
CN107101648B (en) Stellar camera calibration method for determining posture and system based on fixed star image in regional network
CN106780729A (en) A kind of unmanned plane sequential images batch processing three-dimensional rebuilding method
CN104897175A (en) On-orbit geometric calibration method and system of multi-camera optical push-broom satellite
CN113900125B (en) Satellite-ground combined linear array imaging remote sensing satellite full-autonomous geometric calibration method and system
Tao et al. Automated localisation of Mars rovers using co-registered HiRISE-CTX-HRSC orthorectified images and wide baseline Navcam orthorectified mosaics
CN104298887A (en) Relative radiation calibration method of multichip linear CCD (charge coupled device) camera
CN117253029B (en) Image matching positioning method based on deep learning and computer equipment
CN110006452A (en) No. six wide visual field cameras of high score are with respect to geometric calibration method and system
WO2024093635A1 (en) Camera pose estimation method and apparatus, and computer-readable storage medium
CN108444451B (en) Planet surface image matching method and device
CN110853140A (en) DEM (digital elevation model) -assisted optical video satellite image stabilization method
CN115183669A (en) Target positioning method based on satellite image
Gong et al. A detailed study about digital surface model generation using high resolution satellite stereo imagery
CN104567879B (en) A kind of combination visual field navigation sensor the earth's core direction extracting method
Zhao et al. Digital Elevation Model‐Assisted Aerial Triangulation Method On An Unmanned Aerial Vehicle Sweeping Camera System
CN109029379A (en) A kind of high-precision stereo mapping with low base-height ratio method
Hamidi et al. Precise 3D geo-location of UAV images using geo-referenced data
CN108681985B (en) Stripe splicing method of video satellite images
Liu et al. Adaptive re-weighted block adjustment for multi-coverage satellite stereo images without ground control points
Di et al. Co-registration of Chang’E-1 stereo images and laser altimeter data for 3D mapping of lunar surface
CN113905190A (en) Panorama real-time splicing method for unmanned aerial vehicle video
Kang et al. Repositioning Technique Based on 3D Model Using a Building Shape Registration Algorithm.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant