CN113920046B - Multi-fragment satellite image stitching and geometric model construction method - Google Patents

Publication number: CN113920046B (application CN202111161976.3A; earlier publication CN113920046A)
Authority: CN (China)
Legal status: Active (granted)
Inventors: 王涛, 王龙辉, 张艳, 张永生, 戴晨光, 于英, 李磊, 窦利军, 周丽雅, 李力, 宋亮
Original and current assignee: Information Engineering University of PLA Strategic Support Force

Classifications

  • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G01C 11/04: Photogrammetry or videogrammetry; interpretation of pictures
  • G06T 3/4007: Scaling of whole images or parts thereof, based on interpolation, e.g. bilinear interpolation
  • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
  • G06T 2207/10032: Satellite or aerial image; remote sensing
  • Y02A 90/10: ICT supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention relates to a method for stitching multi-segment satellite images and constructing their geometric model, and belongs to the technical field of remote-sensing image processing. First, area network adjustment is performed on the multi-segment images, assisted by high-precision reference data, to obtain an RFM compensation model for each original segmented image. Next, an image-space piecewise affine transformation model is used to establish the coordinate conversion relationship between image points before and after stitching, and the panoramic stitched image is generated. Finally, a uniform square image grid is laid over the panoramic stitched image, and a virtual control grid is generated from the established coordinate conversion relationship and the original segmented-image RFM compensation models, from which the RFM of the panoramic stitched image is constructed. Because the RFM serves as the coordinate conversion model, no rigorous imaging model of the original segmented images or of the panoramic stitched image needs to be constructed; the method jointly accounts for the image-space coordinate conversion relationship and object-space continuity, with a small computational load and high efficiency.

Description

Multi-fragment satellite image stitching and geometric model construction method
Technical Field
The invention relates to a method for stitching multi-segment satellite images and constructing their geometric model, and belongs to the technical field of remote-sensing image processing.
Background
The spaceborne linear-array sensor is an important instrument for spaceflight remote-sensing Earth observation, and the continuous progress of Earth-observation technology places ever higher demands on sensor performance, specifications, and imaging quality. To achieve both high imaging quality and wide ground coverage, most optical sensors acquire ground images using stitched TDI CCD (Time Delay and Integration Charge-Coupled Device) technology. Stitched TDI CCD sensors offer good imaging performance and are widely used; satellites such as IKONOS, QuickBird, WorldView-2, SPOT-6/7, Landsat-8, Tianhui-1 (TH-1), Ziyuan-3 (ZY-3), Gaofen-2, and Gaofen-7 all carry such sensors. As technology advances, image spatial resolution has reached the sub-meter level and the influence of stitching errors can no longer be neglected, motivating further study of how original segmented images are stitched and how the geometric model of the panoramic stitched image is constructed.
Along the flight direction of the satellite platform, the original image data acquired by a stitched TDI CCD consist of several segmented images. Stitching these original segmented images into panoramic images with a well-defined geometric object-image relationship yields high-quality, wide-field imagery for use in many fields and provides the basic data support that diverse applications require; high-quality stitching of the segmented images is therefore a prerequisite for subsequent image processing and application.
Current image stitching methods fall mainly into image-space stitching and object-space stitching. Image-space stitching uses an image-matching algorithm to generate homonymous (tie) points and completes the stitching with an image-space stitching model; the algorithm is simple in principle and efficient, but the stitched panoramic image lacks a well-defined geometric object-image relationship and the stitching accuracy is slightly low. Object-space stitching relies on the continuity of object space and generates panoramic images with a well-defined geometric object-image relationship on the basis of a rigorous geometric model; this approach is accurate but computationally heavy and slow.
Disclosure of Invention
The invention aims to provide a multi-segment satellite image stitching and geometric model construction method that resolves the low efficiency or low accuracy of existing stitching processes and can quickly establish the geometric model after stitching.
To solve this technical problem, the invention provides a multi-segment satellite image stitching and geometric model construction method comprising the following steps:
1) Acquire the original segmented images to be stitched and the DOM data, and determine control points, or control points and connection points, based on the original segmented images and the DOM data;
2) Perform adjustment of the original segmented images using the obtained control points (and connection points), and derive the RFM compensation model of each original segmented image from the adjustment result;
3) Stitch the original segmented images with a piecewise affine transformation model to obtain the panoramic stitched image;
4) Grid the panoramic stitched image and determine the image point in the original segmented image corresponding to each grid point from the image-point coordinate conversion relationship;
5) Layer the elevations within the coverage of the original segmented images, compute the object-space coordinates of the virtual control grid points corresponding to the image grid points with the original segmented-image RFM compensation model to obtain the panoramic stitched image RFM, and solve this RFM to construct the post-stitching geometric model of the multi-segment satellite images.
First, area network adjustment is performed on the multi-segment images, assisted by high-precision reference data, to obtain an RFM compensation model for each original segmented image. Next, an image-space piecewise affine transformation model establishes the coordinate conversion relationship between image points before and after stitching, and the panoramic stitched image is generated. Finally, a uniform square image grid is laid over the panoramic stitched image, and a virtual control grid is generated from the established coordinate conversion relationship and the original segmented-image RFM compensation models, from which the RFM of the panoramic stitched image is constructed. Because the RFM serves as the coordinate conversion model, no rigorous imaging model of the original segmented images or of the panoramic stitched image needs to be constructed; the method jointly accounts for the image-space coordinate conversion relationship and object-space continuity, with a small computational load and high efficiency.
Further, in order to accurately determine control points for different image types, the adjustment in step 2) proceeds as follows: for multi-linear-array images, connection points are obtained by multi-view matching of the original segmented images, a subset of the connection points is matched against the DOM, and control-point data are obtained after interpolating elevation values from the DEM; for single-linear-array images, the original segmented images are matched directly against the DOM, and control points are generated after interpolating elevation values from the DEM.
Further, to improve adjustment accuracy, step 2) establishes a correction relationship between the image-space coordinates (r, c) and the object-space coordinates (P, L, H) using an image-space compensation model, and performs the adjustment with this correction relationship.
Further, the correction relationship is:

  r + Δr = r_s · Num_r(P_n, L_n, H_n) / Den_r(P_n, L_n, H_n) + r_0
  c + Δc = c_s · Num_c(P_n, L_n, H_n) / Den_c(P_n, L_n, H_n) + c_0

where (r, c) are image-space coordinates; (P, L, H) are object-space coordinates; Δr and Δc are the image-space compensation terms; r_0, r_s, c_0, c_s are the image-space coordinate normalization parameters; and (P_n, L_n, H_n) are the normalized object-space coordinates.
Further, in the step 3), when the original segmented images are spliced, the positions of the odd segmented images are kept unchanged, and the even segmented images are embedded between the odd segmented images.
Further, in step 3), when the even-numbered segmented images are embedded between the odd-numbered segmented images, the variation trend of the vertical offset of the inter-segment connection points is analyzed, the even-numbered segmented images are divided into segments, and the affine transformation coefficients of each segment are computed from:

  l = a_{0j} + a_{1j}·r + a_{2j}·c
  s = b_{0j} + b_{1j}·r + b_{2j}·c

where (r, c) are the coordinates of a connection point in the j-th segment of an even-numbered image, (l, s) are the corresponding image-space coordinates in the panoramic stitched image, and the six affine transformation parameters a_{0j} to b_{2j} describe translation, rotation, and scaling in the row and column directions of the segmented image.
Further, the calculating process of the object coordinates of the virtual control lattice point in the step 5) is as follows:
A. Uniformly layer the elevations within the image coverage. Substituting the coordinates r, c of any grid point of the original segmented image, together with a ground elevation layer H, into the correction relationship between image-space and object-space coordinates yields the linearized form

  r - r^0 = (∂r/∂P)·ΔP + (∂r/∂L)·ΔL
  c - c^0 = (∂c/∂P)·ΔP + (∂c/∂L)·ΔL

where r^0, c^0 are the image coordinates computed from the current approximate object coordinates (the initial values), and (P, L, H) are the object-space coordinates, which enter the correction relationship through the normalized coordinates (P_n, L_n, H_n);
B. From the relationship obtained in step A, compute the object-coordinate corrections and update the approximate coordinates; after several iterations the ground coordinates of the virtual control point are obtained.
Further, to avoid the ill-conditioning caused by over-parameterization of the model, the panoramic stitched image RFM in step 5) is solved with the spectrum correction iteration method.
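As an illustration of the spectrum correction iteration mentioned above: the textbook form of the method replaces the direct solve of an ill-conditioned normal system N·x = u with the fixed-point iteration x_{k+1} = (N + I)^(-1)·(u + x_k), whose per-step matrix N + I is far better conditioned. The sketch below is this generic textbook form, not the patent's own implementation:

```python
import numpy as np

def spectrum_correction_solve(N, u, n_iter=200):
    """Spectrum correction iteration for an ill-conditioned normal system
    N*x = u: iterate x_{k+1} = (N + I)^(-1) (u + x_k). The fixed point
    satisfies N*x = u, while each step factors the much better conditioned
    matrix N + I (textbook form, not the patent's code)."""
    M = N + np.eye(N.shape[0])       # spectrum-shifted normal matrix
    x = np.zeros(N.shape[0])
    for _ in range(n_iter):
        x = np.linalg.solve(M, u + x)
    return x
```

The iteration error contracts by a factor of at most 1/(1 + λ_min(N)) per step, so convergence is fast whenever N is positive definite.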
Drawings
FIG. 1 is a flow chart of a method for multi-slice satellite image stitching and geometric model construction according to the present invention;
FIG. 2 is a schematic diagram of a multi-chip TDI CCD mounting structure in accordance with an embodiment of the present invention;
FIG. 3-a is a schematic diagram of the CCD position relationship before and after splicing in the embodiment of the invention;
FIG. 3-b illustrates a geometric stitching relationship between an odd-numbered tile image and an even-numbered tile image in an embodiment of the present invention;
FIG. 4 is a schematic diagram of virtual control grid point generation of a panoramic stitched image according to the present invention;
FIG. 5-a is a thumbnail of an original tile image in a real simulation experiment of the present invention;
FIG. 5-b is a thumbnail view of a panoramic stitched image in a real simulation experiment of the present invention;
fig. 5-c is a partial enlarged view of the panoramic stitching image in the real simulation experiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention further with reference to the drawings.
First, area network adjustment is performed on the multi-segment images, assisted by high-precision reference data, to obtain an RFM compensation model for each original segmented image. Next, an image-space piecewise affine transformation model establishes the coordinate conversion relationship between image points before and after stitching, and the panoramic stitched image is generated. Finally, a uniform square image grid is laid over the panoramic stitched image, and a virtual control grid is generated from the established coordinate conversion relationship and the original segmented-image RFM compensation models, from which the RFM of the panoramic stitched image is constructed. The implementation flow of the method is shown in fig. 1; the specific implementation process is as follows.
1. Multi-slice image RFM area network adjustment
When the rigorous geometric model of an original segmented image is established, the observations of the exterior orientation data contain systematic errors, so the RFM generated by fitting the rigorous geometric model carries corresponding systematic errors. With the assistance of high-precision reference data, the systematic error of each original segmented image can be compensated and the RFM positioning accuracy improved. Systematic-error compensation is therefore first applied to the original segmented images using the high-precision reference auxiliary data (DOM and DEM).
(1) Constructing the RFM compensation model: the systematic errors of an original segmented image are compensated in image space, usually with an affine transformation model, as in formula (1):

  Δr = a_0 + a_1·r + a_2·c
  Δc = b_0 + b_1·r + b_2·c    (1)

where a_0 to b_2 are the systematic-error compensation parameters of the segmented image.
The RFM relates the image-space and object-space coordinates through ratios of polynomials and is defined as

  r_n = Num_r(P_n, L_n, H_n) / Den_r(P_n, L_n, H_n)
  c_n = Num_c(P_n, L_n, H_n) / Den_c(P_n, L_n, H_n)

where Num_r, Den_r, Num_c, Den_c are cubic polynomials with coefficients a_k, b_k, c_k, d_k (k = 0, 1, …, 19), and b_0 and d_0 are typically 1; (r_n, c_n) are the normalized image-space coordinates and (P_n, L_n, H_n) the normalized object-space coordinates:

  r_n = (r - r_0)/r_s,  c_n = (c - c_0)/c_s
  P_n = (P - P_0)/P_s,  L_n = (L - L_0)/L_s,  H_n = (H - H_0)/H_s

where r, c, P, L, H are the non-normalized coordinates; r_0, r_s, c_0, c_s are the image-space coordinate normalization parameters; and P_0, P_s, L_0, L_s, H_0, H_s are the object-space coordinate normalization parameters.
Combining the compensation model (1) with the RFM yields the correction relationship between the image-space coordinates (r, c) and the object-space coordinates (P, L, H):

  r + Δr = r_s · Num_r(P_n, L_n, H_n) / Den_r(P_n, L_n, H_n) + r_0
  c + Δc = c_s · Num_c(P_n, L_n, H_n) / Den_c(P_n, L_n, H_n) + c_0
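The ground-to-image RFM mapping just defined can be sketched in code. In the minimal evaluation routine below, the 20-term cubic ordering follows the common RPC00B convention, which the text does not specify, and the dictionary keys are illustrative assumptions:

```python
import numpy as np

def rpc_poly(coef, Pn, Ln, Hn):
    """Evaluate one 20-term cubic RPC polynomial (term order follows the
    common RPC00B convention; the patent does not fix an ordering, so
    this is an assumption)."""
    t = np.array([
        1, Ln, Pn, Hn, Ln*Pn, Ln*Hn, Pn*Hn, Ln**2, Pn**2, Hn**2,
        Pn*Ln*Hn, Ln**3, Ln*Pn**2, Ln*Hn**2, Ln**2*Pn, Pn**3,
        Pn*Hn**2, Ln**2*Hn, Pn**2*Hn, Hn**3,
    ])
    return float(np.dot(coef, t))

def rfm_ground_to_image(rpc, P, L, H):
    """Map object-space (P, L, H) to image-space (r, c) with a Rational
    Function Model; `rpc` holds the four coefficient vectors and the
    normalization parameters (key names are illustrative)."""
    Pn = (P - rpc["P0"]) / rpc["Ps"]   # normalize object coordinates
    Ln = (L - rpc["L0"]) / rpc["Ls"]
    Hn = (H - rpc["H0"]) / rpc["Hs"]
    rn = rpc_poly(rpc["a"], Pn, Ln, Hn) / rpc_poly(rpc["b"], Pn, Ln, Hn)
    cn = rpc_poly(rpc["c"], Pn, Ln, Hn) / rpc_poly(rpc["d"], Pn, Ln, Hn)
    return rn * rpc["rs"] + rpc["r0"], cn * rpc["cs"] + rpc["c0"]
```

The image-space compensation of formula (1) would then be applied on top of the (r, c) returned here.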
(2) RFM area network adjustment
For multi-linear-array images, connection points are first obtained by multi-view matching of the multi-segment images; a subset of the connection points is matched against the DOM to obtain the plane coordinates of the corresponding ground points, elevations are interpolated from the DEM, and after conversion to the elevation datum these points serve as control points, participating in the adjustment together with the remaining connection points. For single-linear-array images, the original segmented images are matched directly against the DOM to obtain the plane coordinates of ground control points, elevations are interpolated from the DEM, and the resulting control points participate in the adjustment.
When the study object is a multi-linear-array image, the RFM three-dimensional area network adjustment error equation for each image point is the linearization of the correction relationship with respect to the compensation parameters and the object coordinates:

  v_r = (∂r/∂a_0)Δa_0 + (∂r/∂a_1)Δa_1 + (∂r/∂a_2)Δa_2 + (∂r/∂P)ΔP + (∂r/∂L)ΔL + (∂r/∂H)ΔH - l_r
  v_c = (∂c/∂b_0)Δb_0 + (∂c/∂b_1)Δb_1 + (∂c/∂b_2)Δb_2 + (∂c/∂P)ΔP + (∂c/∂L)ΔL + (∂c/∂H)ΔH - l_c    (5)

Expressed in matrix form:

  V = AX + BT - l,  P    (6)
when the study object is a single line image, the regional net difference error equation of RFM is v=ax-l.
Here V is the residual vector of the error equations; X is the correction vector of the image-space affine transformation coefficients; T is the correction vector of the object-space coordinates of the connection points; A and B are the coefficient matrices of the unknowns; l is the constant term computed from the initial values; and P is the weight matrix.
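A minimal dense weighted least-squares solve of the block error equation V = AX + BT - l can be sketched as follows. This is only an illustration of the normal-equation step; a production block adjustment would exploit the sparse structure of B and iterate the linearization:

```python
import numpy as np

def adjust(A, B, l, P):
    """Weighted least-squares solve of the block error equation
    V = A*X + B*T - l under weight matrix P (a minimal dense sketch)."""
    M = np.hstack([A, B])          # stack image-space and object-space unknowns
    N = M.T @ P @ M                # normal matrix
    u = M.T @ P @ l                # right-hand side
    x = np.linalg.solve(N, u)
    X, T = x[:A.shape[1]], x[A.shape[1]:]
    V = M @ x - l                  # residual vector
    return X, T, V
```

For the single-linear-array case (V = AX - l), the same routine applies with an empty B block.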
The adjusted values are then obtained through the adjustment. Multi-linear-array images include, for example, the ZY-3 three-linear-array images, whose nadir view consists of 3 CCD segments and whose forward and backward views each consist of 4 CCD segments; connection points are obtained by multi-view matching of the multi-segment images, a subset is matched against a geo-referenced digital orthophoto map (DOM) to obtain the plane coordinates of the corresponding ground points, elevation values are interpolated from the digital elevation model (DEM), and after conversion to the elevation datum these points serve as control points. Single-linear-array images include, for example, the TH-1 high-resolution images, composed of 8 CCD segments; the original segmented images are matched directly against the DOM to obtain control-point plane coordinates, and the elevation values are interpolated from the DEM.
2. Panoramic stitching based on segmented affine transformation models.
The mounting relationship is described using 3 TDI CCDs as an example, as shown in fig. 2. The TDI CCDs are staggered on the focal plane: each segment is w pixels long, adjacent segments are separated by dy pixels in the along-track direction and overlap by dx pixels in the cross-track direction. Push-broom scanning along the flight direction produces the original segmented images.
When the three images are stitched, the positions of the odd-numbered segmented images are kept unchanged and the even-numbered segmented image is embedded between them; as shown in fig. 3-a, VCCD2 is the portion of the even-numbered segmented image, transformed from CCD2, that is retained in the panoramic stitched image. Using the odd-numbered segmented images as the reference and embedding the even-numbered one preserves the imaging geometry of the original segmented images to the greatest extent and prevents stretching deformation at the left and right edges of the panoramic stitched image. The stitching lines are placed at the right boundary of the CCD1 image and the left boundary of the CCD3 image, so the information of the odd-numbered segmented images is fully retained.
Inter-segment connection points in the overlap areas of adjacent segments are obtained by automatic matching of the multi-segment images. Based on the matched inter-segment connection points, the variation trend of the vertical offset is analyzed, the original segmented image CCD2 is divided into segments, and the affine transformation coefficients are computed from the inter-segment connection points as follows:
let k be arranged on the left side of the j-th image of CCD2 1 For the connection point, the coordinates in CCD1 and CCD2 are (r 1i ,c 1i ) Sum (r) 2i ,c 2i ),i=1,2,…,k 1 The method comprises the steps of carrying out a first treatment on the surface of the On the right side there is k 2 For the connection point, the coordinates in the CCD2 and the CCD3 are (r 2i ,c 2i ) Sum (r) 3i ,c 3i ),i=k 1 +1,k 1 +2,…,k 1 +k 2 . (r, c) represents an image of the original tile imageSquare coordinates; the first digit of the subscript represents the original tile image number and the second letter of the subscript represents the attachment point number.
And (3) representing the image side coordinates of the panoramic spliced image by (l, s), wherein the coordinates of the CCD2 connecting point on the panoramic spliced image are as follows: for the left connection point, (l) i ,s i )=(r 1i ,c 1i ),i=1,2,…,k 1 The method comprises the steps of carrying out a first treatment on the surface of the For the right connection point, (l) i ,s i )=(r 3i ,c 3i -2×dx),i=k 1 +1,k 1 +2,…,k 1 +k 2 . When converting the CCD2 image into the VCCD2 image, the coordinates of the connection point are calculated from (r 2i ,c 2i ) Conversion to (l) i ,s i ). Constructing an affine transformation model for the j-th section to describe the coordinate conversion relation before and after splicing, wherein the form is as follows:
where j=1, 2, …, n (n represents the number of segments); a, a 0j ~b 2j Translation, rotation, and scaling in the segmented image row, column directions are described for the 6 affine transformation parameters of the j-th segment, respectively.
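The per-segment affine estimation described above amounts to two small linear least-squares fits, one for each output coordinate. A sketch (array layouts and function names are illustrative, not the patent's code):

```python
import numpy as np

def fit_segment_affine(rc, ls):
    """Least-squares estimate of the 6 affine parameters of one segment,
       l = a0 + a1*r + a2*c,  s = b0 + b1*r + b2*c,
    from connection points; rc and ls are (k, 2) arrays of (r, c) and
    (l, s) coordinates, with k >= 3."""
    rc = np.asarray(rc, float)
    ls = np.asarray(ls, float)
    G = np.column_stack([np.ones(len(rc)), rc[:, 0], rc[:, 1]])  # design matrix
    a, *_ = np.linalg.lstsq(G, ls[:, 0], rcond=None)             # row model
    b, *_ = np.linalg.lstsq(G, ls[:, 1], rcond=None)             # column model
    return a, b

def apply_segment_affine(a, b, r, c):
    """Map an original-image point (r, c) to panorama coordinates (l, s)."""
    return a[0] + a[1]*r + a[2]*c, b[0] + b[1]*r + b[2]*c
```

With at least three well-distributed connection points per segment the six parameters are determined; more points give a least-squares fit that averages out matching noise.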
The panoramic stitched image fully retains the CCD1 and CCD3 images, while the VCCD2 image is generated from the CCD2 image by gray-value resampling based on the piecewise affine transformation model, as shown in fig. 3-b. The coordinate conversion relationship F between the original images and the panoramic image is:
(1) CCD1 region of the panoramic stitched image: the CCD1 image is kept unchanged,

  l = r,  s = c,   with 1 ≤ l ≤ L and 1 ≤ s ≤ w.

(2) VCCD2 region of the panoramic stitched image: for the j-th segment,

  l = a_{0j} + a_{1j}·r + a_{2j}·c,  s = b_{0j} + b_{1j}·r + b_{2j}·c,

with L_{j-1} ≤ l ≤ L_j; w+1 ≤ s ≤ 2×(w - dx); j = 1, 2, …, n (n is the number of segments); and the six affine transformation parameters a_{0j} to b_{2j} describing translation, rotation, and scaling in the row and column directions of the segmented image.

(3) CCD3 region of the panoramic stitched image: the CCD3 image is shifted as a whole,

  l = r,  s = c + 2×(w - dx),   with 1 ≤ l ≤ L and 2×(w - dx)+1 ≤ s ≤ 3×w - 2×dx.
The gray values of the original segmented images are then resampled according to the coordinate conversion relationship F, generating a continuous, seamless panoramic stitched image.
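The gray-value resampling step can be sketched with bilinear interpolation. Here `inv_map`, standing in for the inverse of the piecewise affine mapping F over the VCCD2 region, is a hypothetical callable that returns the source coordinate in the original CCD2 image for each panorama pixel:

```python
import numpy as np

def bilinear(img, r, c):
    """Bilinear gray-value interpolation at a fractional (row, col) position."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    return ((1 - dr) * (1 - dc) * img[r0, c0]
            + (1 - dr) * dc * img[r0, c0 + 1]
            + dr * (1 - dc) * img[r0 + 1, c0]
            + dr * dc * img[r0 + 1, c0 + 1])

def resample_segment(src, inv_map, out_shape):
    """Fill an output region: for each panorama pixel (l, s), inv_map
    returns the source coordinate (r, c) in the original segmented image
    (hypothetical interface for the inverse of F)."""
    out = np.zeros(out_shape)
    for l in range(out_shape[0]):
        for s in range(out_shape[1]):
            r, c = inv_map(l, s)
            out[l, s] = bilinear(src, r, c)
    return out
```

A real implementation would vectorize the inner loops and guard against out-of-bounds source coordinates near the segment edges.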
3. Constructing the RFM of the panoramic stitched image
The construction process of the panoramic stitched image RFM is shown in fig. 4 and proceeds as follows. A uniform square image grid is first laid over the panoramic stitched image according to its size. For any grid point p(s, l), the corresponding image point P(r, c) in the original segmented image is determined from the image-point coordinate conversion relationship F. Next, the elevations within the image coverage are uniformly layered, and for each ground elevation layer H the object-space coordinates of the virtual control grid point V corresponding to p(s, l) are computed with the original segmented-image RFM compensation model; the specific algorithm is as follows:
Substituting r, c, and H into the correction relationship and linearizing gives

  r - r^0 = (∂r/∂P)·ΔP + (∂r/∂L)·ΔL
  c - c^0 = (∂c/∂P)·ΔP + (∂c/∂L)·ΔL

where r^0, c^0 are the image coordinates computed from the current approximate object coordinates (the initial values). Solving for the corrections (ΔP, ΔL) and updating the approximate coordinates, the object-space coordinates of the virtual control point are obtained after several iterations.
after traversing all image grid points of the panoramic spliced image, a corresponding virtual control grid can be obtained, and the panoramic spliced image RFM is constructed.
To verify the effectiveness of the method, two data sets covering a certain area, a TH-1 high-resolution image (Data A) and ZY-3 three-linear-array images (Data B), were selected for experiments to evaluate the accuracy of the method. The details of the experimental data are given in table 1:
TABLE 1
Accuracy evaluation of the area network adjustment of the original segmented images.
For Data A, 504 control points were generated by automatic matching against the high-precision reference data. For Data B, connection points were first generated by three-view matching of the images, and a subset was then matched against the high-precision reference data, giving 511 control points and 7573 connection points in total. These points are evenly distributed over the image extent and highly reliable. For Data A and Data B, 105 and 93 high-precision control points, respectively, were acquired in the area by GPS field measurement and used as check points to verify the adjustment accuracy. The adjustment accuracy results are shown in table 2:
TABLE 2
The results in table 2 show that the overall positioning accuracy improves markedly after adjustment, indicating that area network adjustment assisted by high-precision reference data works well and that the systematic errors of the original segmented images are essentially eliminated.
Visual evaluation of the panoramic stitched images.
Panoramic stitched images were generated with the method of the invention, as shown in figs. 5-a, 5-b, and 5-c. Fig. 5-a shows thumbnails of the original segmented images: Data A is imaged in mechanical stitching mode, with a row-direction misalignment of about 2114 pixels between adjacent segments; Data B is imaged by optical stitching, with little along-track misalignment between adjacent segments. Fig. 5-b shows thumbnails of the panoramic stitched images; to assess the stitching accuracy more intuitively, distinct features at the stitching seams (within the white boxes) were cropped and enlarged, as shown in fig. 5-c. The enlarged images show no visible misalignment, so the panoramic stitched images produced by the method of the invention meet the visually seamless accuracy requirement.
Evaluation of the RFM fitting accuracy of the panoramic stitched image.
The panoramic stitched image RFM was constructed according to the invention and its fitting accuracy evaluated. In the experiment, the elevation range was uniformly divided into 10 layers and a square image grid was laid out at 64 × 64-pixel spacing; these points served as control grid points. The image grid and elevation layers were then densified, and the resulting points served as check grid points. The control grid points were used to solve the RFM, and the check grid points to analyze its fitting accuracy. The results are shown in table 3; the units in table 3 are pixels:
TABLE 3
As table 3 shows, the fitting accuracy of the TH-1 high-resolution satellite RFM is about 0.6 % of a pixel, that of the ZY-3 nadir-view RFM about 0.2 %, and that of the forward- and backward-view RFMs about 0.5 % of a pixel. The RFMs constructed by the invention therefore fit to within 0.6 % of a pixel and can be applied in photogrammetry.
Geometric accuracy evaluation of the panoramic stitched image.
To evaluate the effect of the invention further, it is compared with an object-space stitching algorithm and the geometric accuracy of the panoramic stitched image is analyzed from two aspects.
First, quantitative accuracy evaluation.
Evenly distributed inter-segment connection points within the overlap of adjacent segmented images were selected to evaluate the stitching accuracy: 70 pairs for the TH-1 high-resolution image, 20 pairs for the ZY-3 nadir-view image, and 30 pairs for the forward and backward views. The connection-point coordinates of the odd-numbered segments were converted to panoramic image coordinates, the corresponding connection-point coordinates on the even-numbered segments were computed, the computed values were differenced with the measured values, and the root-mean-square error of the differences was taken as the stitching-accuracy measure. The results are shown in table 4; the units in table 4 are pixels:
TABLE 4
The results in table 4 show that the row-direction accuracy of the invention approaches that of the object-space stitching algorithm, and the stitching accuracy of all 4 scenes is within 1 pixel, i.e. sub-pixel stitching accuracy is achieved.
Second, comparison of RFM positioning accuracy.
Evenly distributed homonymous points were selected as check points on the panoramic stitched images generated by the invention and by the object-space stitching algorithm, to evaluate the difference in RFM positioning accuracy between the two methods. For Data A, single-image positioning was performed at the check points, with elevations interpolated from the DEM; for Data B, the object-space coordinates of the check points were obtained by forward intersection. The differences were used to evaluate the RFM positioning-accuracy gap, as shown in table 5; the errors in table 5 are in meters:
TABLE 5
As can be seen from Table 5, the RFM positioning accuracy difference for the TH-1 panoramic stitched image is 0.193747 m in the X direction, 0.1568231 m in the Y direction, and 0.226853 m in the Z direction; for the ZY-3 panoramic stitched image it is 0.131874 m in X, 0.103222 m in Y, and 0.136224 m in Z. For both data sets the difference is within 0.3 m; allowing for the measurement error in selecting homologous points, the RFM generated by the invention and by the object-space stitching algorithm can be considered to achieve the same positioning accuracy.
In summary, the invention does not require constructing rigorous imaging models for either the original segmented images or the panoramic stitched image. Its three components (multi-segment image block adjustment, panoramic stitched image generation, and RFM construction) are all based on the original segmented images and their RFMs; the panoramic stitched image achieves sub-pixel stitching accuracy, and the RFM positioning accuracy is consistent with the object-space stitching algorithm, meeting users' subsequent application requirements. The invention comprehensively accounts for the image-space coordinate transformation relations and object-space continuity, with a small computational load and high efficiency.

Claims (4)

1. A multi-fragment satellite image stitching and geometric model construction method, characterized by comprising the following steps:
1) Acquiring each original segmented image to be stitched and DOM data, and determining control points, or control points and tie points, based on the original segmented images and the DOM data; for multi-linear-array images, multi-view matching is performed on the original segmented images to obtain tie points, a subset of the tie points is matched against the DOM, and control point data are obtained after interpolating elevation values from the DEM; for single-linear-array images, the original segmented images are matched directly against the DOM, and control points are generated after interpolating elevation values from the DEM;
2) Performing adjustment on the original segmented images using the obtained control points, or control points and tie points, and obtaining an RFM compensation model of the original segmented images from the adjustment results; in step 2), an image-space compensation model is adopted to establish a correction relation between the image-space coordinates (r, c) and the object-space coordinates (P, L, H), and the adjustment is performed using this correction relation; the correction relation is as follows:
wherein (r, c) are the image-space coordinates, (P, L, H) are the object-space coordinates, r_0, r_s, c_0, c_s are the image-space coordinate normalization parameters, and (P_n, L_n, H_n) are the normalized object-space coordinates;
3) Stitching the original segmented images using a segmented affine transformation model to obtain the panoramic stitched image;
4) Gridding the panoramic stitched image, and determining the corresponding image point of each grid point in the original segmented images according to the image-point coordinate transformation relation;
5) Layering the elevations within the coverage of the original segmented images, computing the object-space coordinates of the virtual control grid points corresponding to the image grid points using the original segmented image RFM compensation model to obtain the panoramic stitched image RFM, and solving the panoramic stitched image RFM to complete the geometric model construction after multi-fragment satellite image stitching; the object-space coordinates of the virtual control grid points in step 5) are computed as follows:
A. Uniformly layering the elevations within the image coverage, and substituting the coordinates r and c of any grid point of the original segmented image together with the ground elevation layer H into the correction relation between the image-space and object-space coordinates, which yields:
wherein r_0, c_0 are the initial image coordinate values; (P, L, H) are the object-space coordinates; r_0, r_s, c_0, c_s are the image-space coordinate normalization parameters; (P_n, L_n, H_n) are the normalized object-space coordinates;
B. Computing the ground coordinate corrections according to the relation obtained in step A, and obtaining the ground coordinates of the virtual control points after multiple iterations.
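The formulas in claim 1 (the correction relation in step 2) and the relation in step A) are rendered as images in the original publication and do not survive in this text. A hedged reconstruction using the standard RFM form, consistent with the symbols defined in the claim but not necessarily the patent's verbatim formulation, is:

```latex
% Correction relation between image-space (r, c) and object-space (P, L, H),
% with Num and Den cubic polynomials in the normalized object-space coordinates:
\begin{aligned}
r &= r_0 + r_s \cdot \frac{\operatorname{Num}_L(P_n, L_n, H_n)}{\operatorname{Den}_L(P_n, L_n, H_n)}, &
c &= c_0 + c_s \cdot \frac{\operatorname{Num}_S(P_n, L_n, H_n)}{\operatorname{Den}_S(P_n, L_n, H_n)},
\end{aligned}
```

to which the image-space compensation model adds low-order correction terms in r and c. For steps A and B, fixing the elevation layer H and substituting the measured (r, c), the planimetric coordinates are obtained by iterating on the linearized relation

```latex
\begin{bmatrix} \Delta P \\ \Delta L \end{bmatrix}
=
\begin{bmatrix}
\partial r / \partial P & \partial r / \partial L \\
\partial c / \partial P & \partial c / \partial L
\end{bmatrix}^{-1}
\begin{bmatrix} r - \hat{r}(P, L, H) \\ c - \hat{c}(P, L, H) \end{bmatrix},
\qquad P \leftarrow P + \Delta P, \quad L \leftarrow L + \Delta L,
```

until the corrections fall below a threshold.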
2. The method according to claim 1, wherein in step 3), when stitching the original segmented images, the positions of the odd-numbered segmented images are kept unchanged and the even-numbered segmented images are embedded between the odd-numbered segmented images.
3. The method for multi-fragment satellite image stitching and geometric model construction according to claim 1, wherein in step 3), when embedding an even-numbered segmented image between odd-numbered segmented images, the variation trend of the vertical offset of the inter-segment tie points is analyzed, and for the even-numbered segmented images the affine transformation coefficients of each segment are computed using the following formula:
wherein (r, c) are the coordinates of a tie point in the j-th segment of the even-numbered image, (l, s) are the image-space coordinates of the corresponding point on the panoramic stitched image, and a_0j through b_2j are the 6 affine transformation parameters of the j-th segment, describing translation, rotation, and scaling in the row and column directions of the segmented image.
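The formula referenced in claim 3 is rendered as an image in the original publication. Given that a_0j through b_2j are described as the 6 affine parameters of the j-th segment, the standard 6-parameter affine form consistent with the symbols defined in the claim is (a reconstruction, not the patent's verbatim formula):

```latex
\begin{aligned}
l &= a_{0j} + a_{1j}\, r + a_{2j}\, c,\\
s &= b_{0j} + b_{1j}\, r + b_{2j}\, c.
\end{aligned}
```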
4. The method for multi-fragment satellite image stitching and geometric model construction according to claim 1, wherein the panoramic stitched image RFM in step 5) is solved using the spectral correction iteration method.
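The spectral correction iteration named in claim 4 is a standard remedy for the ill-conditioned normal equations that arise when fitting rational polynomial coefficients. A minimal sketch of the method follows; the data and function name are illustrative, and the patent's actual design matrix would be built from the virtual control grid:

```python
import numpy as np

def spectral_correction_solve(A, l, iters=500, tol=1e-12):
    """Least-squares solve of A x ~ l by spectral correction iteration:
        (A^T A + I) x_{k+1} = A^T l + x_k.
    Adding x to both sides of the normal equations shifts the spectrum of
    the normal matrix away from zero (useful when A^T A is near-singular,
    as in RPC fitting) without changing the fixed point, which still
    satisfies A^T A x = A^T l."""
    n = A.shape[1]
    N = A.T @ A + np.eye(n)   # spectrally corrected normal matrix
    W = A.T @ l
    x = np.zeros(n)
    for _ in range(iters):
        x_next = np.linalg.solve(N, W + x)
        if np.linalg.norm(x_next - x) < tol:
            x = x_next
            break
        x = x_next
    return x

# Illustrative well-conditioned system; the fixed point equals the
# ordinary least-squares solution.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
l = np.array([1.0, 2.0, 3.0])
x = spectral_correction_solve(A, l)
```

For a positive-definite normal matrix the iteration contracts with factor 1/(1 + lambda_min), so conditioning governs the number of iterations needed.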
CN202111161976.3A 2021-09-30 2021-09-30 Multi-fragment satellite image stitching and geometric model construction method Active CN113920046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111161976.3A CN113920046B (en) 2021-09-30 2021-09-30 Multi-fragment satellite image stitching and geometric model construction method


Publications (2)

Publication Number Publication Date
CN113920046A CN113920046A (en) 2022-01-11
CN113920046B true CN113920046B (en) 2023-07-18

Family

ID=79237427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111161976.3A Active CN113920046B (en) 2021-09-30 2021-09-30 Multi-fragment satellite image stitching and geometric model construction method

Country Status (1)

Country Link
CN (1) CN113920046B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117830175B (en) * 2024-03-05 2024-06-07 暨南大学 Image geometric distortion self-calibration method under arbitrary orientation condition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914808A (en) * 2014-03-14 2014-07-09 国家测绘地理信息局卫星测绘应用中心 Method for splicing ZY3 satellite three-line-scanner image and multispectral image
CN111612693A (en) * 2020-05-19 2020-09-01 中国科学院微小卫星创新研究院 Method for correcting rotary large-width optical satellite sensor
CN113324527A (en) * 2021-05-28 2021-08-31 自然资源部国土卫星遥感应用中心 Co-rail laser height measurement point and three-linear array three-dimensional image combined surveying and mapping processing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Block adjustment method and accuracy verification for GF-4 satellite regional imagery based on the rational polynomial model; Pi Yingdong et al.; Acta Geodaetica et Cartographica Sinica; 2016-12-15 (No. 12); pp. 1448-1454 *


Similar Documents

Publication Publication Date Title
CN111126148B (en) DSM (digital communication system) generation method based on video satellite images
Henriksen et al. Extracting accurate and precise topography from LROC narrow angle camera stereo observations
Surazakov et al. Positional accuracy evaluation of declassified Hexagon KH-9 mapping camera imagery
CN102003938B (en) Thermal state on-site detection method for large high-temperature forging
KR101965965B1 (en) A method of automatic geometric correction of digital elevation model made from satellite images and provided rpc
US6810153B2 (en) Method for orthocorrecting satellite-acquired image
CN113358091B (en) Method for producing digital elevation model DEM (digital elevation model) by using three-linear array three-dimensional satellite image
CN104299228B (en) A kind of remote sensing image dense Stereo Matching method based on Accurate Points position prediction model
CN110006452B (en) Relative geometric calibration method and system for high-resolution six-size wide-view-field camera
Li et al. A new analytical method for estimating Antarctic ice flow in the 1960s from historical optical satellite imagery
Parente et al. Optimising the quality of an SfM‐MVS slope monitoring system using fixed cameras
CN110363758B (en) Optical remote sensing satellite imaging quality determination method and system
CN111473802A (en) Optical sensor internal orientation element calibration method based on linear array push-scanning
CN113920046B (en) Multi-fragment satellite image stitching and geometric model construction method
CN111524196B (en) In-orbit geometric calibration method for sweep large-width optical satellite
CN109029379B (en) High-precision small-base-height-ratio three-dimensional mapping method
CN105571598B (en) A kind of assay method of laser satellite altimeter footmark camera posture
CN108444451B (en) Planet surface image matching method and device
Yan et al. Topographic reconstruction of the “Tianwen-1” landing area on the Mars using high resolution imaging camera images
CN111161186B (en) Push-broom type remote sensor channel registration method and device
CN115326025B (en) Binocular image measurement and prediction method for sea waves
CN112767454B (en) Superposition information compensation method based on multi-view observation SAR data sampling analysis
CN113379648A (en) High-resolution seven-and-resource three-dimensional image joint adjustment method
Cao et al. Precise sensor orientation of high-resolution satellite imagery with the strip constraint
Fallah et al. Intensifying the spatial resolution of 3D thermal models from aerial imagery using deep learning-based image super-resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant