CN118212366A - Moving target three-dimensional reconstruction method and device based on multiple remote sensing images - Google Patents


Info

Publication number
CN118212366A
Authority
CN
China
Prior art keywords
remote sensing
sensing images
matching
dimensional reconstruction
moving target
Prior art date
Legal status
Pending
Application number
CN202410628964.4A
Other languages
Chinese (zh)
Inventor
向俞明
王峰
周光尧
陈瑶
胡玉新
刘方坚
Current Assignee
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS filed Critical Aerospace Information Research Institute of CAS
Priority to CN202410628964.4A
Publication of CN118212366A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The embodiments of the present invention provide a moving target three-dimensional reconstruction method and apparatus based on multiple remote sensing images. The method includes the following steps: extracting a detection frame of a moving target from the multiple remote sensing images; performing azimuth correction and geometric positioning parameter fitting on the remaining remote sensing images based on a reference remote sensing image of the multiple remote sensing images, to obtain multiple remote sensing images with a unified reference; combining the unified-reference multiple remote sensing images into multiple pairs of dual remote sensing images, and performing condition screening on the pairs according to the satellite shooting angles and the sun irradiation angles, to obtain a dual remote sensing image screening set; performing pixel-by-pixel matching and matching point refinement, within the detection frame, on each set of dual remote sensing images in the screening set, to obtain a matching point result; and performing three-dimensional resolving on the matching point result according to the fitted geometric positioning parameters, generating multiple sets of sparse point clouds of the moving target, and fusing the sparse point clouds to obtain a three-dimensional reconstruction result of the moving target.

Description

Moving target three-dimensional reconstruction method and device based on multiple remote sensing images
Technical Field
The invention relates to the technical field of three-dimensional reconstruction of remote sensing targets, in particular to a three-dimensional reconstruction method and device of a moving target based on multiple remote sensing images.
Background
Most existing remote sensing target three-dimensional reconstruction techniques use same-orbit stereoscopic images observed at nearly the same time and require the observed target to remain stationary. However, some targets move: a marine ship target, for example, is under way most of the time, and even when not under way its attitude changes with the motion of the waves, so it is difficult to acquire stereoscopic observations that meet the requirements of three-dimensional reconstruction.
With the gradual maturation of satellite remote sensing constellation technology, a multi-satellite multi-angle imaging mode can acquire remote sensing images of a ship target at different positions, times and states, from different satellites and observation angles, while retaining high-resolution observation, thereby enabling global tracking of the ship target. Such multi-satellite multi-angle remote sensing images contain three-dimensional observation information of the ship target and thus satisfy the data conditions for three-dimensional reconstruction. However, the number of observation views is on the order of tens or even hundreds, and how to screen out, from this large set of views, the data best suited to three-dimensional reconstruction of a moving ship target becomes a key factor limiting reconstruction accuracy.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a moving object three-dimensional reconstruction method and apparatus based on multiple remote sensing images.
An aspect of the embodiments of the present invention provides a moving target three-dimensional reconstruction method based on multiple remote sensing images, where the multiple remote sensing images are generated by a plurality of satellites photographing a moving target at a plurality of angles. The method includes: extracting a detection frame of the moving target from the multiple remote sensing images; performing azimuth correction and geometric positioning parameter fitting on the remaining remote sensing images in the multiple remote sensing images based on a reference remote sensing image of the multiple remote sensing images, to obtain multiple remote sensing images with a unified reference; combining the unified-reference multiple remote sensing images into multiple pairs of dual remote sensing images, and performing condition screening on the multiple pairs of dual remote sensing images according to the satellite shooting angles and the sun irradiation angles in the pairs, to obtain a dual remote sensing image screening set; performing pixel-by-pixel matching and matching point refinement, within the detection frame, on each set of dual remote sensing images in the dual remote sensing image screening set, to obtain a matching point result of the screening set; and performing three-dimensional resolving on the matching point results of the screening set according to the fitted geometric positioning parameters, generating multiple sets of sparse point clouds of the moving target, and fusing the multiple sets of sparse point clouds to obtain a three-dimensional reconstruction result of the moving target.
In some embodiments, extracting the detection frame of the moving target from the multiple remote sensing images includes: identifying the moving target in the multiple remote sensing images; and extracting the detection frame of the identified moving target.
In some embodiments, performing azimuth correction and geometric positioning parameter fitting on the remaining remote sensing images based on a reference remote sensing image of the multiple remote sensing images, to obtain multiple remote sensing images with a unified reference, includes: calculating the minimum azimuth of the moving target in the multiple remote sensing images from the detection frames, and taking the remote sensing image corresponding to the minimum azimuth as the reference remote sensing image; calculating the azimuth differences of the remaining remote sensing images relative to the minimum azimuth, and constructing an azimuth correction transformation model; performing correction calculation on the sampling interval points of the remaining remote sensing images with the azimuth correction transformation model, to obtain corrected sampling interval point coordinates; and fitting the geometric positioning parameters of the remaining remote sensing images using the corrected sampling interval point coordinates, to obtain the unified-reference multiple remote sensing images.
In some embodiments, combining the unified-reference multiple remote sensing images into multiple pairs of dual remote sensing images, and performing condition screening on the pairs according to the satellite shooting angles and the sun irradiation angles to obtain a dual remote sensing image screening set, includes: forming the multiple pairs of dual remote sensing images by pairwise combination of the unified-reference multiple remote sensing images; calculating the stereo intersection angle of each pair of dual remote sensing images according to the satellite shooting angles of the pair; calculating the shadow difference angle of each pair according to the sun irradiation angles of the pair; and screening out the dual remote sensing images whose stereo intersection angle meets a first screening condition and whose shadow difference angle meets a second screening condition, to obtain the dual remote sensing image screening set.
In some embodiments, the first screening condition is a stereo intersection angle greater than 20°, and the second screening condition is a shadow difference angle less than 15°.
In some embodiments, performing pixel-by-pixel matching and matching point refinement, within the detection frame, on each set of dual remote sensing images in the dual remote sensing image screening set, to obtain a matching point result of the screening set, includes: extracting feature descriptors of different levels for each set of dual remote sensing images with a convolutional neural network and pre-trained weight parameters, to obtain multi-layer pyramid feature descriptors for each set; performing pixel-by-pixel matching of the pixels within the detection frame at the top layer of the multi-layer pyramid feature descriptors, to obtain initial matching points; and performing level-by-level matching and matching point refinement through the multi-layer pyramid feature descriptors based on the initial matching points, to obtain the matching point result of each set of dual remote sensing images.
In some embodiments, performing three-dimensional resolving on the matching point result of each set of dual remote sensing images according to the fitted geometric positioning parameters, generating multiple sets of sparse point clouds of the moving target, and fusing the multiple sets of sparse point clouds to obtain a three-dimensional reconstruction result of the moving target, includes: constructing a forward intersection resolving model from the fitted geometric positioning parameters; performing three-dimensional resolving on the matching point result of each set of dual remote sensing images based on the forward intersection resolving model, to generate the multiple sets of sparse point clouds of the moving target; and performing constraint fusion of the multiple sets of sparse point clouds based on spatial semantic constraints, to obtain the three-dimensional reconstruction result of the moving target.
In some embodiments, performing constraint fusion on the multiple sets of sparse point clouds based on spatial semantic constraints to obtain the three-dimensional reconstruction result of the moving target includes: performing constraint fusion on the multiple sets of sparse point clouds by jointly considering spatial coherence, elevation value similarity and semantic similarity of the moving target remote sensing images, to obtain the three-dimensional reconstruction result of the moving target.
In some embodiments, the moving object comprises a ship object.
Another aspect of the embodiments of the present invention provides a moving target three-dimensional reconstruction apparatus based on multiple remote sensing images, the multiple remote sensing images being generated by a plurality of satellites photographing a moving target at a plurality of angles. The apparatus comprises: a target extraction module configured to extract a detection frame of the moving target from the multiple remote sensing images; a reference unification module configured to perform azimuth correction and geometric positioning parameter fitting on the remaining remote sensing images based on a reference remote sensing image of the multiple remote sensing images, to obtain multiple remote sensing images with a unified reference; an image screening module configured to combine the unified-reference multiple remote sensing images into multiple pairs of dual remote sensing images, and perform condition screening on the pairs according to the satellite shooting angles and the sun irradiation angles, to obtain a dual remote sensing image screening set; a feature matching module configured to perform pixel-by-pixel matching and matching point refinement, within the detection frame, on each set of dual remote sensing images in the screening set, to obtain matching point results of the screening set; and a point cloud reconstruction module configured to perform three-dimensional resolving on the matching point results of each set of dual remote sensing images according to the fitted geometric positioning parameters, generate multiple sets of sparse point clouds of the moving target, and fuse the multiple sets of sparse point clouds to obtain a three-dimensional reconstruction result of the moving target.
The moving target three-dimensional reconstruction method and apparatus based on multiple remote sensing images provided by the embodiments of the invention have at least the following beneficial effects:
Target azimuth correction and geometric positioning parameter fitting are performed on the multi-satellite multi-angle remote sensing images of the moving target, so that multiple remote sensing images with a unified reference can be obtained as the data basis for three-dimensional reconstruction, solving the problem that the moving target otherwise lacks usable three-dimensional reconstruction data;
Condition screening of the multiple remote sensing images of the moving target yields a dual remote sensing image screening set, greatly reducing the number of observation views to be processed and solving the processing problem of large-scale multi-satellite multi-angle observation views;
Multi-level pyramid feature descriptors are constructed for the screened moving target remote sensing images, and pyramid level-by-level, pixel-by-pixel matching propagation is realized under detection-frame guidance, so that high-precision matching point results can be obtained;
Three-dimensional resolving is performed with the fitted geometric positioning parameters and the matching point results, and the sparse point clouds are iteratively optimized and fused, yielding a multi-satellite multi-angle three-dimensional reconstruction result of the moving target and solving the problem that moving targets cannot be handled by traditional three-dimensional reconstruction methods.
Drawings
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flow chart of a three-dimensional reconstruction method of a moving object based on multiple remote sensing images according to an embodiment of the present invention.
FIG. 2 is a flow diagram of object extraction according to an embodiment of the invention.
Fig. 3 is a flow diagram of benchmark unification in accordance with an embodiment of the present invention.
Fig. 4 is a flowchart of image screening according to an embodiment of the invention.
Fig. 5 is a flow diagram of feature matching according to an embodiment of the invention.
Fig. 6 is a schematic flow chart of point cloud reconstruction according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of multiple remote sensing images of a ship target according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of the multi-remote sensing image shown in fig. 7 after unified referencing.
Fig. 9 is a block diagram of a moving object three-dimensional reconstruction device based on multiple remote sensing images according to an embodiment of the present invention.
Detailed Description
The present invention will be further described in detail below with reference to specific embodiments and with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent.
First, the multiple remote sensing images of the present invention will be described. In the invention, multiple remote sensing images can be generated by a plurality of satellites photographing a moving target at a plurality of angles. The observation view count of the multiple remote sensing images is often on the order of tens or even hundreds.
For example, if 5 remote sensing satellites shoot a moving target and each satellite selects 20 shooting angles, then 5 × 20 = 100 remote sensing images are generated, i.e., the observation view count of the multiple remote sensing images is 100. If the 5 satellites select 20, 22, 24, 28 and 16 shooting angles respectively, then 20 + 22 + 24 + 28 + 16 = 110 remote sensing images are generated, i.e., the observation view count is 110. Different satellites may select the same or different numbers of angles; the invention is not limited in this regard.
In the invention, the moving target may be a ship target at sea, which moves along a preset track while under way and may sway and rock under the influence of wind and waves while moored; high-precision three-dimensional reconstruction of such a target is difficult with prior-art techniques.
In view of the above, the invention provides a three-dimensional reconstruction method of a moving target based on multiple remote sensing images, which can solve the problem of three-dimensional reconstruction of a moving marine ship target. It should be emphasized that other moving objects having similar motion characteristics to those of ship objects may also employ the three-dimensional reconstruction method of moving objects of the present invention, and thus should also be included in the scope of the present invention.
Fig. 1 is a flow chart of a three-dimensional reconstruction method of a moving object based on multiple remote sensing images according to an embodiment of the present invention.
As shown in fig. 1, in the present embodiment, the three-dimensional reconstruction method of a moving object based on multiple remote sensing images may include steps S110 to S150.
In step S110, a detection frame of a moving object is extracted from the multiple remote sensing images.
As an example, the detection frame may be a rectangular detection frame with a long axis and a short axis, its four corner points represented by coordinates. Once the moving target is enclosed by the detection frame, the coordinate information of the target can be represented by the coordinate information of the frame for the subsequent three-dimensional reconstruction. The azimuth of the moving target can be calculated from the long and short axes of the detection frame.
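As a concrete sketch of the azimuth computation just described: the corner ordering (adjacent corners share an edge), the north-as-x-axis convention, and the helper name `detection_frame_azimuth` are assumptions for illustration, not the patent's implementation.

```python
import math

def detection_frame_azimuth(corners):
    """Estimate target azimuth (degrees) from a rotated rectangular
    detection frame given as four (x, y) corner points, with the image
    x axis assumed to point north and adjacent corners sharing an edge.
    """
    (x1, y1), (x2, y2), (x3, y3) = corners[:3]
    # Lengths of the two adjacent edges of the rectangle.
    e1 = math.hypot(x2 - x1, y2 - y1)
    e2 = math.hypot(x3 - x2, y3 - y2)
    # Pick the longer edge as the long axis of the target.
    if e1 >= e2:
        dx, dy = x2 - x1, y2 - y1
    else:
        dx, dy = x3 - x2, y3 - y2
    # Angle of the long axis relative to the x (north) axis, folded into [0, 180).
    return math.degrees(math.atan2(dy, dx)) % 180.0
```

Folding the angle into [0, 180) treats bow and stern symmetrically, which is sufficient for the later step of selecting a reference image by minimum azimuth.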
In step S120, based on the reference remote sensing images of the multiple remote sensing images, performing azimuth correction and geometric positioning parameter fitting on the moving targets of the other remote sensing images in the multiple remote sensing images to obtain multiple remote sensing images with unified references.
For example, one or a group of remote sensing images in the multiple remote sensing images is selected as the reference remote sensing image, and the other remote sensing images except the reference remote sensing image are called the rest remote sensing images in the multiple remote sensing images. And correcting the rest remote sensing images through the reference remote sensing images so as to realize the unified reference of the whole multiple remote sensing images.
Understandably, owing to the different shooting angles and illumination, the moving target in the multiple remote sensing images can present different geographic positions, target azimuths, target states, radiation characteristics and the like, making direct three-dimensional reconstruction difficult. Multiple remote sensing images with a unified reference can serve as the data basis for three-dimensional reconstruction, solving the problem that the moving target otherwise lacks usable three-dimensional reconstruction data.
In step S130, multiple remote sensing images with unified reference are combined into multiple pairs of dual remote sensing images, and condition screening is performed on the multiple pairs of dual remote sensing images according to the satellite shooting angle and the sun irradiation angle in the multiple pairs of dual remote sensing images, so as to obtain a dual remote sensing image screening set.
Through screening, unsuitable remote sensing images can be removed, reducing the number of dual-image combinations in the dual remote sensing image screening set. Condition screening of the multiple remote sensing images of the moving target to obtain the dual remote sensing image screening set greatly reduces the data volume to be processed.
In step S140, each group of the two remote sensing images in the two remote sensing image screening set is subjected to pixel-by-pixel matching and matching point refinement in the detection frame, so as to obtain a matching point result of the two remote sensing image screening set.
The multi-level pyramid feature descriptors are constructed on the screened moving target remote sensing images, pyramid layer-by-layer pixel-by-pixel matching propagation is realized by combining detection frame guidance, and a high-precision matching point result can be obtained.
In step S150, according to the fitted geometric positioning parameters, three-dimensional resolving is performed on the matching point results of the dual remote sensing image screening set to generate multiple sets of sparse point clouds of the moving target, and the multiple sets of sparse point clouds are fused to obtain a three-dimensional reconstruction result of the moving target.
And carrying out three-dimensional solution through the fitted geometric positioning parameters and the matching point results, and carrying out iterative optimization fusion on the sparse point cloud, so as to obtain a multi-star multi-angle three-dimensional reconstruction result of the moving target, and realize three-dimensional reconstruction of the moving target based on multiple remote sensing images.
FIG. 2 is a flow diagram of object extraction according to an embodiment of the invention.
As shown in fig. 2, in the present embodiment, the step S110 may further include sub-steps S111 to S112.
In sub-step S111, a moving object in the multiple remote sensing images is identified.
Taking the moving target as a ship target as an example, ship targets in the multiple remote sensing images can be identified by ship model. For example, if the ship target is of model X, all ship targets of model X in the multiple remote sensing images can be identified by an image recognition algorithm.
In sub-step S112, a detection frame of the identified moving object is extracted.
For example, the detection frame may be the rectangular detection frame described above.
Fig. 3 is a flow diagram of benchmark unification in accordance with an embodiment of the present invention.
As shown in fig. 3, in this embodiment, the step S120 may further include sub-steps S121 to S124.
In sub-step S121, a minimum azimuth of a moving target in the multiple remote sensing images is calculated by the detection frame, and the remote sensing image corresponding to the minimum azimuth is used as a reference remote sensing image.
For example, with the four corner coordinates of the rectangular detection frame denoted (x1, y1), (x2, y2), (x3, y3), (x4, y4), the target azimuth δ can be calculated with north as the x axis.
With n observation views in the multiple remote sensing images, the target azimuth δi of each remote sensing image is calculated; the minimum azimuth is δm = min(δ1, …, δn), and the m-th remote sensing image is selected as the reference remote sensing image. The remaining remote sensing images then number n − 1.
In sub-step S122, the azimuth difference of the remaining remote sensing images is calculated according to the minimum azimuth, and an azimuth correction transformation model is constructed.
For example, an azimuth correction transformation model Ti may be constructed for each remaining image from its azimuth difference relative to the minimum azimuth, where δi is the target azimuth of any one of the remaining remote sensing images.
In sub-step S123, correction calculation is performed on the sampling interval points of the rest remote sensing images by using the azimuth correction transformation model, so as to obtain corrected sampling interval point coordinates.
For example, uniform sampling interval points (x, y)grid may be generated according to the size of the moving target, and the corrected sampling interval point coordinates (xT, yT)grid calculated with the azimuth correction transformation model Ti.
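The patent text does not reproduce the transformation model Ti itself; one plausible form, assumed here purely for illustration, is a rotation of the sampling grid points by the azimuth difference δi − δm about the target center:

```python
import math

def correct_sampling_points(points, delta_i, delta_m, center=(0.0, 0.0)):
    """Rotate uniform sampling grid points by the azimuth difference
    (delta_i - delta_m, in degrees) about the target center, as one
    plausible form of the azimuth-correction transform T_i.
    """
    theta = math.radians(delta_i - delta_m)
    c, s = math.cos(theta), math.sin(theta)
    cx, cy = center
    corrected = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        # Standard 2D rotation about (cx, cy).
        corrected.append((cx + c * dx - s * dy, cy + s * dx + c * dy))
    return corrected
```

When δi equals δm (the reference image itself), the transform reduces to the identity, as expected.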
In sub-step S124, the geometric positioning parameters of the remaining remote sensing images are fitted using the corrected sampling interval point coordinates, to obtain the unified-reference multiple remote sensing images.
For example, the geometric positioning parameters (i.e., geographic coordinates) of the remaining remote sensing images may be fitted from the corrected sampling interval point coordinates (xT, yT)grid, yielding the fitted target image geometric positioning parameters RPC_T.
Fig. 4 is a flowchart of image screening according to an embodiment of the invention.
As shown in fig. 4, in the present embodiment, the step S130 may further include sub-steps S131 to S134.
In sub-step S131, the multiple remote sensing images with the unified reference are combined pairwise to form multiple pairs of dual remote sensing images.
As an example, suppose the multiple remote sensing images comprise 5 observation views A, B, C, D, E. The combined dual remote sensing images then have 10 combinations in total: A-B, A-C, A-D, A-E, B-C, B-D, B-E, C-D, C-E, D-E. In general, when the observation view count of the multiple remote sensing images is n, the number of combinations reaches (n² − n)/2, so the number of combinations grows quadratically with the number of observation views. Directly processing all combined images would entail an enormous data processing load, so screening is needed to improve processing efficiency.
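The pairwise combination step can be sketched with the standard library; `make_pairs` is an illustrative name, not from the patent:

```python
from itertools import combinations

def make_pairs(images):
    """Combine n unified-reference images pairwise into (n^2 - n)/2
    candidate dual (stereo) image pairs."""
    return list(combinations(images, 2))

# 5 views yield (25 - 5) / 2 = 10 candidate pairs, matching the example above.
pairs = make_pairs(["A", "B", "C", "D", "E"])
```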
In sub-step S132, a stereo intersection angle in the two remote sensing images is calculated according to the satellite shooting angles in the two remote sensing images.
In sub-step S133, a shadow difference angle in the two remote sensing images is calculated according to the sun illumination angle in the two remote sensing images.
In sub-step S134, the dual remote sensing images whose stereo intersection angle meets the first screening condition and whose shadow difference angle meets the second screening condition are screened out, to obtain the dual remote sensing image screening set.
In some embodiments, the first screening condition may be a stereo intersection angle greater than 20°, and the second screening condition may be a shadow difference angle less than 15°. Through this angle screening, a smaller dual remote sensing image screening set meeting the data requirements of three-dimensional reconstruction can be obtained.
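A hedged sketch of the screening step: the text does not specify how the two angles are derived from image metadata, so line-of-sight unit vectors and sun azimuth values are assumed inputs here, and all names are illustrative.

```python
import math

def stereo_intersection_angle(v1, v2):
    """Angle (degrees) between two satellite line-of-sight vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def shadow_difference_angle(sun_az1, sun_az2):
    """Absolute difference of sun azimuths (degrees), wrapped to [0, 180]."""
    d = abs(sun_az1 - sun_az2) % 360.0
    return min(d, 360.0 - d)

def screen_pairs(pairs, min_intersection=20.0, max_shadow_diff=15.0):
    """Keep pairs whose stereo intersection angle exceeds 20 degrees and
    whose shadow difference angle stays below 15 degrees."""
    kept = []
    for img1, img2 in pairs:
        angle = stereo_intersection_angle(img1["los"], img2["los"])
        shadow = shadow_difference_angle(img1["sun_az"], img2["sun_az"])
        if angle > min_intersection and shadow < max_shadow_diff:
            kept.append((img1, img2))
    return kept
```

A large intersection angle gives a strong stereo baseline, while a small shadow difference keeps the two views radiometrically comparable, which matches the motivation given in the text.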
Fig. 5 is a flow diagram of feature matching according to an embodiment of the invention.
As shown in fig. 5, in the present embodiment, the step S140 may further include sub-steps S141 to S143.
In sub-step S141, feature descriptors of different levels are extracted for each set of dual remote sensing images with a convolutional neural network and pre-trained weight parameters, to obtain multi-layer pyramid feature descriptors for each set of dual remote sensing images.
For example, feature descriptors of different levels in the ship target image can be extracted with a VGG (Visual Geometry Group) convolutional neural network and pre-trained weight parameters. Assuming the original moving target image size is W × H, the descriptors range from the original-resolution descriptor of size W × H × C down to the 1/16-resolution descriptor of size W/16 × H/16 × C, where C is the number of channels of the feature descriptor. The 1/16 resolution may serve as the top layer of the pyramid, but the invention is not limited thereto.
In sub-step S142, at the top layer of the multi-layer pyramid feature descriptor, pixel-by-pixel matching is performed on the pixels within the detection frame, resulting in an initial matching point.
For example, for the feature descriptors V1, V2 of any pair of dual remote sensing images, pixel-by-pixel matching may be performed at the top layer of the multi-layer pyramid feature descriptors, i.e., at 1/16 resolution, to obtain the initial matching points. It is emphasized that the pixel-by-pixel matching is constrained to pixels inside the detection frame. The pixel-by-pixel matching may, for example, comprise the following steps: first, compute the feature-descriptor distances between candidate pixels, then find the pixel whose feature descriptor is at minimum distance.
Here k1 indexes pixels inside the detection frame of the first image of the dual remote sensing images, k2 indexes pixels inside the detection frame of the second image, and C is the number of channels of the feature descriptor. For each k1, the corresponding matching point is found as the k2 with the minimum feature-descriptor distance.
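The top-level pixel-by-pixel matching, constrained to the detection frames, might look as follows in NumPy; the array layout and the minimum-Euclidean-distance criterion are assumptions consistent with the description above.

```python
import numpy as np

def match_top_level(feat1, feat2, box1, box2):
    """Pixel-by-pixel matching at the pyramid top level, constrained to
    the detection frames.

    feat1, feat2 : (H, W, C) feature-descriptor arrays.
    box1, box2   : (y0, y1, x0, x1) detection-frame bounds in each image.
    Returns a list of ((y, x) in image 1, (y, x) in image 2) match pairs.
    """
    y0a, y1a, x0a, x1a = box1
    y0b, y1b, x0b, x1b = box2
    # Flatten the candidate pixels of image 2 inside its detection frame.
    patch2 = feat2[y0b:y1b, x0b:x1b].reshape(-1, feat2.shape[2])
    wb = x1b - x0b
    matches = []
    for ya in range(y0a, y1a):
        for xa in range(x0a, x1a):
            # Euclidean descriptor distance to every candidate pixel.
            d = np.linalg.norm(patch2 - feat1[ya, xa], axis=1)
            k = int(np.argmin(d))  # nearest descriptor in image 2's frame
            matches.append(((ya, xa), (y0b + k // wb, x0b + k % wb)))
    return matches
```

Matching an image against itself pairs every pixel with itself, a quick sanity check on the distance criterion.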
In sub-step S143, based on the initial matching points, level-by-level matching and matching point refinement are performed in the multi-level pyramid feature descriptors, so as to obtain matching point results of each set of two remote sensing images.
For example, based on the initial matching point (xm1, ym1; xm2, ym2), the position (2xm1, 2ym1; 2xm2, 2ym2) of the matching point in the next pyramid level can be calculated; points in the range (2xm2 ± 1, 2ym2 ± 1) are selected as matching candidates, and feature-descriptor distances are computed to obtain the refined matching point; the above steps are repeated, refining the matching points level by level until the bottom of the pyramid is reached, yielding the final matching points (xf1, yf1; xf2, yf2), i.e., the matching point result of one set of dual remote sensing images.
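The coarse-to-fine propagation can be sketched as follows, assuming each pyramid level doubles the resolution of the previous one and using the ±1-pixel candidate window described above; `refine_match` is an illustrative name.

```python
import numpy as np

def refine_match(pyr1, pyr2, m1, m2):
    """Propagate an initial top-level match down a descriptor pyramid.

    pyr1, pyr2 : lists of (H, W, C) arrays, coarsest level first; each
                 level doubles the resolution of the previous one.
    m1, m2     : (y, x) initial match at the top (coarsest) level.
    At each finer level the coarse match is scaled by 2 and the point in
    image 2 is re-searched within a +/-1 pixel window.
    """
    y1, x1 = m1
    y2, x2 = m2
    for level in range(1, len(pyr1)):
        f1, f2 = pyr1[level], pyr2[level]
        y1, x1 = 2 * y1, 2 * x1
        y2, x2 = 2 * y2, 2 * x2
        desc = f1[y1, x1]
        best = None
        # Search the 3x3 neighbourhood around the propagated point.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                yy, xx = y2 + dy, x2 + dx
                if 0 <= yy < f2.shape[0] and 0 <= xx < f2.shape[1]:
                    dist = float(np.linalg.norm(f2[yy, xx] - desc))
                    if best is None or dist < best[0]:
                        best = (dist, yy, xx)
        _, y2, x2 = best
    return (y1, x1), (y2, x2)
```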
Fig. 6 is a schematic flow chart of point cloud reconstruction according to an embodiment of the present invention.
As shown in fig. 6, in the present embodiment, the step S150 may further include sub-steps S151 to S153.
In sub-step S151, a forward intersection solution model is constructed from the fitted geometric positioning parameters.
In sub-step S152, based on the forward intersection resolving model, three-dimensional resolving is performed on the matching point result of each set of two remote sensing images, so as to generate multiple sets of sparse point clouds of the moving target.
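A forward intersection solve can be illustrated, under simplifying assumptions, as the least-squares intersection of viewing rays. In a real system the rays would be derived from the fitted geometric positioning parameters (for example rational polynomial coefficient models); here straight rays are given directly, and the function name is illustrative.

```python
import numpy as np

def forward_intersection(origins, directions):
    """Least-squares intersection of viewing rays, as a stand-in for the
    forward intersection model built from fitted geometric positioning
    parameters.

    origins, directions: (N, 3) arrays, one ray per image observation.
    Returns the 3-D point minimizing the summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

With two non-parallel rays the normal matrix A is invertible and the solution is the unique point on both rays (or the midpoint of their common perpendicular when they do not exactly intersect).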
In sub-step S153, constraint fusion is performed on the multiple groups of sparse point clouds based on spatial semantic constraint, so as to obtain a three-dimensional reconstruction result of the moving target.
In some embodiments, constraint fusion can be performed on multiple groups of sparse point clouds by combining spatial coherence, elevation value similarity and moving target remote sensing image semantic similarity to obtain a three-dimensional reconstruction result of a moving target.
For example, multiple sets of sparse point clouds may be constraint-fused according to the following formula:
D(x, y) = (1 / v(x, y)) · Σ_k W_k(x, y) · L_k(x, y)
Wherein L_k(x, y) represents the elevation value of the k-th sparse point cloud at pixel (x, y); W_k(x, y) represents the regularization weight; I(x, y) represents the gray value at pixel (x, y) of the image from which the k-th sparse point cloud was constructed; v(x, y) = Σ_k W_k(x, y) represents the sum of the regularization weights over all k; D(x, y) represents the elevation fusion result at pixel (x, y); and σ_N, σ_c, σ_r respectively represent the standard deviations of three different Gaussian functions, controlling the sizes of the spatial neighborhood range, the elevation change range and the gray difference range respectively.
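The weighted fusion can be sketched as follows. This simplified version keeps only the elevation-consistency and gray-consistency Gaussian terms (the spatial-neighborhood term with σ_N is omitted for brevity), and the reference maps, parameter names, and function name are assumptions of the sketch.

```python
import numpy as np

def fuse_elevations(L, I, ref_elev, ref_gray, sigma_c, sigma_r):
    """Gaussian-weighted fusion of per-cloud elevation maps.

    L: (K, H, W) elevation maps of the K sparse point clouds.
    I: (K, H, W) gray values of the images each cloud was built from.
    ref_elev, ref_gray: (H, W) reference elevation / gray maps.
    sigma_c, sigma_r: standard deviations controlling the elevation-change
                      and gray-difference ranges.
    Returns D, the (H, W) fused elevation: D = sum_k W_k L_k / sum_k W_k.
    """
    # One Gaussian weight per cloud and pixel, from elevation and gray consistency.
    w_elev = np.exp(-((L - ref_elev) ** 2) / (2 * sigma_c ** 2))
    w_gray = np.exp(-((I - ref_gray) ** 2) / (2 * sigma_r ** 2))
    W = w_elev * w_gray
    v = W.sum(axis=0)                      # v(x, y): sum of all K weights
    return (W * L).sum(axis=0) / np.maximum(v, 1e-12)
```

A cloud whose elevation or gray value deviates strongly from the reference receives a weight near zero and contributes little to the fused result.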
Through the above embodiments, target azimuth correction and geometric positioning parameter fitting are performed on multi-satellite, multi-angle remote sensing images of a moving target, so that multiple remote sensing images with a unified reference can be obtained as the data basis for three-dimensional reconstruction, solving the problem that moving targets otherwise lack data suitable for three-dimensional reconstruction; condition screening of the multiple remote sensing images of the moving target yields a dual remote sensing image screening set, greatly reducing the number of observation views to be processed and addressing the processing burden of large-scale multi-satellite, multi-angle observation views; multi-layer pyramid feature descriptors are constructed for the screened moving-target remote sensing images, and pyramid layer-by-layer, pixel-by-pixel matching propagation guided by the detection frame yields high-precision matching point results; and three-dimensional solving with the fitted geometric positioning parameters and the matching point results, followed by iterative optimization and fusion of the sparse point clouds, produces a multi-satellite, multi-angle three-dimensional reconstruction result of the moving target, solving the problem that traditional three-dimensional reconstruction methods cannot process moving targets.
Fig. 7 is a schematic diagram of multiple remote sensing images of a ship target according to an embodiment of the present invention. Fig. 8 is a schematic diagram of the multi-remote sensing image shown in fig. 7 after unified referencing. Fig. 7 and 8 are only used to illustrate the comparison before and after the unified reference, and are not intended to limit the present invention.
As shown in fig. 7, in this embodiment, for the same ship target to be reconstructed in three dimensions, the multiple remote sensing images may include 12 observation views, in which the ship target exhibits different geographic locations, target azimuth angles, target states, radiation characteristics, and so on. Through calculation, one of the images is selected as the reference image (marked by a dashed frame in the figure), and the remaining 11 images are the remote sensing images requiring correction.
As shown in fig. 8, by applying the above unified-reference sub-steps S121 to S124, multiple remote sensing images with a unified reference can be obtained. In fig. 8, the reference image is again marked by the dashed frame, and the remaining 11 images have been corrected according to it, so that the azimuth angle of the ship target and its size in each image are consistent.
Based on the three-dimensional reconstruction method of the moving target based on the multiple remote sensing images, the invention also provides a three-dimensional reconstruction device of the moving target based on the multiple remote sensing images. The device will be described in detail below in connection with fig. 9.
Fig. 9 is a block diagram of a moving object three-dimensional reconstruction device based on multiple remote sensing images according to an embodiment of the present invention. As shown in fig. 9, in the present embodiment, the moving target three-dimensional reconstruction device 900 based on multiple remote sensing images may include a target extraction module 910, a reference unification module 920, an image screening module 930, a feature matching module 940, and a point cloud reconstruction module 950.
The target extraction module 910 is configured to extract a detection frame of a moving target from the multiple remote sensing images;
the reference unifying module 920 is configured to perform azimuth correction and geometric positioning parameter fitting on the rest of the remote sensing images in the multiple remote sensing images based on the reference remote sensing images of the multiple remote sensing images, so as to obtain multiple remote sensing images with unified references;
The image screening module 930 is configured to combine multiple remote sensing images with unified reference into multiple pairs of dual remote sensing images, and perform condition screening on the multiple pairs of dual remote sensing images according to the satellite shooting angle and the solar irradiation angle in the multiple pairs of dual remote sensing images to obtain a dual remote sensing image screening set;
The feature matching module 940 is configured to perform pixel-by-pixel matching and matching point refinement on each group of the two remote sensing images in the two remote sensing image screening set in the detection frame, so as to obtain a matching point result of the two remote sensing image screening set;
The point cloud reconstruction module 950 is configured to perform three-dimensional calculation on the matching point result of each set of two remote sensing images according to the fitted geometric positioning parameters, generate multiple sets of sparse point clouds of the moving target, and fuse the multiple sets of sparse point clouds to obtain a three-dimensional reconstruction result of the moving target.
It should be noted that, the apparatus portion in the embodiment of the present invention corresponds to the method portion in the embodiment of the present invention, and the description of the apparatus portion specifically refers to the method portion and is not described herein again.
The embodiments of the present invention are described above. These examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the invention, and such alternatives and modifications are intended to fall within the scope of the invention.

Claims (10)

1. A moving target three-dimensional reconstruction method based on multiple remote sensing images, the multiple remote sensing images being generated by a plurality of satellites photographing a moving target at a plurality of angles, characterized in that the method comprises the following steps:
extracting a detection frame of the moving target from the multiple remote sensing images;
performing azimuth correction and geometric positioning parameter fitting on moving targets of other remote sensing images in the multiple remote sensing images based on the reference remote sensing image of the multiple remote sensing images to obtain multiple remote sensing images with unified references;
Combining multiple remote sensing images with unified reference into multiple pairs of double remote sensing images, and performing condition screening on the multiple pairs of double remote sensing images according to the satellite shooting angles and the sun irradiation angles in the multiple pairs of double remote sensing images to obtain a double remote sensing image screening set;
Performing pixel-by-pixel matching and matching point refinement on each group of the two remote sensing images in the two remote sensing image screening set in the detection frame to obtain a matching point result of the two remote sensing image screening set;
and carrying out three-dimensional calculation on the matching point results of the two remote sensing image screening sets according to the fitted geometric positioning parameters, generating a plurality of groups of sparse point clouds of the moving target, and fusing the plurality of groups of sparse point clouds to obtain a three-dimensional reconstruction result of the moving target.
2. The method of claim 1, wherein the extracting the detection frame of the moving object from the multiple remote sensing images comprises:
identifying the moving target in the multiple remote sensing images;
and extracting the detection frame of the identified moving object.
3. The method of claim 1, wherein the performing azimuth correction and geometric positioning parameter fitting on the moving targets of the remaining remote sensing images in the multiple remote sensing images based on the reference remote sensing image of the multiple remote sensing images to obtain multiple remote sensing images with unified reference comprises:
calculating a minimum azimuth angle of the moving target in the multiple remote sensing images through the detection frame, and taking a remote sensing image corresponding to the minimum azimuth angle as the reference remote sensing image;
Calculating azimuth angle differences of the rest remote sensing images according to the minimum azimuth angle, and constructing an azimuth angle correction transformation model;
correcting and calculating sampling interval points of the rest remote sensing images by using the azimuth correction transformation model to obtain corrected sampling interval point coordinates;
And fitting the geometric positioning parameters of the rest of the remote sensing images by using the corrected sampling interval point coordinates, to obtain multiple remote sensing images with a unified reference.
4. The moving target three-dimensional reconstruction method according to claim 1, wherein the combining the multiple remote sensing images with the unified reference into multiple pairs of dual remote sensing images, and performing condition screening on the multiple pairs of dual remote sensing images according to the satellite shooting angle and the sun irradiation angle in the multiple pairs of dual remote sensing images to obtain a dual remote sensing image screening set, comprises:
forming the multiple pairs of dual remote sensing images by combining the multiple remote sensing images with the unified reference pairwise;
calculating a stereo intersection angle of the dual remote sensing images according to the satellite shooting angles in the dual remote sensing images;
calculating a shadow difference angle of the dual remote sensing images according to the sun irradiation angles in the dual remote sensing images;
and screening out the dual remote sensing images whose stereo intersection angle meets the first screening condition and whose shadow difference angle meets the second screening condition, to obtain the dual remote sensing image screening set.
5. The method according to claim 4, wherein the first screening condition is that a stereo intersection angle is greater than 20 °;
The second screening condition is that the shadow difference angle is less than 15 degrees.
6. The method of claim 1, wherein performing pixel-by-pixel matching and matching point refinement on each set of the two remote sensing images in the two remote sensing image screening set in the detection frame to obtain a matching point result of the two remote sensing image screening set comprises:
Extracting feature descriptors of different levels of each set of the dual remote sensing images according to a convolutional neural network and pre-training weight parameters, to obtain multi-layer pyramid feature descriptors of each set of the dual remote sensing images;
performing pixel-by-pixel matching on pixels in the detection frame at the top layer of the multi-layer pyramid feature descriptor to obtain initial matching points;
and carrying out level-by-level matching and matching point refinement in the multi-layer pyramid feature descriptor based on the initial matching points to obtain matching point results of each group of two remote sensing images.
7. The method of claim 1, wherein the performing three-dimensional calculation on the matching point result of each set of two remote sensing images according to the fitted geometric positioning parameters to generate a plurality of sets of sparse point clouds of the moving object, and fusing the plurality of sets of sparse point clouds to obtain the three-dimensional reconstruction result of the moving object comprises:
Constructing a front intersection resolving model according to the fitted geometric positioning parameters;
based on the front intersection resolving model, performing three-dimensional resolving on the matching point result of each set of the two remote sensing images to generate a plurality of sets of sparse point clouds of the moving target;
And based on space semantic constraint, carrying out constraint fusion on the plurality of groups of sparse point clouds to obtain a three-dimensional reconstruction result of the moving target.
8. The method for three-dimensional reconstruction of a moving object according to claim 7, wherein the performing constraint fusion on the plurality of groups of sparse point clouds based on spatial semantic constraint to obtain a three-dimensional reconstruction result of the moving object comprises:
and carrying out constraint fusion on the plurality of groups of sparse point clouds by combining spatial coherence, elevation value similarity and moving target remote sensing image semantic similarity to obtain a three-dimensional reconstruction result of the moving target.
9. The moving target three-dimensional reconstruction method according to any one of claims 1 to 8, wherein the moving target comprises a ship target.
10. A moving target three-dimensional reconstruction device based on multiple remote sensing images, the multiple remote sensing images being generated by a plurality of satellites photographing a moving target at a plurality of angles, the moving target three-dimensional reconstruction device comprising:
the target extraction module is configured to extract a detection frame of the moving target from the multiple remote sensing images;
the reference unification module is configured to perform azimuth correction and geometric positioning parameter fitting on the rest remote sensing images in the multiple remote sensing images based on the reference remote sensing images of the multiple remote sensing images to obtain multiple remote sensing images with unified references;
The image screening module is configured to combine the multiple remote sensing images with the unified reference into multiple pairs of dual remote sensing images, and perform condition screening on the multiple pairs of dual remote sensing images according to the satellite shooting angles and the sun irradiation angles in the multiple pairs of dual remote sensing images to obtain a dual remote sensing image screening set;
the feature matching module is configured to perform pixel-by-pixel matching and matching point refinement on each group of the dual remote sensing images in the dual remote sensing image screening set in the detection frame, to obtain matching point results of the dual remote sensing image screening set;
and the point cloud reconstruction module is configured to perform three-dimensional calculation on the matching point results of each set of the two remote sensing images according to the fitted geometric positioning parameters, generate a plurality of sets of sparse point clouds of the moving target, and fuse the plurality of sets of sparse point clouds to obtain a three-dimensional reconstruction result of the moving target.
CN202410628964.4A 2024-05-21 2024-05-21 Moving target three-dimensional reconstruction method and device based on multiple remote sensing images Pending CN118212366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410628964.4A CN118212366A (en) 2024-05-21 2024-05-21 Moving target three-dimensional reconstruction method and device based on multiple remote sensing images


Publications (1)

Publication Number Publication Date
CN118212366A 2024-06-18

Family

ID=91454757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410628964.4A Pending CN118212366A (en) 2024-05-21 2024-05-21 Moving target three-dimensional reconstruction method and device based on multiple remote sensing images

Country Status (1)

Country Link
CN (1) CN118212366A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484668A (en) * 2015-01-19 2015-04-01 武汉大学 Unmanned aerial vehicle multi-overlapped-remote-sensing-image method for extracting building contour line
CN108198230A (en) * 2018-02-05 2018-06-22 西北农林科技大学 A kind of crop and fruit three-dimensional point cloud extraction system based on image at random
CN114742707A (en) * 2022-04-18 2022-07-12 中科星睿科技(北京)有限公司 Multi-source remote sensing image splicing method and device, electronic equipment and readable medium
WO2022147976A1 (en) * 2021-01-11 2022-07-14 浙江商汤科技开发有限公司 Three-dimensional reconstruction method, related interaction and measurement method, related apparatuses, and device
CN117315138A (en) * 2023-09-07 2023-12-29 浪潮软件科技有限公司 Three-dimensional reconstruction method and system based on multi-eye vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FENG WANG: "Three-Dimensional Reconstruction From a Multiview Sequence of Sparse ISAR Imaging of a Space Target", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 56, no. 2, 4 December 2017 (2017-12-04), pages 611 - 620, XP011676237, DOI: 10.1109/TGRS.2017.2737988 *

Similar Documents

Publication Publication Date Title
CN112085845B (en) Outdoor scene rapid three-dimensional reconstruction device based on unmanned aerial vehicle image
CN112085844B (en) Unmanned aerial vehicle image rapid three-dimensional reconstruction method for field unknown environment
CN111862126B (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
Muller et al. MISR stereoscopic image matchers: Techniques and results
US10152828B2 (en) Generating scene reconstructions from images
Li et al. A new analytical method for estimating Antarctic ice flow in the 1960s from historical optical satellite imagery
CN110930508A (en) Two-dimensional photoelectric video and three-dimensional scene fusion method
CN116402942A (en) Large-scale building three-dimensional reconstruction method integrating multi-scale image features
CN116245757B (en) Multi-scene universal remote sensing image cloud restoration method and system for multi-mode data
CN114066960A (en) Three-dimensional reconstruction method, point cloud fusion method, device, equipment and storage medium
CN117253029B (en) Image matching positioning method based on deep learning and computer equipment
CN115471749A (en) Multi-view multi-scale target identification method and system for extraterrestrial detection unsupervised learning
Gong et al. Point cloud and digital surface model generation from high resolution multiple view stereo satellite imagery
CN106846393B (en) Vanishing point extracting method and system based on global search
CN117132737A (en) Three-dimensional building model construction method, system and equipment
CN117078756A (en) Airborne ground target accurate positioning method based on scene retrieval matching
Wang et al. Automated mosaicking of UAV images based on SFM method
CN118212366A (en) Moving target three-dimensional reconstruction method and device based on multiple remote sensing images
Li et al. Few-shot fine-grained classification with rotation-invariant feature map complementary reconstruction network
Wang et al. Monocular 3d object detection based on pseudo-lidar point cloud for autonomous vehicles
Re et al. Evaluation of an area-based matching algorithm with advanced shape models
Bagheri et al. Exploring the applicability of semi-global matching for SAR-optical stereogrammetry of urban scenes
Huang et al. Image network generation of uncalibrated UAV images with low-cost GPS data
Guo et al. Research on 3D geometric modeling of urban buildings based on airborne lidar point cloud and image
CN117765168B (en) Three-dimensional reconstruction method, device and equipment for satellite remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination