CN117576343A - Three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic image - Google Patents

Three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic image

Info

Publication number
CN117576343A
CN117576343A (application CN202311581323.XA)
Authority
CN
China
Prior art keywords
image
full
images
triangle
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311581323.XA
Other languages
Chinese (zh)
Other versions
CN117576343B (en)
Inventor
岳庆兴
刘书含
陈颖
葛邦宇
王懿哲
薛白
王艺颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Original Assignee
Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ministry Of Natural Resources Land Satellite Remote Sensing Application Center filed Critical Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Priority to CN202311581323.XA priority Critical patent/CN117576343B/en
Publication of CN117576343A publication Critical patent/CN117576343A/en
Application granted granted Critical
Publication of CN117576343B publication Critical patent/CN117576343B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic images, comprising the following steps: S1, preparing basic image data; S2, acquiring homonymous points (tie points) of the images; S3, correcting the RPC parameters of the panchromatic images; S4, acquiring point clouds from the panchromatic images; S5, gridding the point-cloud range; S6, screening grid points for constructing a triangulated network; S7, performing occlusion detection to determine the visible images of each triangle of the triangle mesh; S8, obtaining RGB three-band color images; S9, storing the results in a three-dimensional format. Advantages: the method constructs a ground-surface three-dimensional MESH model from multi-view satellite stereoscopic images, building a geometric model from the triple-view stereo pairs acquired by satellites on three orbits and a texture model from two roll-angle images and two pitch-angle images, thereby producing a three-dimensional MESH model of multi-view high-resolution satellite imagery and solving the problems in the prior art.

Description

Three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic image
Technical Field
The invention relates to the technical field of mapping, in particular to a three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic images.
Background
A large-area ground-surface three-dimensional MESH model can generally be extracted from multi-view images acquired by a lidar or a multi-lens camera carried on a low-to-medium-altitude platform such as an unmanned aerial vehicle or an airship; the latter approach is commonly called aerial oblique photography. A three-dimensional MESH model comprises a geometric model and texture maps. The point cloud acquired by oblique photography or lidar is the basis for constructing the MESH geometric model.
Lidar can acquire a point cloud directly, and its accuracy and reliability are generally superior to those of a point cloud derived from oblique photography. Its drawback is that it cannot simultaneously acquire high-quality texture images, so the texture information of the MESH must be constructed from images acquired by other means. Oblique photography reconstructs some fine details and hollow structures poorly, but its lower cost has made it widely used in urban three-dimensional modeling. The resolution of aerial oblique-photography imagery is generally between 1 cm and 20 cm, with 2 cm to 5 cm being the most common range.
With the development of satellite platforms, cameras and related technologies, constructing three-dimensional MESH models from multi-view high-resolution images acquired by satellite cameras has become a new technical means of three-dimensional modeling. However, multi-view satellite images are strongly constrained by orbit period and orbit altitude in terms of acquisition-time consistency, resolution and observation angle. For example, the nadir resolution of ultra-high-resolution satellite imagery is generally about 0.3-0.5 m, roughly an order of magnitude coarser than unmanned-aerial-vehicle imagery. Stereoscopic images of a target area are generally obtained through attitude maneuvers in the pitch and roll directions, and forming stereo coverage of buildings from more viewing angles requires multi-orbit imaging or a multi-satellite constellation; single-satellite multi-orbit imaging usually spans a longer time period, which degrades matching precision.
Disclosure of Invention
The invention aims to provide a three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic images, so as to solve the problems in the prior art.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic images comprises the following steps:
S1, preparing basic image data:
the basic image data comprise three Pleiades triple-view stereoscopic image combinations and two large-tilt-angle images; each stereoscopic image combination contains an image with the smallest tilt angle, and the observation angles and resolutions of these smallest-tilt images differ from one another; each image comprises a panchromatic image and a multispectral image with the same observation time and coverage as the corresponding panchromatic image;
S2, acquiring homonymous points (tie points) of the images:
the panchromatic image of image 1-1 in triple-view stereoscopic image combination 1, the panchromatic image of image 2-1 in combination 2 and the panchromatic image of image 3-1 in combination 3 are each taken in turn as the master image; the master image is divided into rectangular blocks, feature points of each block are extracted with the Harris operator, and, starting from these feature points, homonymous points are matched on the panchromatic images of the remaining images by the correlation-coefficient method, yielding the homonymous points corresponding to each master image;
S3, correcting the RPC parameters of the panchromatic images:
the RPC parameters of each panchromatic image are corrected using all homonymous points by alternately performing forward intersection and backward intersection;
S4, acquiring point clouds from the panchromatic images:
quasi-epipolar images of the two panchromatic images in the same stereoscopic image combination are generated by the projection-trajectory method, and two disparity maps are obtained with the SGM matching algorithm by swapping the order of the quasi-epipolar images; a consistency check is applied to the disparity maps, and after traversing the neighborhoods and marks of the valid points, the disparity map of the panchromatic image is obtained; for each pixel of the quasi-epipolar disparity map of one panchromatic image, the homonymous point on the quasi-epipolar image of the other panchromatic image is located from the disparity value, the ground coordinates are obtained by forward intersection, and all ground coordinates form the point cloud corresponding to the two panchromatic images;
S5, gridding the point-cloud range:
the point-cloud extent is divided into a rectangular grid; a statistical window is set for each grid point, the median elevation of the points whose plane coordinates fall within the window is computed and taken as the elevation of that grid point, and if no point falls within the window the grid point is set to an invalid value;
S6, screening grid points for constructing the triangulated network:
the slope at each rectangular grid point is calculated, the grid points whose slope meets a threshold are retained, their plane coordinates are scaled up and combined with the elevation values to form vertices, and a Delaunay triangulated irregular network (TIN) is constructed from these vertices;
S7, performing occlusion detection to determine the visible images of each triangle of the mesh:
an observation vector is derived from the RPC of each panchromatic image, approximate exterior orientation elements are then computed from the RPC and the observation vector, the image coordinates of the three vertices of every triangle of the network are computed from the exterior orientation elements via the collinearity equations, the computation is carried out for the images of all viewing angles, the visibility of every triangle in the panchromatic image of each viewing angle is recorded, and the optimal texture image of each triangle is determined from this information;
S8, acquiring RGB three-band color images:
the panchromatic image and the multispectral image of every viewing angle are fused by the Pansharp algorithm to obtain an RGB three-band color image;
S9, storing the results in a three-dimensional format:
the triangle-mesh bounding-box information, the number of vertices, the vertex information, the number of triangles, the vertex indices of each triangle and the texture image names are stored in sequence.
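The fusion of step S8 can be approximated with a simple Brovey-style ratio transform. This is an assumed, minimal illustration of pan-sharpening in general, not the actual Pansharp algorithm the patent names; the function name and `eps` guard are mine:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Fuse resampled multispectral bands with a panchromatic band.

    ms:  (3, H, W) RGB multispectral array already resampled to the pan grid.
    pan: (H, W) panchromatic array.
    Each band is scaled by pan / intensity, preserving the band ratios while
    injecting the panchromatic spatial detail.
    """
    intensity = ms.mean(axis=0)
    ratio = pan / (intensity + eps)
    return ms * ratio[None, :, :]
```

When the panchromatic band equals the band mean, the transform leaves the multispectral values essentially unchanged, which is a quick sanity check on the ratio logic.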
Preferably, the pitch angles, roll angles and resolutions of the three images in each stereoscopic image combination and of the two large-tilt-angle images are as follows.
In triple-view stereoscopic image combination 1:
image 1-1: pitch angle 2.699 degrees, roll angle -1.108 degrees, resolution 0.498 m;
image 1-2: pitch angle -16.268 degrees, roll angle 3.636 degrees, resolution 0.534 m;
image 1-3: pitch angle 10.871 degrees, roll angle -3.100 degrees, resolution 0.514 m.
In triple-view stereoscopic image combination 2:
image 2-1: pitch angle 7.381 degrees, roll angle 23.162 degrees, resolution 0.554 m;
image 2-2: pitch angle -5.652 degrees, roll angle 27.328 degrees, resolution 0.605 m;
image 2-3: pitch angle 13.666 degrees, roll angle 21.519 degrees, resolution 0.553 m.
In triple-view stereoscopic image combination 3:
image 3-1: pitch angle -5.905 degrees, roll angle -25.182 degrees, resolution 0.567 m;
image 3-2: pitch angle 7.401 degrees, roll angle -28.815 degrees, resolution 0.627 m;
image 3-3: pitch angle -12.208 degrees, roll angle -23.884 degrees, resolution 0.563 m.
Large-tilt-angle image 1 (image 4): pitch angle -28.734 degrees, roll angle 8.038 degrees, resolution 0.633 m.
Large-tilt-angle image 2 (image 5): pitch angle 30.618 degrees, roll angle 5.099 degrees, resolution 0.633 m.
Preferably, step S2 specifically comprises:
S21, taking the panchromatic image of image 1-1 as the master image, dividing it uniformly into 11x11 rectangular blocks of equal size, and extracting the feature points of each block with the Harris operator;
for each feature point, homonymous points are matched on the panchromatic images of the remaining images by the correlation-coefficient method to obtain the corresponding matching points, and the matching points together with the feature point form one connection point; the connection point corresponding to every feature point is obtained in the same way;
S22, taking the panchromatic image of image 2-1 as the master image and applying the method of S21 to obtain the homonymous points corresponding to image 2-1;
S23, taking the panchromatic image of image 3-1 as the master image and applying the method of S21 to obtain the homonymous points corresponding to image 3-1.
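The matching in S21 can be illustrated with a small sketch. The block below is a simplified, hypothetical implementation rather than the patent's code: it matches one point between two grayscale arrays by maximizing the normalized correlation coefficient over a local search window (Harris extraction and the multi-image loop are omitted; the `half` and `search` window sizes are assumptions):

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation of two equal-sized patches.
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def match_point(master, slave, r, c, half=7, search=10):
    """Find the pixel in `slave` matching (r, c) in `master` by maximum NCC.

    Searches a (2*search+1)^2 neighborhood around (r, c) and returns the best
    (row, col) together with its correlation score.
    """
    tmpl = master[r - half:r + half + 1, c - half:c + half + 1]
    best, best_rc = -1.0, None
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            win = slave[rr - half:rr + half + 1, cc - half:cc + half + 1]
            if win.shape != tmpl.shape:
                continue  # window fell outside the image
            s = ncc(tmpl, win)
            if s > best:
                best, best_rc = s, (rr, cc)
    return best_rc, best
```

In practice the correlation-coefficient matching would be run per feature point against the panchromatic images of all remaining views to assemble the connection points.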
Preferably, step S3 is executed by iterating between forward intersection and backward intersection until the absolute difference between two successive values of every RPC correction term is smaller than a preset threshold; the iteration then ends, and the corresponding RPC parameter terms of the multispectral image are replaced with the corrected RPC parameters of its panchromatic image. The process is repeated until the RPC parameters of the panchromatic and multispectral images of all views are corrected.
Preferably, in step S3:
forward intersection means computing the ground-point coordinates corresponding to each homonymous point P from its several image-point coordinates and the RPC parameters of the corresponding panchromatic images; once the ground coordinates of all homonymous points have been computed, each panchromatic image holds a set of intersection control points whose basic unit is the pair image point - ground point;
backward intersection means projecting the ground coordinates of each intersection control point through the RPC parameters of the corresponding panchromatic image to obtain a computed image-point coordinate, which deviates somewhat from the measured image point; according to the distribution of these deviations on each panchromatic image and an error threshold, points with large errors are rejected with the RANSAC algorithm; the rejected intersection control points are called gross-error points and the retained ones are called valid control points; the first three coefficients of the RPC numerator terms of each panchromatic image are then corrected using its valid control points.
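The gross-error rejection in the backward intersection can be sketched as follows. This is a minimal, assumed illustration rather than the patent's implementation: it treats the RPC correction as a constant image-space bias and uses a one-point RANSAC with the 0.5-pixel threshold mentioned in the embodiment to split valid control points from gross-error points:

```python
import numpy as np

def ransac_bias(residuals, thresh=0.5, iters=100, seed=0):
    """Estimate a constant image-space bias from tie-point residuals.

    residuals: (N, 2) array of (row, col) reprojection errors Pe = Pc - P.
    Returns (bias, inlier_mask); points farther than `thresh` pixels from the
    consensus bias are rejected as gross-error points.
    """
    rng = np.random.default_rng(seed)
    res = np.asarray(residuals, float)
    best_mask, best_n = None, -1
    for _ in range(iters):
        cand = res[rng.integers(len(res))]           # one-point hypothesis
        mask = np.linalg.norm(res - cand, axis=1) < thresh
        if mask.sum() > best_n:
            best_n, best_mask = mask.sum(), mask
    bias = res[best_mask].mean(axis=0)               # refit on consensus set
    inliers = np.linalg.norm(res - bias, axis=1) < thresh
    return bias, inliers
```

The estimated bias stands in for the first RPC numerator coefficient of each coordinate; the actual method adjusts three numerator coefficients per image.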
Preferably, step S4 specifically comprises:
S41, generating the quasi-epipolar images Epi1-1 and Epi1-2 of the panchromatic images of image 1-1 and image 1-2 by the projection-trajectory method, obtaining the disparity map D(1-1-1-2) of Epi1-1 with the SGM matching algorithm, then swapping the order of Epi1-1 and Epi1-2 and obtaining the disparity map D1(1-2-1-1) of Epi1-2 with the SGM matching algorithm;
S42, letting the disparity at row p, column q of D(1-1-1-2) be D(p, q), and the disparity at row p, column q+INT(D(p, q)+0.5) of D1(1-2-1-1) be D1(p, q+INT(D(p, q)+0.5)), where INT denotes rounding down to an integer;
letting De = q - D1(p, q+INT(D(p, q)+0.5)); if |De| > 1, D(p, q) is set to an invalid value, which completes the left-right consistency check of the disparity map; then the 3x3 neighborhood of each valid point of the disparity map is traversed, and the point is marked 0 if more than 4 of the 9 disparities in the neighborhood are invalid, otherwise marked 1, until all points of the disparity map are marked; then the marks in the 3x3 neighborhood of each point are traversed, and if more than 4 marks are 0 the disparity of the point is set to an invalid value; the disparity map of panchromatic image 1-1 is thereby obtained;
S43, for each pixel of the disparity map of Epi1-1, the homonymous point on Epi1-2 is located from the disparity value and intersected forward to obtain ground coordinates; all ground coordinates form the point cloud corresponding to the panchromatic images of image 1-1 and image 1-2;
S44, the point clouds corresponding to the panchromatic image pairs 1-1/1-2, 1-1/1-3, 2-1/2-2, 2-1/2-3, 3-1/3-2 and 3-1/3-3 are obtained in sequence in the manner of S41-S43.
Preferably, in step S5 the point-cloud extent is divided into a rectangular grid whose resolution in the longitude and latitude directions equals a preset resolution, and the coordinates of each grid point are obtained; a longitude range and a latitude range are determined from the grid-point coordinates and the preset resolution and taken as the statistical window; the median elevation of the points whose plane coordinates fall within the window is computed and taken as the elevation of the grid point; if no point falls within the window, the elevation of the grid point is set to an invalid value; the elevations of all grid points are obtained in the same way.
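The median gridding of step S5 can be sketched in a few lines. A minimal illustration under simplifying assumptions: the longitude/latitude statistical window is reduced to square cells, and the function name and nodata value are mine:

```python
import numpy as np

def rasterize_median(points, x0, y0, step, nx, ny, nodata=-9999.0):
    """Grid a point cloud: median elevation of the points in each cell.

    points: (N, 3) array of (x, y, z); (x0, y0) is the grid origin and `step`
    the cell size. Cells containing no points receive `nodata`.
    """
    grid = np.full((ny, nx), nodata)
    ix = ((points[:, 0] - x0) / step).astype(int)
    iy = ((points[:, 1] - y0) / step).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    # Visit only the cells that actually received points.
    for i, j in {(a, b) for a, b in zip(iy[ok], ix[ok])}:
        sel = ok & (iy == i) & (ix == j)
        grid[i, j] = np.median(points[sel, 2])
    return grid
```

The median, rather than the mean, keeps isolated matching blunders in the point cloud from contaminating the grid elevation.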
Preferably, step S6 specifically comprises calculating the slope at each grid point; if the slope is greater than a threshold the grid point is used to construct the triangulated network, otherwise it is discarded; the plane coordinates of the grid points meeting the threshold are multiplied by 100000 and combined with the elevations to form Nt vertices, from which a Delaunay triangulated irregular network (TIN) containing Tn triangles is constructed, each triangle recording its three vertex indices.
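The slope screening and TIN construction of step S6 can be sketched with numpy and scipy. This is an assumed simplification: slope is taken as the gradient magnitude of the elevation grid, and the 100000 plane-coordinate scaling from the text is applied before triangulation:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_tin(grid, x0, y0, step, slope_thresh, scale=100000.0, nodata=-9999.0):
    """Screen grid points by slope and build a Delaunay TIN from the survivors.

    grid: 2-D elevation array; cells equal to `nodata` are ignored.
    Returns the (Nt, 3) vertex array (scaled x, scaled y, z) and the (Tn, 3)
    array of per-triangle vertex indices.
    """
    gy, gx = np.gradient(grid, step)
    slope = np.hypot(gx, gy)
    iy, ix = np.nonzero((slope > slope_thresh) & (grid != nodata))
    x = (x0 + ix * step) * scale          # scaled plane coordinates
    y = (y0 + iy * step) * scale
    verts = np.column_stack([x, y, grid[iy, ix]])
    tin = Delaunay(np.column_stack([x, y]))
    return verts, tin.simplices
```

Discarding low-slope points thins out flat terrain, so the TIN spends its triangles on buildings and other relief.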
Preferably, step S7 specifically comprises the following steps.
S71, from the image-point coordinates (r, c) of a panchromatic image and two object-space elevation surfaces H and H1, two plane coordinates (V, U) and (V1, U1) are computed through the RPC; (V, U, H) and (V1, U1, H1) are converted to geocentric rectangular coordinates (X, Y, Z) and (X1, Y1, Z1), and the observation vector is computed from these two geocentric rectangular coordinates;
where r is the LINE_OFF parameter of the RPC; c is the SAMP_OFF parameter of the RPC; H is the HEIGHT_OFF parameter of the RPC; and H1 is the HEIGHT_OFF parameter of the RPC plus 1;
S72, each image is treated as if taken by a frame camera, and the position and attitude matrix of the exposure are computed;
the image-point coordinates (r, c) are changed to (r+1, c), the ground coordinates (X2, Y2, Z2) corresponding to H and (X3, Y3, Z3) corresponding to H1 are computed, the observation vector and the exposure position are normalized, the flight vector is computed and normalized, and the cross product of the normalized exposure position and the normalized flight vector is taken to obtain a cross-product vector; the normalized exposure position, flight vector and cross-product vector constitute the exterior orientation elements of the panchromatic image; the exterior orientation elements of every panchromatic image are computed in the same way;
S73, the image coordinates of the three vertices of every triangle of the network are computed from the exterior orientation elements of the panchromatic image via the collinearity equations, and the image points lying inside each triangle are collected; an image point inside a triangle records that triangle's index, so after all triangles have been processed each image point may record several triangle indices; each pixel coordinate (r, c) corresponds to one image-space vector and one object-space vector;
from the projection center, the object-space vector of the pixel and the three image-point coordinates of each recorded triangle, the object-space coordinates corresponding to the pixel (r, c) are computed, and then the distance from those coordinates to the projection center; if the pixel (r, c) records M triangles, M distances are obtained, and the triangle with the smallest distance is visible while the others are occluded; the computation is completed for the images of all viewing angles in the same way, recording for each triangle whether it is visible in the panchromatic image of every viewing angle;
S74, supposing a triangle is visible in the images of K viewing angles, the object-space vectors corresponding to the K central pixels are computed, and the normal vector of the triangle is computed from its three vertex coordinates; the K angles between the triangle normal and the object-space vectors of the K central pixels are computed from the cosine rule, and the image corresponding to the vector with the smallest angle is taken as the optimal texture image of the triangle.
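The optimal-image selection of S74 reduces to comparing angles between the triangle normal and the per-view observation vectors, which can be sketched as follows (a minimal assumed illustration; the view vectors are taken as unit vectors pointing from the triangle toward each camera):

```python
import numpy as np

def best_texture(tri_vertices, view_vectors):
    """Pick the view whose observation vector makes the smallest angle with
    the triangle's outward normal.

    tri_vertices: (3, 3) array, one vertex per row (counter-clockwise order).
    view_vectors: (K, 3) array of unit vectors toward the K visible cameras.
    Returns the index of the best view.
    """
    a, b, c = tri_vertices
    n = np.cross(b - a, c - a)       # outward normal from vertex ordering
    n = n / np.linalg.norm(n)
    cosang = view_vectors @ n        # cosine of angle per view
    return int(np.argmax(cosang))    # smallest angle = largest cosine
```

Maximizing the cosine is equivalent to minimizing the angle of the cosine-rule comparison in S74, and it favors the most head-on, least foreshortened texture.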
Preferably, the triangle-mesh bounding-box information is expressed in Gauss 3-degree-zone projection coordinates;
the number of vertices is computed as the number of object-space vertices participating in the triangulated network multiplied by the number of images participating in texture construction;
the vertex information consists of the object-space coordinates in Gauss 3-degree-zone projection format and the image-point coordinates;
the number of triangles is the number of triangles in the constructed network;
the vertex indices of a triangle are the indices of its three vertices;
the texture image name is the name of the optimal image of the current triangle.
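The storage order listed above can be sketched with Python's struct module. The byte-level layout below (little-endian doubles, int32 counts, length-prefixed texture names) is an assumption for illustration; the patent fixes only the order of the records:

```python
import io
import struct

def pack_mesh(bbox, verts, tris, tex_names):
    """Pack, in order: bounding box, vertex count, vertices (object-space
    x, y, z plus image-point r, c), triangle count, and per-triangle vertex
    indices followed by a length-prefixed texture image name."""
    buf = io.BytesIO()
    buf.write(struct.pack("<6d", *bbox))        # xmin, ymin, zmin, xmax, ymax, zmax
    buf.write(struct.pack("<i", len(verts)))
    for x, y, z, r, c in verts:
        buf.write(struct.pack("<5d", x, y, z, r, c))
    buf.write(struct.pack("<i", len(tris)))
    for (i0, i1, i2), name in zip(tris, tex_names):
        nb = name.encode("utf-8")
        buf.write(struct.pack("<3i", i0, i1, i2))
        buf.write(struct.pack("<i", len(nb)) + nb)
    return buf.getvalue()
```

A reader would consume the same records in the same order, using each count field to size the following block.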
The beneficial effects of the invention are as follows: the method constructs a ground-surface three-dimensional MESH model from multi-view satellite stereoscopic images, building the geometric model from the triple-view stereo pairs acquired by satellites on three orbits and the texture model from two roll-angle images and two pitch-angle images, thereby producing a three-dimensional MESH model of multi-view high-resolution satellite imagery and addressing the core technical problems of such three-dimensional modeling in the prior art: block adjustment of multi-view satellite images, triple-view stereo matching to obtain point-cloud data, fusion of multi-orbit point-cloud data, triangulated-network construction, occlusion computation and texture mapping.
Drawings
FIG. 1 is a flow chart of a method of making an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the detailed description is presented by way of example only and is not intended to limit the invention.
As shown in fig. 1, this embodiment provides a three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic images. The method constructs the MESH model from three Pleiades triple-view stereoscopic image combinations and two large-pitch-angle images; the image of each viewing angle consists of a panchromatic image with a resolution of about 0.5 m and a multispectral image with a resolution of about 2 m. The panchromatic images of the three combinations, acquired on three orbits, are used to construct the geometric model; the three smallest-pitch-angle panchromatic images of the three combinations, the multispectral images of the same viewing angles, the two large-pitch-angle panchromatic images and their multispectral images are used to construct the texture model. The method specifically comprises the following steps.
1. Preparing basic image data:
the basic image data comprise three Pleiades triple-view stereoscopic image combinations and two large-tilt-angle images; each stereoscopic image combination contains an image with the smallest tilt angle, and the observation angles and resolutions of these smallest-tilt images differ from one another; each image comprises a panchromatic image and a multispectral image with the same observation time and coverage as the corresponding panchromatic image.
The correspondence between each image and its observation angle and resolution is shown in Table 1; for ease of reference, each image is referred to by its serial number, and each image includes a panchromatic image and a multispectral image with the same observation time and coverage.
Table 1: image and observation angle and resolution ratio corresponding relation table
2. Acquiring homonymous points of the images:
the panchromatic images of image 1-1 in triple-view stereoscopic image combination 1, image 2-1 in combination 2 and image 3-1 in combination 3 are each taken in turn as the master image; the master image is divided into rectangular blocks, feature points are extracted with the Harris operator, and homonymous points are matched on the panchromatic images of the remaining images by the correlation-coefficient method, yielding the homonymous points corresponding to each master image. Specifically, the method comprises the following steps.
2.1, the panchromatic image of image 1-1 is taken as the master image and divided uniformly into 11x11 rectangular blocks of equal size, and ni (i = 0, 1, ... 120, the block serial number) feature points are extracted from each block with the Harris operator; when ni > 10, 10 points are selected at random from the feature points extracted in that block, so that 0 <= ni <= 10.
Supposing N_1-1 feature points are extracted in total, for the j-th (j = 0, 1, ..., N_1-1) feature point pj_1-1, homonymous points are matched on the panchromatic images of the remaining images by the correlation-coefficient method to obtain the corresponding matching points (pj_1-2, pj_1-3, pj_2-1, pj_2-2, pj_2-3, pj_3-1, pj_3-2, pj_3-3, pj_4 and pj_5), which together with the feature point pj_1-1 form one connection point; the image-plane coordinates of pj_1-1 on its panchromatic image are (rj_1-1, cj_1-1) and those of the matching points are (rj_1-2, cj_1-2), (rj_1-3, cj_1-3), ..., (rj_5, cj_5). The connection point of every feature point is obtained in the same way.
2.2, the panchromatic image of image 2-1 is taken as the master image, and the method of 2.1 is applied to obtain the N_2-1 homonymous points corresponding to image 2-1;
2.3, the panchromatic image of image 3-1 is taken as the master image, and the method of 2.1 is applied to obtain the N_3-1 homonymous points corresponding to image 3-1.
3. Correcting the RPC parameters of the panchromatic images:
the RPC parameters of each panchromatic image are corrected using all homonymous points by alternately performing forward intersection and backward intersection. Specifically, the method comprises the following steps.
3.1, forward intersection means computing the ground-point coordinates Pg(L, B, H) corresponding to each homonymous point P from its several image-point coordinates and the RPC parameters of the corresponding panchromatic images, where L, B and H denote the longitude, latitude and altitude of the ground point. Once the ground coordinates of all homonymous points have been computed, each panchromatic image holds a set of intersection control points G(r, c, L, B, H) whose basic unit is the pair image point P(r, c) - ground point Pg(L, B, H).
3.2, the ground coordinates (L, B, H) of each intersection control point are projected through the RPC parameters of the corresponding panchromatic image to obtain a computed image-point coordinate Pc(r', c'); the deviation between Pc(r', c') and the image point P(r, c) is denoted Pe(re, ce), where re = r' - r and ce = c' - c. According to the distribution of Pe on each panchromatic image and an error threshold, set to 0.5 pixel in this method, points with large errors are rejected with the RANSAC algorithm. The rejected intersection control points are called gross-error points and the retained ones are called valid control points; the first three coefficients of the RPC numerator terms of each panchromatic image are corrected using its valid control points, and this correction process is the backward intersection.
3.3, forward and backward intersection are iterated until the absolute difference between two successive values of every RPC correction term is smaller than a preset threshold (e.g. 1e-15); the iteration then ends, and the corresponding RPC parameter terms of the multispectral image are replaced with the corrected RPC parameters of its panchromatic image. The process is repeated until the RPC parameters of the panchromatic and multispectral images of all views are corrected.
4. Acquiring a point cloud of a full-color image:
Quasi-epipolar line images of the two full-color images in the same three-view stereoscopic combination are produced by the projection-trajectory method, and two disparity maps are obtained with the SGM matching algorithm based on the two orders of the quasi-epipolar line images. A consistency check is applied to the disparity maps, and the disparity map of the full-color image is obtained after traversing the neighborhoods of the valid points of the disparity map and their marks. For each image point of the quasi-epipolar-line disparity map of one full-color image, the homonymous point on the quasi-epipolar line image of the other full-color image is obtained from the disparity value, the ground coordinates are obtained by forward intersection, and all ground coordinates form the point cloud corresponding to the two full-color images. Specifically:
4.1. Quasi-epipolar line images Epi1-1 and Epi1-2 of the image 1-1 and image 1-2 full-color images are produced by the projection-trajectory method; the disparity map D (1-1-1-2) of Epi1-1 is obtained with the SGM matching algorithm, the order of Epi1-1 and Epi1-2 is then swapped, and the disparity map D1 (1-2-1-1) of Epi1-2 is obtained with the SGM matching algorithm.
4.2. Let the disparity at row p, column q of D (1-1-1-2) be D (p, q), and the disparity at row p, column q + INT (D (p, q) + 0.5) of D1 (1-2-1-1) be D1 (p, q + INT (D (p, q) + 0.5)), where INT denotes taking the integer part;
let de = q - D1 (p, q + INT (D (p, q) + 0.5)). If |de| > 1, D (p, q) is set to the invalid value -9999, which completes the left-right matching consistency check of the disparity map. Next, the 3×3 neighborhood of every valid point of each disparity map is traversed: if more than 4 of the 9 disparities in the neighborhood are invalid values, the point is marked 0, otherwise 1, until all points of the disparity map are marked. Then the marks in the 3×3 neighborhood of every point of the disparity map are traversed, and if the number of 0 marks exceeds 4, the point's disparity is set to the invalid value -9999. This yields the disparity map of the full-color image 1-1.
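The consistency check and the neighborhood filtering of 4.2 can be sketched as below. The sketch assumes the common convention that the two SGM disparity maps carry opposite-signed disparities (d1 ≈ -d), so the check becomes |d + d1| ≤ 1; sign conventions vary between implementations.

```python
import numpy as np

INVALID = -9999.0

def lr_consistency(d, d1, max_diff=1.0):
    """Left-right consistency check: a disparity survives only if the
    reverse map, sampled at q + INT(d + 0.5), roughly cancels it."""
    out = d.copy()
    h, w = d.shape
    for p in range(h):
        for q in range(w):
            if d[p, q] == INVALID:
                continue
            q2 = q + int(d[p, q] + 0.5)          # q + INT(D(p,q)+0.5)
            if q2 < 0 or q2 >= w or d1[p, q2] == INVALID \
                    or abs(d[p, q] + d1[p, q2]) > max_diff:
                out[p, q] = INVALID
    return out

def speckle_filter(d):
    """3x3 neighborhood pass: mark a point 0 when more than 4 of the 9
    neighboring disparities are invalid, then invalidate every point whose
    3x3 neighborhood contains more than 4 zero-marks."""
    h, w = d.shape
    invalid = (d == INVALID)
    mark = np.ones((h, w), dtype=int)
    for p in range(1, h - 1):
        for q in range(1, w - 1):
            if invalid[p - 1:p + 2, q - 1:q + 2].sum() > 4:
                mark[p, q] = 0
    out = d.copy()
    for p in range(1, h - 1):
        for q in range(1, w - 1):
            if (mark[p - 1:p + 2, q - 1:q + 2] == 0).sum() > 4:
                out[p, q] = INVALID
    return out
```

Border handling (here: borders untouched) is an added assumption; the patent does not specify it.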
4.3. For each image point of the quasi-epipolar-line disparity map of the image 1-1 full-color image (Epi1-1), the homonymous point on the quasi-epipolar line image Epi1-2 of the image 1-2 full-color image is obtained from the disparity value, the ground coordinates are obtained by forward intersection, and all ground coordinates form the point cloud corresponding to the image 1-1 and image 1-2 full-color images.
4.4. Point clouds corresponding to the image 1-1 and image 1-3 full-color images, the image 2-1 and image 2-2 full-color images, the image 2-1 and image 2-3 full-color images, the image 3-1 and image 3-2 full-color images, and the image 3-1 and image 3-3 full-color images are acquired in turn in the manner of 4.1-4.3.
5. Counting a point cloud range:
The point cloud range is divided into rectangular grids and a statistical range is set; the median elevation of the points whose plane coordinates fall within the statistical range is taken as the elevation of the rectangular grid point, and if no point lies within the statistical range, the grid point is set to an invalid value.
Specifically, the point cloud range is divided into rectangular grids with a preset longitude-latitude resolution (Res = 0.00001 degrees). With minimum longitude L0, maximum longitude L1, minimum latitude B0 and maximum latitude B1, the grid height (latitude direction) is h = INT ((B1 - B0)/0.00001) and the width (longitude direction) is w = INT ((L1 - L0)/0.00001).
The coordinates of each rectangular grid point Dg (i, j) (i denotes the row, ordered from bottom to top; j denotes the column, ordered from left to right) are obtained from L (i, j) = L0 + j × Res and B (i, j) = B0 + i × Res. A longitude range [L - Res × 0.5, L + Res × 0.5] and a latitude range [B - Res × 0.5, B + Res × 0.5] are determined from the grid-point coordinates and the preset resolution and taken as the statistical range; the median elevation Hm of the points whose plane coordinates fall within the statistical range is taken as the elevation H (i, j) of the grid point Dg (i, j). If there is no point within the statistical range, the elevation of the grid point Dg (i, j) is set to the invalid value -9999. Elevation values of all grid points are obtained by the same method.
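The gridding of step 5 can be sketched as below; assigning each point to the nearest grid node reproduces the centered ±0.5 × Res statistical window.

```python
import numpy as np
from collections import defaultdict

RES = 0.00001   # preset grid resolution of step 5, in degrees
INVALID = -9999.0

def rasterize_point_cloud(points, l0, b0, w, h, res=RES):
    """Grid an iterable of (L, B, H) points: each point goes to the nearest
    grid node (the +/-0.5*Res window around it), each node takes the median
    elevation of its points, and empty nodes get the invalid value."""
    grid = np.full((h, w), INVALID)
    cells = defaultdict(list)
    for L, B, H in points:
        j = int(round((L - l0) / res))   # nearest node column
        i = int(round((B - b0) / res))   # nearest node row (bottom to top)
        if 0 <= i < h and 0 <= j < w:
            cells[(i, j)].append(H)
    for (i, j), hs in cells.items():
        grid[i, j] = float(np.median(hs))
    return grid
```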
6. Screening grid points for constructing a triangular net:
The gradient of each rectangular grid point is calculated, and the rectangular grid points whose gradient meets the threshold are screened out for constructing the triangulation; the plane coordinates of these grid points are scaled up and, together with the elevation values, form the vertices used to construct the Delaunay triangulation TIN.
Specifically, for each grid point (i, j) a gradient dH (i, j) is calculated: dH (i, j) = MAX (ABS (H (i, j) - H (i+1, j)), ABS (H (i, j) - H (i, j+1))), where MAX takes the larger of the two values and ABS the absolute value. If dH (i, j) is greater than a threshold (1 meter in the present invention), the grid point is used to construct the triangulation; otherwise it is discarded. The plane coordinates (L (i, j) and B (i, j)) of all grid points meeting the threshold are multiplied by 100000 and, together with the elevations H (i, j), form Nt vertices denoted XYZ_i (i = 0, 1, …, Nt-1). The Delaunay triangulation TIN constructed from them stores Tn triangles tr_i (i = 0, 1, …, Tn-1), and each triangle records the sequence numbers of its three vertices nd_i_0, nd_i_1, nd_i_2.
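The vertex screening of step 6 can be sketched as below. Skipping cells whose right or upper neighbor is invalid is an added assumption; the patent does not say how invalid elevations enter the gradient.

```python
import numpy as np

INVALID = -9999.0

def screen_grid_points(H, L0, B0, res=0.00001, thresh=1.0):
    """Keep grid points with dH(i,j) = MAX(ABS(H(i,j)-H(i+1,j)),
    ABS(H(i,j)-H(i,j+1))) above the threshold (1 m); plane coordinates are
    multiplied by 100000 and paired with the elevation, as in step 6."""
    verts = []
    h, w = H.shape
    for i in range(h - 1):
        for j in range(w - 1):
            if INVALID in (H[i, j], H[i + 1, j], H[i, j + 1]):
                continue                      # assumption: skip invalid cells
            dh = max(abs(H[i, j] - H[i + 1, j]), abs(H[i, j] - H[i, j + 1]))
            if dh > thresh:
                verts.append(((L0 + j * res) * 100000,
                              (B0 + i * res) * 100000,
                              H[i, j]))
    return np.array(verts)
```

The returned vertices could then be triangulated on their plane coordinates, e.g. with `scipy.spatial.Delaunay`, to obtain the TIN.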
7. Occlusion detection to determine the visible image of each triangle of the triangle mesh:
The observation vector is calculated from the RPC of the full-color image; rough exterior orientation elements are then calculated from the RPC according to the observation vector; the image-point coordinates of the three vertices of each triangle of the triangulation are calculated from the exterior orientation elements according to the collinearity equation; based on these image-point coordinates, the calculation is carried out for the images of all view angles, yielding for each triangle the information of whether each full-color image of each view angle is visible, from which the optimal texture image of each triangle is determined.
Here, visible means that, under the observation angle of a given image, no occluding triangle lies on the line between the perspective center of the observing camera and any point inside the triangle; this process is called occlusion detection. Since satellite images are typically acquired by push-broom imaging and no rigorous imaging model is provided, the method of the invention performs occlusion detection by constructing a virtual frame camera. If In images participate in occlusion detection, it must be determined for each triangle whether each of the In images is visible, recorded as vfg_i_j (i = 0, 1, …, Tn-1 denotes the triangle number; j = 0, 1, …, In-1 denotes the image number). Occlusion detection specifically includes the following.
7.1. Calculating the observation vector vwxl_i (i = 0, 1, 2) from the full-color image RPC:
Two plane coordinates (V, U) and (V1, U1) are calculated from the image-point coordinates (r, c) of the full-color image and two object-space elevation surfaces H and H1; the coordinates (V, U, H) and (V1, U1, H1) are converted into geocentric rectangular coordinates (X, Y, Z) and (X1, Y1, Z1) respectively, and the observation vector is calculated from the two geocentric rectangular coordinates. Here r is the LINE_OFF parameter of the RPC; c is the SAMP_OFF parameter of the RPC; H is the HEIGHT_OFF parameter of the RPC; H1 is the HEIGHT_OFF parameter of the RPC plus 1.
The three components of the observation vector are calculated as:
vwxl0_0=(X1-X),vwxl0_1=(Y1-Y),vwxl0_2=(Z1-Z);
the vector length is:
Len = sqrt(vwxl0_0² + vwxl0_1² + vwxl0_2²);
the normalized observation vector is:
vwxl_0 = vwxl0_0/Len; vwxl_1 = vwxl0_1/Len; vwxl_2 = vwxl0_2/Len.
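The conversion to geocentric rectangular coordinates and the normalization of 7.1 can be sketched as below; the WGS-84 ellipsoid parameters are an assumption, as the patent does not name the ellipsoid.

```python
import math

def geodetic_to_ecef(L, B, H, a=6378137.0, f=1 / 298.257223563):
    """Geodetic (lon L, lat B in degrees, height H in meters) to geocentric
    rectangular coordinates; WGS-84 parameters assumed."""
    lon, lat = math.radians(L), math.radians(B)
    e2 = f * (2.0 - f)
    n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)
    x = (n + H) * math.cos(lat) * math.cos(lon)
    y = (n + H) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - e2) + H) * math.sin(lat)
    return (x, y, z)

def observation_vector(p, p1):
    """Normalized observation vector vwxl from the point on the H surface to
    the point on the H+1 surface, as in step 7.1."""
    v = [b - a for a, b in zip(p, p1)]
    length = math.sqrt(sum(c * c for c in v))   # Len
    return [c / length for c in v]
```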
7.2. Calculating rough exterior orientation elements from the full-color image RPC:
Each image is regarded as an image taken by a frame camera, and the position O_i (i = 0, 1, 2) and attitude matrix R_i (i = 0, 1, …, 8) of the exposure are calculated. The exposure position is:
O_0 = V + vwxl_0 × 500000, O_1 = U + vwxl_1 × 500000, O_2 = H + vwxl_2 × 500000.
Taking r in 7.1 as r + 1, the ground coordinates (X2, Y2, Z2) corresponding to H and (X3, Y3, Z3) corresponding to H1 are calculated, giving a second normalized observation vector vwxl'_i (i = 0, 1, 2) and exposure position O'_i (i = 0, 1, 2). The flight vector v_i (i = 0, 1, 2) = O'_i - O_i is calculated and normalized to v'_i (i = 0, 1, 2). The normalized exposure position and normalized flight vector are 1×3 matrices, and their cross product gives the cross-product vector q_i (i = 0, 1, 2). The nine parameters of the matrix R are O'_i (i = 0, 1, 2), v'_i (i = 0, 1, 2) and q_i (i = 0, 1, 2); that is, the normalized exposure position, flight vector and cross-product vector are taken as the exterior orientation elements corresponding to the full-color image. The exterior orientation elements corresponding to each full-color image are calculated by the same method.
7.3. The image-point coordinates of the three vertices of each triangle of the triangulation are calculated from the exterior orientation elements of the full-color image according to the collinearity equation, the image points lying inside the triangle are counted, and each such image point records the triangle's sequence number. After all triangles have been processed, each image point has recorded several triangle sequence numbers. For each pixel coordinate (r, c), the corresponding image-space vector is XL (r, c, -1), and the corresponding object-space vector is XL_i = R × XL (i = 0, 1, 2);
according to the projection center O_i (i = 0, 1, 2), the object-space vector XL_i (i = 0, 1, 2) corresponding to the pixel coordinate and the three image-point coordinates of each recorded triangle, the object-space coordinates corresponding to the pixel coordinate (r, c) are calculated, and then the distance between those coordinates and the projection center. If the pixel coordinate (r, c) corresponds to M triangles, M distances are obtained; the triangle with the smallest distance is a visible triangle and the others are invisible triangles. The same method completes the calculation for the images of all view angles, and whether each triangle is visible to each full-color image of each view angle is recorded in vfg_i_j (i = 0, 1, …, Tn-1 denotes the triangle number; j = 0, 1, …, In-1 denotes the image number).
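The smallest-distance visibility test of 7.3 needs the distance from the projection center along each pixel ray to every candidate triangle. A standard ray-triangle intersection (Möller-Trumbore, not named in the patent) can supply it:

```python
def ray_triangle_distance(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore intersection: distance t along the ray orig + t*d to
    triangle (v0, v1, v2), or None if there is no hit. In step 7.3 the
    candidate triangle with the smallest t is the visible one."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    cross = lambda a, b: [a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0]]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle plane
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) / det
    if u < 0.0 or u > 1.0:
        return None                      # hit outside the triangle
    q = cross(t_vec, e1)
    v = dot(d, q) / det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) / det
    return t if t > eps else None        # hits behind the center are ignored
```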
7.4, calculating the optimal texture image of each triangle:
Assuming the i-th triangle (i = 0, 1, …, Tn-1) is visible to the images of K view angles, the object-space vectors XL0_m_j (m = 0, 1, 2; j = 0, 1, …, K-1) corresponding to the center pixels of the K view angles are calculated, and the normal vector of the triangle is calculated from its three vertex coordinates. The K included angles between the triangle's normal vector xl_abc and the object-space vectors XL0_m_j (m = 0, 1, 2; j = 0, 1, …, K-1) are calculated by the cosine rule, and the image corresponding to the vector with the smallest included angle is determined as the optimal image of the triangle.
The normal vector is calculated in the following way:
Assuming the three vertices are a, b and c, the normalized vector xl_ab from a to b and the normalized vector xl_ac from a to c are calculated, and their cross product xl_abc is the normal vector of triangle i.
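The normal-vector and best-view selection of 7.4 can be sketched as below; taking the absolute cosine so the result does not depend on the normal's orientation is an added assumption.

```python
import numpy as np

def best_texture_view(a, b, c, view_vectors):
    """Step 7.4 sketch: normal via xl_ab x xl_ac; the view whose object
    vector makes the smallest angle with the normal (largest |cos|) wins."""
    a, b, c = map(np.asarray, (a, b, c))
    xl_ab = (b - a) / np.linalg.norm(b - a)       # normalized edge a->b
    xl_ac = (c - a) / np.linalg.norm(c - a)       # normalized edge a->c
    xl_abc = np.cross(xl_ab, xl_ac)               # triangle normal
    xl_abc /= np.linalg.norm(xl_abc)
    cosines = [abs(np.dot(xl_abc, v) / np.linalg.norm(v)) for v in view_vectors]
    return int(np.argmax(cosines))                # largest cosine = smallest angle
```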
8. Acquiring RGB three-band color images:
The full-color image of each view-angle image is fused with its multispectral image by the Pansharp algorithm to obtain the RGB three-band color image.
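The patent does not spell out the "Pansharp" fusion; as a hedged stand-in, a Brovey-transform pan-sharpening (a common choice, not necessarily the one used) looks like this:

```python
import numpy as np

def brovey_pansharpen(pan, ms):
    """Brovey-transform pan-sharpening stand-in for step 8.
    pan: (H, W) panchromatic band; ms: (3, H, W) RGB multispectral bands
    resampled to pan resolution. Each band is scaled by pan / intensity."""
    intensity = ms.mean(axis=0) + 1e-12   # avoid division by zero
    return ms * (pan / intensity)
```

The scaling preserves band ratios while forcing the fused intensity to match the full-color band.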
9. Storing the three-dimensional format results:
The triangulation bounding-box information, number of vertices, vertex information, number of triangles, triangle vertex serial numbers and texture-image names are stored in turn. Specifically:
(1) Triangulation bounding-box information: expressed in Gauss 3-degree-zone projection coordinates, with east-west as the x axis and north-south as the y axis: minimum X coordinate minX, minimum Y coordinate minY, minimum elevation minH, maximum X coordinate maxX, maximum Y coordinate maxY, maximum elevation maxH.
(2) Number of vertices: calculated as the number Nt of object-space vertices participating in the triangulation multiplied by the number Nm of images participating in texture construction.
(3) Vertex information: Nt × Nm vertex records are stored in turn. The image-plane coordinates (rj, cj) of the i-th (i = 0, 1, …, Nt-1) object-space vertex on the j-th texture image are calculated through the RPC of the j-th (j = 0, 1, …, Nm-1) full-color image; the stored vertex information is the object-space coordinates (X, Y, H) in Gauss 3-degree-zone projection format and the image-point coordinates (rj, cj).
(4) Number of triangles: the number Tn of triangles formed by constructing the triangulation.
(5) Triangle vertex serial numbers: the vertex serial numbers corresponding to the three vertices of each triangle Ti (i = 0, 1, …, Tn-1), combined with the serial number m (m = 0, 1, …, Nm-1) of the optimal image calculated in 7.4.
(6) Texture image name: the best image name corresponding to the current triangle.
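A hypothetical binary layout following the step-9 record order is sketched below. The field types, byte order, and the per-vertex (X, Y, H, r, c) tuple are assumptions; the patent fixes only the order of the records.

```python
import struct
import io

def write_mesh(stream, bbox, verts, tris, tex_names):
    """Serialize in the record order of step 9: bounding box, vertex count,
    vertices, triangle count, triangle vertex serial numbers, texture names."""
    stream.write(struct.pack("<6d", *bbox))          # minX minY minH maxX maxY maxH
    stream.write(struct.pack("<i", len(verts)))      # Nt * Nm vertex records
    for v in verts:
        stream.write(struct.pack("<5d", *v))         # X, Y, H, r, c
    stream.write(struct.pack("<i", len(tris)))       # Tn
    for t in tris:
        stream.write(struct.pack("<3i", *t))         # three vertex serial numbers
    for name in tex_names:
        stream.write(name.encode("utf-8") + b"\n")   # best image name per triangle

buf = io.BytesIO()
write_mesh(buf,
           (0.0, 0.0, 0.0, 1.0, 1.0, 1.0),           # bounding box
           [(0.0, 0.0, 0.0, 10.0, 20.0)],            # one vertex record
           [(0, 0, 0)],                              # one triangle
           ["img1-1.tif"])                           # hypothetical texture name
```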
By adopting the technical scheme disclosed by the invention, the following beneficial effects are obtained:
The invention provides a three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic images. A surface three-dimensional MESH model is constructed from multi-view satellite stereoscopic images: the geometric model is built from three-view pairs acquired by the satellite on three orbits, and the texture model from two roll-slewed images and two pitch-slewed images. This realizes three-dimensional MESH model construction from multi-view high-resolution satellite imagery and solves core technical problems of three-dimensional modeling in the prior art, such as multi-view satellite image block adjustment, point-cloud acquisition by three-view stereoscopic matching, multi-orbit point-cloud data fusion, triangulation construction, occlusion calculation and texture mapping.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and adaptations are also intended to fall within the scope of the present invention.

Claims (10)

1. A three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic images, characterized by comprising the following steps:
s1, preparing basic image data:
the basic image data comprise three Pleiades three-dimensional image combinations and two large-dip-angle images; each three-dimensional image combination comprises an image with the smallest inclination angle, and the observation angles and the resolutions of the images with the smallest inclination angles are different from each other; each image comprises a full-color image and a multispectral image with the same observation time and scope as the corresponding full-color image;
S2, acquiring homonymous points of the image:
the full-color image of the image 1-1 in the three-view stereoscopic image combination 1, the full-color image of the image 2-1 in the three-view stereoscopic image combination 2 and the full-color image of the image 3-1 in the three-view stereoscopic image combination 3 are respectively used as main images, the main images are divided into rectangular images, characteristic points of the rectangular images are extracted by utilizing Harris operators, and homonymous points are matched on the full-color images of the rest images through a correlation coefficient method based on the characteristic points, so that the homonymous points corresponding to the images are obtained;
s3, correcting RPC parameters of the full-color image:
the RPC parameters of each full-color image are corrected using all homonymous points by alternately executing forward intersection and backward intersection;
s4, acquiring a point cloud of the full-color image:
quasi-epipolar line images of the two full-color images in the same three-dimensional image combination are produced by the projection-trajectory method, and two disparity maps are obtained with the SGM matching algorithm based on the two orders of the quasi-epipolar line images; a consistency check is applied to the disparity maps, and the disparity map of the full-color image is obtained after traversing the neighborhoods of the valid points of the disparity map and their marks; for each image point of the quasi-epipolar-line disparity map of one full-color image, the homonymous point on the quasi-epipolar line image of the other full-color image is obtained from the disparity value, the ground coordinates are obtained by forward intersection, and all ground coordinates form the point cloud corresponding to the two full-color images;
S5, counting a point cloud range:
dividing a point cloud range into rectangular grids, setting a statistical range, counting the elevation median value of a plane coordinate in the statistical range, taking the elevation median value as the elevation of the rectangular grid points, and setting the rectangular grid points as invalid values if no point exists in the statistical range;
s6, screening grid points for constructing a triangular net:
calculating the gradient of each rectangular grid point, screening out rectangular grid points meeting a threshold according to the gradient, constructing a triangular network, expanding the plane coordinates of the rectangular grid points meeting the threshold, and forming a vertex together with an elevation value to construct a Delaunay triangular network TIN;
s7, shielding detection is carried out to determine the visible image of each triangle of the triangle mesh:
calculating an observation vector through the RPC of the full-color image, calculating rough exterior orientation elements through the RPC of the full-color image according to the observation vector, calculating the image-point coordinates of the three vertices of each triangle of the triangulation through the exterior orientation elements according to the collinearity equation, carrying out the calculation for the images of all view angles based on the image-point coordinates, acquiring for each triangle the information of whether each full-color image of each view angle is visible, and determining the optimal texture image of each triangle according to this information;
S8, acquiring RGB three-band color images:
the full-color images of all the visual angle images and the multispectral images are fused through a Pansharp algorithm to obtain RGB three-band color images;
s9, storing the three-dimensional format achievements:
and sequentially storing triangle mesh bounding box information, the number of vertexes, vertex information, the number of triangles, the vertex serial numbers of the triangles and the names of texture images.
2. The method for manufacturing the three-dimensional MESH model based on the high-resolution satellite stereoscopic image according to claim 1, wherein the method comprises the following steps of: the pitch angle, roll angle and resolution of the three small-pitch images and the two large-pitch images in each three-view stereoscopic image combination are as follows,
in the triple-view stereoscopic image combination 1:
the pitch angle of the image 1-1 is 2.699 degrees, the lateral swing angle is-1.108 degrees, and the resolution is 0.498 m;
the pitch angle of the images 1-2 is-16.268 degrees, the lateral swing angle is 3.636 degrees, and the resolution is 0.534 m;
the pitch angle of the images 1-3 is 10.871 degrees, the lateral swing angle is-3.100 degrees, and the resolution is 0.514 m;
in the triple-view stereoscopic image combination 2:
the pitch angle of the image 2-1 is 7.381 degrees, the lateral swing angle is 23.162 degrees, and the resolution is 0.554 m;
the pitch angle of the image 2-2 is-5.652 degrees, the lateral swing angle is 27.328 degrees, and the resolution is 0.605 m;
The pitch angle of the image 2-3 is 13.666 degrees, the lateral swing angle is 21.519 degrees, and the resolution is 0.553 meters;
three-dimensional image combination 3:
the pitch angle of the image 3-1 is-5.905 degrees, the lateral swing angle is-25.182 degrees, and the resolution is 0.567 m;
the pitch angle of the image 3-2 is 7.401 degrees, the lateral swing angle is-28.815 degrees, and the resolution is 0.627 m;
the pitch angle of the image 3-3 is-12.208 degrees, the side swing angle is-23.884 degrees, and the resolution is 0.563 m;
large tilt angle image 1:
the pitch angle of the image 4 is-28.734 degrees, the lateral swing angle is 8.038 degrees, and the resolution is 0.633 m;
large tilt angle image 2:
the pitch angle of the image 5 is 30.618 degrees, the yaw angle is 5.099 degrees, and the resolution is 0.633 meter.
3. The method for manufacturing the three-dimensional MESH model based on the high-resolution satellite stereoscopic image according to claim 2, wherein the method comprises the following steps of: step S2 specifically includes the following,
s21, taking the full-color image of the image 1-1 as a main image, uniformly dividing the main image into 11x11 rectangular images with the same size, and extracting characteristic points of each rectangular image through a Harris operator;
for one of the characteristic points, matching homonymous points on full-color images of the rest images by a correlation coefficient method to obtain corresponding matching points, and forming a connecting point by the matching points and the characteristic points; the same method obtains the corresponding connection point of each characteristic point;
S22, taking the full-color image of the image 2-1 as a main image, and acquiring a homonymy point corresponding to the image 2-1 by adopting the method in S21;
s23, taking the full-color image of the image 3-1 as a main image, and acquiring a homonymy point corresponding to the image 3-1 by adopting the method in S21.
4. The method for manufacturing the three-dimensional MESH model based on the high-resolution satellite stereoscopic image according to claim 1, characterized in that: in step S3, forward intersection and backward intersection are carried out iteratively until the absolute difference between two successive values of every RPC parameter correction term is smaller than a preset threshold; the iteration then ends, and the corrected full-color-image RPC parameters replace the RPC parameters of the multispectral images corresponding to the full-color images; the process is repeated to complete the correction of the RPC parameters of the full-color images and corresponding multispectral images of all images.
5. The method for making the three-dimensional MESH model based on the high-resolution satellite stereoscopic image according to claim 4, wherein the method comprises the following steps of: in the step S3 of the process,
the forward intersection specifically means that the coordinates of the ground point corresponding to each homonymous point P are calculated from the coordinates of its several image points and the RPC parameters of the corresponding full-color images; after the ground coordinates of all homonymous points have been calculated, each full-color image has a group of intersection control points whose basic unit is the image point - ground point pair;
the backward intersection specifically means that the ground-point coordinates of each intersection control point are projected through the corresponding full-color image RPC parameters to obtain a calculated image-point coordinate, which deviates from the measured image point; based on the consistency of the deviation distribution of each full-color image and an error threshold, points with large errors are removed using the RANSAC algorithm; the rejected intersection control points are called gross-error points, and the retained intersection control points are called effective control points; the first three coefficients of the RPC numerator terms of each full-color image are corrected using its effective control points.
6. The method for manufacturing the three-dimensional MESH model based on the high-resolution satellite stereoscopic image according to claim 2, wherein the method comprises the following steps of: step S4 specifically includes the following,
s41, utilizing a projection trajectory method to manufacture quasi-epipolar line images Epi 1-1 and Epi 1-2 of full-color images of an image 1-1 and full-color images of an image 1-2, acquiring a parallax image D (1-1-1-2) of the Epi 1-1 through an SGM matching algorithm, exchanging the sequence of the Epi 1-1 and the Epi 1-2, and acquiring a parallax image D1 (1-2-1-1) of the Epi 1-2 through the SGM matching algorithm again;
S42, letting the disparity at row p, column q of D (1-1-1-2) be D (p, q), and the disparity at row p, column q + INT (D (p, q) + 0.5) of D1 (1-2-1-1) be D1 (p, q + INT (D (p, q) + 0.5)), where INT denotes taking the integer part;
letting de = q - D1 (p, q + INT (D (p, q) + 0.5)); if |de| > 1, D (p, q) is set to an invalid value, completing the left-right matching consistency check of the disparity map; then the 3×3 neighborhood of every valid point of each disparity map is traversed, and if more than 4 of the 9 disparities in the neighborhood are invalid values, the point is marked 0, otherwise 1, until all points of the disparity map are marked; then the marks in the 3×3 neighborhood of every point of the disparity map are traversed, and if the number of 0 marks exceeds 4, the point's disparity is set to an invalid value; thereby the disparity map of the full-color image 1-1 is obtained;
s43, obtaining homonymous points on the Epi 1-2 from each image point of the parallax map of the Epi 1-1 according to the parallax value, obtaining ground coordinates through front intersection, and forming point clouds corresponding to the full-color image of the image 1-1 and the full-color image of the image 1-2 by all the ground coordinates;
s44, sequentially acquiring point clouds corresponding to the image 1-1 full-color image and the image 1-3 full-color image, the image 2-1 full-color image and the image 2-2 full-color image, the image 2-1 full-color image and the image 2-3 full-color image, the image 3-1 full-color image and the image 3-2 full-color image, and the image 3-1 full-color image and the image 3-3 full-color image by adopting the mode of S41-S43.
7. The method for manufacturing the three-dimensional MESH model based on the high-resolution satellite stereoscopic image according to claim 1, wherein the method comprises the following steps of: step S5, dividing the point cloud range into rectangular grids with longitude and latitude directions with resolutions being preset resolutions, obtaining coordinates of each rectangular grid point, and determining the longitude range and the latitude range according to the coordinates of the grid point and the preset resolution; taking the longitude range and the latitude range as statistical ranges, counting the elevation median value of the plane coordinates in the statistical ranges, and taking the elevation median value as the elevation of the grid points; if no point exists in the statistical range, setting the elevation of the grid point as an invalid value; the same method obtains elevation values of all grid points.
8. The method for making the three-dimensional MESH model based on the high-resolution satellite stereoscopic image according to claim 7, characterized in that: step S6 is specifically to calculate the gradient of each grid point; if the gradient is greater than a threshold, the grid point is used for constructing the triangulation, otherwise it is discarded; the plane coordinates of the grid points meeting the threshold are multiplied by 100000 and, together with the elevations, form Nt vertices to construct the Delaunay triangulation TIN, in which Tn triangles are stored, each triangle recording its three vertex sequence numbers.
9. The method for manufacturing the three-dimensional MESH model based on the high-resolution satellite stereoscopic image according to claim 1, wherein the method comprises the following steps of: step S7 specifically includes the steps of,
S71, calculating two plane coordinates (V, U) and (V1, U1) from the image-point coordinates (r, c) of a full-color image and two object-space elevation surfaces H and H1, converting the coordinates (V, U, H) and (V1, U1, H1) into geocentric rectangular coordinates (X, Y, Z) and (X1, Y1, Z1) respectively, and calculating the observation vector from the two geocentric rectangular coordinates;
wherein r is the LINE_OFF parameter of the RPC; c is the SAMP_OFF parameter of the RPC; H is the HEIGHT_OFF parameter of the RPC; H1 is the HEIGHT_OFF parameter of the RPC plus 1;
S72, regarding each image as an image shot by a frame type camera, and calculating a position and an attitude matrix of the shot image;
changing the image-point coordinates (r, c) into (r+1, c), calculating the ground coordinates (X2, Y2, Z2) corresponding to H and the ground coordinates (X3, Y3, Z3) corresponding to H1, normalizing the observation vector and the exposure position, calculating and normalizing the flight vector, and taking the cross product of the normalized exposure position and the normalized flight vector to obtain a cross-product vector; the normalized exposure position, flight vector and cross-product vector are the exterior orientation elements corresponding to the full-color image; the exterior orientation elements corresponding to each full-color image are calculated by the same method;
s73, calculating coordinates of three image points of three vertexes of each triangle of the triangle network through external azimuth elements of the full-color image according to a collineation equation, and counting the image points positioned in the triangle; the image point in the triangle records the triangle sequence number; after all triangles are calculated, each image point records a plurality of triangle sequence numbers; each pixel coordinate (r, c) corresponds to an image space vector and to an object space vector;
according to the projection center, the object space vector corresponding to the pixel coordinates and the three image point coordinates of the recorded triangle, calculating the object space coordinates corresponding to the pixel coordinates (r, c), and further calculating the distance between the coordinates and the projection center; assuming that the pixel coordinates (r, c) correspond to M triangles, then M distances can be obtained, and the triangle with the smallest distance is a visible triangle, otherwise, the triangle is an invisible triangle; the same method completes the calculation of all visual angle images, and records the information whether each triangle is visible to all visual angle full-color images;
S74, assuming a certain triangle is visible in the images of K viewing angles, calculating the object-space vectors corresponding to the central pixels of the K views, and calculating the normal vector of the triangle from its three vertex coordinates; calculating, by the law of cosines, the K angles between the triangle's normal vector and the object-space vectors of the K central pixels, and determining the image corresponding to the vector with the smallest angle as the best image for the triangle.
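The best-image selection in S74 can be sketched as follows; the angle is computed here via the arccosine of the dot product of unit vectors, which is equivalent to the law-of-cosines formulation in the claim (names are illustrative):

```python
import numpy as np

def best_texture_image(tri_vertices, view_vectors):
    """Pick, among the K views in which a triangle is visible, the index of
    the image whose central-pixel object-space vector makes the smallest
    angle with the triangle normal. `view_vectors` is a (K, 3) array."""
    v0, v1, v2 = np.asarray(tri_vertices, float)
    n = np.cross(v1 - v0, v2 - v0)            # triangle normal from vertices
    n /= np.linalg.norm(n)
    vv = np.asarray(view_vectors, float)
    vv = vv / np.linalg.norm(vv, axis=1, keepdims=True)
    # arccos of the dot product of unit vectors gives the included angle
    angles = np.arccos(np.clip(vv @ n, -1.0, 1.0))
    return int(np.argmin(angles))
```

The most face-on view is preferred, which minimizes the perspective distortion of the texture patch mapped onto the triangle.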
10. The method for producing a three-dimensional MESH model based on high-resolution satellite stereoscopic images according to claim 1, characterized in that: the bounding box information of the triangulation is expressed in Gauss-Krüger 3-degree zone projection coordinates;
the number of vertices is calculated as the number of object-space vertices participating in constructing the triangulation multiplied by the number of images participating in constructing the texture;
the vertex information consists of the object-space coordinates in Gauss-Krüger 3-degree zone projection format and the image point coordinates;
the number of triangles is the number of triangles formed in constructing the triangulation;
the vertex sequence numbers of a triangle are the sequence numbers corresponding to its three vertices;
the texture image name is the name of the best image corresponding to the current triangle.
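The fields enumerated in claim 10 could be mirrored by a container such as the following sketch (the field names and types are assumptions; the claim prescribes only the content of each field):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MeshFile:
    """Illustrative record layout for the MESH model file of claim 10."""
    # Bounding box of the triangulation, in Gauss-Krueger 3-degree zone
    # projection coordinates (xmin, ymin, zmin, xmax, ymax, zmax)
    bounding_box: Tuple[float, float, float, float, float, float]
    # Per the claim, the vertex count equals the number of object-space
    # vertices times the number of texture images; each entry stores the
    # object-space coordinates (X, Y, Z) and the image point (r, c)
    vertices: List[Tuple[float, float, float, float, float]]
    # Three vertex sequence numbers per triangle
    triangles: List[Tuple[int, int, int]]
    # Best image name per triangle
    texture_names: List[str]

    @property
    def vertex_count(self) -> int:
        return len(self.vertices)

    @property
    def triangle_count(self) -> int:
        return len(self.triangles)
```

A reader of the file can then resolve each triangle's texture by looking up its three vertices' image point coordinates in the image named in `texture_names`.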
CN202311581323.XA 2023-11-24 2023-11-24 Three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic image Active CN117576343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311581323.XA CN117576343B (en) 2023-11-24 2023-11-24 Three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic image


Publications (2)

Publication Number Publication Date
CN117576343A true CN117576343A (en) 2024-02-20
CN117576343B CN117576343B (en) 2024-04-30

Family

ID=89887818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311581323.XA Active CN117576343B (en) 2023-11-24 2023-11-24 Three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic image

Country Status (1)

Country Link
CN (1) CN117576343B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090226114A1 (en) * 2008-03-07 2009-09-10 Korea Aerospace Research Institute Satellite image fusion method and system
CN107862744A (en) * 2017-09-28 2018-03-30 深圳万图科技有限公司 Aviation image three-dimensional modeling method and Related product
CN110826407A (en) * 2019-10-09 2020-02-21 电子科技大学 Stereo matching method for high-resolution satellite generalized image pairs
CN116468869A (en) * 2023-06-20 2023-07-21 中色蓝图科技股份有限公司 Live-action three-dimensional modeling method, equipment and medium based on remote sensing satellite image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAO ZHENG et al.: "Color Difference Optimization Method for Multisource Remote Sensing Image Processing", IOP Conf. Series: Earth and Environmental Science, 31 December 2020 (2020-12-31), pages 1 - 11 *
XUE Bai et al.: "Matching methods for different remote sensing images under multiple constraints", Remote Sensing for Land and Resources, vol. 32, no. 3, 30 September 2020 (2020-09-30), pages 49 - 54 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117928494A (en) * 2024-03-18 2024-04-26 中国人民解放军战略支援部队航天工程大学 Geometric positioning measurement method, system and equipment for optical satellite slice images
CN117928494B (en) * 2024-03-18 2024-05-24 中国人民解放军战略支援部队航天工程大学 Geometric positioning measurement method, system and equipment for optical satellite slice images

Also Published As

Publication number Publication date
CN117576343B (en) 2024-04-30

Similar Documents

Publication Publication Date Title
CN103744086B (en) A kind of high registration accuracy method of ground laser radar and close-range photogrammetry data
Al-Rousan et al. Automated DEM extraction and orthoimage generation from SPOT level 1B imagery
CN108765298A (en) Unmanned plane image split-joint method based on three-dimensional reconstruction and system
CN103198524B (en) A kind of three-dimensional reconstruction method for large-scale outdoor scene
CN109242954B (en) Multi-view three-dimensional human body reconstruction method based on template deformation
Pepe et al. Techniques, tools, platforms and algorithms in close range photogrammetry in building 3D model and 2D representation of objects and complex architectures
CN108168521A (en) One kind realizes landscape three-dimensional visualization method based on unmanned plane
CN106204443A (en) A kind of panorama UAS based on the multiplexing of many mesh
CN117576343B (en) Three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic image
Barazzetti et al. True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach
JP2003519421A (en) Method for processing passive volume image of arbitrary aspect
CN113358091B (en) Method for producing digital elevation model DEM (digital elevation model) by using three-linear array three-dimensional satellite image
KR102081332B1 (en) Equipment for confirming the error of image by overlapping of orthoimage
CN104363438A (en) Panoramic three-dimensional image manufacturing method
CN114241125B (en) Multi-view satellite image-based fine three-dimensional modeling method and system
CN106920276A (en) A kind of three-dimensional rebuilding method and system
KR20120041819A (en) Method for generating 3-d high resolution ndvi urban model
CN102519436A (en) Chang'e-1 (CE-1) stereo camera and laser altimeter data combined adjustment method
CN113566793A (en) True orthoimage generation method and device based on unmanned aerial vehicle oblique image
Bybee et al. Method for 3-D scene reconstruction using fused LiDAR and imagery from a texel camera
CN112461204A (en) Method for satellite to dynamic flying target multi-view imaging combined calculation of navigation height
Gong et al. A detailed study about digital surface model generation using high resolution satellite stereo imagery
CN112862683A (en) Adjacent image splicing method based on elastic registration and grid optimization
CN117092621A (en) Hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction
Li et al. Research on multiview stereo mapping based on satellite video images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant