AU2021102109A4 - Fusion method of oblique photography model - Google Patents
- Publication number
- AU2021102109A4
- Authority
- AU
- Australia
- Prior art keywords
- model
- oblique
- fusion
- outer contour
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a fusion method of oblique photography models,
comprising: S1: obtaining at least one first oblique model and one second
oblique model of the target area, both constituted by multiple triangular
patches; S2: obtaining the spatial range of the outer contour of the first
oblique model, and finding the fusion area of the first oblique model and the
second oblique model; S3: subdividing the fusion area by taking the outer
contour boundary of the first oblique model as the constraint condition; S4:
interpolating the texture coordinates of the fusion area processed in Step S3.
During fusion of oblique photography models of the invention, the triangular
patches of the original models are only subdivided, not restructured, which
avoids damage to the original grid structure and unrealistic texture after
model fusion.
S1: Obtaining at least one first oblique model and one second oblique model of the target area, both constituted by multiple triangular patches;
S2: Obtaining the spatial range of the outer contour of the first oblique model, and finding the fusion area of the first oblique model and the second oblique model;
S3: Subdividing the fusion area by taking the outer contour boundary of the first oblique model as the constraint condition;
S4: Interpolating the texture coordinates of the fusion area processed in Step S3.
Figure 1
Figure 2
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a fusion method of oblique photography models.
Background Art
Currently, the fusion of oblique photography models refers to the process in which multiple three-dimensional models obtained in the same batch, with a certain overlapping range, are fused into a whole. Unlike traditional three-dimensional model fusion, the grid resolutions of the multiple three-dimensional models reconstructed in the same batch of an oblique photography three-dimensional modeling task are roughly the same; meanwhile, the texture features are fine enough, since the resolution of the collected original images is better than 5 cm, or even reaches the millimeter scale. Therefore, when fusing multiple models, not only should the fused model geometry be accurate enough to reflect the shape features of terrain and objects, but the fused model texture should also be realistic enough to objectively express their color features.

Three-dimensional model fusion in the prior art mainly includes grid fusion and texture fusion, where grid fusion can be further divided into three methods: cutting-based, hole-filling-based, and differential-grid-deformation-based. The cutting-based method realizes grid fusion by intersecting and restructuring the triangular patches; the hole-filling-based method realizes grid fusion by deleting and restructuring the redundant triangular patches in the overlapping area; for texture fusion, the different textures of triangular patches can generally realize a smooth transition by adjusting the texture coordinates. However, the fusion of oblique photography models is not a simple superposition of geometric-structure fusion and texture fusion, and the tightness of their integration may directly affect the final fusion effect of the models. Although there are various grid fusion methods for oblique photography models, their basic idea is to restructure the apexes of the original triangular patches in the overlapping area. Such methods may damage the geometrical characteristics of the original models and result in distortions such as stretching and unnatural transitions during texture fusion. Therefore, it is urgent for technicians in this field to provide a fusion method of oblique photography models that solves the above problems.
Summary of invention
For this reason, the invention provides a fusion method of oblique photography models. The triangular patches of the original models are subdivided but not restructured during fusion, which avoids damage to the original grid structure and unrealistic texture after fusion. In order to achieve the above purposes, the invention adopts the following technical scheme.

A fusion method of oblique photography models includes:
S1: Obtaining at least one first oblique model and one second oblique model of the target area, both constituted by multiple triangular patches;
S2: Obtaining the spatial range of the outer contour of the first oblique model, and finding the fusion area of the first oblique model and the second oblique model;
S3: Subdividing the fusion area by taking the outer contour boundary of the first oblique model as the constraint condition;
S4: Interpolating the texture coordinates of the fusion area processed in Step S3.

Preferentially, the process for finding the fusion area of the first oblique model and the second oblique model in Step S2 specifically includes:
S21: Projecting each triangular patch dividing the first oblique model to a two-dimensional plane to obtain the corresponding plane triangle;
S22: After traversing all triangular patches, the plane triangles that intersect the spatial range of the outer contour form several intersecting plane triangles, and the area of the triangular patches corresponding to the intersecting plane triangles is the fusion area.

Preferentially, the outer contour plane range of the first oblique model in Step S2 is realized with the half-edge data structure.

Preferentially, Step S3 specifically includes: traversing a triangular patch in the fusion area anticlockwise, obtaining its intersection with the outer contour spatial range of the first oblique model, and calculating the plane coordinates and elevation of the intersecting point.

Preferentially, Step S4 specifically includes: obtaining the texture coordinates of the intersecting point by linear interpolation of the texture coordinates of the two ends of the half edge.

According to the above technical scheme, compared with the prior art, the invention discloses a fusion method of oblique photography models that integrates texture fusion and grid fusion closely and only adjusts the texture coordinates of the restructured triangular patches of the fused model. Therefore, there is no texture dislocation in the triangular patches at the final fusion boundary, and there is no significant difference in the texture and color of the triangular patches on both sides of the fusion boundary.
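The fusion-area search of Steps S21 and S22 can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the mesh layout (a list of three (x, y, z) vertices per patch), the function names, and the simplified intersection test (vertex containment in both directions, which ignores patches that cross the contour only along edges) are all assumptions.

```python
def point_in_polygon(pt, poly):
    """Even-odd ray-casting test: is 2D point pt inside polygon poly?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def find_fusion_area(triangles, contour_xy):
    """triangles: list of patches, each three (x, y, z) vertices;
    contour_xy: (x, y) vertex loop of the first model's outer contour.
    Returns the patches whose plane projection meets the contour."""
    fusion = []
    for tri in triangles:
        plane_tri = [(x, y) for x, y, _ in tri]  # drop elevation Z (S21)
        # Simplified S22 test: any projected vertex inside the contour,
        # or any contour vertex inside the projected triangle.
        if any(point_in_polygon(p, contour_xy) for p in plane_tri) or \
           any(point_in_polygon(c, plane_tri) for c in contour_xy):
            fusion.append(tri)
    return fusion
```

A production implementation would use an exact polygon-intersection test, but the containment check above captures the idea of Step S22.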
Description of Figures
To better describe the embodiments of the present invention or the technical schemes of current technologies, a brief introduction to the attached figures used in the description of the embodiments is given below. Obviously, the attached figures described below illustrate only embodiments of this invention; ordinary technicians in this field can obtain other figures based on these without additional creative endeavors.
Figure 1: Flow chart of the fusion method of oblique photography models specified in the invention;
Figure 2: Diagram of finding the outer contour range in Embodiment 1;
Figure 3: Diagram of triangular patches in the fusion area in Embodiment 1;
Figure 4: Partial diagram of triangular patches in Embodiment 1;
Figure 5: Model I to be fused in Embodiment 2;
Figure 6: Model II to be fused in Embodiment 2;
Figure 7: Fusion result I of models I and II in Embodiment 2;
Figure 8: Fusion result II of models I and II in Embodiment 2;
Figure 9: Grid fusion result of models I and II in Embodiment 2.
Specific Embodiments
In the following, a clear and complete description of the technical scheme in the embodiments of the present invention is provided in combination with the drawings; obviously, the described embodiments are only a part of the embodiments, not all of them. Based on the embodiments of the invention, all other embodiments acquired by ordinary technicians in this field without creative endeavors shall be within the protection scope of the invention.

Embodiment 1

As shown in Figure 1, Embodiment 1 discloses a fusion method of oblique photography models, comprising:
S1: Obtaining at least one first oblique model and one second oblique model of the target area, both constituted by multiple triangular patches;
An oblique model is generally reconstructed from oblique images of the target area acquired in flight by a multi-lens camera and then processed.
S2: Obtaining the spatial range of the outer contour of the first oblique model, and finding the fusion area of the first oblique model and the second oblique model;
S3: Subdividing the fusion area by taking the outer contour boundary of the first oblique model as the constraint condition;
S4: Interpolating the texture coordinates of the fusion area processed in Step S3.
In a specific embodiment, the process for finding the fusion area of the first oblique model and the second oblique model in Step S2 specifically includes:
S21: Projecting each triangular patch dividing the first oblique model to a two-dimensional plane to obtain the corresponding plane triangle;
S22: After traversing all triangular patches, the plane triangles that intersect the spatial range of the outer contour form several intersecting plane triangles, and the area of the triangular patches corresponding to the intersecting plane triangles is the fusion area.
In a specific embodiment, the outer contour spatial range of the first oblique model in Step S2 is realized with the half-edge data structure. Specifically, the process for defining the outer contour spatial range of the first oblique model with the half-edge data structure is as follows:
(1) Judgment of the boundary half edge. For an inner side of the grid model, the two half edges must belong to two different triangular patches; for a boundary side of the grid model, one of the two half edges must not belong to any triangular patch. As shown in Figure 2, the two half edges h10 and h01 corresponding to the inner side V1V0 belong to the patches f1 and f6 respectively, while of the two half edges h12 and h21 of the boundary side V1V2, h21 belongs to the triangular patch f1, and h12 is a half edge not belonging to any triangular patch.
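The boundary-half-edge rule just described, together with the subsequent head-to-tail chaining of boundary half edges into the outer contour, can be sketched as follows. The index-triple mesh representation and the function name are illustrative assumptions, and a manifold mesh with a single closed boundary loop is assumed.

```python
def boundary_contour(triangles):
    """triangles: list of (i, j, k) vertex-index triples, all oriented
    consistently (e.g. anticlockwise). Returns the closed outer-contour
    vertex loop of the mesh."""
    # Collect every directed half edge that belongs to some patch.
    half_edges = set()
    for a, b, c in triangles:
        half_edges.update([(a, b), (b, c), (c, a)])
    # A half edge (a, b) is on the boundary exactly when its twin (b, a)
    # does not belong to any triangular patch.
    boundary = {a: b for (a, b) in half_edges if (b, a) not in half_edges}
    # Connect heads to tails sequentially to close the contour polygon.
    start = next(iter(boundary))
    loop, v = [start], boundary[start]
    while v != start:
        loop.append(v)
        v = boundary[v]
    return loop
```

For a mesh with several boundary loops (holes), the chaining step would have to be repeated per loop; the sketch handles only the single outer contour used in the patent.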
Therefore, whether a half edge is a boundary half edge can be judged by the characteristic that a boundary half edge does not belong to any polygon patch: if the polygon patch of the current half edge is empty, the current half edge is a boundary half edge of the model.
(2) Finding all boundary half edges. After traversing all half edges of the model, all half edges on the boundary of the grid model are recorded with the method in (1), and the heads and tails of these half edges are connected sequentially to obtain the outer contour boundary of the grid model. The outer contour boundary of the model in Figure 2 is the closed polygon formed by connecting the heads and tails of the boundary half edges h12, h23, h34, h45, h56 and h61.
(3) Obtaining the plane outer contour boundary of the grid model. After projecting the outer contour boundary of the grid model to a two-dimensional plane and neglecting the elevation Z, the outer contour plane range of the model is obtained.
In a specific embodiment, as shown in Figure 3, Step S3 specifically includes: traversing a triangular patch in the fusion area anticlockwise, obtaining its intersections with the outer contour spatial range of the first oblique model, and calculating the plane coordinates and elevation of each intersecting point.
Expression of the plane coordinates (intersecting point $I_1$ lies on both line $BC$ and line $V_0V_1$; intersecting point $I_2$ lies on both line $CD$ and line $V_1V_2$):
$$(Y_C - Y_B)X_1 - (X_C - X_B)Y_1 + (X_C Y_B - X_B Y_C) = 0$$
$$(Y_{V_1} - Y_{V_0})X_1 - (X_{V_1} - X_{V_0})Y_1 + (X_{V_1} Y_{V_0} - X_{V_0} Y_{V_1}) = 0$$
$$(Y_D - Y_C)X_2 - (X_D - X_C)Y_2 + (X_D Y_C - X_C Y_D) = 0$$
$$(Y_{V_2} - Y_{V_1})X_2 - (X_{V_2} - X_{V_1})Y_2 + (X_{V_2} Y_{V_1} - X_{V_1} Y_{V_2}) = 0$$
Expression of the elevation:
$$Z_1 = Z_B + \frac{\sqrt{(X_1 - X_B)^2 + (Y_1 - Y_B)^2}}{\sqrt{(X_C - X_B)^2 + (Y_C - Y_B)^2}}\,(Z_C - Z_B)$$
$$Z_2 = Z_C + \frac{\sqrt{(X_2 - X_C)^2 + (Y_2 - Y_C)^2}}{\sqrt{(X_D - X_C)^2 + (Y_D - Y_C)^2}}\,(Z_D - Z_C)$$
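The two formulas above (solving the pair of implicit line equations for the plane coordinates, then lifting the intersection to its elevation by linear interpolation along the model edge) can be sketched as a small routine. The function name, tuple layout, and the degenerate-case handling for parallel lines are assumptions for illustration.

```python
import math

def intersect_and_lift(B, C, V0, V1):
    """B, C: (x, y, z) endpoints of an edge of the first model;
    V0, V1: (x, y, z) endpoints of an outer-contour segment.
    Returns the intersecting point (x, y, z), or None if the
    supporting lines are parallel."""
    (xb, yb, zb), (xc, yc, zc) = B, C
    (x0, y0, _), (x1, y1, _) = V0, V1
    # Implicit line: (Yc-Yb)X - (Xc-Xb)Y + (Xc*Yb - Xb*Yc) = 0, likewise V0V1.
    a1, b1, c1 = yc - yb, -(xc - xb), xc * yb - xb * yc
    a2, b2, c2 = y1 - y0, -(x1 - x0), x1 * y0 - x0 * y1
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel lines: no unique intersection
    # Cramer's rule for the 2x2 system a*X + b*Y + c = 0.
    xi = (b1 * c2 - b2 * c1) / det
    yi = (a2 * c1 - a1 * c2) / det
    # Elevation: linear interpolation along BC by plane distance.
    t = math.hypot(xi - xb, yi - yb) / math.hypot(xc - xb, yc - yb)
    zi = zb + t * (zc - zb)
    return (xi, yi, zi)
```

The same routine serves both $I_1$ (edge $BC$ against $V_0V_1$) and $I_2$ (edge $CD$ against $V_1V_2$).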
Wherein $(X_i, Y_i)$ indicates the plane coordinates of an intersecting point and $Z_i$ indicates its elevation. Thus the subdivision is realized.
In a specific embodiment, as shown in Figure 4, the original triangular patch BMC of the model is subdivided into BMI1 and CI1M, and the original triangular patch DCM is subdivided into CMI2 and DI2M; besides, the original triangular patch V0V1V2 of the other model is subdivided into V0I1C, V0CV2 and CI2V2.
Step S4 specifically includes: the texture coordinates of an intersecting point are obtained by linear interpolation of the texture coordinates of the two ends of the half edge, expressed as:
$$u_i = u_{start} + \frac{\sqrt{(X_i - X_{start})^2 + (Y_i - Y_{start})^2}}{\sqrt{(X_{end} - X_{start})^2 + (Y_{end} - Y_{start})^2}}\,(u_{end} - u_{start})$$
$$v_i = v_{start} + \frac{\sqrt{(X_i - X_{start})^2 + (Y_i - Y_{start})^2}}{\sqrt{(X_{end} - X_{start})^2 + (Y_{end} - Y_{start})^2}}\,(v_{end} - v_{start})$$
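The texture-coordinate interpolation of Step S4 can be sketched as follows; the dictionary-based point representation and function name are illustrative assumptions, not the patent's data model.

```python
import math

def interp_texture(pt, start, end):
    """pt: dict with plane coords 'x', 'y' of the intersecting point;
    start, end: dicts with 'x', 'y' and texture coords 'u', 'v' for the
    two ends of the half edge. Returns (u_i, v_i) by linear interpolation
    proportional to plane distance along the half edge."""
    t = math.hypot(pt['x'] - start['x'], pt['y'] - start['y']) / \
        math.hypot(end['x'] - start['x'], end['y'] - start['y'])
    u = start['u'] + t * (end['u'] - start['u'])
    v = start['v'] + t * (end['v'] - start['v'])
    return u, v
```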
Where $(u_i, v_i)$ indicates the texture coordinates of the point, $i$ indicates the apex to be calculated, and $start$ and $end$ indicate the starting and final points of the half edge containing point $i$. In a specific embodiment, Step S4 specifically includes: obtaining the texture coordinates of the intersecting point by linear interpolation of the texture coordinates of the two ends of the half edge.

Embodiment 2

This embodiment implements the proposed fusion method of oblique photography models on the integrated mapping system (IMS) developed by the Chinese Academy of Surveying & Mapping, and verifies the method by fusing two reconstructed oblique photography models. The test data are oblique and vertical images of Dongying City; the numbers of original images of the two oblique photography models to be fused are 21 and 24 respectively, the overlap of the two models is 25 m in the east-west direction and 140 m in the south-north direction; Figures 5-6 show the models to be fused. The numbers of apexes of the two oblique photography models in the experiment were 57,032 and 73,090, and the numbers of patches were 113,569 and 145,733. The number of apexes of the fused model was 122,892, a reduction of 7,847 compared with the original data; after fusion, the number of triangular patches was 244,812, a reduction of 15,811 compared with the original data. See Figures 7-8 for the fusion result.

With regard to the geometrical fusion of the models, the triangular patches within the cutting range were deleted, while the triangular patches at the fusion boundary were subdivided and restructured. The restructured triangular patches reflect the surface landforms accurately, and the fused model has no holes, cracks or dislocations at the cutting boundary, with a correct grid topology. The grid fusion result is shown in Figure 9. Only the texture interpolation of the triangular patches at the fusion boundary between the restructured cutting model and the model to be fused was recalculated. As shown in Figures 7-8, the color features of the model are truly expressed after texture fusion, the texture of the fused model is consistent with the original data, no texture loss or dislocation occurred on either side of the fusion boundary, and there is no significant difference in the texture and color of the triangular patches on both sides of the fusion boundary.

Each embodiment in this specification is described in a progressive manner, focusing on its differences from the other embodiments; the same and similar parts of the embodiments can be referred to each other. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and reference can be made to the description of the method for the relevant points. The above description of the disclosed embodiments enables professionals and technicians in this field to practice or use the invention. Various modifications to these embodiments will be apparent to those professionals, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the invention. Accordingly, the invention is not limited to the embodiments shown herein, but conforms to the widest scope consistent with the principles and novel features disclosed herein.
Claims (5)
1. A fusion method of oblique photography models, characterized by comprising: S1: Obtaining at least one first oblique model and one second oblique model of the target area, both constituted by multiple triangular patches; S2: Obtaining the spatial range of the outer contour of the first oblique model, and finding the fusion area of the first oblique model and the second oblique model; S3: Subdividing the fusion area by taking the outer contour boundary of the first oblique model as the constraint condition; S4: Interpolating the texture coordinates of the fusion area processed in Step S3.
2. As described in Claim 1, the fusion method of oblique photography models is characterized in that the process for finding the fusion area of the first oblique model and the second oblique model in Step S2 specifically includes: S21: Projecting each triangular patch dividing the first oblique model to a two-dimensional plane to obtain the corresponding plane triangle; S22: After traversing all triangular patches, the plane triangles that intersect the spatial range of the outer contour form several intersecting plane triangles, and the area of the triangular patches corresponding to the intersecting plane triangles is the fusion area.
3. As described in Claim 2, the fusion method of oblique photography models is characterized in that the outer contour plane range of the first oblique model in Step S2 is defined with the half-edge data structure.
4. As described in Claim 1, the fusion method of oblique photography models is characterized in that Step S3 specifically includes: traversing a triangular patch in the fusion area anticlockwise, obtaining its intersection with the outer contour spatial range of the first oblique model, and calculating the plane coordinates and elevation of the intersecting point.
5. As described in Claim 1 or 4, the fusion method of oblique photography models is characterized in that Step S4 specifically includes: obtaining the texture coordinates of the intersecting point by linear interpolation of the texture coordinates of the two ends of the half edge.
Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
Figure 6
Figure 7
Figure 8
Figure 9
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010535858.3 | 2020-06-12 | ||
CN202010535858.3A CN111681322B (en) | 2020-06-12 | 2020-06-12 | Fusion method of oblique photography model |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2021102109A4 true AU2021102109A4 (en) | 2021-06-03 |
Family
ID=72435556
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2021102109A Ceased AU2021102109A4 (en) | 2020-06-12 | 2021-04-21 | Fusion method of oblique photography model |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111681322B (en) |
AU (1) | AU2021102109A4 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115409941B (en) * | 2022-08-31 | 2023-05-30 | 中南大学 | Three-dimensional ground object model fusion method and system in three-dimensional road scene |
CN116310225A (en) * | 2023-05-16 | 2023-06-23 | 山东省国土测绘院 | OSGB (open sensor grid) model embedding method and system based on triangle network fusion |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010121085A1 (en) * | 2009-04-16 | 2010-10-21 | Ioan Alexandru Salomie | Scalable particle interactive networks |
CN102003938B (en) * | 2010-10-11 | 2013-07-10 | 中国人民解放军信息工程大学 | Thermal state on-site detection method for large high-temperature forging |
CN103049896B (en) * | 2012-12-27 | 2015-09-16 | 浙江大学 | The geometric data of three-dimensional model and data texturing autoregistration algorithm |
CN103886640B (en) * | 2014-03-26 | 2017-01-04 | 中国测绘科学研究院 | The acquisition methods of the threedimensional model of a kind of building and system |
CN104392457B (en) * | 2014-12-11 | 2017-07-11 | 中国测绘科学研究院 | Incline the tie point automatic matching method and device of image |
CN109754463B (en) * | 2019-01-11 | 2023-05-23 | 中煤航测遥感集团有限公司 | Three-dimensional modeling fusion method and device |
CN109993783B (en) * | 2019-03-25 | 2020-10-27 | 北京航空航天大学 | Roof and side surface optimization reconstruction method for complex three-dimensional building point cloud |
CN110136259A (en) * | 2019-05-24 | 2019-08-16 | 唐山工业职业技术学院 | A kind of dimensional Modeling Technology based on oblique photograph auxiliary BIM and GIS |
CN110956699B (en) * | 2019-11-27 | 2022-10-25 | 西安交通大学 | GPU (graphics processing unit) parallel slicing method for triangular mesh model |
- 2020
- 2020-06-12 CN CN202010535858.3A patent/CN111681322B/en not_active Expired - Fee Related
- 2021
- 2021-04-21 AU AU2021102109A patent/AU2021102109A4/en not_active Ceased
Also Published As
Publication number | Publication date |
---|---|
CN111681322A (en) | 2020-09-18 |
CN111681322B (en) | 2021-02-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |