TWI285843B - System and method of generating a virtual city model - Google Patents

System and method of generating a virtual city model Download PDF

Info

Publication number
TWI285843B
TWI285843B
Authority
TW
Taiwan
Prior art keywords
building
image
wall
model
texture
Prior art date
Application number
TW94115908A
Other languages
Chinese (zh)
Other versions
TW200641683A (en)
Inventor
Liang-Jian Chen
Jian-You Rau
Fu-An Tsai
Jin-Jin Liou
Guo-Shin Shiau
Original Assignee
Univ Nat Central
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Central filed Critical Univ Nat Central
Priority to TW94115908A priority Critical patent/TWI285843B/en
Publication of TW200641683A publication Critical patent/TW200641683A/en
Application granted granted Critical
Publication of TWI285843B publication Critical patent/TWI285843B/en

Links

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

This invention relates to a system and method for producing a virtual city model. The method uses a windowed graphical interface to build an operating platform and comprises the steps of providing building data, computing an image preview, visually solving the camera orientation, determining occluded areas, and handling the unoccluded and partially occluded cases. By solving the camera's exterior-orientation parameters quickly and effectively through the visual interface, the invention can generate large numbers of wall (facade) texture images for the buildings of a three-dimensional virtual city. When a building partially occludes itself, the texture of the self-occluded areas is compensated automatically, so that good image quality and fast, effective generation of texture images are achieved at the same time.

Description

IX. Description of the Invention

[Technical Field]

The present invention relates to a method of producing a photo-realistic city model, and in particular to a technique for producing the building texture images used in three-dimensional photo-realistic city browsing.

[Prior Art]

The city is the most important setting of the human living environment, and the urban environment bears directly on the quality of human life; its buildings, roads, trees, vehicles, and the movements of people themselves all influence one another directly or indirectly. The production of three-dimensional photo-realistic city models has therefore become increasingly important, because its fields of application are very broad, for example disaster prevention, relief and reconstruction, city guiding, real-estate transactions, education, digital archiving of historic monuments, environmental impact assessment, electronic commerce, and the simulation and planning of mobile-communication base-station sites.

The commonly used techniques are as follows:

1. Haala, N., 2004, "On the Refinement of Urban Models by Terrestrial Data Collection", Vol. 35, Part B3, Istanbul, Turkey, pp. 564-569. This is an orientation and geometry processing system for GPS plus a push-broom digital camera (parallel plus perspective projection); its house model is a true-height polyhedron, and all visible wall faces are processed together. Its drawbacks are that the required equipment is expensive, the camera must be placed at a suitably elevated position, the post-processing workload is large, and no occlusion problem is handled.

2. Varshosaz, M. (United Kingdom / Iran), 2004, "Occlusion-Free 3D Realistic Modelling of Buildings in Urban Areas", IAPRS, Vol. 35, Part B4, Istanbul, Turkey, pp. 437-442. This system uses special equipment such as GPS plus a theodolite together with several simultaneously positioned photographs and a geometric processing system; its house model is a true-height polyhedron, the wall textures are produced by mosaicking several images at once, one wall at a time, and occlusion is handled by mutual compensation between the multi-view images. Its drawbacks are that special equipment is required, the cost is high, and the post-processing is time-consuming and laborious.

3. Kada, M. (Germany), 2004, "Hardware-Based Texture Extraction for Building Facades", IAPRS, Vol. 35, Part B4, Istanbul, Turkey, pp. 420-425. This is a geometric processing system based on a planar perspective (eight-parameter projective) transformation; the house is modelled as a prism or polyhedron, one wall face is processed at a time, and occlusion is handled by mosaicking multi-view images and repairing in post-processing. Its drawbacks are that, for complex buildings, it needs more manual picking of control points than the present invention, and because only a planar projective transformation is used it is unsuitable for images taken with wide-angle cameras, so the texture images produced are of lower geometric quality and the post-processing workload is large.

None of the three conventional methods above can therefore produce building texture images quickly and effectively enough for practical use.

[Summary of the Invention]

The main object of the present invention is to let a visual interface help the operator overcome the processing bottleneck of solving the camera's exterior orientation.

Another object of the present invention is to use mirror (specular) reflection to compensate for the texture that is missing because part of the building occludes itself.

A further object of the present invention is to produce the texture image data of several visible wall faces from a single site photograph at a time, reducing the number of site photographs that must be handled and thereby reducing the workload and increasing efficiency.

To achieve these objects, the present invention is a method of producing a photo-realistic city model which builds an operating platform with a windowed graphical interface and whose main steps are building data provision, image preview computation, visual orientation solving, occlusion determination, processing without occlusion, and processing with partial occlusion. In operation, a true-height model of the building exterior and texture images of the building exterior photographed on site are provided. The building model is simulated by three-dimensional browsing until the scene approximates the site photograph; the visible faces of the building that are not completely occluded in this scene are selected automatically or manually, the three-dimensional geographic coordinates of the roof corners and wall feet of these visible faces are taken as ground control points, and the initial exterior-orientation parameters of the camera at the moment of exposure are computed by the single-photo space resection of aerial photogrammetry. The image coordinates corresponding to the four corners of each visible face are computed by back-projection, and the four corners are connected to form the wireframe of the building model wall. When the operator dynamically adjusts the image control points, the exterior-orientation parameters are re-solved in real time and the building model wireframe is re-projected and overlaid, so that the operator can judge whether the wireframe fits the outline of the building walls in the photograph; in this way the exterior orientation of the camera is solved visually. Whether self-occlusion occurs is then determined and the occluded positions are computed. Where no occlusion occurs, the three-dimensional coordinates of an evenly spaced grid on the wall are back-projected to image coordinates in the site texture photograph, and the complete wall texture image is obtained by interpolation. Where partial occlusion occurs, the texture image of the unoccluded part is produced first; then, along the borders of this texture image, the number of pixels lacking texture information is counted and compared with the number of pixels on that border of the wall, the border with the largest occlusion ratio is defined as the principal occluded edge, the larger of the occluded-pixel counts on the two borders adjacent to the principal occluded edge is taken as the mirror-reflection width and mirror position, and the texture information of the occluded region is compensated automatically by mirror reflection, so that a complete wall texture image is obtained.
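The back-projection invoked throughout the method is the collinearity condition of aerial photogrammetry. For reference in the steps that follow, a standard textbook form of these equations is reproduced below; the notation (perspective centre, rotation matrix, focal length) is the conventional photogrammetric one and is added here for clarity rather than taken verbatim from the patent.

```latex
% Collinearity condition: image coordinates (x, y) of a ground point (X, Y, Z)
% seen by a camera with perspective centre (X_L, Y_L, Z_L), attitude angles
% (omega, phi, kappa) giving rotation matrix M = [m_ij], calibrated focal
% length f and principal point (x_0, y_0).
\begin{aligned}
x &= x_0 - f\,
  \frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}
       {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)},\\[4pt]
y &= y_0 - f\,
  \frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}
       {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}.
\end{aligned}
```

Space resection solves the six parameters (X_L, Y_L, Z_L, ω, φ, κ) from these equations given four or more ground/image control-point pairs; back-projection simply evaluates them for known parameters.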

[Embodiments]

Referring to Fig. 1, the present invention is a method of producing a photo-realistic city model in which a visual interface is designed with windowing and graphics application software to build an operating platform that serves as the interface between the operator and the computer. The software development platform used is Microsoft Visual Studio .NET, and the main graphics library is the OpenGL utility library, which provides the 3D-rendering acceleration of most display adapters. The system mainly comprises the steps of building data provision (1), image preview computation (2), visual orientation solving (3), occlusion determination (4), processing without occlusion (5), and processing with partial occlusion (6). Arranged in this way, these steps constitute a new method of producing a photo-realistic city model.

Referring to Figs. 2 to 13, when the invention is used with the above steps it operates as follows:

a. Building data provision (1): a true-height model of the building exterior and texture images of the building exterior photographed on site are provided. The true-height model of the building exterior is a true-height polyhedral three-dimensional building model (Fig. 3) that records the three-dimensional geographic coordinates of every roof corner and wall-foot corner, and serves as the data source of the ground control points and of the building outline frame. The site texture images are digitized photographs of the building exterior taken on site (Fig. 4), and serve as the source of the texture information and of the picked image control points. The camera has been calibrated in advance, so the relevant computations can use its lens-distortion parameters to correct the geometric distortion at the same time, which favours the use of wide-angle cameras.
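Purely as an illustration, the true-height polyhedral model of step a could be held in memory as a list of wall faces whose roof and foot corners carry (E, N, H) coordinates; the class and field names below are assumptions made for this sketch, not a data format defined by the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]   # (E, N, H) geographic coordinates

@dataclass
class WallFace:
    """One vertical wall face of the true-height polyhedral building model.

    The corners (two roof corners, two wall-foot corners) double as candidate
    ground control points for the space resection of step b.
    """
    roof_corners: Tuple[Point3D, Point3D]
    foot_corners: Tuple[Point3D, Point3D]

    def corners(self) -> List[Point3D]:
        # Order: roof-left, roof-right, foot-right, foot-left (a closed quad).
        r1, r2 = self.roof_corners
        f1, f2 = self.foot_corners
        return [r1, r2, f2, f1]

@dataclass
class BuildingModel:
    name: str
    walls: List[WallFace]
```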

b. Image preview computation (2): the building model (11) is simulated by three-dimensional browsing until the scene approximates the site photograph (Fig. 5). The visible faces of the building that are not completely occluded in this scene can be selected automatically or manually, and the three-dimensional geographic coordinates of the roof corners and wall feet of each visible face are taken as ground control points, shown as the (E, N, H) data in the lower-left window (21) of Fig. 6. The operator then picks the visible roof corners and wall feet in the building texture photograph as image control-point coordinates, shown as the (Line, Sample) data in the lower-left window (21) of Fig. 6. Once four or more well-distributed control-point pairs have been picked, the invention uses the single-photo space resection of aerial photogrammetry to compute, in real time, the initial exterior-orientation parameters of the camera at the moment of exposure, namely the six values of position (X, Y, Z) and attitude (ω, φ, κ), shown as the (X, Y, Z) and (ω, φ, κ) data in the lower-right window (22) of Fig. 6.
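The following is a minimal sketch of the space resection of step b under the collinearity model given earlier, assuming a calibrated camera with lens distortion already removed; the use of `scipy.optimize.least_squares` and all function and parameter names are choices made for this illustration and are not prescribed by the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix M from the photogrammetric attitude angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_omega = np.array([[1.0, 0.0, 0.0], [0.0, co, so], [0.0, -so, co]])
    r_phi   = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    r_kappa = np.array([[ck, sk, 0.0], [-sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return r_kappa @ r_phi @ r_omega

def project(params, ground_pts, focal, principal=(0.0, 0.0)):
    """Collinearity back-projection of (n, 3) ground points to image coords."""
    xl, yl, zl, omega, phi, kappa = params
    m = rotation_matrix(omega, phi, kappa)
    d = np.asarray(ground_pts, dtype=float) - np.array([xl, yl, zl])
    u = d @ m.T                                  # camera-frame coordinates
    x = principal[0] - focal * u[:, 0] / u[:, 2]
    y = principal[1] - focal * u[:, 1] / u[:, 2]
    return np.column_stack([x, y])

def space_resection(ground_pts, image_pts, focal, initial_params):
    """Solve the six exterior-orientation parameters from >= 4 control pairs."""
    image_pts = np.asarray(image_pts, dtype=float)

    def residuals(p):
        return (project(p, ground_pts, focal) - image_pts).ravel()

    solution = least_squares(residuals, np.asarray(initial_params, dtype=float))
    return solution.x   # (X_L, Y_L, Z_L, omega, phi, kappa)
```

In the workflow of step c, a solver of this kind is simply re-run each time the operator drags an image control point, so the overlaid wireframe follows the adjustment in real time.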
c. Visual orientation solving (3): because the ground control points, the picked image control points and the lens-distortion parameters all carry some error, the initial exterior-orientation parameters obtained as above necessarily carry error as well. To resolve this, the invention uses the initial exterior-orientation parameters and the collinearity condition equations of aerial photogrammetry to back-project the four corners of every visible face to their corresponding image coordinates, and connects the four corners to form the wireframe of the walls (31) of the building model (11). When the operator dynamically adjusts the image control points, the exterior-orientation parameters are re-solved in real time, the building model wireframe is re-projected and overlaid, and the human eye judges whether the wireframe fits the outline of the building walls in the photograph. As shown in the upper-right window (23) of Fig. 6, if the exterior-orientation parameters are incorrect, the back-projected wireframe of the building model cannot fit the building outline, and a wall texture image generated directly from such a result cannot be accepted: in Fig. 7 sky can still be seen in the texture of wall (31), an incorrect result. When the back-projected wireframe of building model (11) fits the building outline correctly, the exterior-orientation solution is correct, and the wall texture image produced is as in Fig. 8: apart from the texture of tree (32), a non-self-occlusion that cannot be removed, everything on wall (31) is correct facade texture. With this visual solution process the exterior-orientation parameters are obtained effectively and correctly, and the best geometric quality of the texture image is maintained.

d. Occlusion determination (4): with the exterior-orientation solution above and the wall geometry model, the presence or absence of self-occlusion can be determined and the occluded position computed. A close look at Fig. 4 shows that the tallest front wall (31) occludes the wall to its left. Once the exterior orientation of the camera has been found, the three-dimensional geographic coordinates of the visible faces are back-projected into image space, and the distance from each wall to the camera position is used to determine the occlusion relationship between walls and the occluded position. Figs. 9a and 9b show the building model (11) and the camera (7) viewed from above. In Fig. 9a the camera perspective centre is at P1; walls A, B and walls C, D project onto the image plane (71) at a, b and c, d respectively; a, b and c, d do not overlap, so no occlusion occurs between them. In Fig. 9b the perspective centre is at P2; the same walls A, B and C, D project onto the image plane (71) at a, b and c, d, which now overlap, and because the distance from B to P2 is greater than the distance from C to P2, it can be concluded that walls C, D occlude walls A, B.
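A simplified plan-view sketch of the occlusion test of step d and Figs. 9a/9b follows: each wall is reduced to the segment between its two foot corners, the angular interval it subtends at the camera is computed, and two walls are declared occluding when their intervals overlap and one wall is nearer the camera. This is an illustration of the idea only; the patent works with the full three-dimensional back-projection onto the image plane, and the angular-interval simplification assumed here is not part of the patent.

```python
import numpy as np

def angular_interval(wall_xy, centre_xy):
    """Plan-view angular interval that a wall subtends at the camera centre.

    wall_xy : (2, 2) array, the two foot corners of the wall in plan view.
    Returns (angle_min, angle_max, mean_distance). Assumes the wall does not
    straddle the +/- pi discontinuity, which is fine for this illustration.
    """
    rel = wall_xy - centre_xy
    ang = np.arctan2(rel[:, 1], rel[:, 0])
    dist = np.linalg.norm(rel, axis=1).mean()
    return ang.min(), ang.max(), dist

def occludes(front_wall_xy, back_wall_xy, centre_xy):
    """True if front_wall hides (part of) back_wall as seen from the camera."""
    f0, f1, fd = angular_interval(front_wall_xy, centre_xy)
    b0, b1, bd = angular_interval(back_wall_xy, centre_xy)
    overlap = (f0 < b1) and (b0 < f1)   # the two angular intervals intersect
    return overlap and (fd < bd)        # and the occluding wall is nearer

# Example corresponding to Fig. 9b: walls C-D stand between the camera and A-B.
wall_ab = np.array([[0.0, 10.0], [10.0, 10.0]])
wall_cd = np.array([[2.0,  5.0], [ 8.0,  5.0]])
camera  = np.array([5.0, -5.0])
print(occludes(wall_cd, wall_ab, camera))   # -> True
```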
e. Processing without occlusion (5): if a wall is not occluded, the collinearity condition equations are used to back-project the three-dimensional coordinates of an evenly spaced grid on the wall to the corresponding image coordinates in the texture photograph of the building exterior taken on site, and the grey value of every grid cell is obtained by interpolation, giving the complete wall texture image; Fig. 10 shows the resulting texture image of walls C, D of Fig. 9a.
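The following is a sketch of the unoccluded-wall texturing of step e: an evenly spaced grid is laid over the rectangular wall face, each grid node is back-projected into the site photograph, and the grey value is sampled by bilinear interpolation. The `project_fn` callable is assumed to map (n, 3) ground coordinates to (n, 2) pixel coordinates (for example, the collinearity routine sketched after step b followed by a photo-to-pixel conversion); the grid resolution, the single grey channel and all names are assumptions of this sketch, not specifics of the patent.

```python
import numpy as np

def wall_grid(roof_left, roof_right, foot_left, n_cols, n_rows):
    """Evenly spaced 3-D grid over a rectangular wall face, shape (rows, cols, 3)."""
    u = roof_right - roof_left          # horizontal direction along the wall
    v = foot_left - roof_left           # vertical direction down the wall
    s = np.linspace(0.0, 1.0, n_cols)
    t = np.linspace(0.0, 1.0, n_rows)
    ss, tt = np.meshgrid(s, t)
    return roof_left + ss[..., None] * u + tt[..., None] * v

def bilinear_sample(image, xy):
    """Bilinear interpolation of a single-channel image at float pixel coords."""
    x, y = xy[..., 0], xy[..., 1]
    x0 = np.clip(np.floor(x).astype(int), 0, image.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, image.shape[0] - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * image[y0, x0] +
            dx * (1 - dy) * image[y0, x0 + 1] +
            (1 - dx) * dy * image[y0 + 1, x0] +
            dx * dy * image[y0 + 1, x0 + 1])

def rectify_wall(image, project_fn, roof_left, roof_right, foot_left,
                 n_cols=512, n_rows=512):
    """Back-project every grid node of the wall and resample the site photo."""
    grid = wall_grid(np.asarray(roof_left, float), np.asarray(roof_right, float),
                     np.asarray(foot_left, float), n_cols, n_rows)
    img_xy = project_fn(grid.reshape(-1, 3)).reshape(n_rows, n_cols, 2)
    return bilinear_sample(image, img_xy)
```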

f. Processing with partial occlusion (6): the invention assumes that most wall textures are repetitive or homogeneous. If a wall shows partial occlusion, the unoccluded part is first textured with the collinearity back-projection of an evenly spaced wall grid, exactly as in step e, producing a "texture image with occlusion": Fig. 11 shows walls A, B of Fig. 9b, whose black region on the right carries no texture information because it is occluded by walls C, D. Along the borders of this "texture image with occlusion", the number of pixels lacking texture information is then counted and its ratio to the number of pixels on that border of the wall is computed, and the border occupying the largest occlusion ratio is defined as the principal occluded edge (61); the principal occluded edge (61) in Fig. 12a, for example, has an occlusion ratio of 100%.

Next, the larger of the occluded-pixel counts on the two borders adjacent to the principal occluded edge is taken as the mirror-reflection width and mirror position: in Fig. 12a, W1 is larger than W2, so W1 is used as the mirror-reflection width and a, b as the mirror position, and the texture inside the dashed box c is the texture information used to compensate the occluded region by mirror reflection. The complete texture image is finally obtained, as shown in Fig. 12b.
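The following is a sketch of the mirror-reflection compensation of step f, following Figs. 12a/12b: the border with the largest fraction of texture-less pixels is taken as the principal occluded edge, the larger of the occluded-pixel counts on the two adjacent borders gives the mirror width, and the strip of valid texture just inside the mirror line is reflected into the gap. The normalising rotation and the assumption that the valid part of the wall is at least as wide as the occluded strip are simplifications made for this illustration; they are not details taken from the patent.

```python
import numpy as np

def mirror_fill(tex, valid):
    """Fill an occluded strip of a wall texture by mirror reflection.

    tex   : (H, W) grey-value texture with an occluded (undefined) region
    valid : (H, W) boolean mask, True where texture information exists
    Returns a copy of tex with the occluded strip filled.
    """
    tex, valid = tex.copy(), valid.copy()

    # Occlusion ratio of each border of the wall texture.
    borders = {"top": ~valid[0, :], "bottom": ~valid[-1, :],
               "left": ~valid[:, 0], "right": ~valid[:, -1]}
    ratios = {name: edge.mean() for name, edge in borders.items()}
    principal = max(ratios, key=ratios.get)        # principal occluded edge

    # Rotate so that the principal occluded edge becomes the right-hand border.
    turns = {"right": 0, "bottom": 1, "left": 2, "top": 3}[principal]
    tex, valid = np.rot90(tex, turns), np.rot90(valid, turns)

    # Occluded-pixel counts on the two adjacent borders (W1 vs W2 in Fig. 12a);
    # the larger of the two is the mirror-reflection width.
    w1 = int((~valid[0, :]).sum())
    w2 = int((~valid[-1, :]).sum())
    width = max(w1, w2)

    if width > 0:
        w = tex.shape[1]
        m = w - width                              # mirror line position
        # Strip just inside the mirror line, reflected (assumes m >= width).
        src = tex[:, m - width:m][:, ::-1]
        gap = ~valid[:, m:]                        # fill only texture-less pixels
        tex[:, m:][gap] = src[gap]
        valid[:, m:] = True

    return np.rot90(tex, -turns)                   # undo the normalising rotation
```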

The facade texture mapping of the buildings in the three-dimensional photo-realistic city model of Fig. 13 was produced with the present system and method; the geometric model of the terrain surface is a 20-metre-grid digital terrain model draped with aerial photographs as its material, which brings the whole landscape simulation closer to the real world.

In summary, the method of producing a photo-realistic city model of the present invention solves the exterior-orientation parameters of the camera quickly and effectively in a visual manner, and when a building partially occludes itself it automatically compensates the texture of the occluded region, satisfying image quality and fast texture-image production at the same time. It effectively remedies the various drawbacks of the conventional techniques and achieves practicality and stability in use, making the invention more progressive, more practical and better suited to users' needs; it therefore meets the requirements for an invention patent, and a patent application is filed according to law.

The above, however, are only preferred embodiments of the present invention and do not limit the scope of its practice; all simple equivalent changes and modifications made according to the scope of the claims and the content of the specification remain within the scope covered by the patent of the present invention.

[Brief Description of the Drawings]

Fig. 1 is a flow diagram of the steps of the present invention.
Fig. 2 is a flow diagram of the present invention in use.
Fig. 3 is a schematic view of the true-height polyhedral three-dimensional house model used by the present invention.
Fig. 4 is a schematic view of a building-exterior image photographed on site and used by the present invention.
Fig. 5 is a schematic view of a scene in which the three-dimensional house model is browsed in three dimensions to approximate the site photograph.
Fig. 6 is a schematic view of picking image control points to compute the camera exterior-orientation parameters visually.
Fig. 7 is a schematic view of a texture image produced with incorrect exterior-orientation parameters.
Fig. 8 is a schematic view of a texture image produced with correct exterior-orientation parameters.
Figs. 9a and 9b are schematic views of how the present invention determines mutual occlusion between walls.
Fig. 10 is a schematic view of a complete texture image produced without occlusion.
Fig. 11 is a schematic view of a texture image with partial self-occlusion.
Fig. 12a is a schematic view of the mirror-reflection compensation of the occluded texture according to the present invention.
Fig. 12b is a schematic view of a texture image after mirror-reflection compensation according to the present invention.
Fig. 13 is a schematic view of a three-dimensional photo-realistic city model produced by the present invention.

[Reference Numerals]

Building data provision 1; building model 11; building 12; image preview computation 2; lower-left window 21; lower-right window 22; upper-right window 23; visual orientation solving 3; wall 31; tree 32; occlusion determination 4; processing without occlusion 5; processing with partial occlusion 6; principal occluded edge 61; camera 7; image plane 71; camera perspective centres P1, P2.

Claims (1)

1. A method of producing a photo-realistic city model, which builds an operating platform with a windowed graphical interface and comprises at least the following steps:
a. building data provision: providing a true-height model of the building exterior and texture images of the building exterior photographed on site;
b. image preview computation: simulating the building model by three-dimensional browsing until the scene approximates the site photograph, selecting automatically or manually the visible faces of the building that are not completely occluded in this scene, taking the three-dimensional geographic coordinates of the roof corners and wall feet of the visible faces as ground control points, manually picking the visible roof corners and wall feet in the building texture image as image control-point coordinates, and then computing the initial exterior-orientation parameters of the camera at the moment of exposure;
c. visual orientation solving: back-projecting the four corners of each visible face to their corresponding image coordinates and connecting the four corners to form the wireframe of the building model wall; when the operator dynamically adjusts the image control points, re-solving the camera exterior-orientation parameters in real time, re-projecting and overlaying the building model wireframe, and observing whether the wireframe fits the outline of the building walls in the photograph;
d. occlusion determination: using the exterior-orientation solution and the wall geometry model to determine whether self-occlusion occurs and to compute the position of the self-occlusion;
e. processing without occlusion: back-projecting the three-dimensional coordinates of an evenly spaced grid on the wall to the image coordinates in the texture image of the building exterior photographed on site, and obtaining the complete wall texture image by interpolation;
f. processing with partial occlusion: first producing the wall texture image of the unoccluded part in the manner of step e; then, along the borders of this texture image, counting the number of pixels lacking texture information and computing its ratio to the number of pixels on that border of the wall; defining the border occupying the largest occlusion ratio as the principal occluded edge; taking the larger of the occluded-pixel counts on the two borders adjacent to the principal occluded edge as the mirror-reflection width and mirror position; and automatically compensating the texture information of the occluded region by mirror reflection, thereby obtaining the complete wall texture image.

2. The method of producing a photo-realistic city model of claim 1, wherein the true-height model of the building exterior is a true-height polyhedral three-dimensional building model that records the three-dimensional geographic coordinates of every roof corner and wall-foot corner, serving as the data source of the ground control points and of the building outline frame.

3. The method of producing a photo-realistic city model of claim 1, wherein the site texture image of the building exterior is a digitized image of the building exterior photographed on site, serving as the source of the texture information and of the picked image control points.

4. The method of producing a photo-realistic city model of claim 1, wherein the camera exterior-orientation parameters are computed in real time by the single-photo space resection of aerial photogrammetry.

5. The method of producing a photo-realistic city model of claim 1, wherein the processing without occlusion uses the collinearity condition equations to back-project the three-dimensional coordinates of an evenly spaced grid on the wall to the image coordinates in the site texture image of the building exterior, and obtains the grey value of every grid cell by interpolation to produce the complete wall texture image.

6. The method of producing a photo-realistic city model of claim 1, wherein the processing with partial occlusion first produces the texture image of the unoccluded part by the method of claim 5 and then compensates the texture information of the occluded region by mirror reflection to obtain the complete wall texture image.

VII. Designated Representative Figure:
(1) The designated representative figure of this case is Fig. 1.
(2) Brief description of the reference numerals of the representative figure: building data provision 1; image preview computation 2; visual orientation solving 3; occlusion determination 4; processing without occlusion 5; processing with partial occlusion 6.

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW94115908A TWI285843B (en) 2005-05-17 2005-05-17 System and method of generating a virtual city model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW94115908A TWI285843B (en) 2005-05-17 2005-05-17 System and method of generating a virtual city model

Publications (2)

Publication Number Publication Date
TW200641683A TW200641683A (en) 2006-12-01
TWI285843B true TWI285843B (en) 2007-08-21

Family

ID=39457381

Family Applications (1)

Application Number Title Priority Date Filing Date
TW94115908A TWI285843B (en) 2005-05-17 2005-05-17 System and method of generating a virtual city model

Country Status (1)

Country Link
TW (1) TWI285843B (en)

Also Published As

Publication number Publication date
TW200641683A (en) 2006-12-01

Similar Documents

Publication Publication Date Title
JP7162933B2 (en) Method, apparatus and system for establishing an internal spatial model of an object, and computer apparatus and computer readable storage medium
US9420253B2 (en) Presenting realistic designs of spaces and objects
El-Hakim et al. Detailed 3D reconstruction of large-scale heritage sites with integrated techniques
JP5105643B2 (en) System for texture rising of electronic display objects
US20240169674A1 (en) Indoor scene virtual roaming method based on reflection decomposition
CN108168521A (en) One kind realizes landscape three-dimensional visualization method based on unmanned plane
US20040196282A1 (en) Modeling and editing image panoramas
US20120081357A1 (en) System and method for interactive painting of 2d images for iterative 3d modeling
CN104463969B (en) A kind of method for building up of the model of geographical photo to aviation tilt
El-Hakim et al. Detailed 3D reconstruction of monuments using multiple techniques
Soycan et al. Perspective correction of building facade images for architectural applications
Peña-Villasenín et al. 3-D modeling of historic façades using SFM photogrammetry metric documentation of different building types of a historic center
CN109523622B (en) Unstructured light field rendering method
TWM565860U (en) Smart civil engineering information system
KR101875047B1 (en) System and method for 3d modelling using photogrammetry
CN105550992A (en) High fidelity full face texture fusing method of three-dimensional full face camera
TWI267799B (en) Method for constructing a three dimensional (3D) model
JP4688309B2 (en) 3D computer graphics creation support apparatus, 3D computer graphics creation support method, and 3D computer graphics creation support program
Martinez et al. Creation of a virtual reality environment of a university museum using 3D photogrammetric models
Petkov et al. Interactive visibility retargeting in vr using conformal visualization
JPH06348815A (en) Method for setting three-dimensional model of building aspect in cg system
El-Hakim et al. 3D reconstruction of complex architectures from multiple data
TWI285843B (en) System and method of generating a virtual city model
JP2000076453A (en) Three-dimensional data preparing method and its device
CN115496908A (en) Automatic layering method and system for high-rise building oblique photography model

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees