TW201015493A - 3D depth generation by block-based texel density analysis - Google Patents

3D depth generation by block-based texel density analysis

Info

Publication number
TW201015493A
TW201015493A
Authority
TW
Taiwan
Prior art keywords
depth information
block
dimensional image
generating
stereoscopic depth
Prior art date
2008-10-03
Application number
TW97138308A
Other languages
Chinese (zh)
Inventor
Liang-Gee Chen
Chao-Chung Cheng
Chung-Te Li
Ling-Hsiu Huang
Original Assignee
Himax Tech Ltd
Univ Nat Taiwan
Priority date: 2008-10-03 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2008-10-03
Publication date: 2010-04-16
Application filed by Himax Tech Ltd and Univ Nat Taiwan
Priority to TW97138308A
Publication of TW201015493A


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A system and method of generating three-dimensional (3D) depth information is disclosed. A classification and segmentation unit segments a two-dimensional (2D) image into a number of segments, such that pixels having similar characteristics are classified into the same segment. A spatial-domain texel density analysis unit performs texel density analysis on the 2D image to obtain the texel density. A depth assignment unit assigns depth information to the 2D image according to the analyzed texel density.

Description

IX. Description of the Invention

[Technical Field of the Invention]

The present invention relates to the generation of three-dimensional (3D) depth information, and more particularly to block-based texel density analysis for generating 3D depth information.

[Prior Art]

When a three-dimensional object is projected onto a two-dimensional (2D) image plane by a camera or a video camera, the 3D depth information is lost, because such a projection is a non-unique, many-to-one transformation. In other words, the depth of an image point cannot be determined from the projected point itself. To obtain a complete or approximate reproduction of the 3D representation, the depth information must be recovered or generated for use in image enhancement, image restoration, image synthesis, or image display.

Texture represents the property of an object's surface and is composed of texture primitives or elements (texels). Texture measurements can distinguish objects with fine surfaces from objects with rough surfaces, and 3D depth information can be generated accordingly. The texture gradient is such that the farther a surface lies from the viewer, the denser its texture appears. Conventionally, a 2D frequency-domain transform is applied to the original 2D image or to enlarged/reduced versions of it, and depth information is assigned according to the gradient direction of the texel density of these images. Because 2D frequency-domain transforms require complex computation and considerable time, they are difficult to use for real-time video analysis.

As the conventional methods described above fail to generate 3D depth information in real time, a system and method of generating 3D depth information that quickly reproduces or approximates the 3D representation is needed.

[Summary of the Invention]

In view of the above, it is an object of the present invention to provide a novel system and method of generating 3D depth information that quickly reproduces or approximates the 3D representation.

According to an embodiment of the present invention, a classification and segmentation unit segments a 2D image into a number of segments, such that pixels having similar characteristics are classified into the same segment. A spatial-domain texel density analysis unit analyzes the 2D image to obtain the texel density. In one embodiment, the spatial-domain texel density analysis unit is block-based: it divides the 2D image into a number of blocks and analyzes the blocks in sequence to determine their numbers of edges. A depth assignment unit then assigns depth information to the 2D image according to the analyzed texel density, so that the 3D representation can be reproduced or approximated in real time.

[Embodiments]

FIG. 1 shows a 3D depth information generation system 100 according to an embodiment of the present invention. To facilitate understanding, exemplary images, including the original image, intermediate images, and the resulting image, are also shown in the figure. FIG. 2 shows the flow of the method of generating 3D depth information according to an embodiment of the present invention.

The input device 10 provides or receives one or more 2D (planar) input images (step 20) for the image/video processing of this embodiment. The input device 10 may be an electro-optical device that projects three-dimensional objects onto the 2D image plane. In this embodiment, the input device 10 may be a camera that captures a single 2D image, or a video camera that captures a number of images. In another embodiment, the input device 10 may be a pre-processing device that performs one or more image processing tasks, such as image enhancement, image restoration, image analysis, image compression, or image synthesis. Moreover, the input device 10 may further include a storage device (for example, a semiconductor memory or a hard disk) for storing the images processed by the pre-processing device. As noted above, the 3D depth information is lost when a three-dimensional object is projected onto the 2D image plane; the following paragraphs therefore detail how the other blocks of the 3D depth information generation system 100 process the 2D image provided by the input device 10.

The color classification and segmentation unit 11 processes the 2D image and segments the entire image into a number of segments (step 21), such that pixels having similar characteristics (for example, similar color or luminance) are classified into the same segment. In this specification, the term "unit" may denote a circuit, a program, or a combination thereof. In one embodiment, the color classification and segmentation unit 11 segments the image according to color; that is, pixels having the same or similar colors are classified into the same segment. Prior knowledge 12 may also be provided to assist the color classification and segmentation unit 11 in classifying the colors (step 22). In general, the prior knowledge 12 provides the colors associated with individual subjects (for example, flowers, grass, people, or bricks). In the exemplary image accompanying the input device 10 in the figure, a yellow flower field and green grass are the two main subjects. The prior knowledge 12 may be generated by a processing unit (not shown) or provided by a user. Accordingly, in this embodiment, the color classification and segmentation unit 11 segments the image into two segments: the flower field and the grass.

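As an illustration of steps 21 and 22, the following minimal Python sketch classifies each pixel to its nearest prior-knowledge color prototype. It is only one way to realize the segmentation described above; the prototype colors, the Euclidean color distance, and the use of NumPy are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def segment_by_color(image: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """image: H x W x 3 RGB array; prototypes: K x 3 reference colors.
    Returns an H x W label map assigning each pixel to its nearest prototype."""
    pixels = image.reshape(-1, 3).astype(np.float32)    # (H*W, 3)
    protos = prototypes.astype(np.float32)              # (K, 3)
    dists = np.linalg.norm(pixels[:, None, :] - protos[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(image.shape[:2])

# Assumed prior-knowledge colors for the two subjects in the example image.
prototypes = np.array([[220, 200, 60],   # "yellow flower field"
                       [70, 140, 60]])   # "green grass"
image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in input
labels = segment_by_color(image, prototypes)   # 0 = flower field, 1 = grass
```
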
Next, the block-based spatial-domain texel density analysis unit 13 performs texel density analysis block by block to obtain the texel density (step 23). In this embodiment, the 2D image contains 512x512 pixels, and the entire image is divided into 64x64 blocks, each of which has 8x8 pixels. Because this embodiment analyzes the blocks in sequence in the spatial domain, it is fast enough for real-time analysis. In this embodiment, each block is analyzed to determine the number of edges within it. For example, the grass, which is far from the viewer, contains far more edges than the nearer flower field. In other words, the texel density of the grass blocks is higher than that of the flower-field blocks, which indicates that the grass is farther from the viewer. Although this embodiment determines the number of edges within each block, other spatial-domain texel density analysis methods may be used instead.

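Below is a minimal sketch of step 23 under the stated block geometry (a 512x512 image divided into a 64x64 grid of 8x8-pixel blocks), using the number of edge pixels per block as the texel density. The forward-difference gradient and the fixed threshold are assumptions; the patent requires only that each block be analyzed in sequence for its number of edges, so any edge detector could be substituted.

```python
import numpy as np

def block_texel_density(gray: np.ndarray, block: int = 8,
                        thresh: float = 30.0) -> np.ndarray:
    """gray: H x W grayscale image with H and W divisible by `block`.
    Returns an (H//block) x (W//block) map of per-block edge-pixel counts."""
    g = gray.astype(np.float32)
    # Forward-difference gradient magnitude (a stand-in for any edge detector).
    gx = np.abs(np.diff(g, axis=1, append=g[:, -1:]))
    gy = np.abs(np.diff(g, axis=0, append=g[-1:, :]))
    edges = np.hypot(gx, gy) > thresh
    h, w = gray.shape
    # Sum edge pixels block by block, sequentially in the spatial domain.
    return edges.reshape(h // block, block, w // block, block).sum(axis=(1, 3))

gray = np.random.randint(0, 256, (512, 512)).astype(np.uint8)  # stand-in image
density = block_texel_density(gray)   # 64 x 64 map; larger count = denser texture
```
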
Thereafter, the depth assignment unit 14 assigns depth information to each block (step 24) according to prior knowledge 15 (step 25). In the exemplary embodiment, blocks with lower texel density (that is, the flower field) are assigned smaller depth values, and blocks with higher texel density (that is, the grass) are assigned larger depth values. For this exemplary embodiment, the prior knowledge 15 provides a smaller depth level (near the viewer) for the low-density blocks (the flower field) and a larger depth level (far from the viewer) for the high-density blocks (the grass). In another embodiment, a smaller depth level is provided for the bottom blocks and a larger depth level for the top blocks. Like the prior knowledge 12, the prior knowledge 15 may be generated by a processing unit or provided by a user.

In addition to the depth levels, the prior knowledge 15 may also provide an individual depth range for each block. In general, the prior knowledge 15 provides a larger depth range for blocks nearer the viewer and a smaller depth range for blocks farther from the viewer. In the exemplary embodiment, the prior knowledge 15 provides a larger depth range for the (nearer) flower field, so that the depth variation of the flower field is greater than that of the grass.

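A minimal sketch of steps 24 and 25 follows, assuming a linear mapping from normalized block density to an 8-bit depth level (low density maps near the viewer, high density far away). The min-max normalization and the 0-255 depth scale are assumptions; the patent specifies only the ordering of the depth levels and leaves the levels and ranges to the prior knowledge 15.

```python
import numpy as np

def assign_depth(density: np.ndarray, near_level: float = 0.0,
                 far_level: float = 255.0) -> np.ndarray:
    """density: per-block texel density map. Returns a per-block depth map in
    which small values mean near the viewer and large values mean far away."""
    d = density.astype(np.float32)
    span = d.max() - d.min()
    norm = (d - d.min()) / span if span > 0 else np.zeros_like(d)
    return near_level + norm * (far_level - near_level)

density = np.random.rand(64, 64)   # stand-in for the 64 x 64 block density map
depth = assign_depth(density)      # sparse texture -> near, dense texture -> far
```
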
The output device 16 receives the 3D depth information from the depth assignment unit 14 and generates an output image (step 26). In one embodiment, the output device 16 may be a display device for displaying or viewing the received depth information. In another embodiment, the output device 16 may be a storage device, such as a semiconductor memory or a hard disk, for storing the received depth information. Moreover, the output device 16 may further include a post-processing device that performs one or more kinds of image processing, such as image enhancement, image restoration, image analysis, image compression, or image synthesis.

According to the embodiments of the present invention described above, and in comparison with the conventional methods of generating 3D depth information discussed in the prior art section, the present invention quickly reproduces or approximates the 3D representation.

The above descriptions are merely preferred embodiments of the present invention and are not intended to limit the scope of the claims of the present invention; all equivalent changes or modifications made without departing from the spirit disclosed by the invention should be included in the scope of the appended claims.

[Brief Description of the Drawings]

FIG. 1 shows a 3D depth information generation system according to an embodiment of the present invention.
FIG. 2 shows the flow of a method of generating 3D depth information according to an embodiment of the present invention.

[Description of Reference Numerals]

100 3D depth information generation system
10 input device
11 color classification and segmentation unit
12 prior knowledge
13 block-based spatial-domain texel density analysis unit
14 depth assignment unit
15 prior knowledge
16 output device
20-26 steps of the method of the embodiment

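To show how the stages described above might compose end to end, the following hypothetical driver chains the three sketches together; it assumes segment_by_color, block_texel_density, and assign_depth from the preceding sketches are in scope, and it runs on synthetic data.

```python
import numpy as np

rgb = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in input
prototypes = np.array([[220, 200, 60], [70, 140, 60]])          # assumed subject colors

labels = segment_by_color(rgb, prototypes)   # steps 21-22: segmentation
gray = rgb.mean(axis=2)                      # crude luminance for edge analysis
density = block_texel_density(gray)          # step 23: per-block edge counts
depth = assign_depth(density)                # steps 24-25: per-block depth
# labels could select per-segment depth levels and ranges from prior knowledge 15.
# depth is a 64x64 per-block map; an output stage (step 26) could upsample it
# to 512x512 for display or storage.
```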


Claims (24)

X. Claims:

1. A system of generating three-dimensional (3D) depth information, comprising:
a classification and segmentation unit, which segments a two-dimensional (2D) image into a plurality of segments, such that pixels having similar characteristics are classified into the same segment;
a spatial-domain texel density analysis unit, which analyzes the 2D image to obtain texel density; and
a depth assignment unit, which assigns depth information to the 2D image according to the analyzed texel density.

2. The system of generating 3D depth information of claim 1, wherein the 2D image is classified and segmented according to color.

3. The system of generating 3D depth information of claim 1, wherein the 2D image is classified and segmented according to luminance.

4. The system of generating 3D depth information of claim 1, further comprising prior knowledge that provides individual colors or luminance to the classification and segmentation unit.

5. The system of generating 3D depth information of claim 1, wherein the spatial-domain texel density analysis unit is block-based, dividing the 2D image into a plurality of blocks and analyzing the texel density of the blocks in sequence.

6. The system of generating 3D depth information of claim 5, wherein each of the blocks is analyzed to determine its number of edges.

7. The system of generating 3D depth information of claim 1, further comprising prior knowledge that provides a smaller depth level for low-density blocks and a larger depth level for high-density blocks.
8. The system of generating 3D depth information of claim 1, further comprising prior knowledge that provides a smaller depth level for bottom blocks and a larger depth level for top blocks.

9. The system of generating 3D depth information of claim 1, further comprising an input device that projects a three-dimensional object onto a two-dimensional image plane.

10. The system of generating 3D depth information of claim 9, wherein the input device further stores the 2D image.

11. The system of generating 3D depth information of claim 1, further comprising an output device that receives the depth information.

12. The system of generating 3D depth information of claim 11, wherein the output device further stores or displays the depth information.

13. A method of generating three-dimensional (3D) depth information, comprising:
segmenting a two-dimensional (2D) image into a plurality of segments, such that pixels having similar characteristics are classified into the same segment;
analyzing the 2D image to obtain texel density; and
assigning depth information to the 2D image according to the analyzed texel density.

14. The method of generating 3D depth information of claim 13, wherein the 2D image is classified and segmented according to color.

15. The method of generating 3D depth information of claim 13, wherein the 2D image is classified and segmented according to luminance.

16. The method of generating 3D depth information of claim 13, further comprising receiving prior knowledge that provides individual colors or luminance in the segmenting step.

17. The method of generating 3D depth information of claim 13, wherein the texel density analysis is block-based, dividing the 2D image into a plurality of blocks and analyzing the texel density of the blocks in sequence.

18. The method of generating 3D depth information of claim 17, wherein each of the blocks is analyzed to determine its number of edges.

19. The method of generating 3D depth information of claim 13, further comprising receiving prior knowledge that, in the depth information assigning step, provides a smaller depth level for low-density blocks and a larger depth level for high-density blocks.

20. The method of generating 3D depth information of claim 13, further comprising receiving prior knowledge that, in the depth information assigning step, provides a smaller depth level for bottom blocks and a larger depth level for top blocks.

21. The method of generating 3D depth information of claim 13, further comprising a step of projecting a three-dimensional object onto a two-dimensional image plane.

22. The method of generating 3D depth information of claim 21, further comprising a step of storing the 2D image.

23. The method of generating 3D depth information of claim 13, further comprising a step of receiving the depth information.
24. The method of generating 3D depth information of claim 23, further comprising a step of storing or displaying the depth information.
TW97138308A 2008-10-03 2008-10-03 3D depth generation by block-based texel density analysis TW201015493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW97138308A TW201015493A (en) 2008-10-03 2008-10-03 3D depth generation by block-based texel density analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW97138308A TW201015493A (en) 2008-10-03 2008-10-03 3D depth generation by block-based texel density analysis

Publications (1)

Publication Number Publication Date
TW201015493A 2010-04-16

Family

ID=44830066

Family Applications (1)

Application Number Title Priority Date Filing Date
TW97138308A TW201015493A (en) 2008-10-03 2008-10-03 3D depth generation by block-based texel density analysis

Country Status (1)

Country Link
TW (1) TW201015493A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI498852B (en) * 2012-05-24 2015-09-01 Silicon Integrated Sys Corp Device and method of depth map generation
US9958267B2 (en) 2015-12-21 2018-05-01 Industrial Technology Research Institute Apparatus and method for dual mode depth measurement


Similar Documents

Publication Publication Date Title
EP3139589B1 (en) Image processing apparatus and image processing method
TWI536318B (en) Depth measurement quality enhancement
TWI524734B (en) Method and device for generating a depth map
JP4896230B2 (en) System and method of object model fitting and registration for transforming from 2D to 3D
US11348267B2 (en) Method and apparatus for generating a three-dimensional model
KR101168384B1 (en) Method of generating a depth map, depth map generating unit, image processing apparatus and computer program product
CN107967707B (en) Apparatus and method for processing image
TWI764959B (en) Apparatus and method for generating a light intensity image
JP4199170B2 (en) High-dimensional texture mapping apparatus, method and program
CN109906600B (en) Simulated depth of field
JP6541920B1 (en) INFORMATION PROCESSING APPARATUS, PROGRAM, AND INFORMATION PROCESSING METHOD
WO2019167453A1 (en) Image processing device, image processing method, and program
JP2016537901A (en) Light field processing method
TWI502546B (en) System, method, and computer program product for extruding a model through a two-dimensional scene
US20100079448A1 (en) 3D Depth Generation by Block-based Texel Density Analysis
US7907147B2 (en) Texture filtering apparatus, texture mapping apparatus, and method and program therefor
TW201237803A (en) Algorithm for compensating hollows generated after conversion of 2D images
JP2022518402A (en) 3D reconstruction method and equipment
TW201015493A (en) 3D depth generation by block-based texel density analysis
Nguyen et al. High-definition texture reconstruction for 3D image-based modeling
JP2009211561A (en) Depth data generator and depth data generation method, and program thereof
KR101849696B1 (en) Method and apparatus for obtaining informaiton of lighting and material in image modeling system
TW201528774A (en) Apparatus and method for creating 3D scene
JP2014164497A (en) Information processor, image processing method and program
TWI768231B (en) Information processing device, recording medium, program product, and information processing method