201015493
IX. Description of the Invention
[Technical Field]
The present invention relates to the generation of three-dimensional (3D) depth information.
In particular, the present invention relates to block-based texel density analysis for generating 3D depth information.

[Prior Art]

When a three-dimensional object is projected onto a two-dimensional image plane by a still or video camera, the stereoscopic depth information is lost, because such a projection is a non-unique, many-to-one mapping. In other words, the depth of a projected image point cannot be determined from the point itself. To obtain a complete or approximate reproduction of a stereoscopic representation, this depth information must be recovered or generated, for use in image enhancement, image restoration, image synthesis, or image display.

Texture characterizes the surface of an object and is composed of texture primitives (texels). Texture measurements distinguish objects with fine surfaces from those with coarse surfaces, and stereoscopic depth information can be generated accordingly.
A texture gradient causes the texture of a surface to appear denser the farther the surface is from the viewer. Conventionally, a two-dimensional frequency-domain transform is applied to the original two-dimensional image or to its enlarged/reduced versions, and stereoscopic depth information is assigned according to the gradient direction of the resulting texture density. Because the two-dimensional frequency-domain transform requires complex computation and considerable time, it is difficult to use for real-time video analysis.

Since the conventional methods described above cannot generate stereoscopic depth information in real time, there is a need for a system and method of generating stereoscopic depth information that can quickly reproduce or approximate a stereoscopic representation.

[Summary of the Invention]

In view of the above, one object of the present invention is to provide a novel system and method of generating stereoscopic depth information that quickly reproduce or approximate a stereoscopic representation.

According to an embodiment of the present invention, a system and method of generating stereoscopic depth information are provided. A classification and segmentation unit divides a two-dimensional image into a plurality of segments, such that pixels with similar characteristics are classified into the same segment. A spatial-domain texel density analysis unit analyzes the two-dimensional image to obtain its texel density. In one embodiment, the spatial-domain texel density analysis unit is block-based: it divides the two-dimensional image into a plurality of blocks and analyzes the blocks in sequence to determine the number of edges in each. A depth assignment unit assigns depth information to the two-dimensional image according to the analyzed texel density, whereby a stereoscopic representation can be reproduced or approximated in real time.

[Embodiment]

The first figure shows a stereoscopic (3D) depth information generation system 100 according to an embodiment of the present invention.
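Before turning to the details of the embodiment, the conventional frequency-domain texture measure criticized in the prior-art section can be sketched roughly as follows. This is a minimal illustration, not the method of the invention: the per-block FFT, the use of non-DC spectral energy as the density score, and all names are assumptions made for the sketch.

```python
import numpy as np

def freq_domain_density(gray, block=8):
    """Conventional approach (prior art): take a 2D frequency transform of
    each block and use its non-DC spectral energy as a texture-density score.
    One FFT per block is far more work than simply counting edges, which is
    why the patent deems this approach unsuited to real-time video."""
    h, w = gray.shape
    scores = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            b = gray[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(float)
            spectrum = np.abs(np.fft.fft2(b))
            spectrum[0, 0] = 0.0             # drop the DC term (mean intensity)
            scores[by, bx] = spectrum.sum()  # remaining energy ~ texture density
    return scores

# Two 8x8 blocks: a flat block (no texture) and a fine checkerboard (dense texture).
flat = np.full((8, 8), 128, dtype=np.uint8)
textured = (255 * (np.add.outer(np.arange(8), np.arange(8)) % 2)).astype(np.uint8)
img = np.hstack([flat, textured])
print(freq_domain_density(img))  # the textured block scores much higher
```

Even in this toy form, each block costs an FFT; the embodiment below avoids that by staying in the spatial domain.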
For ease of understanding, exemplary images, including the original image, intermediate images, and the resulting image, are also shown in the drawings. The second figure shows the flow of the stereoscopic depth information generation method according to an embodiment of the present invention.

The input device 10 provides or receives one or more two-dimensional (planar) input images (step 20) for the image/video processing of the present embodiment. The input device 10 may be an optoelectronic device that projects a three-dimensional object onto a two-dimensional image plane. In the present embodiment, the input device 10 may be a still camera for capturing a two-dimensional image, or a video camera for capturing multiple images. In another embodiment, the input device 10 may be a pre-processing device that performs one or more image processing operations, such as image enhancement, image restoration, image analysis, image compression, or image synthesis. Furthermore, the input device 10 may further include a storage device (such as a semiconductor memory or a hard disk) for storing the images processed by the pre-processing device. As described above, the stereoscopic depth information is lost when a three-dimensional object is projected onto a two-dimensional image plane; how the other blocks of the stereoscopic depth information generation system 100 of this embodiment process the two-dimensional images provided by the input device 10 is therefore described in detail below.

The color classification and segmentation unit 11 processes the two-dimensional image and divides the whole image into a plurality of segments (step 21), such that pixels with similar characteristics (for example, similar color or luminance) are classified into the same segment. In this specification, the term "unit" may refer to a circuit, a program, or a combination thereof.
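A minimal sketch of how the color classification of steps 21-22 might be realized. The reference-color table standing in for the prior knowledge 12, the nearest-color classification rule, and all names are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

# Hypothetical prior-knowledge table (step 22): subject name -> typical RGB color.
# The subjects mirror the patent's example; the RGB values are assumed.
PRIOR_COLORS = {
    "flower_field": (220, 200, 40),   # yellowish
    "grass": (40, 160, 60),           # greenish
}

def segment_by_color(image, prior_colors=PRIOR_COLORS):
    """Classify each pixel into the segment whose reference color is nearest
    (squared Euclidean distance in RGB), as in step 21 of the embodiment."""
    names = list(prior_colors)
    refs = np.array([prior_colors[n] for n in names], dtype=float)   # (K, 3)
    pixels = image.reshape(-1, 3).astype(float)                      # (N, 3)
    # Distance from every pixel to every reference color.
    dists = ((pixels[:, None, :] - refs[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1).reshape(image.shape[:2])
    return labels, names

# Tiny synthetic 2x4 image: left half yellowish flowers, right half greenish grass.
img = np.zeros((2, 4, 3), dtype=np.uint8)
img[:, :2] = (210, 190, 50)
img[:, 2:] = (50, 150, 70)
labels, names = segment_by_color(img)
print([names[i] for i in labels[0]])  # → ['flower_field', 'flower_field', 'grass', 'grass']
```

The nearest-color rule also naturally groups pixels of similar (not only identical) colors, matching the "same or similar colors" criterion of the text.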
In one embodiment, the color classification and segmentation unit 11 segments the image according to color; that is, pixels with the same or similar colors are classified into the same segment. Prior knowledge 12 may also be provided to assist the color classification and segmentation unit 11 in classifying colors (step 22). In general, the prior knowledge 12 provides the colors associated with individual subjects (for example, flower fields, grass, people, or bricks). In the accompanying image of the input device 10 in the drawings, for example, a yellow flower field and green grass are the two main subjects. The prior knowledge 12 may be generated by a processing unit (not shown in the drawings) or provided by a user. In this embodiment, the color classification and segmentation unit 11 therefore divides the image into two segments: the flower field and the grass.

Next, the block-based spatial-domain texel density analysis unit 13 analyzes each segment to obtain its texel density (step 23). In this embodiment, the two-dimensional image contains 512x512 pixels and is divided into 64x64 blocks, each block having 8x8 pixels. Because this embodiment analyzes the blocks sequentially in the spatial domain, it is fast enough for real-time analysis. In this embodiment, each block is analyzed to determine the number of edges within it. For example, the grass, being farther from the viewer, has far more edges than the nearer flower field. In other words, the texel density of the grass segment is higher than that of the flower-field segment, which indicates that the grass is farther from the viewer. Although this embodiment determines the number of edges within each block, other spatial-domain texel density analysis methods may also be used.
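The block-based edge counting of step 23 can be sketched as follows. The 8x8 block size comes from the embodiment, but the patent does not specify an edge detector, so the simple intensity-difference test and its threshold are assumptions; the toy image is shrunk to 8x16 so the example runs instantly.

```python
import numpy as np

def block_edge_counts(gray, block=8, thresh=32):
    """Block-based spatial-domain texel density analysis (step 23): scan the
    blocks in sequence and count, per block, the adjacent-pixel pairs whose
    intensity difference exceeds `thresh` (a simple stand-in for an edge
    detector). No frequency transform is needed, so one pass suffices."""
    h, w = gray.shape
    counts = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            b = gray[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(int)
            horiz = np.abs(np.diff(b, axis=1)) > thresh  # horizontal neighbor pairs
            vert = np.abs(np.diff(b, axis=0)) > thresh   # vertical neighbor pairs
            counts[by, bx] = int(horiz.sum() + vert.sum())
    return counts

# 8x16 test image: left block is flat (like the nearby flower field in the
# example), right block is a fine checkerboard (like the dense distant grass).
gray = np.zeros((8, 16), dtype=np.uint8)
gray[:, 8:] = 255 * (np.add.outer(np.arange(8), np.arange(8)) % 2)
counts = block_edge_counts(gray)
print(counts)  # the textured block yields a much higher edge count
```

A higher count marks a denser texture, and hence, per the embodiment, a region farther from the viewer.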
Thereafter, the depth assignment unit 14 assigns depth information to each segment (step 24) according to prior knowledge 15 (step 25). In the illustrated embodiment, the segment with the smaller texel density (i.e., the flower field) is assigned a smaller depth value, and the segment with the larger texel density (i.e., the grass) is assigned a larger depth value. For this illustrated embodiment, the prior knowledge 15 provides a smaller depth level (closer to the viewer) for the low-density segment (i.e., the flower field) and a larger depth level (farther from the viewer) for the high-density segment (i.e., the grass). In another embodiment, a smaller depth level is provided for segments at the bottom of the image and a larger depth level for segments at the top. Like the prior knowledge 12, the prior knowledge 15 may be generated by a processing unit or provided by a user.

In addition to depth levels, the prior knowledge 15 may also provide an individual depth range for each segment. In general, the prior knowledge 15 provides a larger depth range for segments closer to the viewer and a smaller depth range for segments farther away. In the illustrated embodiment, the prior knowledge 15 provides a larger depth range for the (nearer) flower field, so that the depth variation of the flower field is greater than that of the grass.

The output device 16 receives the stereoscopic depth information from the depth assignment unit 14 and produces an output image (step 26). In one embodiment, the output device 16 may be a display device for displaying or viewing the received depth information. In another embodiment, the output device 16 may be a storage device, such as a semiconductor memory or a hard disk, for storing the received depth information. Furthermore, the output device 16 may further include a post-processing device for performing one or more kinds of image processing, such as image enhancement, image restoration, image analysis, image compression, or image synthesis.
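The depth assignment of steps 24-25 can be illustrated with a small sketch. The rank-by-density rule, the numeric depth levels, and the depth ranges below are assumptions made for illustration; the patent only states the ordering (sparser texture is nearer, and nearer segments get wider depth ranges).

```python
def assign_depths(segment_density, levels, ranges):
    """Depth assignment (steps 24-25): segments are ranked by texel density;
    the sparser (nearer) segment receives the smaller depth level, and the
    prior knowledge supplies a per-rank depth range, wider for near segments
    so that nearer regions show more depth variation."""
    ranked = sorted(segment_density, key=segment_density.get)  # near -> far
    depths = {}
    for rank, name in enumerate(ranked):
        base, spread = levels[rank], ranges[rank]
        depths[name] = (base - spread / 2, base + spread / 2)  # (min, max) depth
    return depths

density = {"grass": 90, "flower_field": 10}  # measured edges per block (assumed)
levels = [32, 224]   # hypothetical 8-bit depth levels, near to far
ranges = [48, 16]    # hypothetical depth ranges, wider for the near segment
print(assign_depths(density, levels, ranges))
# → {'flower_field': (8.0, 56.0), 'grass': (216.0, 232.0)}
```

The nearer flower field thus spans 48 depth units while the distant grass spans only 16, matching the depth-range behavior described above.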
According to the embodiments of the present invention described above, a stereoscopic representation can be reproduced or approximated quickly, in contrast to the conventional methods of generating stereoscopic depth information described in the prior art.

The above description covers only preferred embodiments of the present invention and is not intended to limit the scope of the claims of the present invention; all equivalent changes or modifications accomplished without departing from the spirit disclosed by the invention shall be included in the scope of the appended claims.

[Brief Description of the Drawings]
The first figure shows a stereoscopic depth information generation system according to an embodiment of the present invention.
The second figure shows the flow of a stereoscopic depth information generation method according to an embodiment of the present invention.

[Description of Main Element Symbols]
100 stereoscopic depth information generation system
10 input device
11 color classification and segmentation unit
12 prior knowledge
13 block-based spatial-domain texel density analysis unit
14 depth assignment unit
15 prior knowledge
16 output device
20-26 process steps of the embodiment