201015487

VII. Designated representative drawing:
(1) The designated representative drawing of this application is: Figure 1.
(2) Brief description of the component symbols of the representative drawing:
100 stereoscopic depth information generation system
10 input device
11 image segmentation unit
12 local blurriness estimation unit
13 depth assignment unit
14 output device

VIII. If the application contains a chemical formula, indicate the chemical formula that best characterizes the invention: (none)

IX. Description of the Invention:

[Technical Field of the Invention]
The present invention relates to the generation of stereoscopic (3D) depth information, and particularly to techniques that estimate local blurriness in order to generate stereoscopic depth information.

[Prior Art]
When a three-dimensional object is projected onto a two-dimensional image plane by a still or video camera, the projection is a non-unique, many-to-one mapping, and the stereoscopic depth information is therefore lost. In other words, the depth of a point cannot be determined from the projected image. To obtain a complete or approximate stereoscopic reproduction, this depth information must be recovered or generated; it can then be used for image enhancement, image restoration, image synthesis, or image display.
A camera uses a lens to converge parallel incident rays onto the focal point on its optical axis; the distance from the lens to the focal point is called the focal length. If the rays coming from an object converge well, the two-dimensional image of the object is said to be in focus; if they do not converge well, the image of the object is out of focus. An out-of-focus object appears blurred in the image, and the degree of blur is proportional to its distance, or depth. Measured blurriness can therefore be used to generate stereoscopic depth information.
One traditional method of generating stereoscopic depth information analyzes the blurriness of the same regions across multiple images of the same scene captured at different focus settings (or distances). From these differing degrees of blur and the corresponding distances, the stereoscopic depth information of the image can be derived.

Another traditional method applies a two-dimensional frequency-domain transform or high-pass filtering to individual regions of a single image; the resulting high-frequency strength represents the degree of blur of each region, from which the stereoscopic depth information of the whole image can be obtained. The drawback of this method is that when the objects in the image differ in color, are similar in brightness, or lack distinctive texture, it is difficult to distinguish their relative degrees of blur.

Since the traditional methods above fail to generate stereoscopic depth information either faithfully or simply, a system and method for generating stereoscopic depth information are needed that reproduce or approximate a stereoscopic representation both faithfully and simply.

[Summary of the Invention]
In view of the above, one object of the present invention is to provide a novel system and method for generating stereoscopic depth information that faithfully and simply reproduce or approximate a stereoscopic representation.

According to an embodiment, the present invention provides a system and method for generating stereoscopic depth information. A local blurriness estimation unit, independent of color and object, analyzes the degree of blur of every pixel of a two-dimensional image.
The depth assignment unit then assigns depth information to the two-dimensional image according to the degree of blur. In one embodiment, the local blurriness estimation unit includes a filter that produces a high-order statistic (HOS) representing the degree of blur. The filter is applied three times, to the red, green, and blue pixels respectively, to obtain the corresponding statistics; the largest statistic among the three colors then acts as the leading performer in the depth assignment.

[Embodiments]
Figure 1 shows a stereoscopic (3D) depth information generation system 100 according to an embodiment of the present invention. To facilitate understanding of the invention, exemplary images, including the original image, intermediate images, and the resulting image, are also shown in the drawing. Figure 2 shows the flow steps of the stereoscopic depth information generation method of the embodiment.

The input device 10 provides or receives one or more two-dimensional (planar) input images (step 20) for the image/video processing of this embodiment. The input device 10 may be an optoelectronic device that projects a three-dimensional object onto a two-dimensional image plane. In this embodiment, the input device 10 may be a still camera used to capture a two-dimensional image, or a video camera used to capture multiple images. In another embodiment, the input device 10 may be a pre-processing device that performs one or more image processing tasks, such as image enhancement, image restoration, image analysis, image compression, or image synthesis. Furthermore, the input device 10 may further include a storage device (for example, a semiconductor memory or a hard disk) for storing the images processed by the pre-processing device.
As described above, stereoscopic depth information is lost when a three-dimensional object is projected onto a two-dimensional image plane. The following therefore details how the other blocks of the system 100 of this embodiment process the input two-dimensional image.

The image segmentation unit 11 segments the whole image into several regions (or pixel sets) (step 21). In this specification, the term "unit" denotes a circuit, a program, or a combination of both. The purpose of segmentation is to make the subsequent processing simpler. In this embodiment, the image segmentation unit 11 uses conventional image processing techniques to distinguish the individual regions.

The local blurriness estimation unit 12, independently of color and object, analyzes the degree of blur of every pixel of the image provided by the input device 10 (step 22). In this embodiment, the unit includes a filter for distinguishing degrees of blur. The algorithm below illustrates a preferred filter. The mean luminance of the neighborhood of a pixel is

m_c(x, y) = (1/N) · Σ_{(s,t)∈η(x,y)} I_color(s, t)

where I_color denotes the red (I_red), green (I_green), or blue (I_blue) luminance of a pixel; η(x, y) denotes the set of pixels adjacent to the pixel (x, y); N denotes the total number of pixels in that set; and m_c accordingly denotes the red, green, or blue average.

In this embodiment, a high-order statistic (HOS), namely a high-order central moment, is then obtained to estimate the degree of blur:

HOS_c(x, y) = (1/N) · Σ_{(s,t)∈η(x,y)} [I_color(s, t) − m_c(x, y)]^4

In this specification, the term "high-order" means an order greater than two. Although this embodiment uses a high-order (particularly a fourth-order) statistic, a second-order statistic may also be used in other embodiments. The HOS obtained above can be used to estimate the degree of blur; in other words, a larger HOS indicates that the corresponding region is nearer to the viewer, while a smaller HOS indicates that the corresponding region is farther from the viewer.
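As a concrete illustration of the neighborhood-moment filter above, the Python sketch below computes the window mean m_c and the fourth-order central moment for one colour plane, and picks the colour channel whose statistic is largest. This is a minimal sketch, not code from the patent: the square window, its radius, and the 0-255 value range are illustrative assumptions, and the function names are invented for this example.

```python
import numpy as np

def local_hos(channel, radius=2):
    """Fourth-order central moment of each pixel's neighborhood.

    channel: 2-D float array (one colour plane, e.g. values in 0..255).
    radius:  half-width of the assumed square window eta(x, y).
    Larger values indicate sharper (more in-focus, nearer) regions.
    """
    h, w = channel.shape
    pad = np.pad(channel, radius, mode="edge")   # replicate borders
    n = (2 * radius + 1) ** 2                    # N: pixels in the window
    hos = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            m = win.sum() / n                    # m_c(x, y): window mean
            hos[y, x] = ((win - m) ** 4).sum() / n   # 4th central moment
    return hos

def leading_channel(rgb):
    """Run the filter once per colour plane; the plane with the largest
    statistic leads the later depth assignment, as in the embodiment."""
    maps = [local_hos(rgb[..., c].astype(float)) for c in range(3)]
    lead = int(np.argmax([m.max() for m in maps]))
    return maps[lead], lead
```

A brute-force double loop is used for clarity; a practical implementation would vectorize the window sums (e.g. with summed-area tables or a box filter).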
In this embodiment, the above filter is applied three times, to the red, green, and blue pixels respectively, to obtain the corresponding HOS values. The largest of the red, green, and blue HOS values acts as the leading performer in the depth assignment; for example, if the HOS of the red channel is the largest, the subsequent depth assignment is performed entirely on the red channel.

In another embodiment, an absolute HOS is obtained from the absolute values of the statistics; it is generally more accurate than the normal HOS.

The segmentation information obtained from the image segmentation unit 11 and the degrees of blur estimated by the local blurriness estimation unit 12 are fed to the depth assignment unit 13, which assigns depth information to each region (or segment) (step 23). In general, the depth information of each region is assigned in a different manner, although two or more regions may also share the same manner of assignment. In addition, the depth assignment unit 13 may also assign depth information to the pixels within a region according to prior knowledge and the estimated degrees of blur. In general, a pixel with a smaller degree of blur is assigned smaller depth information (that is, nearer to the viewer), while a pixel with a larger degree of blur is assigned larger depth information (that is, farther from the viewer).

The output device 14 receives the stereoscopic depth information from the depth assignment unit 13 and generates the output image (step 24). In one embodiment, the output device 14 may be a display device for displaying or viewing the received depth information. In another embodiment, the output device 14 may be a storage device, such as a semiconductor memory or a hard disk, for storing the received depth information.
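The region-wise assignment of step 23 can be sketched as follows. This is a minimal illustration under assumed conventions (an integer label map from the segmentation step, 8-bit-style depth values, and a larger statistic meaning a sharper and therefore nearer region); the function and parameter names are invented for this example and are not taken from the patent.

```python
import numpy as np

def assign_depth(hos_map, labels, max_depth=255):
    """Give every segmented region one depth value from its mean HOS.

    hos_map: per-pixel blurriness statistic (larger = sharper = nearer).
    labels:  integer region map produced by the segmentation step.
    Returns a depth map where small depth means near the viewer.
    """
    depth = np.zeros_like(hos_map, dtype=float)
    # One sharpness score per region: the mean statistic over its pixels.
    scores = {r: hos_map[labels == r].mean() for r in np.unique(labels)}
    top = max(scores.values()) or 1.0    # guard against an all-zero map
    for r, s in scores.items():
        # Sharper region -> smaller depth (closer to the viewer).
        depth[labels == r] = max_depth * (1.0 - s / top)
    return depth
```

Per-pixel refinement within a region (using prior knowledge, as the text describes) could then perturb these region-level values; that step is omitted here.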
Furthermore, the output device 14 may be a post-processing device that performs one or more kinds of image processing, such as image enhancement, image restoration, image analysis, image compression, or image synthesis.
Compared with the traditional methods of generating stereoscopic depth information described in the prior art section, the embodiments of the present invention can reproduce or approximate a stereoscopic representation more faithfully and more simply.

The above description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the claims of the present invention; all other equivalent changes and modifications that do not depart from the spirit disclosed by the invention shall be included within the scope of the following claims.

[Brief Description of the Drawings]
Figure 1 shows the stereoscopic depth information generation system of an embodiment of the present invention.
Figure 2 shows the flow steps of the stereoscopic depth information generation method of an embodiment of the present invention.

[Description of Main Component Symbols]
100 stereoscopic depth information generation system
10 input device
11 image segmentation unit
12 local blurriness estimation unit
13 depth assignment unit
14 output device
20-24 flow steps of the embodiment