TW201015487A - 3D depth generation by local blurriness estimation - Google Patents

3D depth generation by local blurriness estimation

Info

Publication number
TW201015487A
TW201015487A
Authority
TW
Taiwan
Prior art keywords
depth information
generating
degree
information according
stereoscopic depth
Prior art date
Application number
TW97138278A
Other languages
Chinese (zh)
Other versions
TWI368183B (en)
Inventor
Liang-Gee Chen
Chao-Chung Cheng
Chung-Te Li
Ling-Hsiu Huang
Original Assignee
Himax Tech Ltd
Univ Nat Taiwan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Himax Tech Ltd, Univ Nat Taiwan filed Critical Himax Tech Ltd
Priority to TW097138278A priority Critical patent/TWI368183B/en
Publication of TW201015487A publication Critical patent/TW201015487A/en
Application granted granted Critical
Publication of TWI368183B publication Critical patent/TWI368183B/en

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A system for generating three-dimensional (3D) depth information is disclosed. A color- and object-independent local blurriness estimation unit analyzes the blurriness of each pixel of a two-dimensional (2D) image. A depth assignment unit then assigns depth information to the 2D image according to the blurriness.

Description

VII. Designated representative figure:
(1) The designated representative figure of this case is Figure 1.
(2) Brief description of the reference numerals in the representative figure:
100 3D depth information generation system
10 input device
11 image segmentation unit
12 local blurriness estimation unit
13 depth assignment unit
14 output device

VIII. If the case involves a chemical formula, please disclose the chemical formula that best characterizes the features of the invention: none.

IX. Description of the invention:

[Technical Field of the Invention]
The present invention relates to the generation of three-dimensional (3D) depth information, and particularly to techniques that estimate local blurriness to generate 3D depth information.

[Prior Art]
When three-dimensional objects are projected onto a two-dimensional (2D) image plane by a still or video camera, the 3D depth information is lost, because such a projection is a non-unique, many-to-one mapping. In other words, the depth of a point cannot be determined from the projected image. To obtain a complete or approximate reconstruction of the 3D representation, the depth information must be recovered or regenerated for use in image enhancement, image restoration, image synthesis, or image display.

A camera converges parallel incident rays through its lens onto a focal point on the optical axis. The distance from the lens to the focal point is called the focal length. If the rays coming from an object converge well, the 2D image of the object is said to be in focus; if they do not, the 2D image of the object is out of focus. An out-of-focus object appears blurred in the image, and the degree of blurriness is proportional to its distance, or depth. Measuring blurriness can therefore be used to generate 3D depth information.

One traditional method of generating 3D depth information analyzes the blurriness of the same region across multiple images of the same scene captured at different focus settings (distances). From these differing degrees of blurriness and the corresponding distances, the 3D depth information of the image can be derived.

Another traditional method performs a 2D frequency-domain transform, or high-pass filtering, on individual regions of a single image; the resulting high-frequency strength represents the region's degree of blurriness, from which the 3D depth information of the whole image can be derived. The disadvantage of this method is that when the objects in the image differ in color, are close in luminance, or lack distinct texture, the degrees of blurriness of the objects are difficult to tell apart.
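For illustration, below is a minimal sketch of such a single-image high-frequency measure, assuming a Laplacian as the high-pass filter and the mean absolute response as the region's score; the prior art described above fixes neither the filter nor the grading, so both are assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def highpass_blurriness(gray_region: np.ndarray) -> float:
    """Score one region of a single image by its high-frequency strength.

    A strong response suggests a sharp (near) region; a weak response
    suggests a blurred (far) region.
    """
    # Laplacian as an assumed high-pass filter; mean absolute response
    # stands in for the region's "high-frequency strength".
    return float(np.mean(np.abs(laplace(gray_region.astype(np.float64)))))
```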
In view of the fact that the above traditional methods fail to generate 3D depth information faithfully or simply, a system and method for generating 3D depth information are needed that faithfully and simply reproduce or approximate the 3D representation.

SUMMARY OF THE INVENTION

In view of the above, one object of the present invention is to provide a novel system and method for generating 3D depth information that faithfully and simply reproduce or approximate the 3D representation.

According to an embodiment, the present invention provides a system and method for generating 3D depth information. A color- and object-independent local blurriness estimation unit analyzes the blurriness of each pixel of a 2D image. A depth assignment unit then assigns depth information to the 2D image according to the blurriness. In one embodiment, the local blurriness estimation unit includes a filter that produces a high-order statistic (HOS) representing the degree of blurriness. The filter is applied three times, to the red, green, and blue pixels respectively, to obtain the corresponding statistics; the largest statistic among the three colors then serves as the leading performer for depth assignment.

DETAILED DESCRIPTION

Figure 1 shows a 3D depth information generation system 100 according to an embodiment of the present invention. To facilitate understanding of the invention, exemplary images, including the original image, intermediate images, and the resulting image, are also shown alongside the figure. Figure 2 shows the flow of the 3D depth information generation method of the embodiment.

An input device 10 provides or receives one or more 2D (planar) input images (step 20) for the image/video processing of the embodiment. The input device 10 may be an optoelectronic device that projects three-dimensional objects onto a 2D image plane. In this embodiment, the input device 10 may be a still camera that captures a 2D image, or a video camera that captures a number of images. In another embodiment, the input device 10 may be a pre-processing device that performs one or more image processing tasks, such as image enhancement, image restoration, image analysis, image compression, or image synthesis. Moreover, the input device 10 may further include a storage device (for example, a semiconductor memory or a hard disk) that stores the images processed by the pre-processing device. As mentioned above, the 3D depth information is lost when three-dimensional objects are projected onto the 2D image plane; the following therefore details how the other blocks of the system 100 process the input 2D image.

An image segmentation unit 11 segments the whole image into a number of regions (or pixel sets) (step 21). In this specification, the term "unit" denotes a circuit, a program, or a combination of both. The purpose of segmentation is to make the subsequent processing simpler and faster. In this embodiment, the segmentation unit 11 uses conventional image processing techniques to identify the regions, as sketched below.
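A minimal sketch of step 21 follows; the embodiment does not name a segmentation algorithm, so Felzenszwalb graph-based segmentation from scikit-image and its parameters are assumptions chosen for illustration.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def segment_regions(rgb_image: np.ndarray) -> np.ndarray:
    """Step 21: segment the whole image into regions (pixel sets).

    Returns an integer label map assigning one region label to each pixel.
    """
    # Any conventional segmentation technique works here; Felzenszwalb's
    # graph-based method is one common off-the-shelf choice.
    return felzenszwalb(rgb_image, scale=100, sigma=0.8, min_size=50)
```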
A color- and object-independent local blurriness estimation unit 12 analyzes the blurriness of each pixel of the 2D image provided by the input device 10 (step 22). A filter is used to discriminate the degree of blurriness. The algorithm below shows a preferred filter:

$$\hat{m}_C^{(4)}(x,y)=\frac{1}{N_\eta}\sum_{(s,t)\in\eta(x,y)}\bigl(I_{color}(s,t)-m_C(x,y)\bigr)^4$$

where $I_{color}$ denotes the red ($I_{red}$), green ($I_{green}$), or blue ($I_{blue}$) luminance of a pixel; $\eta(x,y)$ denotes the set of pixels neighboring the pixel $(x,y)$; $N_\eta$ denotes the total number of pixels in the set; and $m_C$ denotes the red mean ($m_R$), green mean ($m_G$), or blue mean ($m_B$), which may be expressed as

$$m_C(x,y)=\frac{1}{N_\eta}\sum_{(s,t)\in\eta(x,y)}I_{color}(s,t)$$

In this embodiment, a high-order statistic (HOS), or high-order central moment, is obtained to estimate the degree of blurriness. In this specification, the term "high order" refers to orders greater than two. Although this embodiment uses a high-order (specifically, fourth-order) statistic, a second-order statistic may be used in other embodiments. The HOS obtained above is used to estimate the degree of blurriness: a larger HOS indicates that the corresponding region is nearer to the viewer, while a smaller HOS indicates that the corresponding region is farther from the viewer.

In this embodiment, the filter above is applied three times, to the red, green, and blue pixels respectively, to obtain the corresponding HOS. The largest HOS among red, green, and blue serves as the leading performer for depth assignment. For example, if the HOS of the red channel is the largest, the subsequent depth assignment is performed entirely on the red channel.

In another embodiment, an absolute HOS is obtained from the absolute values of the statistics; it is usually more accurate than the normal HOS and is expressed in terms of

$$m'_C(x,y)=\frac{1}{N_\eta}\sum_{(s,t)\in\eta(x,y)}\bigl|I_{color}(s,t)-m_C(x,y)\bigr|$$

The segmentation information obtained from the image segmentation unit 11 and the blurriness estimated by the local blurriness estimation unit 12 are fed to a depth assignment unit 13, which assigns depth information to each region (or segment) (step 23). In general, the depth information of each region is assigned in its own manner, although two or more regions may be assigned in the same manner. Moreover, the depth assignment unit 13 may assign depth information to the pixels within a region according to prior knowledge and the blurriness estimate. In general, a pixel with less blurriness is assigned smaller depth information (that is, nearer to the viewer), while a pixel with more blurriness is assigned larger depth information (that is, farther from the viewer).

An output device 14 receives the 3D depth information from the depth assignment unit 13 and generates an output image (step 24). In one embodiment, the output device 14 may be a display device that displays the received depth information for viewing. In another embodiment, the output device 14 may be a storage device, such as a semiconductor memory or a hard disk, that stores the received depth information. Moreover, the output device 14 may be a post-processing device that performs one or more kinds of image processing, such as image enhancement, image restoration, image analysis, image compression, or image synthesis.
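A minimal sketch of steps 22 and 23 follows, assuming a square neighborhood η(x,y), box averaging for m_C, per-region depth taken as the mean HOS of the region's pixels, and the leading channel chosen as the one whose peak HOS is largest; the embodiment fixes none of these details. The fourth-order central moment is expanded into raw moments so that it can be computed with box filters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hos_map(channel: np.ndarray, size: int = 7) -> np.ndarray:
    """Fourth-order central moment over the neighborhood eta(x, y) (step 22)."""
    c = channel.astype(np.float64)
    box = lambda a: uniform_filter(a, size)      # (1/N_eta) * sum over eta(x, y)
    m1, m2, m3, m4 = box(c), box(c**2), box(c**3), box(c**4)
    # (1/N) * sum((I - m)^4) expanded into raw moments, with m = m1:
    return m4 - 4.0 * m1 * m3 + 6.0 * m1**2 * m2 - 3.0 * m1**4

def assign_depth(rgb: np.ndarray, labels: np.ndarray, size: int = 7) -> np.ndarray:
    """Steps 22-23: per-channel HOS, leading performer, per-region depth."""
    hos = [hos_map(rgb[..., k], size) for k in range(3)]   # R, G, B channels
    lead = int(np.argmax([h.max() for h in hos]))          # assumed leading-performer rule
    score = hos[lead]
    depth = np.zeros(score.shape)
    for r in np.unique(labels):                            # one depth value per region
        mask = labels == r
        depth[mask] = score[mask].mean()
    # A larger HOS means sharper and nearer, so invert: larger depth = farther.
    return depth.max() - depth
```

A call such as `assign_depth(img, segment_regions(img))` then yields a dense depth map from a single 2D image, with the hypothetical `segment_regions` helper from the sketch above standing in for the segmentation unit 11.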

Compared with the traditional methods of generating 3D depth information described in the prior art section, the embodiment of the present invention reproduces or approximates the 3D representation more faithfully and more simply.

The above description covers only preferred embodiments of the present invention and is not intended to limit the scope of the claims of the present invention; all equivalent changes and modifications that do not depart from the spirit disclosed by the invention shall fall within the scope of the following claims.

[Brief Description of the Drawings]
Figure 1 shows a 3D depth information generation system according to an embodiment of the present invention.
Figure 2 shows the flow of the 3D depth information generation method according to an embodiment of the present invention.

[Description of Main Reference Numerals]
100 3D depth information generation system
10 input device
11 image segmentation unit
12 local blurriness estimation unit
13 depth assignment unit
14 output device
20-24 steps of the method of the embodiment


Claims (24)

Patent application scope:

1. A system for generating 3D depth information, comprising:
a color- and object-independent local blurriness estimation unit that analyzes the blurriness of each pixel of a 2D image; and
a depth assignment unit that assigns depth information to the 2D image according to the blurriness.

2. The system for generating 3D depth information of claim 1, wherein the local blurriness estimation unit comprises a filter:

$$\hat{m}_C^{(n)}(x,y)=\frac{1}{N_\eta}\sum_{(s,t)\in\eta(x,y)}\bigl(I_{color}(s,t)-m_C(x,y)\bigr)^n$$

where $n$ denotes the order of the filter; $I_{color}$ denotes the red, green, or blue luminance of a pixel; $\eta(x,y)$ denotes the set of pixels neighboring the pixel $(x,y)$; $N_\eta$ denotes the total number of pixels in the set; and $m_C$ denotes the red, green, or blue mean, which may be expressed as $m_C(x,y)=\frac{1}{N_\eta}\sum_{(s,t)\in\eta(x,y)}I_{color}(s,t)$.

3. The system for generating 3D depth information of claim 2, wherein the result produced by the filter is an n-order statistic.

4. The system for generating 3D depth information of claim 3, wherein the value of n is 4.

5. The system for generating 3D depth information of claim 3, wherein the filter is applied three times, to the red, green, and blue pixels respectively, to obtain the corresponding statistics, and the largest statistic among the three colors serves as the leading performer for depth assignment.

6. The system for generating 3D depth information of claim 2, further comprising an absolute statistic computed from the absolute values of the deviations, in terms of $m'_C(x,y)=\frac{1}{N_\eta}\sum_{(s,t)\in\eta(x,y)}\left|I_{color}(s,t)-m_C(x,y)\right|$.

7. The system for generating 3D depth information of claim 1, further comprising a segmentation unit that segments the 2D image into a number of regions.

8. The system for generating 3D depth information of claim 1, wherein the depth assignment unit assigns smaller depth information to pixels with less blurriness and larger depth information to pixels with more blurriness.

9. The system for generating 3D depth information of claim 1, further comprising an input device that projects three-dimensional objects onto a 2D image plane.

10. The system for generating 3D depth information of claim 9, wherein the input device further stores the 2D image.

11. The system for generating 3D depth information of claim 1, further comprising an output device that receives the depth information.

12. The system for generating 3D depth information of claim 11, wherein the output device further stores or displays the depth information.

13. A method of generating 3D depth information, comprising:
analyzing the blurriness of each pixel of a 2D image; and
assigning depth information to the 2D image according to the blurriness.

14. The method of generating 3D depth information of claim 13, wherein the analysis of blurriness is performed by the following filter:

$$\hat{m}_C^{(n)}(x,y)=\frac{1}{N_\eta}\sum_{(s,t)\in\eta(x,y)}\bigl(I_{color}(s,t)-m_C(x,y)\bigr)^n$$

where $n$ denotes the order of the filter; $I_{color}$ denotes the red, green, or blue luminance of a pixel; $\eta(x,y)$ denotes the set of pixels neighboring the pixel $(x,y)$; $N_\eta$ denotes the total number of pixels in the set; and $m_C$ denotes the red, green, or blue mean, which may be expressed as $m_C(x,y)=\frac{1}{N_\eta}\sum_{(s,t)\in\eta(x,y)}I_{color}(s,t)$.

15. The method of generating 3D depth information of claim 14, wherein the result produced by the filter is an n-order statistic.

16. The method of generating 3D depth information of claim 15, wherein the value of n is 4.

17. The method of generating 3D depth information of claim 15, wherein the filter is applied three times, to the red, green, and blue pixels respectively, to obtain the corresponding statistics, and the largest statistic among the three colors serves as the leading performer for depth assignment.

18. The method of generating 3D depth information of claim 14, further comprising an absolute statistic computed from the absolute values of the deviations, in terms of $m'_C(x,y)=\frac{1}{N_\eta}\sum_{(s,t)\in\eta(x,y)}\left|I_{color}(s,t)-m_C(x,y)\right|$.

19. The method of generating 3D depth information of claim 13, further comprising a step of segmenting the 2D image into a number of regions.

20. The method of generating 3D depth information of claim 13, wherein the step of assigning depth information assigns smaller depth information to pixels with less blurriness and larger depth information to pixels with more blurriness.

21. The method of generating 3D depth information of claim 13, further comprising a step of projecting three-dimensional objects onto a 2D image plane.

22. The method of generating 3D depth information of claim 21, further comprising a step of storing the 2D image.

23. The method of generating 3D depth information of claim 13, further comprising a step of receiving the depth information.

24. The method of generating 3D depth information of claim 23, further comprising a step of storing or displaying the depth information.
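The full expression for the absolute statistic of claims 6 and 18 is not legible in the source; the sketch below implements only the recoverable term m'_C(x,y). The neighborhood size and the box-filter shortcut (each neighbor's deviation is taken from its own local mean rather than the window center's) are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def absolute_deviation_map(channel: np.ndarray, size: int = 7) -> np.ndarray:
    """m'_C(x, y): neighborhood mean of |I_color(s, t) - m_C(x, y)|."""
    c = channel.astype(np.float64)
    m = uniform_filter(c, size)        # m_C, the neighborhood mean
    # Shortcut: |c - m| pairs each neighbor with its own local mean; the
    # exact per-window form would subtract the window center's mean instead.
    return uniform_filter(np.abs(c - m), size)
```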
TW097138278A 2008-10-03 2008-10-03 3d depth generation by local blurriness estimation TWI368183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW097138278A TWI368183B (en) 2008-10-03 2008-10-03 3d depth generation by local blurriness estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW097138278A TWI368183B (en) 2008-10-03 2008-10-03 3d depth generation by local blurriness estimation

Publications (2)

Publication Number Publication Date
TW201015487A 2010-04-16
TWI368183B TWI368183B (en) 2012-07-11

Family

ID=44830062

Family Applications (1)

Application Number Title Priority Date Filing Date
TW097138278A TWI368183B (en) 2008-10-03 2008-10-03 3d depth generation by local blurriness estimation

Country Status (1)

Country Link
TW (1) TWI368183B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI471677B (en) * 2013-04-11 2015-02-01 Altek Semiconductor Corp Auto focus method and auto focus apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164868A (en) * 2011-12-09 2013-06-19 金耀有限公司 Method and device for generating image with depth-of-field (DOF) effect
CN103164868B (en) * 2011-12-09 2018-01-16 金耀有限公司 The method and apparatus for producing the image with Deep Canvas
TWI613903B (en) * 2016-07-11 2018-02-01 龍華科技大學 Apparatus and method for combining with wavelet transformer and edge detector to generate a depth map from a single image

Also Published As

Publication number Publication date
TWI368183B (en) 2012-07-11

Similar Documents

Publication Publication Date Title
EP1719079B1 (en) Creating a depth map
US20160307368A1 (en) Compression and interactive playback of light field pictures
CN107407554B (en) Simulating a multi-camera imaging system
CN102741879B (en) Method for generating depth maps from monocular images and systems using the same
KR101168384B1 (en) Method of generating a depth map, depth map generating unit, image processing apparatus and computer program product
EP3351001B1 (en) Method for encoding a light field content
KR101377733B1 (en) Up-scaling
WO2008086049A1 (en) Rendering 3d video images on a stereo-enabled display
JP2007520822A (en) Depth map generation
TW201208349A (en) Method and apparatus for converting 2D image into 3D image
US20100080481A1 (en) 3D Depth Generation by Local Blurriness Estimation
TW201015487A (en) 3D depth generation by local blurriness estimation
CN103096102B (en) Image deformation method
WO2022156451A1 (en) Rendering method and apparatus
US10529057B2 (en) Image processing apparatus and image processing method
CN103026387B (en) Method for generating multiple view picture from single image
CN115244570A (en) Merging split pixel data to obtain deeper depth of field
Kooima et al. A gpu sub-pixel algorithm for autostereoscopic virtual reality
Jin et al. A quaternion gradient operator for color image edge detection
US20180124321A1 (en) Image processing apparatus and image processing method
US20120147008A1 (en) Non-uniformly sampled 3d information representation method
Cheng et al. 51.3: An Ultra‐Low‐Cost 2‐D/3‐D Video‐Conversion System
US20180075583A1 (en) Image processing apparatus and image processing method
JP2013242784A (en) Moving image visibility quantification apparatus, and moving image visibility quantification method and program
TW201015493A (en) 3D depth generation by block-based texel density analysis