TW201044317A - Method of transforming two-dimensional image into three-dimensional image - Google Patents

Method of transforming two-dimensional image into three-dimensional image

Info

Publication number
TW201044317A
Authority
TW
Taiwan
Prior art keywords
image
value
dimensional image
edge
data
Prior art date
Application number
TW98118436A
Other languages
Chinese (zh)
Inventor
Chien-Hung Chen
Hsiang-Tan Lin
Meng-Chao Kao
Original Assignee
Chunghwa Picture Tubes Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chunghwa Picture Tubes Ltd
Priority to TW98118436A
Publication of TW201044317A


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method of transforming a two-dimensional image into a three-dimensional image is provided. First, a two-dimensional image is received. Then, the two-dimensional image is transformed into a gray level image. Next, a plurality of objects in the gray level image are extracted and marked. Then, a plurality of depths corresponding to the plurality of objects are determined according to the distance between each of the plurality of objects and a first boundary of the gray level image. Moreover, a depth map is generated according to the plurality of depths corresponding to the plurality of objects. Therefore, the depth map and the two-dimensional image may be combined to produce a three-dimensional image.

Description

VI. Description of the Invention

[Technical Field]

The present invention relates to a method of transforming a two-dimensional image into a three-dimensional image, and more particularly to a method of generating the depth map corresponding to a two-dimensional image and combining the two-dimensional image with the depth map to obtain a three-dimensional image.

[Prior Art]

In general, among the types of three-dimensional (3D) image displays, barrier-type 3D displays use binocular parallax to make the human eye perceive a three-dimensional image.

FIG. 1 is a schematic diagram of a barrier-type 3D display architecture. Referring to FIG. 1, a barrier 10 for splitting light is added over the backlight 30 of a liquid crystal display, so that the pixels 20 of the display fall into two groups: the pixels labeled 1 display the left-eye image, and the pixels labeled 2 display the right-eye image. After image synthesis, the display presents a three-dimensional image.

U.S. Patent No. 0232666 discloses a technique that performs edge detection on the motion vectors, luminance, or color values of the previous and current frames to generate a depth map. It is worth mentioning that such techniques are prone to serious errors when detecting objects, producing incorrect depth maps that severely degrade the quality of the three-dimensional image.

[Summary of the Invention]

The present invention provides a method of transforming a two-dimensional image into a three-dimensional image that extracts the objects and the depth map directly from the two-dimensional image; combining the two-dimensional image with the extracted depth map yields a three-dimensional image. The invention requires no other information and no additional image-capture equipment to convert a two-dimensional image into a three-dimensional image.

The method provided by the invention includes the following steps. First, a two-dimensional image is received. Next, the two-dimensional image is converted into a gray-level image. Then, the objects in the gray-level image are extracted, and the objects in the gray-level image are marked. Afterwards, the depth value corresponding to each object is determined according to the distance between that object and a first boundary of the gray-level image. A depth map is then generated according to the depth values corresponding to the objects. Finally, the depth map is combined with the two-dimensional image to produce a three-dimensional image.

In an embodiment of the invention, the step of extracting the objects in the gray-level image includes performing edge detection on the gray-level image to produce an operation data, performing a dilation operation on the operation data to connect the object edges and obtain the object contours so as to produce an edge image, and filling the region enclosed by each object contour to obtain a filled image.

In an embodiment of the invention, the step of performing edge detection (edge estimation) to produce the operation data uses a Prewitt mask to extract the object edges in the gray-level image, where the edge-detection equation is as follows:

    ∇f = |Gx × P| + |Gy × P|

where ∇f denotes the operation data and P is a 3×3 matrix of pixel data.

Gx and Gy are 3×3 Prewitt masks, expressed as follows:

         | -1  0  1 |          | -1 -1 -1 |
    Gx = | -1  0  1 |     Gy = |  0  0  0 |
         | -1  0  1 |          |  1  1  1 |
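For illustration, a minimal sketch of this edge-detection step follows; it applies the Prewitt masks above at every interior pixel of a gray-level image and binarizes the gradient magnitude into the 0/1 operation data that the later dilation and fill steps expect. This is not code from the patent; in particular, the binarization threshold is our assumption, since the text specifies only the gradient formula.

```python
import numpy as np

# Prewitt masks as given above.
GX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=np.int32)
GY = np.array([[-1, -1, -1],
               [ 0,  0,  0],
               [ 1,  1,  1]], dtype=np.int32)

def prewitt_edges(gray, threshold=128):
    """Compute the gradient |Gx x P| + |Gy x P| at every interior pixel
    of a 2-D uint8 gray-level image and binarize it into operation data
    (1 = edge, 0 = background). The threshold is an assumption."""
    h, w = gray.shape
    img = gray.astype(np.int32)
    grad = np.zeros((h, w), dtype=np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            p = img[y - 1:y + 2, x - 1:x + 2]        # 3x3 pixel matrix P
            grad[y, x] = abs(int(np.sum(GX * p))) + abs(int(np.sum(GY * p)))
    return (grad >= threshold).astype(np.uint8)      # operation data
```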

In an embodiment of the invention, the step of performing the dilation operation on the operation data to connect the object edges and obtain the object contours includes the following steps. First, a pixel matrix P is selected, with P1 to P9 denoting its pixel data. It is then judged whether the pixel data P5 equals a first value. When P5 equals the first value and at least one of the pixel data P1 to P4 and P6 to P9 does not equal the first value, P5 is adjusted to a second value. These steps are repeated to perform the dilation operation over the operation data.

In an embodiment of the invention, the first value is 0 and the second value is 1.

In an embodiment of the invention, the step of filling the region enclosed by each object contour includes the following steps. First, the edge image is scanned from left to right and from top to bottom to obtain a first scan image: when a scanned first pixel data equals a first value and both the pixel data to the left of and above the first pixel data equal a second value, the first pixel data is adjusted to the second value. Next, the edge image is scanned from right to left and from bottom to top to obtain a second scan image: when a scanned second pixel data equals the first value and both the pixel data to the right of and below the second pixel data equal the second value, the second pixel data is adjusted to the second value. Finally, the intersection of the first scan image and the second scan image is taken to obtain the filled image.

In an embodiment of the invention, the step of marking the objects in the gray-level image includes the following steps. First, the region corresponding to each object in the gray-level image is marked in sequence. Then, the mark corresponding to each object is adjusted so that each object corresponds to a different mark and the marks are arranged in a consecutive order.

In summary, the invention converts a two-dimensional image into a three-dimensional image through edge detection, dilation, filling, and object marking, and the resulting depth map requires no other information to complete the conversion into a three-dimensional image.

To make the above features and advantages of the invention more apparent, embodiments are described in detail below with reference to the accompanying drawings.

[Embodiments]

FIG. 2 is a flow chart illustrating a method of converting a two-dimensional image into a three-dimensional image according to an embodiment of the invention. Referring to FIG. 2, first, in step S201, a two-dimensional image is received. Next, in step S202, the two-dimensional image is converted into a gray-level image. Then, in step S203, the objects in the gray-level image are extracted. Next, in step S204, the objects in the gray-level image are marked. Then, in step S205, the depth value corresponding to each object is determined according to the distance between that object and a first boundary of the gray-level image. Afterwards, in step S206, a depth map is generated according to the depth values corresponding to the objects. Finally, in step S207, the depth map and the two-dimensional image are combined to produce a three-dimensional image.

FIG. 3 is a flow chart illustrating an embodiment of step S203 according to an embodiment of the invention. Referring to FIGS. 2 and 3 together, in this embodiment step S203 includes steps S301 to S303. First, edge detection is performed on the gray-level image to produce an operation data (step S301). A dilation operation is then performed on the operation data to connect the object edges and obtain the object contours, producing an edge image (step S302). Next, the region enclosed by each object contour is filled to obtain a filled image (step S303).

Those with ordinary knowledge in the art may implement step S301 in any manner according to their needs, for example by gradient-based detection, wavelet detection, or operator detection. Here, the Prewitt mask is taken as an example of a two-dimensional differential operator for detecting image edges. The Prewitt mask is a three-by-three matrix, and the judgment expression for detecting image edges is as follows:

    ∇f = |Gx1×P1 + Gx2×P2 + Gx3×P3 + Gx4×P4 + Gx5×P5 + Gx6×P6 + Gx7×P7 + Gx8×P8 + Gx9×P9|
       + |Gy1×P1 + Gy2×P2 + Gy3×P3 + Gy4×P4 + Gy5×P5 + Gy6×P6 + Gy7×P7 + Gy8×P8 + Gy9×P9|
       = |Gx × P| + |Gy × P|

where Gx and Gy are the Prewitt masks:

         | -1  0  1 |          | -1 -1 -1 |
    Gx = | -1  0  1 |     Gy = |  0  0  0 |
         | -1  0  1 |          |  1  1  1 |

and P denotes the pixel data at the corresponding mask positions:

        | P3 P6 P9 |
    P = | P2 P5 P8 |
        | P1 P4 P7 |

FIG. 4 is a schematic diagram illustrating the dilation operation according to an embodiment of the invention, and FIG. 5 is a schematic diagram illustrating the scan direction of the dilation operation. Referring to FIGS. 3 to 5, as shown in FIG. 5, the dilation operation proceeds over the operation data 501 from left to right and from top to bottom, connecting the object edges to obtain the object contours and produce the edge image (step S302). The operation works as follows. First, a pixel matrix P is selected from the operation data 501, whose pixel data P1 to P9 are, respectively, 1, 0, 1, 0, 0, 1, 0, 0, and 0. It is judged whether the pixel data P5 of the pixel matrix P is 0; when P5 is 0 and P1 to P4 and P6 to P9 are not all 0, P5 is changed to 1.
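Before resuming the worked example, here is a minimal sketch of the dilation pass just described, under one assumption: each pixel is tested against the original operation data and the result is written to a copy, so that pixels set during the pass do not themselves trigger further dilation (a detail the patent text leaves open).

```python
def dilate(op_data):
    """One dilation pass over 0/1 operation data (step S302): a 0-pixel
    whose 3x3 neighborhood contains any nonzero pixel is set to 1,
    closing small gaps along the object edges."""
    h, w = op_data.shape
    out = op_data.copy()  # write to a copy so this pass does not cascade
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # P5 == 0 and at least one of P1..P4, P6..P9 is nonzero
            if op_data[y, x] == 0 and np.any(op_data[y - 1:y + 2, x - 1:x + 2]):
                out[y, x] = 1
    return out
```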

Returning to the example, the edge matrix P′ after the dilation operation has pixel data 1, 0, 1, 0, 1, 1, 0, 0, and 0. In other words, when the pixel data P5 equals the first value and at least one of the pixel data P1 to P4 and P6 to P9 does not equal the first value, the pixel data P′5 is set to the second value; since P5 equals 0 and P1 to P4 and P6 to P9 are not all 0, P′5 becomes 1. Performing this dilation operation over the whole frame (the operation data) closes the object edges and yields the object contours, producing an edge image. It is worth noting that in this embodiment the first value is 0 and the second value is 1, where 1 denotes a pixel with a gray-level value and 0 denotes no gray-level value (rendered black in the visualized frame), but the invention is not limited thereto.

Next, a fill operation is performed on the edge image to obtain the filled image; this step clearly marks the region and position of each object. FIG. 6 is a schematic diagram illustrating step S303 of FIG. 3 according to an embodiment of the invention. Referring to FIG. 6, the edge image 601 is first scanned from left to right and from top to bottom in the first scan mode 605: when P5 equals 0 and both its left and upper neighbors equal 1, P5 is adjusted to 1. Repeating this adjustment over the whole edge image 601 yields a first scan image 603. The scan direction is then reversed: applying the same adjustment to the right and lower neighbors, the edge image is scanned from right to left and from bottom to top in the second scan mode 606 to obtain a second scan image 604. Taking the intersection of the first scan image 603 and the second scan image 604 produces the filled image 602, in which the regions occupied by the objects are clearly visible.
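Read as an algorithm, the fill step is two in-place directional scans followed by an intersection; the sketch below is one way to realize it. The in-place updates let the fill propagate across an object's interior, and the intersection keeps only pixels reached by both scans, that is, pixels enclosed by the contour.

```python
def fill_objects(edge_img):
    """Fill object interiors in a 0/1 edge image (step S303)."""
    h, w = edge_img.shape
    # First scan: left to right, top to bottom (scan mode 605).
    scan1 = edge_img.copy()
    for y in range(1, h):
        for x in range(1, w):
            if scan1[y, x] == 0 and scan1[y, x - 1] == 1 and scan1[y - 1, x] == 1:
                scan1[y, x] = 1
    # Second scan: right to left, bottom to top (scan mode 606).
    scan2 = edge_img.copy()
    for y in range(h - 2, -1, -1):
        for x in range(w - 2, -1, -1):
            if scan2[y, x] == 0 and scan2[y, x + 1] == 1 and scan2[y + 1, x] == 1:
                scan2[y, x] = 1
    # Intersection of the two scan images gives the filled image.
    return scan1 & scan2
```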
Next, the manner of marking the objects is described further. FIG. 7 is a schematic diagram illustrating step S204 of FIG. 2 according to an embodiment of the invention. In this embodiment, for convenience of explanation, the gray-level image 701 contains only the pixel values 0 and 255, where 255 denotes the region of an object and 0 denotes the background, although the invention is not limited thereto. First, the regions of the gray-level image 701 are marked: when a pixel of 255 has both its upper and left data equal to 0, a new mark number is assigned, as at the marked points in the marked image 702. Next, the marked image 702 is adjusted: when the data above a pixel already carries a mark, that mark is copied (as at marked points X7 and X8); when the mark of a pixel conflicts with the mark above it (as at marked point X9), the pixel is modified according to the upper mark (here, the value of marked point X9 is changed to 5), and the mark to its left (marked point X6) is modified in the same way. Next, all marks are renumbered so that they become consecutive; after this modification, the mark number at marked point X7 is likewise rewritten. From the resulting marked image 704, six objects can clearly be found, marked 1 to 6. It is worth noting that this marking scheme is only one embodiment, and the invention is not limited thereto.
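Read as an algorithm, this is two-pass connected-component labeling. The sketch below assigns provisional labels in a first scan, records conflicting labels as equivalent (the patent instead rewrites them on the spot, as at marked point X9), and renumbers the labels consecutively in a second pass; the optional min_size argument drops the too-small objects that the later examples remove. It is an illustrative reading, not the patent's own code.

```python
def label_objects(filled, min_size=0):
    """Two-pass connected-component labeling (4-connectivity) of a 0/1
    filled image, returning consecutive object labels 1, 2, 3, ..."""
    h, w = filled.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                      # union-find table, index 0 unused

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if not filled[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)     # new mark number
                labels[y, x] = next_label
                next_label += 1
            elif up and left and find(up) != find(left):
                parent[find(up)] = find(left)  # merge conflicting marks
                labels[y, x] = find(left)
            else:
                labels[y, x] = up or left      # copy the existing mark
    # Second pass: resolve equivalences and renumber consecutively.
    remap, out = {}, np.zeros_like(labels)
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                root = find(labels[y, x])
                remap.setdefault(root, len(remap) + 1)
                out[y, x] = remap[root]
    if min_size:                               # optionally drop tiny objects
        for lbl in range(1, len(remap) + 1):
            if np.sum(out == lbl) < min_size:
                out[out == lbl] = 0
    return out
```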

FIG. 8 is a schematic diagram illustrating the generation of a two-depth depth map according to an embodiment of the invention. Referring to FIGS. 2, 3, and 8, the two-dimensional image 801 is first converted into a gray-level image, and the objects in the gray-level image are extracted (steps S201 to S203). Edge detection produces the operation data 802; the dilation operation on the operation data 802 connects the object edges to obtain the object contours, producing the edge image 803 (step S302); and the region enclosed by each contour in the edge image 803 is filled to obtain the filled image 804 (step S303). The regions corresponding to the objects in the filled image 804 are then marked in sequence, and objects that are too small are removed, yielding the two-depth depth map 805. It is worth noting that in this example the objects share a single depth layer, so the depth values fall into two levels: the background and the objects.

FIG. 9 is a schematic diagram illustrating the generation of a multi-depth depth map according to an embodiment of the invention. Referring to FIGS. 2, 3, and 9, the two-dimensional image 901 contains several objects. After the individual objects are extracted, all of them are marked, and the depth value of each object is determined according to its coordinate position. In FIG. 9, the operation data 902, the edge image 903, and the filled image 904 are generated as in FIG. 8 and are not described again. The regions corresponding to the objects in the filled image 904 are marked in sequence and objects that are too small are removed; the filled image 904 contains five objects, which may be marked 1 to 5 (steps S201 to S204). Then, the depth value of each object is determined according to the distance between that object and the lower boundary of the gray-level image (the frame) (step S205): the greater the distance, the smaller the depth value (the farther away the object appears); the smaller the distance, the larger the depth value (the closer the object appears). Two objects at equal distances from the lower boundary receive the same depth value.

A depth map is then generated according to the depth values corresponding to the objects (step S206), as shown in the depth map 905, which contains five objects. The lowest objects B1 and B2 are at equal distances from the lower boundary and therefore share the same depth value, while the objects B3, B4, and B5 are at different distances and therefore have different depth values. Counting the depth value of the background, the depth map 905 thus contains five depth values. How the individual depth values are assigned may be preset by the user; for example, the height of the frame may correspond to the maximum depth value, and the depth value of an object may then be computed in proportion to its distance from the lower boundary. It is worth noting that although the depth values of this embodiment are computed from the distance to the lower boundary, the computation is not limited to this scheme; a different boundary, such as the upper, left, or right boundary of the frame, may equally serve as the reference boundary, and the embodiment is not limited in this respect. Finally, once the depth map is obtained, it is combined with the two-dimensional image to produce a three-dimensional image (step S207).

In summary, the invention provides a method of extracting the objects and the depth map directly from a two-dimensional image; combining the two-dimensional image with the extracted depth map forms a three-dimensional image. The invention therefore requires no other information and no additional image-capture equipment to convert a two-dimensional image into a three-dimensional image.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the art may make modifications and refinements without departing from the spirit and scope of the invention, so the scope of protection of the invention is defined by the appended claims.

[Brief Description of the Drawings]

FIG. 1 is a schematic diagram of a barrier-type 3D display architecture.
FIG. 2 is a flow chart illustrating a method of converting a two-dimensional image into a three-dimensional image according to an embodiment of the invention.
FIG. 3 is a flow chart illustrating an embodiment of step S203 according to an embodiment of the invention.
FIG. 4 is a schematic diagram illustrating the dilation operation according to an embodiment of the invention.
FIG. 5 is a schematic diagram illustrating the scan direction of the dilation operation according to an embodiment of the invention.
FIG. 6 is a schematic diagram illustrating step S303 of FIG. 3 according to an embodiment of the invention.
FIG. 7 is a schematic diagram illustrating step S204 of FIG. 2 according to an embodiment of the invention.
FIG. 8 is a schematic diagram illustrating the generation of a two-depth depth map according to an embodiment of the invention.
FIG. 9 is a schematic diagram illustrating the generation of a multi-depth depth map according to an embodiment of the invention.

[Description of Reference Numerals]

10: barrier
20: pixel
30: backlight
501, 802, 902: operation data
601, 803, 903: edge images
602, 804, 904: filled images
603: first scan image
604: second scan image
605: first scan mode
606: second scan mode
701: gray-level image
702~704: marked images
801, 901: two-dimensional images
805: two-depth depth map
905: multi-depth depth map
X1~X9: marked points
P: pixel matrix
P′: edge matrix after the dilation operation
S201~S207: flow chart steps
S301~S303: flow chart steps
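To close the detailed description, here is a hedged sketch tying steps S205 to S207 together. The proportional mapping from an object's distance to the lower boundary to its depth value is one possible choice, which the description explicitly leaves to the user, and the patent does not spell out the combination step S207, so the simple depth-image-based rendering below (shifting pixels by a depth-proportional disparity into left- and right-eye views, ready for barrier interleaving as in FIG. 1) is our assumption.

```python
def build_depth_map(labels, max_depth=255):
    """Assign each labeled object a depth from its distance to the
    lower image boundary (steps S205/S206): the nearer an object lies
    to the bottom edge, the larger its depth value, i.e. the closer it
    appears. The proportional mapping is one assumed choice."""
    h, w = labels.shape
    depth = np.zeros((h, w), dtype=np.uint8)   # background stays at 0
    for lbl in np.unique(labels):
        if lbl == 0:
            continue
        rows = np.where(labels == lbl)[0]
        dist = (h - 1) - rows.max()            # distance to lower boundary
        depth[labels == lbl] = int(max_depth * (h - dist) / h)
    return depth

def combine_to_3d(image, depth, max_shift=8):
    """Assumed sketch of step S207: synthesize left/right views by a
    horizontal, depth-proportional pixel shift (hole filling omitted);
    the two views would then be interleaved for a barrier display."""
    h, w = image.shape[:2]
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(depth[y, x]) * max_shift // 255
            if x + d < w:
                left[y, x + d] = image[y, x]
            if x - d >= 0:
                right[y, x - d] = image[y, x]
    return left, right
```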

Claims (8)

VII. Scope of the Patent Application:

1. A method of transforming a two-dimensional image into a three-dimensional image, the method comprising:
receiving a two-dimensional image;
converting the two-dimensional image into a gray-level image;
extracting objects in the gray-level image;
marking the objects in the gray-level image;
determining a depth value corresponding to each of the objects according to a distance between that object and a first boundary of the gray-level image;
generating a depth map according to the depth values corresponding to the objects; and
combining the depth map with the two-dimensional image to produce a three-dimensional image.

2. The method as recited in claim 1, wherein the step of extracting the objects in the gray-level image comprises:
performing edge detection (edge estimation) on the gray-level image to produce an operation data;
performing a dilation operation on the operation data to connect object edges and obtain object contours, thereby producing an edge image; and
filling a region enclosed by each object contour to obtain a filled image.

3. The method as recited in claim 2, wherein the step of performing edge detection (edge estimation) to produce the operation data uses a Prewitt mask to extract the object edges in the gray-level image, the edge-detection equation being:

    ∇f = |Gx × P| + |Gy × P|

where ∇f denotes the operation data, P is a 3×3 matrix of pixel data, and Gx and Gy are 3×3 Prewitt masks, expressed as follows:

         | -1  0  1 |          | -1 -1 -1 |
    Gx = | -1  0  1 |     Gy = |  0  0  0 |
         | -1  0  1 |          |  1  1  1 |

4. The method as recited in claim 2, wherein the step of performing the dilation operation on the operation data to connect the object edges and obtain the object contours comprises:
selecting a pixel matrix P, wherein P1 to P9 denote pixel data of the matrix;
judging whether the pixel data P5 equals a first value;
when P5 equals the first value and at least one of the pixel data P1 to P4 and P6 to P9 does not equal the first value, adjusting P5 to a second value; and
repeating the above steps to perform the dilation operation on the operation data.

5. The method as recited in claim 4, wherein the first value is 0 and the second value is 1.

6. The method as recited in claim 2, wherein the step of filling the region enclosed by each object contour comprises:
scanning the edge image from left to right and from top to bottom to obtain a first scan image, wherein when a scanned first pixel data equals a first value and both the pixel data to the left of and above the first pixel data equal a second value, the first pixel data is adjusted to the second value;
scanning the edge image from right to left and from bottom to top to obtain a second scan image, wherein when a scanned second pixel data equals the first value and both the pixel data to the right of and below the second pixel data equal the second value, the second pixel data is adjusted to the second value; and
taking the intersection of the first scan image and the second scan image to obtain the filled image.

7. The method as recited in claim 6, wherein the first value is 0 and the second value is 1.

8. The method as recited in claim 1, wherein the step of marking the objects in the gray-level image comprises:
marking a region corresponding to each of the objects in the gray-level image in sequence; and
adjusting the mark corresponding to each of the objects so that each object corresponds to a different mark and the marks are arranged in a consecutive order.
TW98118436A 2009-06-03 2009-06-03 Method of transforming two-dimensional image into three-dimensional image TW201044317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98118436A TW201044317A (en) 2009-06-03 2009-06-03 Method of transforming two-dimensional image into three-dimensional image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98118436A TW201044317A (en) 2009-06-03 2009-06-03 Method of transforming two-dimensional image into three-dimensional image

Publications (1)

Publication Number Publication Date
TW201044317A true TW201044317A (en) 2010-12-16

Family

ID=45001310

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98118436A TW201044317A (en) 2009-06-03 2009-06-03 Method of transforming two-dimensional image into three-dimensional image

Country Status (1)

Country Link
TW (1) TW201044317A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI463862B (en) * 2010-12-31 2014-12-01 Wt Microelectronics Co Ltd Apparatus and method for processing wide dynamic range image
TWI478098B (en) * 2011-08-18 2015-03-21 Univ Nat Taiwan System and method of correcting a depth map for 3d image
US9008413B2 (en) 2011-10-27 2015-04-14 Mstar Semiconductor, Inc. Processing method for a pair of stereo images
CN103108200A (en) * 2011-11-09 2013-05-15 晨星软件研发(深圳)有限公司 Processing method of stereoscopic images
CN103108200B (en) * 2011-11-09 2016-03-02 晨星软件研发(深圳)有限公司 To the processing method of stereopsis
TWI499278B (en) * 2012-01-20 2015-09-01 Univ Nat Taiwan Science Tech Method for restructure images

Similar Documents

Publication Title
CN101593349B (en) Method for converting two-dimensional image into three-dimensional image
TWI343207B (en) Device and method for obtain a clear image
JP5209121B2 (en) Parallax image generation device
US9544576B2 (en) 3D photo creation system and method
CN103493093B (en) Image processing apparatus, camera device and image processing method
CN106504194B (en) A kind of image split-joint method based on best splicing plane and local feature
JP6412690B2 (en) Method for obtaining depth information and display device
TW201044317A (en) Method of transforming two-dimensional image into three-dimensional image
CN106023230B (en) A kind of dense matching method of suitable deformation pattern
CN105654547B (en) Three-dimensional rebuilding method
TWI517136B (en) Image display device and image display method
TW201123083A (en) Method and system for providing augmented reality based on marker tracing, and computer program product thereof
CN109155070A (en) Use the method and computer program product of flat mirror calibration stereo imaging system
TW201029443A (en) Method and device for generating a depth map
JP2009080578A (en) Multiview-data generating apparatus, method, and program
JP2012244527A (en) Apparatus and method for processing image, apparatus and method for creating complement image, program, and storage medium
TW201101228A (en) Image processing method and related apparatus for rendering two-dimensional image to show three-dimensional effect
JP2017524920A (en) Method and system for measuring lens distortion
CN110140151A (en) Device and method for generating light intensity image
TWI456976B (en) Image processing device and method, and stereoscopic image display device
JP5267708B2 (en) Image processing apparatus, imaging apparatus, image generation method, and program
TW201308194A (en) Electronic device, image display system and method thereof
TWI331731B (en) Method of utilizing multi-view images to solve occlusion problem for photorealistic model reconstruction
CN104537627B (en) A kind of post-processing approach of depth image
CN104980733B (en) Glasses-free 3D display crosstalk test method and test image thereof