TW201025186A - Image processing method for providing depth information - Google Patents

Image processing method for providing depth information

Info

Publication number
TW201025186A
Authority
TW
Taiwan
Prior art keywords
resolution
vertices
depth
grids
grid
Prior art date
Application number
TW97151506A
Other languages
Chinese (zh)
Other versions
TWI370410B (en)
Inventor
Kai-Che Liu
Chun-Te Wu
Wei-Hao Huang
Fu-Chiang Jan
Ya-Chi Tsai
Feng-Hsiang Lo
Original Assignee
Ind Tech Res Inst
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ind Tech Res Inst
Priority to TW097151506A
Publication of TW201025186A
Application granted
Publication of TWI370410B

Landscapes

  • Image Processing (AREA)

Abstract

An image processing method is provided. The method processes a number of captured images, taken at a number of capturing angles, to produce a depth map for a viewing angle. The depth map is obtained by establishing a multi-resolution mesh according to this invention and calculating the depth values of the vertices of the multi-resolution mesh, from which the depth values of the other pixels are derived. The multi-resolution mesh relates to a virtual viewing plane corresponding to the viewing angle and has a number of meshes of different resolutions, each mesh having a number of vertices. The depth value of each vertex is generated according to at least a portion of the captured images. During the establishment of the multi-resolution mesh, if the relationship of the depth values in a mesh having a first resolution conforms to a first predetermined condition, that mesh is divided to generate a number of meshes having a second resolution.

Description

IX. Description of the Invention:

[Technical Field]

The present invention relates to an image processing method, and more particularly to an image processing method capable of providing a depth map.

[Prior Art]

In recent years, stereoscopic image processing technology has received increasing attention, since it can provide stereoscopic images with a three-dimensional effect for users to view. To provide such stereoscopic images, three-dimensional content can be supplied to an autostereoscopic display. The three-dimensional content includes a two-dimensional image and depth information, which corresponds, for example, to a depth map of the two-dimensional image. That is, the depth map contains a depth value for each pixel of the two-dimensional image. From the two-dimensional image and the corresponding depth map, multi-view stereoscopic images can be displayed on the autostereoscopic display, so that users obtain a 3D viewing effect without wearing special glasses.

In the patent disclosed in U.S. Patent No. 5,361,127, the inventor uses a capturing device with a multi-faceted lens to capture images on different refractive facets; the depth values of the depth map corresponding to the photographed object are then obtained from those images. In U.S. Patent No. 6,487,304, a method and system are proposed that use multiple sets of images to obtain the depth values of a depth map or the motion of scene objects. In U.S. Patent No. 7,359,547, the inventor proposes obtaining the depth values of a depth map from multiple images captured under different lighting conditions.

Although a variety of ways of producing depth maps have been proposed, how to efficiently generate the depth values of a depth map corresponding to a two-dimensional image, so as to reduce the computational load of the system, remains one of the topics the industry is working on.

[Summary of the Invention]

The present invention provides an image processing method for producing a depth map corresponding to a viewing angle from a plurality of captured images taken by a plurality of image capturing devices located at different capturing positions. The method includes the following steps. First, a set of initial meshes is established; the initial meshes are related to the virtual image plane corresponding to the viewing angle. The set of initial meshes has a plurality of meshes of a first resolution, and each mesh of the first resolution has a plurality of vertices of the first resolution. Next, the depth values of the vertices of the first resolution are generated according to at least a portion of the captured images. It is then determined whether the relationship of the depth values of the vertices within each mesh of the first resolution conforms to a first predetermined condition; if so, the meshes of the first resolution conforming to the first predetermined condition are divided to generate a plurality of meshes of a second resolution, producing a multi-resolution mesh comprising the meshes of the first resolution and the meshes of the second resolution. Each mesh of the second resolution has a plurality of vertices of the second resolution. After that, the depth values of the vertices of the second resolution are generated according to at least a portion of the captured images. Finally, from the depth values of the vertices of the first resolution and of the second resolution, the depth values of the other pixels not corresponding to those vertices are generated, producing the depth map.

To make the above content of the present invention more comprehensible, a preferred embodiment is described in detail below with reference to the accompanying drawings.

[Embodiment]

The present invention provides an image processing method for producing a depth map corresponding to a viewing angle from a plurality of captured images taken by a plurality of image capturing devices located at different capturing positions.

Referring to Fig. 1, a flow chart of an image processing method according to an embodiment of the present invention is shown. The method includes the following steps.

First, as shown in step S102, a set of initial meshes is established. The set of initial meshes is related to the virtual image plane corresponding to the viewing angle, and has a plurality of meshes of a first resolution, each mesh of the first resolution having a plurality of vertices of the first resolution.

Next, as shown in step S104, the depth values of the vertices of the first resolution are generated according to at least a portion of the captured images. Then, as shown in step S106, it is determined whether the relationship of the depth values of the vertices within each mesh of the first resolution conforms to a first predetermined condition. If so, the flow proceeds to step S108: the meshes of the first resolution conforming to the first predetermined condition are divided to generate a plurality of meshes of a second resolution, producing a multi-resolution mesh comprising the meshes of the first resolution and the meshes of the second resolution. Each mesh of the second resolution has a plurality of vertices of the second resolution.

After that, as shown in step S110, the depth values of the vertices of the second resolution are generated according to at least a portion of the captured images.

Then, as shown in step S112, the depth values of the other pixels not corresponding to the vertices are generated from the depth values of the vertices of the first resolution and the depth values of the vertices of the second resolution, producing the depth map described above.

The detailed operation of the image processing method of Fig. 1 is explained below. Please refer to Fig. 1, Fig. 2A, and Fig. 3 together. Fig. 2A is a schematic diagram of part of a set of initial meshes MRM0. Fig. 3 shows an example of the relationship between an observation position Vp and a plurality of test depth planes TP1 to TPM.

In step S102, a set of initial meshes MRM0 is established, as shown in Fig. 2A. The set of initial meshes MRM0 is related to a virtual image plane 404 corresponding to the viewing angle 402. The virtual image plane 404 is a plane substantially perpendicular to the direction of the viewing angle 402, as shown in Fig. 3. The set of initial meshes MRM0 has a plurality of meshes of a first resolution, such as the mesh M1, and each mesh of the first resolution has a plurality of vertices of the first resolution. For example, the mesh M1 is a triangular mesh with vertices m1a, m1b, and m1c.

Then, as shown in step S104, the depth values of the vertices of the first resolution are generated according to at least a portion of the captured images. Please refer to Fig. 4, a flow chart of the sub-steps S104a, S104b, and S104c included in step S104.

As shown in Fig. 3, in step S104a, a plurality of predetermined depth values d1 to dM corresponding to a plurality of test depth planes TP1 to TPM are set.
The scene in space can be partitioned into the test depth planes TP1 to TPM, each of which is a plane at a certain distance from the observation position Vp; these distances are related to the predetermined depth values. Since the human eye perceives nearby objects more acutely, this embodiment makes the difference between the predetermined depth values of two test depth planes smaller the closer the planes are to the observation position Vp. For example, the difference between the predetermined depth values d1 and d2 of the test depth planes TP1 and TP2 is smaller than the difference between the predetermined depth values d2 and d3 of the test depth planes TP2 and TP3.

The predetermined depth values d1 to dM are set, for example, in the following manner:

    dm = 1 / ( (1/dmin) + ((m − 1)/(M − 1)) × ((1/dmax) − (1/dmin)) )    (Equation 1)

where M is the number of predetermined depth values; dm is the m-th predetermined depth value (that is, the predetermined depth value of the test depth plane TPm), m being an integer between 1 and M; dmax is the maximum of the predetermined depth values; and dmin is the minimum of the predetermined depth values. According to Equation 1, the predetermined depth values are distributed uniformly in disparity space (inverse depth), and the difference between two predetermined depth values becomes smaller the closer they are to the observation position Vp.

Next, as shown in step S104b, a portion of the captured images is selected according to the viewing angle 402; the selected captured images are defined as neighboring captured images. One way to select the neighboring captured images is, according to the distance between the capturing position of each image capturing device and the direction of the viewing angle 402, to select the captured images whose capturing positions are closer to the viewing angle 402.

For example, in Fig. 3, to compute the distance of each capturing position from the direction of the viewing angle 402, the direction vector corresponding to the viewing angle 402 is first found, and the perpendicular distances from the capturing positions C1 to C5 to that direction vector are computed, for example the perpendicular distances pd1 to pd5 of Fig. 3. Supposing that the four captured images corresponding to the four capturing positions closest to the direction of the viewing angle 402 are selected as the neighboring captured images, in this example the selected neighboring captured images are those captured at the positions C1 to C4.

Next, as shown in step S104c, for each vertex of the first resolution, color consistency verification (CCV) is performed based on the image information of the neighboring captured images, and one of the predetermined depth values is accordingly selected as the depth value of the corresponding vertex of the first resolution.

For example, take the vertex m1a of Fig. 2A. Please refer to Fig. 3 and Fig. 5 together; Fig. 5 is a schematic diagram of an example of the selected neighboring captured images.
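As a concrete illustration of Equation 1, the following minimal Python sketch samples the M predetermined depth values uniformly in inverse depth. The function name and the assumption that M is at least 2 are illustrative choices, not part of the patent.

    def test_plane_depths(d_min, d_max, M):
        # Equation 1: sample M depth values uniformly in inverse depth
        # (disparity space), so that consecutive planes lie closer
        # together near the observation position Vp. Assumes M >= 2.
        depths = []
        for m in range(1, M + 1):
            inv_d = 1.0 / d_min + (m - 1) / (M - 1) * (1.0 / d_max - 1.0 / d_min)
            depths.append(1.0 / inv_d)
        return depths

With illustrative values such as test_plane_depths(0.5, 10.0, 100), the differences between consecutive depths shrink toward the near end, matching the behavior described above.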

Suppose the selected neighboring captured images Im1 to Im4 are the images captured at the capturing positions C1 to C4. This embodiment uses the plane sweeping method to select one of the predetermined depth values as the depth value of the corresponding vertex of the first resolution.

In the plane sweeping method, one of the predetermined depth values is first selected and color consistency verification is performed. If the selected predetermined depth value is correct, or close to correct, then when the image block on the corresponding test depth plane is back-projected to the capturing positions, the resulting image blocks show consistent color responses. If they are inconsistent, another predetermined depth value is selected and color consistency verification is performed again. By performing color consistency verification for the test depth planes with different predetermined depth values one by one, the predetermined depth value whose color responses are the most consistent can be found and taken as the depth value of the corresponding vertex of the first resolution.

Color consistency verification works as follows. As shown in Fig. 5, first, one image block is selected in each of the neighboring captured images Im1 to Im4, giving the image blocks Pa1 to Pa4. The image blocks Pa1 to Pa4 are the image blocks in the neighboring captured images Im1 to Im4 that correspond to the vertex m1a when it is back-projected, via the test depth plane TP1 with the predetermined depth value d1, to the capturing positions C1 to C4.

Then, color consistency verification is performed on the image blocks Pa1 to Pa4 of the neighboring captured images Im1 to Im4. In practice, this may be done by first obtaining the correlation coefficient of the color intensities of the image blocks of every two of the neighboring captured images Im1 to Im4. Each correlation coefficient is obtained according to the following formula:

    cij = Σk (Iik − Īi)(Ijk − Īj) / √( [Σk (Iik − Īi)²] × [Σk (Ijk − Īj)²] )    (Equation 2)

where cij is the correlation coefficient of the color intensities of the image blocks of the i-th and j-th neighboring captured images; Iik and Ijk are the color intensities of the k-th corresponding pixels in the i-th and j-th neighboring captured images; and Īi and Īj are the mean color intensities of the i-th and j-th neighboring captured images, respectively.

For example, from Equation 2, c12 is the correlation coefficient of the color intensities of the image blocks Pa1 and Pa2 of the neighboring captured images Im1 and Im2; I1k and I2k are the color intensities of the k-th corresponding pixels in the image blocks Pa1 and Pa2; and Ī1 and Ī2 are the mean color intensities of the image blocks Pa1 and Pa2, respectively.
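Equation 2 is the standard correlation coefficient computed pixel by pixel over two image blocks. A minimal Python sketch follows; representing a block as a flat list of intensities and the function name are illustrative simplifications, not part of the patent.

    from math import sqrt

    def correlation(block_i, block_j):
        # Equation 2: correlation coefficient of the color intensities
        # of two image blocks, compared pixel by pixel.
        mean_i = sum(block_i) / len(block_i)
        mean_j = sum(block_j) / len(block_j)
        num = sum((a - mean_i) * (b - mean_j)
                  for a, b in zip(block_i, block_j))
        den = sqrt(sum((a - mean_i) ** 2 for a in block_i)
                   * sum((b - mean_j) ** 2 for b in block_j))
        return num / den if den else 0.0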

Similarly, color consistency verification is performed on the pairs of neighboring captured images Im1 and Im3, Im1 and Im4, Im2 and Im3, Im2 and Im4, and Im3 and Im4, to obtain the correlation coefficients c13, c14, c23, c24, and c34 of their color intensities. The CCV value is then defined as 1 minus the average of c12, c13, c14, c23, c24, and c34.

In this way, a CCV value is computed for each test depth plane with a different predetermined depth value, and the predetermined depth value corresponding to the smallest CCV value is selected as the depth value of the corresponding vertex of the first resolution.
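Putting the sweep together with the CCV value just defined, the depth of one vertex can be selected as sketched below in Python. The helper project_block, which stands for back-projecting the vertex through a test depth plane into one neighboring image and cutting out the corresponding block, is hypothetical; correlation refers to the sketch after Equation 2.

    from itertools import combinations

    def ccv_score(blocks):
        # CCV value = 1 minus the average pairwise correlation of the
        # back-projected image blocks; smaller means better agreement.
        pairs = list(combinations(blocks, 2))
        avg = sum(correlation(a, b) for a, b in pairs) / len(pairs)
        return 1.0 - avg

    def vertex_depth(vertex, plane_depths, images, project_block):
        # Plane sweep (steps S104a to S104c): test every predetermined
        # depth value and keep the one with the smallest CCV value.
        best_d, best_score = None, float("inf")
        for d in plane_depths:
            blocks = [project_block(vertex, d, img) for img in images]
            score = ccv_score(blocks)
            if score < best_score:
                best_d, best_score = d, score
        return best_d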
After step S104, step S106 is performed to determine whether the relationship of the depth values of the vertices within each mesh of the first resolution conforms to a first predetermined condition. If so, the flow proceeds to step S108, and the meshes of the first resolution conforming to the first predetermined condition are divided to generate a plurality of meshes of a second resolution.

Preferably, in step S106, a mesh of the first resolution conforms to the first predetermined condition if the maximum difference between the depth values of any two of its vertices of the first resolution is greater than a first predetermined value.

Taking the mesh M1 as an example, the first predetermined condition is explained as follows. Please refer to Fig. 2B, a schematic diagram of an example of the mesh M1 of the first resolution and the mesh M1' of the second resolution. In this example, whether the mesh M1 satisfies the first predetermined condition is determined by checking whether the maximum difference between the depth values of any two of its vertices, that is, any two of the vertices m1a to m1c, is greater than a first predetermined value, according to the following formula:

    max over p, q ∈ {1, 2, 3}, p ≠ q of | m1p − m1q | > τ    (Equation 3)

where m1p and m1q denote the depth values of two of the vertices m1a to m1c of the mesh M1, and τ is the first predetermined value. In practice, the first predetermined value can be set according to the situation.

If Equation 3 is satisfied, that is, if the maximum difference between the depth values of two vertices of the mesh M1 is greater than the first predetermined value, the depth values corresponding to the three vertices of the mesh M1 vary considerably. In that case, the mesh M1 is divided into three meshes of the second resolution, such as the mesh M1'. The mesh M1' of the second resolution has three vertices m1a', m1b', and m1c' of the second resolution.

Then, as shown in step S110, the depth values of the vertices of the second resolution are generated according to at least a portion of the captured images. They can be obtained in the same way as the depth values of the vertices of the first resolution described above.

Thus, after step S110, the set of initial meshes MRM0 becomes the multi-resolution mesh MRM1, as shown in Fig. 2B. Please refer to Fig. 6, which shows an example of a multi-resolution mesh of this embodiment and the three-dimensional image it corresponds to. Suppose the three-dimensional image Im has a scene object Sp, for example a sphere. The multi-resolution mesh has meshes of several different resolutions, such as the mesh M. The distribution of the meshes of different resolutions is related to the scene object Sp of the three-dimensional image Im seen from the viewing angle.
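Steps S106 to S110 amount to an adaptive subdivision pass, sketched below. The triangle representation, the injected depth_of and split_triangle helpers, and the levels parameter (which, set above 1, extends the same test to third and higher resolutions) are illustrative assumptions; depth_of would run the plane sweep above for newly created vertices.

    def build_multires_mesh(triangles, depth_of, split_triangle, tau, levels=1):
        # Steps S106 to S110: a triangle whose vertex depth values
        # differ by more than tau (Equation 3) is divided into finer
        # triangles; max pairwise difference equals max(d) - min(d).
        kept = []
        for _ in range(levels):
            finer = []
            for tri in triangles:
                d = [depth_of(v) for v in tri]
                if max(d) - min(d) > tau:   # first predetermined condition
                    finer.extend(split_triangle(tri))
                else:
                    kept.append(tri)
            triangles = finer
        return kept + triangles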

This embodiment makes the distribution of the meshes of different resolutions depend on how much the depth values of the scene object Sp of the three-dimensional image Im vary. For example, the scene object Sp of the three-dimensional image Im includes regions A1 and A2, where the region A2 is the region of the scene object near its boundary and the region A1 covers the other regions. In the region A1 the depth values of the scene object Sp vary little, so the mesh resolution is low; in the region A2 the depth values of the scene object Sp vary greatly, so the mesh resolution is high. That is, the region A1 has fewer meshes of lower resolution, while the region A2 has more meshes of higher resolution.

It follows that this embodiment can indeed provide meshes of more precise (higher) resolution at the boundaries of scene objects, which improves the accuracy of the depth map while reducing the amount of computation.

Then, as shown in step S112, the depth values of the other pixels not corresponding to the vertices are generated from the depth values of the vertices of the first resolution and the depth values of the vertices of the second resolution, producing the depth map described above.

Step S112 is explained in detail below. In practice, step S112 may include the following steps. First, vertex filling is performed on the multi-resolution mesh MRM1 using the depth values of the vertices of the meshes of the first resolution and the depth values of the vertices of the second resolution.

Please refer to Figs. 7A and 7B. Fig. 7A shows an example of the multi-resolution mesh MRM1 before vertex filling, and Fig. 7B shows an example of the multi-resolution mesh MRM2 after vertex filling. In Fig. 7B, the meshes of the first resolution that do not conform to the first predetermined condition (such as the mesh M2) are further divided to generate additional meshes of the second resolution (such as the mesh M2'). Taking the mesh M2 as an example, during vertex filling the vertices m2a to m2c of the mesh M2 are used to obtain the depth values of the other vertices mx and my of the second resolution. That is, the multi-resolution mesh MRM1 is filled up with vertices of the highest resolution; the filled-in vertices are, for example, those shown as hollow dots in Fig. 7B. Then, by linear interpolation, the depth values of the vertices corresponding to the hollow dots are obtained from the vertices whose depth values have already been determined (those shown as solid dots). For example, the vertices m2a and m2b are used to obtain the depth value of the filled-in vertex mx. After vertex filling, a multi-resolution mesh MRM2 is produced in which all meshes have the highest resolution.

Next, pixel interpolation is performed on the vertex-filled multi-resolution mesh MRM2 to produce a depth map in which every pixel has a depth value. For example, when performing pixel interpolation, this embodiment may use bilinear interpolation, in which horizontal and vertical interpolation are carried out together, to obtain the depth value corresponding to each pixel. In this way, a depth map with a depth value for every pixel is produced.

Obtaining the depth value corresponding to a predetermined pixel is taken as an example below. The predetermined pixel is a pixel that does not correspond to a vertex of the multi-resolution mesh MRM2. Please refer to Fig. 7C, a schematic diagram of performing bilinear pixel interpolation on a predetermined pixel to obtain its depth value.
The depth value of the predetermined pixel is obtained by the following formula:

    f(u0, v0) = f(u', v')(1 − α)(1 − β) + f(u'+1, v')α(1 − β) + f(u', v'+1)(1 − α)β + f(u'+1, v'+1)αβ

where (u', v'), (u'+1, v'), (u', v'+1), and (u'+1, v'+1) are the coordinates of the four neighboring vertices; (u0, v0) is the coordinate of the predetermined pixel; α is the horizontal distance between the coordinate of the predetermined pixel (u0, v0) and the coordinate of the vertex (u', v'); β is the vertical distance between the coordinate of the predetermined pixel (u0, v0) and the coordinate of the vertex (u', v'); and f(·) denotes the depth value corresponding to a coordinate.

Please refer to the attached Figures 1 to 5. Attached Figure 1 shows an example of a two-dimensional original image Imo. Attached Figure 2 shows the depth map corresponding to the multi-resolution mesh MRM1 of the two-dimensional original image. Attached Figure 3 shows the depth map of the multi-resolution mesh MRM2 produced by performing vertex filling on the multi-resolution mesh MRM1 of attached Figure 2. Attached Figure 4 shows the depth map DM produced by performing pixel interpolation on the multi-resolution mesh MRM2 of attached Figure 3.

Compared with computing a depth value for every single pixel to produce the depth map, the image processing method of this embodiment obtains all pixel depth values merely by generating the depth values of the vertices of the multi-resolution mesh. The amount of data the method has to process is therefore comparatively low, and the processing speed can be raised. Moreover, as can be seen from attached Figure 4, the depth map produced in this example is indeed very close to the depth map DM2 of attached Figure 5, which was produced by finding the depth value of each pixel individually, pixel by pixel.

Further, the image processing method proposed in the embodiments of the present invention can be executed by a multi-core processor using parallel processing. For example, the multi-core processor can execute the above steps in parallel, for instance by grouping the meshes and processing the groups in parallel, and by storing the related data in multiple memory units for parallel access. The multi-core processor can also perform the vertex filling or the pixel interpolation described above in parallel to produce the depth map DM. This further raises the processing speed of the image processing method of this embodiment.
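The bilinear formula transcribes directly into Python; treating f as a depth lookup on the filled vertex grid and deriving α and β from the fractional parts of the pixel coordinate are illustrative assumptions.

    def bilinear_depth(f, u0, v0):
        # Pixel interpolation (Fig. 7C): blend the depth values of the
        # four surrounding vertices by the fractional offsets alpha, beta.
        u, v = int(u0), int(v0)          # vertex at or below the pixel
        alpha, beta = u0 - u, v0 - v     # horizontal / vertical distances
        return (f(u, v) * (1 - alpha) * (1 - beta)
                + f(u + 1, v) * alpha * (1 - beta)
                + f(u, v + 1) * (1 - alpha) * beta
                + f(u + 1, v + 1) * alpha * beta)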

In another example, the step of producing the depth map DM may also include the following steps: the depth values of the vertices of the meshes are provided to a graphics processing unit (GPU), and the GPU produces the depth map using its internal rendering function. By using a GPU capable of converting the depth values of the vertices of the multi-resolution mesh MRM1 into the depth map DM, the processing speed of the image processing method of this embodiment can be raised even further.

In addition, the depth map produced by the image processing method of this embodiment, combined with the two-dimensional synthesized scene corresponding to the viewing angle 402, can be used to generate stereoscopic images of other viewing angles.

Although the above embodiment is described with a multi-resolution mesh having meshes of a first resolution and a second resolution, the invention is not limited to this; the multi-resolution mesh may also have N resolutions.
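For the multi-core variant, one possible grouping — sketched here with Python's standard multiprocessing pool, purely as an assumption about how the parallelization could look — distributes groups of vertices across worker processes; vertex_depth, PLANES, NEIGHBOR_IMAGES, and project_block are assumed to be available at module level from the earlier sketches.

    from multiprocessing import Pool

    def group_depths(group):
        # Worker: run the plane sweep over one group of vertices;
        # groups are processed on separate cores in parallel.
        return [vertex_depth(v, PLANES, NEIGHBOR_IMAGES, project_block)
                for v in group]

    def parallel_vertex_depths(groups, workers=4):
        with Pool(processes=workers) as pool:
            per_group = pool.map(group_depths, groups)
        return [d for g in per_group for d in g]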

The value of N here can be set according to the situation. For example, meshes of a third resolution can be generated as follows. First, according to the relationship of the depth values of the vertices within a mesh of the second resolution, the corresponding mesh of the second resolution is selectively divided to generate a plurality of meshes of a third resolution, each having a plurality of vertices of the third resolution. Then, the depth values of the vertices of the third resolution are generated according to at least a portion of the captured images.

It follows that, by again using the relationship of the depth values of the vertices within the meshes of the third resolution, the corresponding meshes of the third resolution can be divided further to generate meshes of even higher resolution. In this way, after the depth values of the vertices of every mesh have been computed, a multi-resolution mesh with N resolutions is produced.

The image processing method disclosed in the above embodiments of the present invention processes a plurality of captured images taken at a plurality of capturing angles to produce a depth map for a viewing angle. By establishing a multi-resolution mesh and computing the depth values of its vertices, the depth values of the other pixels can be found quickly, so the depth map is obtained quickly. The invention has the advantages of high efficiency, fast processing speed, and good depth-map accuracy.

In summary, although the present invention has been disclosed above by way of a preferred embodiment, the embodiment is not intended to limit the invention. Those with ordinary knowledge in the technical field to which the present invention belongs may make various changes and refinements without departing from the spirit and scope of the invention. The scope of protection of the present invention is therefore defined by the appended claims.

[Brief Description of the Drawings]

Fig. 1 is a flow chart of an image processing method according to an embodiment of the present invention.
Fig. 2A is a schematic diagram of part of a set of initial meshes.
Fig. 2B is a schematic diagram of an example of a mesh of the first resolution and meshes of the second resolution.
Fig. 3 shows an example of the relationship between an observation position and a plurality of test depth planes.
Fig. 4 is a flow chart of the sub-steps S104a, S104b, and S104c included in step S104.
Fig. 5 is a schematic diagram of an example of the selected neighboring captured images.
Fig. 6 shows an example of a multi-resolution mesh of this embodiment and the three-dimensional image it corresponds to.
Fig. 7A shows an example of the multi-resolution mesh MRM1 before vertex filling.
Fig. 7B shows an example of the multi-resolution mesh MRM2 after vertex filling.
Fig. 7C is a schematic diagram of performing bilinear pixel interpolation on a predetermined pixel to obtain its depth value.

[Description of Main Element Symbols]

C1~C5: capturing positions
DM: depth map


Im: three-dimensional image
Im1~Im4: neighboring captured images
Imo: two-dimensional original image
m1a~m1c, m1a'~m1c', m2a~m2c: vertices
M1, M1', M2: meshes
MRM0: initial meshes
MRM1: multi-resolution mesh
Pa1~Pa4: image blocks
S102~S112, S104a~S104c: process steps
Sp: scene object
TP1~TPM: test depth planes
Vp: observation position
402: viewing angle
404: virtual image plane

[Description of the Attached Drawings]

Attached Figure 1 shows an example of a two-dimensional original image.
Attached Figure 2 shows the depth map of the multi-resolution mesh MRM1 corresponding to the two-dimensional original image.
Attached Figure 3 shows the depth map of the multi-resolution mesh MRM2 produced by performing vertex filling on the multi-resolution mesh MRM1 of attached Figure 2.
Attached Figure 4 shows the depth map produced by performing pixel interpolation on the multi-resolution mesh MRM2 of attached Figure 3.
Attached Figure 5 shows the depth map produced by finding the depth value of each pixel individually, pixel by pixel.


Claims (1)

X. Patent Application Scope:

1. An image processing method for producing a depth map corresponding to a viewing angle from a plurality of captured images taken by a plurality of image capturing devices located at different capturing positions, the method comprising:
establishing a set of initial meshes, the set of initial meshes being related to a virtual image plane corresponding to the viewing angle, the set of initial meshes having a plurality of meshes of a first resolution, each mesh of the first resolution having a plurality of vertices of the first resolution;
generating depth values of the vertices of the first resolution according to at least a portion of the captured images;
determining whether the relationship of the depth values of the vertices within each mesh of the first resolution conforms to a first predetermined condition, and if so, dividing the meshes of the first resolution conforming to the first predetermined condition to generate a plurality of meshes of a second resolution, producing a multi-resolution mesh comprising the meshes of the first resolution and the meshes of the second resolution, each mesh of the second resolution having a plurality of vertices of the second resolution;
generating depth values of the vertices of the second resolution according to at least a portion of the captured images; and
generating, from the depth values of the vertices of the first resolution and the depth values of the vertices of the second resolution, depth values of the other pixels not corresponding to the vertices, so as to produce the depth map.

2. The method of claim 1, further comprising:
selectively dividing a mesh of the second resolution, according to the relationship of the depth values of the vertices within that mesh, to generate a plurality of meshes of a third resolution, each mesh of the third resolution having a plurality of vertices of the third resolution; and
generating depth values of the vertices of the third resolution according to at least a portion of the captured images.

3. The method of claim 1, wherein the depth values of the vertices of the first resolution are generated using a plane sweeping method.

4. The method of claim 1, wherein, in the step of dividing the meshes of the first resolution, a mesh of the first resolution conforms to the first predetermined condition if the maximum difference between the depth values of any two of its vertices of the first resolution is greater than a first predetermined value.

5. The method of claim 1, wherein producing the depth map comprises:
performing vertex filling on the multi-resolution mesh using the depth values of the vertices of the first resolution and the depth values of the vertices of the second resolution; and
performing pixel interpolation on the vertex-filled multi-resolution mesh to produce the depth map in which every pixel has a depth value.

6. The method of claim 5, wherein performing vertex filling on the multi-resolution mesh comprises:
dividing the meshes of the first resolution that do not conform to the first predetermined condition to generate additional meshes of the second resolution, so that the multi-resolution mesh is filled with vertices of the second resolution.

7. The method of claim 6, wherein, in performing vertex filling on the multi-resolution mesh, linear interpolation is used to obtain, from the vertices whose depth values have already been determined, the depth values of the remaining vertices of the divided meshes.

8. The method of claim 5, wherein the pixel interpolation uses bilinear interpolation to obtain the depth value corresponding to each pixel.

9. The method of claim 8, wherein using bilinear interpolation to obtain the depth value corresponding to each pixel further comprises:
obtaining the depth value corresponding to a predetermined pixel, the predetermined pixel being a pixel not corresponding to a vertex of the vertex-filled multi-resolution mesh, the depth value of the predetermined pixel being obtained by the following formula:
f(u0, v0) = f(u', v')(1 − α)(1 − β) + f(u'+1, v')α(1 − β) + f(u', v'+1)(1 − α)β + f(u'+1, v'+1)αβ
wherein:
(u', v'), (u'+1, v'), (u', v'+1), and (u'+1, v'+1) are the coordinates of the four neighboring vertices;
(u0, v0) is the coordinate of the predetermined pixel;
α is the horizontal distance between the coordinate of the predetermined pixel (u0, v0) and the coordinate of the vertex (u', v');
β is the vertical distance between the coordinate of the predetermined pixel (u0, v0) and the coordinate of the vertex (u', v'); and
f(·) denotes the depth value corresponding to a coordinate.

10. The method of claim 1, wherein generating the depth values of the vertices of the first resolution comprises:
setting a plurality of predetermined depth values corresponding to a plurality of test depth planes;
selecting a portion of the captured images according to the viewing angle, the selected captured images being defined as a plurality of neighboring captured images; and
for each vertex of the first resolution, performing color consistency verification based on the image information of the neighboring captured images, and accordingly selecting one of the predetermined depth values as the depth value of the corresponding vertex of the first resolution.

11. The method of claim 10, wherein selecting the neighboring captured images comprises:
selecting, according to the distance between the capturing position of each image capturing device and the direction of the viewing angle, the captured images whose capturing positions are closer to the viewing angle, as the neighboring captured images.

12. The method of claim 1, wherein the method is executed by a multi-core processor using parallel processing.

13. The method of claim 1, wherein producing the depth map comprises:
providing the depth values of the vertices of the meshes to a graphics processing unit (GPU); and
generating the depth map by the GPU using its internal rendering function.
TW097151506A 2008-12-30 2008-12-30 Image processing method for providing depth information TWI370410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW097151506A TWI370410B (en) 2008-12-30 2008-12-30 Image processing method for providing depth information


Publications (2)

Publication Number Publication Date
TW201025186A 2010-07-01
TWI370410B TWI370410B (en) 2012-08-11

Family

ID=44852500

Family Applications (1)

Application Number Title Priority Date Filing Date
TW097151506A TWI370410B (en) 2008-12-30 2008-12-30 Image processing method for providing depth information

Country Status (1)

Country Link
TW (1) TWI370410B (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10027976B2 (en) 2010-11-04 2018-07-17 Ge Video Compression, Llc Picture coding supporting block merging and skip mode
TWI644561B (en) * 2010-11-04 2018-12-11 Ge影像壓縮有限公司 Picture coding supporting block merging and skip mode, and related apparatus, method, computer program and digital storage medium
US10382776B2 (en) 2010-11-04 2019-08-13 Ge Video Compression, Llc Picture coding supporting block merging and skip mode
US10602182B2 (en) 2010-11-04 2020-03-24 Ge Video Compression, Llc Picture coding supporting block merging and skip mode
US10785500B2 (en) 2010-11-04 2020-09-22 Ge Video Compression, Llc Picture coding supporting block merging and skip mode
US10841608B2 (en) 2010-11-04 2020-11-17 Ge Video Compression, Llc Picture coding supporting block merging and skip mode
TWI725348B (en) * 2010-11-04 2021-04-21 美商Ge影像壓縮有限公司 Picture coding supporting block merging and skip mode, and related apparatus, method, computer program and digital storage medium
US11785246B2 (en) 2010-11-04 2023-10-10 Ge Video Compression, Llc Picture coding supporting block merging and skip mode
US9569888B2 (en) 2014-12-15 2017-02-14 Industrial Technology Research Institute Depth information-based modeling method, graphic processing apparatus and storage medium
TWI686770B (en) * 2017-12-26 2020-03-01 宏達國際電子股份有限公司 Surface extrction method, apparatus, and non-transitory computer readable storage medium
US10719982B2 (en) 2017-12-26 2020-07-21 Htc Corporation Surface extrction method, apparatus, and non-transitory computer readable storage medium thereof

Also Published As

Publication number Publication date
TWI370410B (en) 2012-08-11
