TW202201344A - Depth image based rendering method, electrical device and computer program product - Google Patents


Info

Publication number
TW202201344A
Authority
TW
Taiwan
Prior art keywords
pixel
image
depth
value
angle class
Prior art date
Application number
TW109121387A
Other languages
Chinese (zh)
Other versions
TWI736335B (en)
Inventor
楊家輝
蕭桐
李彥醇
Original Assignee
國立成功大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立成功大學 filed Critical 國立成功大學
Priority to TW109121387A priority Critical patent/TWI736335B/en
Application granted granted Critical
Publication of TWI736335B publication Critical patent/TWI736335B/en
Publication of TW202201344A publication Critical patent/TW202201344A/en

Landscapes

  • Image Processing (AREA)

Abstract

A depth image based rendering method includes: obtaining a two-dimensional image and a corresponding depth image, and warping the two-dimensional image according to the depth image to obtain a warped image that includes at least one hole region; filling the hole region from top to bottom to generate a first filled image; filling the hole region from bottom to top to generate a second filled image; and classifying each pixel of the hole region into a first angle class or a second angle class, then filling the pixel from the first filled image if it belongs to the first angle class, or from the second filled image if it belongs to the second angle class.

Description

Depth image based rendering method, electronic device and computer program product

The present disclosure relates to a depth-image-based rendering method that employs a weighted 3D fractional warping calculation and a dual-path gradient hole-filling method.

Comfortable 3D display has become one of the hottest trends in technology; in recent years the entertainment industry, medical systems, and other fields have brought 3D imaging into daily life. However, comfortable 3D display requires multi-view images, and multi-view camera rigs are expensive to shoot with and their output is hard to transmit, leading to a lack of content and degraded quality. Depth image based rendering (DIBR) can generate multi-view 3D images from a single 2D image and a depth map. Conventional DIBR comprises three main processing stages: depth-map preprocessing, 3D warping, and hole filling. Depth-map preprocessing smooths the depth map to reduce horizontal depth variation and thereby reduce the holes left after 3D warping. 3D warping applies the principle of optical projection: the pixels of the 2D image are shifted linearly and horizontally according to their depth values, with larger depth values shifting farther and smaller ones shifting less. 3D warping produces many holes, and the hole-filling stage fills the holes produced by 3D warping using methods such as interpolation, extrapolation, and inpainting. How to improve this pipeline is a topic of concern to those skilled in the art.

An embodiment of the present invention provides a depth-image-based rendering method suitable for an electronic device. The method includes: obtaining a two-dimensional image and a depth image corresponding to the two-dimensional image, and warping the pixels of the two-dimensional image according to the depth image to obtain a warped image that includes at least one hole region; filling the hole region from top to bottom to generate a first filled image; filling the hole region from bottom to top to generate a second filled image; and classifying each pixel of the hole region into a first angle class or a second angle class, filling the pixel from the first filled image if it belongs to the first angle class, or from the second filled image if it belongs to the second angle class.

In some embodiments, filling the hole region from top to bottom to generate the first filled image includes: for a first pixel of the hole region, computing a horizontal gradient value from the pixel above and the pixel to the upper left of the first pixel, and computing a vertical gradient value from the pixel to the left and the pixel to the upper left of the first pixel; if the horizontal gradient value is greater than the vertical gradient value, filling the first pixel with the pixel above; and if the vertical gradient value is greater than the horizontal gradient value, filling the first pixel with the pixel to the left.

In some embodiments, filling the hole region from bottom to top to generate the second filled image includes: for a first pixel of the hole region, computing a horizontal gradient value from the pixel below and the pixel to the lower left of the first pixel, and computing a vertical gradient value from the pixel to the left and the pixel to the lower left of the first pixel; if the horizontal gradient value is greater than the vertical gradient value, filling the first pixel with the pixel below; and if the vertical gradient value is greater than the horizontal gradient value, filling the first pixel with the pixel to the left.

From another perspective, classifying each pixel of the hole region into the first angle class or the second angle class includes: for each pixel of the hole region, if the pixel is a hole pixel and both its left pixel and its upper pixel are non-hole pixels, classifying it into the first angle class; for each pixel of the hole region, if the pixel is a hole pixel and its left pixel belongs to the first angle class, classifying it into the first angle class; for each pixel of the hole region, if the pixel is a hole pixel, the pixel below it belongs to the first angle class, and its left pixel is a non-hole pixel, classifying it into the first angle class; and classifying the remaining hole pixels of the hole region into the second angle class.
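The classification rules above can be sketched as a scan over a hole mask. This is a minimal illustration, not the patent's implementation: the function name, the boolean-grid representation, and the bottom-to-top, left-to-right scan order (chosen so that the "below" and "left" neighbors are already classified when a pixel is visited) are all assumptions.

```python
def classify_holes(hole):
    """hole: 2D boolean grid (True = hole pixel).
    Returns a grid of 0 (non-hole), 1 (first angle class), 2 (second)."""
    h, w = len(hole), len(hole[0])
    cls = [[2 if hole[y][x] else 0 for x in range(w)] for y in range(h)]

    def nonhole(y, x):
        # Out-of-bounds neighbors are treated as unavailable, not as non-holes.
        return 0 <= y < h and 0 <= x < w and not hole[y][x]

    # Bottom-to-top so the pixel below is already classified,
    # left-to-right so the pixel to the left is already classified.
    for y in range(h - 1, -1, -1):
        for x in range(w):
            if not hole[y][x]:
                continue
            left_ok = nonhole(y, x - 1)              # left pixel is non-hole
            up_ok = nonhole(y - 1, x)                # upper pixel is non-hole
            left_first = x > 0 and cls[y][x - 1] == 1      # left is class 1
            below_first = y + 1 < h and cls[y + 1][x] == 1  # below is class 1
            if (left_ok and up_ok) or left_first or (below_first and left_ok):
                cls[y][x] = 1
    return cls
```

All other hole pixels keep the default second angle class, matching the final rule above.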

In some embodiments, warping the pixels of the two-dimensional image according to the depth image to obtain the warped image includes: computing a fractional position from each depth value of the depth image, writing the depth value into the fractional position to produce a warped depth image, and writing the corresponding pixel value of the two-dimensional image into the fractional position to produce a warped two-dimensional image; for an integer position, computing the maximum depth value within a neighboring range of the integer position, and deleting depth values in the neighboring range that differ from the maximum depth value by more than a threshold; and computing the pixel value at the integer position as a weighted combination of the pixel values corresponding to the depth values remaining in the neighboring range.

In some embodiments, computing the pixel value at the integer position from the pixel values corresponding to the remaining depth values includes: computing Gaussian distances between the integer position and the positions of the remaining depth values in the neighboring range; and summing, with the Gaussian distances as weights, the pixel values corresponding to the remaining depth values to obtain the pixel value at the integer position.

In some embodiments, the aforementioned neighboring range extends in both the horizontal direction and the vertical direction.

From another perspective, an embodiment of the present invention provides an electronic device that includes a memory and a processor. The memory stores a plurality of instructions, and the processor executes these instructions to carry out the depth-image-based rendering method described above.

From another perspective, an embodiment of the present invention provides a computer program product that is loaded and executed by an electronic device to carry out the depth-image-based rendering method described above.

The depth-image-based rendering method described above reduces holes while also reducing the amount of computation.

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

As used herein, the terms "first", "second", and so on do not denote any particular order or sequence; they merely distinguish elements or operations described with the same technical term.

FIG. 1 is a schematic diagram of an electronic device according to an embodiment. Referring to FIG. 1, the electronic device 100 may be a smartphone, a tablet computer, a personal computer, a notebook computer, a server, an industrial computer, or any of various electronic devices with computing capability; the invention is not limited in this respect. The electronic device 100 includes a processor 110 and a memory 120. The processor 110 may be a central processing unit, a microprocessor, a microcontroller, an image processing chip, an application-specific integrated circuit, or the like. The memory 120 may be random access memory, read-only memory, flash memory, a floppy disk, a hard disk, an optical disc, a flash drive, a magnetic tape, or a database accessible over the Internet, and it stores a plurality of instructions that the processor 110 executes to carry out a depth-image-based rendering method, described in detail below.

FIG. 2 is a schematic diagram of the depth-image-based rendering method according to an embodiment. Referring to FIG. 2, the method generates two-dimensional images of different viewing angles from a two-dimensional image 210 and a depth image 220 corresponding to the two-dimensional image 210. For example, the viewing angle of the two-dimensional image 210 is defined as view 0, and the depth image 220 contains the depth values measured from view 0; in this embodiment a larger depth value means closer to the lens, but the invention is not limited thereto. From the two-dimensional image 210 and the depth image, a two-dimensional image 230 of view 1, a two-dimensional image 240 of view 2, and a two-dimensional image 250 of view -1 can be generated, and so on. Here views 1 to 3 denote viewing angles to the right and views -1 to -3 denote viewing angles to the left, but the invention is not limited thereto. For example, if the two-dimensional image 210 contains a foreground object, the object shifts to the left when observed from a right-side view and to the right when observed from a left-side view, and the shift grows as the foreground object gets closer to the lens.

Here, the two-dimensional image 210 and the depth image 220 may be captured by an image sensor and a depth sensor, respectively, or obtained from an existing database or another device. The image sensor may include a charge-coupled device (CCD) sensor, a complementary metal-oxide-semiconductor (CMOS) sensor, or another suitable photosensitive element. The depth sensor may include a dual camera, an artificial-intelligence depth generation network, a structured-light sensing device, or any device capable of sensing scene depth.

FIG. 3 is a flow diagram of the depth-image-based rendering method according to an embodiment. Referring to FIG. 3, in step 310 the depth image 220 is preprocessed, for example smoothed to reduce depth-value variation; any preprocessing may be used here, and the invention does not limit the content of step 310. In step 320, the pixels of the two-dimensional image 210 are warped according to the preprocessed depth image 220; in particular, this embodiment proposes a weighted 3D fractional warping calculation. After step 320 a warped image is obtained, which includes one or more hole regions. These hole regions are filled in step 330; this embodiment adopts a dual-path gradient hole-filling method. After hole filling, a virtual-view image such as the two-dimensional image 230 can be produced. Steps 320 and 330 are described in detail below with examples.

FIG. 4A and FIG. 4B are flow diagrams of the weighted 3D fractional warping calculation according to an embodiment. Referring to FIG. 4A, the pixels of the two-dimensional image 210 are denoted I(x, y) and the depth values of the preprocessed depth image 220 are denoted D(x, y), where x and y are the X and Y coordinates, respectively. Step 320 described above includes steps 401 to 405. In step 401, a fractional position is computed from each depth value of the depth image 220. Specifically, a parallax displacement value is computed from the depth value as shown in Formula 1:

[Formula 1] d = floor(K · d0 + 0.5) / K, where d0 = v · α · D(x, y)

Here d is the parallax displacement value and d0 is the conventional parallax displacement. 1/K is the fractional precision of the fractional positions; K may be 2, 4, 8, or any positive integer. v denotes the view, for example one of -3 to -1 and 1 to 3 shown in FIG. 2. α is the depth disparity ratio, which may be determined from the parameters of the camera and the stereoscopic display. floor(·) denotes discarding the fractional part, so floor(· + 0.5) rounds to the nearest integer. Formula 1 thus multiplies the conventional parallax displacement by K, rounds it, and divides by K again; notably, the parallax displacement d may be an integer or carry a fraction. Taking K = 4 as an example, the fractional part is one of .00, .25, .50, and .75. Adding the parallax displacement d to the x coordinate then yields a fractional position.
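As a concrete illustration of Formula 1, the sketch below quantizes the conventional displacement to multiples of 1/K. This is a minimal sketch under assumptions: the function name, the linear form v · α · D(x, y) of the conventional displacement, and the sample parameter values are illustrative, not taken verbatim from the patent.

```python
import math

def fractional_shift(depth, v, alpha, K=4):
    """Parallax displacement quantized to multiples of 1/K (Formula 1)."""
    raw = v * alpha * depth               # conventional real-valued displacement
    return math.floor(raw * K + 0.5) / K  # multiply by K, round, divide by K

# With K = 4 the fractional part is one of .00, .25, .50, .75,
# and closer objects (larger depth) shift farther:
assert fractional_shift(120, v=1, alpha=0.01) == 1.25    # 1.20 -> 1.25
assert fractional_shift(30, v=1, alpha=0.01) == 0.25     # 0.30 -> 0.25
assert fractional_shift(120, v=-1, alpha=0.01) == -1.25  # left view shifts the other way
```

The position where the pixel lands is then x + fractional_shift(...), an integer plus a multiple of 1/K.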

In step 402, the original depth value D(x, y) is moved to the fractional position to produce the warped depth image, as expressed in Formula 2, where D_w is the warped depth image and x + d is the fractional position. Note that the Y coordinate needs no displacement.

[Formula 2] D_w(x + d, y) = D(x, y)

In step 403, the corresponding pixel value of the two-dimensional image 210 is written into the fractional position to produce the warped two-dimensional image, as expressed in Formula 3, where I_w is the warped two-dimensional image.

[Formula 3] I_w(x + d, y) = I(x, y)

If fractional positions overlap, that is, different x map to the same x + d, the larger depth value is kept and the smaller depth value is discarded.

Because the parallax displacement d of Formula 1 can take K fractional levels, the warped two-dimensional image I_w is effectively K times as wide as the two-dimensional image I, as shown by the warped two-dimensional image 411. Likewise, the warped depth image D_w is K times as wide as the depth image D, as shown by the warped depth image 412.

FIG. 5A and FIG. 5B are partial views of the warped two-dimensional image and the warped depth image according to an embodiment. Referring to FIG. 5A, since there is no displacement along the Y coordinate, the warped two-dimensional image I_w and the warped depth image D_w are shown along the X coordinate only, for a local range around the integer position 100. Note that the thicker gridlines mark integer positions, while the other gridlines mark fractional positions such as .25, .50, and .75. In this example, the depth value "30" has been written to one fractional position, the depth value "121" to a second, and the depth value "120" to a third. Each depth value corresponds to a pixel value: depth "30" corresponds to pixel value "68", depth "121" to pixel value "215", and depth "120" to pixel value "212". The next step is to compute the pixel value at each integer position.

Referring to FIG. 4A and FIG. 5A, in step 404, for each integer position of the warped depth image, the maximum depth value within a neighboring range of the integer position is computed, and depth values in the neighboring range that differ from the maximum by more than a threshold are deleted. If the integer position is x, the neighboring range can be expressed as Formula 4:

[Formula 4] Ω(x) = { x′ | x − 2/K ≤ x′ ≤ x + 2/K }

Ω(x) denotes the neighboring range, that is, the set of all warped positions x′ falling inside it. In this embodiment K = 4, so the neighboring range Ω(x) can be set to extend from two fractional positions to the left to two fractional positions to the right; taking the integer position "100" as an example, Ω(x) covers x = 99.50 to x = 100.50. The pixel value at the integer position 100 can then be determined from the pixel values within Ω(x). If several pixels have been displaced into this neighboring range, only the depth values within a threshold of the maximum depth value are kept, because larger depth values belong to the foreground and occlude pixel values lying in the background. Specifically, the maximum depth value within Ω(x) is "121", denoted D_max as in Formula 5. The depth values remaining in Ω(x) form a foreground depth set, expressed as Formula 6:

[Formula 5] D_max = max over x′ ∈ Ω(x) of D_w(x′, y)

[Formula 6] S = { x′ ∈ Ω(x) | D_max − D_w(x′, y) ≤ T }

S is the foreground depth set, which in this example contains "121" and "120", while the depth value "30" is deleted. T is the threshold and may be set to any suitable value.
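Formulas 5 and 6 can be sketched as a small filter over the (depth, pixel) pairs that landed inside the neighboring range. The pair representation and the default threshold T = 10 are assumptions; the depth and pixel values mirror the example above.

```python
def foreground_set(neighborhood, T=10):
    """neighborhood: list of (depth, pixel) pairs warped into the range.
    Keeps only pairs whose depth is within T of the maximum (the foreground)."""
    d_max = max(d for d, _ in neighborhood)                     # Formula 5
    return [(d, p) for d, p in neighborhood if d_max - d <= T]  # Formula 6

# Depth 30 (background) differs from the maximum 121 by more than T, so it
# and its pixel value 68 are removed; 121/215 and 120/212 survive.
kept = foreground_set([(30, 68), (121, 215), (120, 212)])
assert kept == [(121, 215), (120, 212)]
```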

In addition, the corresponding pixel values in the warped two-dimensional image I_w are deleted as well; the remaining depth and pixel values are shown in FIG. 5B, and the remaining pixel values can be expressed as Formula 7. In this example the remaining pixel values are "215" and "212".

[Formula 7] P = { I_w(x′, y) | x′ ∈ S }

Note that in FIG. 5A and FIG. 5B the neighboring range Ω(x) covers only the horizontal direction, but in some embodiments it may also cover the vertical direction. For example, Ω(x) may be replaced by a two-dimensional neighboring range Ω′(x, y), expressed as Formula 8, and Formula 5 above is then replaced by Formula 9:

[Formula 8] Ω′(x, y) = { (x′, y′) | x − 2/K ≤ x′ ≤ x + 2/K, y − m ≤ y′ ≤ y + m }, where m sets the vertical extent

[Formula 9] D_max = max over (x′, y′) ∈ Ω′(x, y) of D_w(x′, y′)

Referring to FIG. 4A, in step 405 the pixel value at each integer position is computed from the pixel values corresponding to the depth values remaining in the neighboring range. Because pixel values closer to the integer position matter more, this embodiment first computes the Gaussian distance between the integer position and the positions of the remaining depth values in the neighboring range, then sums the corresponding pixel values with the Gaussian distances as weights to obtain the pixel value at the integer position; in other embodiments the Gaussian distance may be replaced by another linear or nonlinear distance weight. Step 405 can be expressed as Formulas 10 and 11:

[Formula 10] w(x′, y′) = exp( −((x − x′)² + (y − y′)²) / (2σ²) )

[Formula 11] I_v(x, y) = ( Σ over (x′, y′) ∈ S′ of w(x′, y′) · I_w(x′, y′) ) / ( Σ over (x′, y′) ∈ S′ of w(x′, y′) )

Here (x, y) are the X and Y coordinates of the integer position, σ is a parameter representing the variance of the Gaussian distribution, and S′ is the set of X and Y coordinates of the pixel values remaining within the neighboring range Ω′. Although FIG. 5A and FIG. 5B use only the neighboring range in the X direction, Formulas 10 and 11 are the more general expression: they use neighboring pixel values in the Y direction as well as in the X direction. Once the pixel value at every integer position has been computed, the warped image I_v is obtained, for example the warped image 410 of FIG. 4B, which includes at least one hole region 420. In this embodiment a view to the left is generated, so the objects in the image shift to the right and a hole region 420 appears to the left of the objects.

Because the conventional pixel-warping method uses only integer values, any fractional part has to be discarded, which creates many additional holes. In contrast, the approach above preserves the fractional part and therefore greatly reduces the number of holes.

Next, the dual-path gradient hole-filling method is performed, as shown in FIG. 6. In this embodiment the warped image 410 is filled along two directions. Specifically, in step 610 the hole regions of the warped image 410 are filled from top to bottom to generate a first filled image 611. When the warped image 410 is scanned from top to bottom and left to right, the pixel above, the upper-left pixel, and the left pixel of a hole are guaranteed to have values (not holes). Therefore, for a pixel I_v(x, y) in a hole region, a horizontal gradient value can be computed from the upper and upper-left pixels, as in Formula 12, and a vertical gradient value from the left and upper-left pixels, as in Formula 13:

[Formula 12] G_h = | I_v(x, y−1) − I_v(x−1, y−1) |

[Formula 13] G_v = | I_v(x−1, y) − I_v(x−1, y−1) |

Here I_v(x−1, y−1) is the upper-left pixel, I_v(x, y−1) is the upper pixel, and I_v(x−1, y) is the left pixel. If the horizontal gradient value G_h is greater than the vertical gradient value G_v, a vertical edge exists around the pixel I_v(x, y), so the upper pixel I_v(x, y−1) is used to fill it. On the other hand, if the vertical gradient value G_v is greater than the horizontal gradient value G_h, a horizontal edge exists around the pixel, so the left pixel I_v(x−1, y) is used. In other words, step 610 fills from top to bottom according to Formula 14:

[Formula 14] I_TD(x, y) = I_v(x, y−1) if G_h > G_v, and I_v(x−1, y) otherwise

The resulting image I_TD is the first filled image 611. In this embodiment, when the vertical gradient value G_v equals the horizontal gradient value G_h the left pixel is used, but in other embodiments the upper pixel may be used instead; the invention is not limited in this respect.
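Step 610 (Formulas 12 to 14) can be sketched as a single raster scan. This is a minimal sketch under assumptions: holes are marked `None`, the top row and left column are assumed hole-free (as the scan-order argument above guarantees for interior holes), and ties fall to the left pixel as in this embodiment.

```python
def fill_top_down(img):
    """img: 2D list of pixel values with None marking hole pixels.
    Fills holes in place, scanning top to bottom, left to right."""
    for y in range(1, len(img)):
        for x in range(1, len(img[0])):
            if img[y][x] is not None:
                continue
            up, left, upleft = img[y - 1][x], img[y][x - 1], img[y - 1][x - 1]
            g_h = abs(up - upleft)    # Formula 12: horizontal gradient
            g_v = abs(left - upleft)  # Formula 13: vertical gradient
            # Formula 14: a vertical edge (g_h > g_v) copies from above,
            # otherwise copy from the left.
            img[y][x] = up if g_h > g_v else left
    return img
```

The bottom-up pass of step 620 is the mirror image: scan the rows in reverse and replace the upper and upper-left neighbors with the lower and lower-left ones.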

On the other hand, in step 620 the hole regions of the warped image 410 are filled from bottom to top to generate a second filled image 621. When scanning from bottom to top and left to right, the lower pixel I_v(x, y+1), the left pixel I_v(x−1, y), and the lower-left pixel I_v(x−1, y+1) have values (not holes), so the horizontal gradient value can be computed from the lower and lower-left pixels, as in Formula 15, and the vertical gradient value from the left and lower-left pixels, as in Formula 16:

[Formula 15] G′_h = | I_v(x, y+1) − I_v(x−1, y+1) |

[Formula 16] G′_v = | I_v(x−1, y) − I_v(x−1, y+1) |
If the horizontal gradient value $G'_H(x,y)$ is greater than the vertical gradient value $G'_V(x,y)$, a vertical edge exists around the pixel $(x,y)$, so the lower pixel $I_w(x,y+1)$ is used to fill it. If the vertical gradient value $G'_V(x,y)$ is greater than the horizontal gradient value $G'_H(x,y)$, the left pixel $I_w(x-1,y)$ is used. In other words, step 620 performs bottom-to-top filling according to Equation 17 below.

[Math 17]
$I_2(x,y) = \begin{cases} I_w(x,y+1), & \text{if } G'_H(x,y) > G'_V(x,y) \\ I_w(x-1,y), & \text{otherwise} \end{cases}$
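The bottom-to-top pass of step 620 (Equations 15 to 17) mirrors the top-to-bottom pass, swapping the upper and upper-left neighbours for the lower and lower-left ones. The NumPy sketch below is illustrative; the function name and array conventions are assumptions.

```python
import numpy as np

def fill_bottom_up(img, hole):
    """Bottom-to-top, left-to-right hole filling (step 620, Eqs. 15-17).

    img  : 2-D float array (a grayscale view of the warped image)
    hole : boolean array, True where the warped image has a hole
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h - 2, -1, -1):        # bottom-to-top
        for x in range(1, w):             # left-to-right
            if not hole[y, x]:
                continue
            down, downleft, left = out[y + 1, x], out[y + 1, x - 1], out[y, x - 1]
            g_h = abs(down - downleft)    # Eq. 15: horizontal gradient
            g_v = abs(left - downleft)    # Eq. 16: vertical gradient
            # Eq. 17: a larger horizontal gradient means a vertical edge,
            # so copy the lower pixel; otherwise copy the left pixel.
            out[y, x] = down if g_h > g_v else left
    return out
```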

The resulting image $I_2$ is the second filled image 621. It is worth noting that when a 45-degree edge lies around a pixel, the first filled image 611 fills it better, while when a 135-degree edge lies around a pixel, the second filled image 621 fills it better. Therefore, in step 630 each pixel of the hole region is classified into one of two angle classes, and the corresponding pixel is then filled from either the first filled image 611 or the second filled image 621 according to its class. In practice, a mask M(x,y) is used: positions inside the hole region are set to 0 and positions outside it are set to 255, as shown in Equation 18 below.

[Math 18]
$M(x,y) = \begin{cases} 0, & \text{if } (x,y) \text{ is in the hole region} \\ 255, & \text{otherwise} \end{cases}$

For each pixel in the hole region, if its left pixel and upper pixel are both non-hole pixels, the pixel is classified into the 45-degree angle class and the corresponding position in the mask M(x,y) is set to 128. This decision can be expressed as Equation 19 below.

[Math 19]
$M(x,y) \leftarrow 128, \quad \text{if } M(x,y)=0,\ M(x-1,y)=255 \text{ and } M(x,y-1)=255$

Next, based on the warping concept, hole pixels extend horizontally, so the 45-degree angle class is extended to the right. This horizontal extension can be expressed as Equation 20 below.

[Math 20]
$M(x,y) \leftarrow 128, \quad \text{if } M(x,y)=0 \text{ and } M(x-1,y)=128$

Finally, a vertical extension is performed, as shown in Equation 21 below.

[Math 21]
$M(x,y) \leftarrow 128, \quad \text{if } M(x,y)=0,\ M(x,y+1)=128 \text{ and } M(x-1,y)=255$

After the vertical extension, the positions still marked 0 in the mask M(x,y) are classified into the 135-degree angle class. If a pixel belongs to the 45-degree angle class, it is filled with the result of the first filled image 611; if it belongs to the 135-degree angle class, it is filled with the result of the second filled image 621. This step can be expressed as Equation 22 below.

[Math 22]
$I_f(x,y) = \begin{cases} I_1(x,y), & \text{if } M(x,y)=128 \\ I_2(x,y), & \text{if } M(x,y)=0 \\ I_w(x,y), & \text{otherwise} \end{cases}$

The resulting image $I_f$ is, for example, image 631 in FIG. 6, which is well filled in every angle class.
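The classification and merge of step 630 (Equations 18 to 22) can also be sketched in NumPy: build the mask M(x,y), seed the 45-degree class, extend it horizontally and vertically, and then pick between the two filled images. This is an illustration under assumed conventions (boolean hole mask, single-channel float images), not the patented implementation.

```python
import numpy as np

def merge_by_angle(fill_td, fill_bu, hole):
    """Angle classification and merge (step 630, Eqs. 18-22).

    fill_td : first filled image 611 (top-down, better for 45-degree edges)
    fill_bu : second filled image 621 (bottom-up, better for 135-degree edges)
    hole    : boolean array, True where the warped image had a hole
    Returns the merged image and the mask M (0 = 135-degree class,
    128 = 45-degree class, 255 = known pixel).
    """
    h, w = hole.shape
    M = np.where(hole, 0, 255)  # Eq. 18

    # Eq. 19: seed the 45-degree class where the left and upper pixels are known
    for y in range(1, h):
        for x in range(1, w):
            if M[y, x] == 0 and M[y, x - 1] == 255 and M[y - 1, x] == 255:
                M[y, x] = 128

    # Eq. 20: extend the 45-degree class horizontally, to the right
    for y in range(h):
        for x in range(1, w):
            if M[y, x] == 0 and M[y, x - 1] == 128:
                M[y, x] = 128

    # Eq. 21: extend vertically (upwards) where the pixel below is in the
    # 45-degree class and the left pixel is known
    for y in range(h - 2, -1, -1):
        for x in range(1, w):
            if M[y, x] == 0 and M[y + 1, x] == 128 and M[y, x - 1] == 255:
                M[y, x] = 128

    # Eq. 22: remaining zeros are the 135-degree class and come from the
    # bottom-up fill; everything else comes from the top-down fill (known
    # pixels are identical in both filled images).
    out = np.where(M == 0, fill_bu, fill_td)
    return out, M
```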

In this embodiment the mask M(x,y) is used to classify pixels, but in other embodiments any data structure and any symbols may be used. Viewed another way, for each pixel of the hole region: if the pixel is a hole pixel and its left and upper pixels are non-hole pixels, it is classified into the 45-degree angle class; if the pixel is a hole pixel and its left pixel belongs to the 45-degree angle class, it is classified into the 45-degree angle class; if the pixel is a hole pixel, the pixel below it belongs to the 45-degree angle class, and its left pixel is a non-hole pixel, it is classified into the 45-degree angle class; after the above steps, the remaining hole pixels are classified into the 135-degree angle class. The invention does not limit which numbers, symbols or letters are used to mark these angle classes.

From another perspective, the invention also proposes a computer program product, which may be written in any programming language and/or for any platform. When the computer program product is loaded into a computer system and executed, it performs the depth image based rendering method described above.

Compared with existing methods, the depth image based rendering method proposed above synthesizes virtual-view images that are more natural and require less computation.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary skill in the art may make minor changes and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

100: electronic device
110: processor
120: memory
210, 230, 240, 250: two-dimensional image
220: depth image
310, 320, 330, 401–405: step
411: warped two-dimensional image
412: warped depth image
410: warped image
420: hole region
610, 620, 630: step
611: first filled image
621: second filled image
631: image

[FIG. 1] is a schematic diagram of an electronic device according to an embodiment. [FIG. 2] is a schematic diagram of the depth image based rendering method according to an embodiment. [FIG. 3] is a flowchart of the depth image based rendering method according to an embodiment. [FIG. 4A] and [FIG. 4B] are flowcharts of the weighted 3D fractional warping calculation according to an embodiment. [FIG. 5A] and [FIG. 5B] are partial schematic diagrams of the warped two-dimensional image and the warped depth image according to an embodiment. [FIG. 6] is a schematic diagram of the dual-path gradient hole filling method according to an embodiment.

210, 230: two-dimensional image
220: depth image
310, 320, 330: step

Claims (9)

1. A depth image based rendering method, adapted for an electronic device, the method comprising: obtaining a two-dimensional image and a depth image corresponding to the two-dimensional image, and performing pixel warping on the two-dimensional image according to the depth image to obtain a warped image, wherein the warped image includes at least one hole region; filling the at least one hole region from top to bottom to generate a first filled image; filling the at least one hole region from bottom to top to generate a second filled image; and classifying each pixel of the at least one hole region into a first angle class or a second angle class, and filling the pixel with the first filled image if the pixel belongs to the first angle class, or with the second filled image if the pixel belongs to the second angle class. 2. The depth image based rendering method of claim 1, wherein the step of filling the at least one hole region from top to bottom to generate the first filled image comprises: for a first pixel of the at least one hole region, calculating a horizontal gradient value according to an upper pixel and an upper-left pixel of the first pixel, and calculating a vertical gradient value according to a left pixel and the upper-left pixel of the first pixel; if the horizontal gradient value is greater than the vertical gradient value, filling the first pixel with the upper pixel; and if the vertical gradient value is greater than the horizontal gradient value, filling the first pixel with the left pixel.
3. The depth image based rendering method of claim 1, wherein the step of filling the at least one hole region from bottom to top to generate the second filled image comprises: for a first pixel of the at least one hole region, calculating a horizontal gradient value according to a lower pixel and a lower-left pixel of the first pixel, and calculating a vertical gradient value according to a left pixel and the lower-left pixel of the first pixel; if the horizontal gradient value is greater than the vertical gradient value, filling the first pixel with the lower pixel; and if the vertical gradient value is greater than the horizontal gradient value, filling the first pixel with the left pixel. 4. The depth image based rendering method of claim 1, wherein the step of classifying each pixel of the at least one hole region into the first angle class or the second angle class comprises: for each pixel of the at least one hole region, if the pixel is a hole pixel and a left pixel and an upper pixel of the pixel are non-hole pixels, classifying the pixel into the first angle class; for each pixel of the at least one hole region, if the pixel is a hole pixel and the left pixel of the pixel belongs to the first angle class, classifying the pixel into the first angle class; for each pixel of the at least one hole region, if the pixel is a hole pixel, a lower pixel of the pixel belongs to the first angle class, and the left pixel of the pixel is a non-hole pixel, classifying the pixel into the first angle class; and classifying the remaining hole pixels of the at least one hole region into the second angle class. 5. The depth image based rendering method of claim 1, wherein the step of performing pixel warping on the two-dimensional image according to the depth image to obtain the warped image comprises: calculating a fractional position according to a depth value in the depth image, filling the depth value into the fractional position to generate a warped depth image, and filling the corresponding pixel value of the two-dimensional image into the fractional position to generate a warped two-dimensional image; for an integer position, calculating a maximum depth value within a neighboring range of the integer position, and deleting depth values in the neighboring range that differ from the maximum depth value by more than a threshold; and calculating the pixel value at the integer position according to the pixel values corresponding to the remaining depth values in the neighboring range.
6. The depth image based rendering method of claim 5, wherein the step of calculating the pixel value at the integer position according to the pixel values corresponding to the remaining depth values in the neighboring range comprises: calculating Gaussian distances between the integer position and the positions of the remaining depth values in the neighboring range; and summing the pixel values corresponding to the remaining depth values, weighted by the Gaussian distances, to calculate the pixel value at the integer position. 7. The depth image based rendering method of claim 6, wherein the neighboring range includes a horizontal direction and a vertical direction. 8. An electronic device, comprising: a memory storing a plurality of instructions; and a processor configured to execute the instructions to perform the following steps: obtaining a two-dimensional image and a depth image corresponding to the two-dimensional image, and performing pixel warping on the two-dimensional image according to the depth image to obtain a warped image, wherein the warped image includes at least one hole region; filling the at least one hole region from top to bottom to generate a first filled image; filling the at least one hole region from bottom to top to generate a second filled image; and classifying each pixel of the at least one hole region into a first angle class or a second angle class, and filling the pixel with the first filled image if the pixel belongs to the first angle class, or with the second filled image if the pixel belongs to the second angle class.
9. A computer program product, loaded and executed by an electronic device to perform the following steps: obtaining a two-dimensional image and a depth image corresponding to the two-dimensional image, and performing pixel warping on the two-dimensional image according to the depth image to obtain a warped image, wherein the warped image includes at least one hole region; filling the at least one hole region from top to bottom to generate a first filled image; filling the at least one hole region from bottom to top to generate a second filled image; and classifying each pixel of the at least one hole region into a first angle class or a second angle class, and filling the pixel with the first filled image if the pixel belongs to the first angle class, or with the second filled image if the pixel belongs to the second angle class.
TW109121387A 2020-06-23 2020-06-23 Depth image based rendering method, electrical device and computer program product TWI736335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109121387A TWI736335B (en) 2020-06-23 2020-06-23 Depth image based rendering method, electrical device and computer program product


Publications (2)

Publication Number Publication Date
TWI736335B TWI736335B (en) 2021-08-11
TW202201344A true TW202201344A (en) 2022-01-01

Family

ID=78283135

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109121387A TWI736335B (en) 2020-06-23 2020-06-23 Depth image based rendering method, electrical device and computer program product

Country Status (1)

Country Link
TW (1) TWI736335B (en)

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101630408A (en) * 2009-08-14 2010-01-20 清华大学 Depth map treatment method and device
CN101640809B (en) * 2009-08-17 2010-11-03 浙江大学 Depth extraction method of merging motion information and geometric information
US8537200B2 (en) * 2009-10-23 2013-09-17 Qualcomm Incorporated Depth map generation techniques for conversion of 2D video data to 3D video data
TWI439961B (en) * 2011-03-08 2014-06-01 Univ Nat Chi Nan Conversion algorithm for voids generated after converting 2D images
TWI419078B (en) * 2011-03-25 2013-12-11 Univ Chung Hua Apparatus for generating a real-time stereoscopic image and method thereof
TWI493963B (en) * 2011-11-01 2015-07-21 Acer Inc Image generating device and image adjusting method
US20130271565A1 (en) * 2012-04-16 2013-10-17 Qualcomm Incorporated View synthesis based on asymmetric texture and depth resolutions
US9596448B2 (en) * 2013-03-18 2017-03-14 Qualcomm Incorporated Simplifications on disparity vector derivation and motion vector prediction in 3D video coding
CN105144714B (en) * 2013-04-09 2019-03-29 寰发股份有限公司 Three-dimensional or multi-view video coding or decoded method and device
ITTO20130784A1 (en) * 2013-09-30 2015-03-31 Sisvel Technology Srl METHOD AND DEVICE FOR EDGE SHAPE ENFORCEMENT FOR VISUAL ENHANCEMENT OF DEPTH IMAGE BASED RENDERING
TWI497444B (en) * 2013-11-27 2015-08-21 Au Optronics Corp Method and apparatus for converting 2d image to 3d image
TWM529333U (en) * 2016-06-28 2016-09-21 Nat Univ Tainan Embedded three-dimensional image system
US10699466B2 (en) * 2016-12-06 2020-06-30 Koninklijke Philips N.V. Apparatus and method for generating a light intensity image
CN107147906B (en) * 2017-06-12 2019-04-02 中国矿业大学 A kind of virtual perspective synthetic video quality without reference evaluation method
CN107578418B (en) * 2017-09-08 2020-05-19 华中科技大学 Indoor scene contour detection method fusing color and depth information
US10931956B2 (en) * 2018-04-12 2021-02-23 Ostendo Technologies, Inc. Methods for MR-DIBR disparity map merging and disparity threshold determination

