TWM628629U - Stereo image generating device - Google Patents
- Publication number
- TWM628629U (TW M628629 U) · Application TW111200961U (TW 111200961 U)
- Authority
- TW
- Taiwan
- Prior art keywords
- depth information
- image
- information map
- pixel
- depth
- Prior art date
Landscapes
- Eye Examination Apparatus (AREA)
- Ultra Sonic Diagnosis Equipment (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
Description
The present invention relates to an image processing device and an image processing method, and more particularly to a stereoscopic image generating device.
Among conventional stereoscopic image generation methods, one approach uses monocular depth estimation.
FIG. 1 illustrates the conventional stereoscopic image generation method mentioned above. First, the method uses a convolutional neural network model to estimate the depth information of each pixel of a first image 21 (i.e., the original image used as the first viewing angle), obtaining a first depth information map 22. Next, the method sets a pixel offset 23 for each pixel of the first image 21 according to the depth information of the corresponding pixel in the first depth information map 22. Finally, the method applies pixel-offset processing to the first image 21 using the pixel offsets 23 to generate a second image (i.e., the reference image used as the second viewing angle, not shown). For simplicity, the first image 21, the first depth information map 22, and the pixel offsets 23 in FIG. 1 are drawn with an image width of 5 pixels; moreover, for brevity, FIG. 1 shows the depth information and pixel offsets of only 2 lines of pixels.
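The pipeline above (depth map, per-pixel offset, then a horizontal shift with black fill) can be sketched as follows. This is a hypothetical illustration rather than code from the patent; the function name and the uniform 1-pixel offset are assumptions chosen for demonstration.

```python
import numpy as np

def shift_right(image, offsets):
    """Shift each pixel of a grayscale image rightward by its own
    per-pixel offset; destinations with no source pixel stay black (0)."""
    h, w = image.shape
    out = np.zeros_like(image)  # black fill, as in the conventional method
    for y in range(h):
        for x in range(w):
            nx = x + int(offsets[y, x])
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)  # 5x5 example image
ofs = np.ones((5, 5), dtype=int)                   # uniform offset of 1 pixel
second = shift_right(img, ofs)                     # leftmost column stays black
```

Note that when offsets differ per pixel, later writes overwrite earlier ones; real renderers resolve such collisions by depth order, which this sketch omits.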
In the method above, if the first image 21 undergoes pixel-offset processing toward the right, then in the resulting second image the edge pixels in one or more columns at the left edge must be filled in with black dots, because the corresponding pixel values do not exist in the original. By the same reasoning, if the first image 21 is shifted toward the left, the same situation occurs at the right edge of the second image.
Since the leftward or rightward pixel offset is determined by the depth information, the quality of the second image generated by the conventional technique is affected by the depth information of the first depth information map 22. For example, when the depth information of the edge pixels on the left and right sides of the first depth information map 22 is not uniform, the corresponding pixel offsets 23 are not uniform either, so uneven black image blocks appear at the left or right edge of the second image. As a result, the stereoscopic image produced from the first image 21 and the second image may degrade the user's viewing experience.
In view of the above problems of the prior art, the present invention provides a stereoscopic image generation method and a stereoscopic image generating device to solve the prior-art problem of uneven black image blocks appearing at the left or right edge of the generated second image.
The stereoscopic image generating device provided by the present disclosure includes: a storage unit; a display unit including a display screen; and a processing unit connected to the storage unit and the display unit. The processing unit obtains a first image from the storage unit; processes the first image to obtain depth data for each pixel of the first image, arranged into a first depth information map in which each pixel has corresponding depth information; takes a plurality of edges of the first depth information map as references and, for a plurality of edge pixels within a predetermined width from those edges, performs uniformization processing so that the processed edge pixels have the same corresponding depth information, thereby creating a second depth information map; sets a pixel offset for each pixel of the first image based on the depth information of the corresponding pixel in the second depth information map; performs pixel-offset processing on the first image to generate a second image; and outputs the first image and the second image to the display unit to display a stereoscopic image.
In one embodiment, the plurality of edges are the upper edge and the lower edge of the first depth information map.
In one embodiment, the plurality of edges are the left edge and the right edge of the first depth information map.
In one embodiment, the predetermined width is 1 pixel.
In one embodiment, the depth information corresponding to each of the plurality of edge pixels of the second depth information map is the maximum depth of field of the first depth information map.
In one embodiment, the depth information corresponding to each of the plurality of edge pixels of the second depth information map is the minimum depth of field of the first depth information map.
In one embodiment, the depth information corresponding to each of the plurality of edge pixels of the second depth information map is a constant.
In one embodiment, the depth information corresponding to the plurality of edge pixels of the second depth information map is the arithmetic mean of the depth information corresponding to the plurality of edge pixels of the first depth information map.
In one embodiment, the larger the value of the depth information, the smaller the corresponding pixel offset; the smaller the value of the depth information, the larger the corresponding pixel offset.
The stereoscopic image generation method provided by the present disclosure includes: obtaining a first image and processing the first image to obtain depth data for each pixel, arranged into a first depth information map in which each pixel has corresponding depth information; taking a plurality of edges of the first depth information map as references and, for a plurality of edge pixels within a predetermined width from those edges, performing uniformization processing so that the processed edge pixels have the same corresponding depth information, thereby creating a second depth information map; setting a pixel offset for each pixel of the first image based on the depth information of the corresponding pixel in the second depth information map; performing pixel-offset processing on the first image to generate a second image; and outputting the first image and the second image to display a stereoscopic image.
According to the present invention, since the pixel offsets of the first image are set according to the second depth information map, the second image generated by the present invention exhibits no uneven black image blocks at its left or right edge.
The above and other objects and advantages of the present invention will become more apparent upon reading the detailed description below in conjunction with the accompanying drawings.
FIG. 2A is a schematic functional block diagram of the stereoscopic image generating device of the present invention, and FIG. 2B is a hardware architecture diagram of the device.
In FIG. 2A, the stereoscopic image generating device 1 includes at least: a storage unit 11, a display unit 12, and a processing unit 13. FIG. 2B illustrates an example in which the stereoscopic image generating device 1 is a notebook computer. The notebook computer, however, is only one illustrative example; the stereoscopic image generating device 1 may also be a desktop computer, a tablet computer, a smartphone, a head-mounted display, a server, a portable electronic device, or any other electronic device with comparable computing power.
The storage unit 11 may be, for example, nonvolatile or volatile semiconductor memory such as random access memory (RAM), read-only memory (ROM), flash memory, erasable programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM).
In addition, the storage unit 11 may also be a magnetic disk, a floppy disk, an optical disc, a CD, a mini disc, a digital versatile disc (DVD), or the like.
In other words, the storage unit 11 can store any one or a combination of the "first image", "second image", "first depth information map", "second depth information map", and "pixel offsets" mentioned in this specification, as well as all parameters, formulas, algorithms, and program code used by the processing unit 13 in the processing described below.
The display unit 12 may be, for example, an output device with a display screen, such as a stereoscopic display, a system-integrated panel, a light-emitting-diode display, or a touch screen. In other words, the display unit 12 can display on its screen any one or a combination of the "first image", "second image", "first depth information map", "second depth information map", and "pixel offsets" mentioned in this specification, as the user requires. In some embodiments, the stereoscopic image generating device 1 may omit the display unit and output the generated stereoscopic image to an external display unit instead.
The processing unit 13 is connected to the storage unit 11 and the display unit 12 and interacts with them unidirectionally or bidirectionally. The processing unit 13 uses the parameters, formulas, algorithms, and program code stored in the storage unit 11 to perform the various processes described below. The processing unit 13 may be implemented in hardware, in software, or in a combination of both.
The processing unit 13 may be a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, a graphics processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of these, realizing the specific functions and processes mentioned in this specification.
In addition, the processing unit 13 may also realize those specific functions and processes by reading and executing program code stored in the storage unit 11. In other words, the processing unit 13 can be used to implement the "stereoscopic image generation method" described herein.
FIG. 3 illustrates one embodiment of the operation of the stereoscopic image generating device 1 of the present invention.
As shown in FIG. 3, the processing unit 13 obtains the first image 31 from the storage unit 11. Using a depth estimation model stored in the storage unit 11, for example a known model such as a convolutional neural network (but not limited thereto), the processing unit 13 obtains the depth data of each pixel of the first image 31 to produce the first depth information map 32. The first image 31 may be the same as or different from the first image 21 shown in FIG. 1. If the first image 31 is the same as the first image 21, then after the same depth estimation processing, the first depth information map 32 (corresponding to the first image 31) will be the same as or similar to the first depth information map 22 shown in FIG. 1 (corresponding to the first image 21). For simplicity, the following assumes that the first image 31 is the same as the first image 21, and that the first depth information map 32 is the same as the first depth information map 22.
Also for simplicity, the first image 31, the first depth information map 32, the second depth information maps 321A~321D, and the pixel offsets 322A~322D in FIG. 3 are drawn with an image width of 5 pixels. In practice, however, the first image 31 may be an image of 256x256 pixels, 1920x1080 pixels, 1024x768 pixels, or any other size.
Each pixel of the first depth information map 32 has corresponding depth information. In this specification, depth information is a quantifiable value defined as the distance between a pixel's position in 3-D space and the camera. A larger value means the captured position of that pixel is farther from the camera; conversely, a smaller value means it is closer to the camera.
In practice, the numerical range of the depth information may vary with different specifications. For simplicity, this specification bounds the depth information between 0 and 10: 0 represents the minimum depth of field detectable in the first depth information map 32 (i.e., closest to the camera), and 10 represents the maximum depth of field detectable (i.e., farthest from the camera).
The "depth information" has a definite correspondence with the "pixel offset" described later. Specifically, when generating a stereoscopic image, a smaller pixel offset is set for pixels with larger depth information, and a larger pixel offset for pixels with smaller depth information. The principle is this: when an object placed in front of both eyes is translated laterally by the same amount, the perceived lateral displacement is smaller if the object is far from the eyes, and larger if the object is near. By exploiting this "negative correlation" between depth information and pixel offset, the generated stereoscopic image reflects what the human eye actually perceives when viewing a stereoscopic scene. This also explains, looking back at FIG. 1, the relationship between the depth information of each edge pixel of the first depth information map 22 and the corresponding pixel offset.
In FIG. 1, the relationship between depth information and pixel offset can be expressed by the formula "pixel offset = 21 − depth information × 2" (depth information: 0~10). This formula is reused in FIGS. 3 and 4 below to help the reader follow the technical content of this specification, but the disclosure is not limited to this formula.
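As a sanity check, the illustrative linear mapping above can be written directly in code; the function name is our own, while the formula and the 0~10 depth range come from the specification:

```python
def pixel_offset(depth):
    """Illustrative mapping from the specification: offset = 21 - 2*depth,
    for depth information in the range 0..10."""
    if not 0 <= depth <= 10:
        raise ValueError("depth information is defined on 0..10 here")
    return 21 - depth * 2

# Negative correlation: far pixels (depth 10) barely move, near pixels move most.
offsets = [pixel_offset(d) for d in (0, 6, 9, 10)]  # -> [21, 9, 3, 1]
```

These four outputs match the example offsets derived later for implementations (B), (D), (C), and (A) respectively.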
Next, the processing unit 13 processes the depth information of the first depth information map 32. Taking a plurality of edges of the first depth information map 32 as references, and taking a plurality of edge pixels within a predetermined width from those edges as targets, the processing unit 13 performs uniformization processing so that the processed edge pixels have the same depth information, thereby creating a second depth information map.
The plurality of edges mentioned here may be the upper and lower edges of the image, or its left and right edges. The upper and lower edges mean that the pixel-offset processing described later shifts the first image upward or downward; the left and right edges mean that it shifts the first image leftward or rightward. Since shifting pixels leftward or rightward is the more common conventional practice, the following description targets the plural edge pixels of the left and right edges.
From an algorithmic point of view, the "edges" referred to in this specification can be defined rigorously in 2-D coordinates. Take a 256x256 image as an example, with the bottom-left pixel as the origin O(0,0), +x pointing right and +y pointing up. The line segment formed by all pixels with x-coordinate 0, on the side not adjacent to any other pixel, is defined as the "left edge"; the segment formed by all pixels with x-coordinate 255, on the side not adjacent to any other pixel, is the "right edge"; the segment formed by all pixels with y-coordinate 255, on the side not adjacent to any other pixel, is the "upper edge"; and the segment formed by all pixels with y-coordinate 0, on the side not adjacent to any other pixel, is the "lower edge".
The "predetermined width" from any edge is measured in pixels. For example, if the predetermined width from the left and right edges is 2, then by the definition above, the targets of the uniformization processing are all edge pixels with x-coordinates 0, 1, 254, and 255. If the predetermined width from the left and right edges is 1, the targets are all edge pixels with x-coordinates 0 and 255, and so on.
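Under the coordinate definition above, selecting the edge pixels within a predetermined width reduces to a simple column mask. A minimal sketch, in which the helper name is our assumption:

```python
import numpy as np

def edge_columns_mask(height, width, predetermined_width):
    """Boolean mask marking pixels within `predetermined_width` columns
    of the left or right edge (the left/right-edge case used here)."""
    mask = np.zeros((height, width), dtype=bool)
    mask[:, :predetermined_width] = True
    mask[:, width - predetermined_width:] = True
    return mask

m = edge_columns_mask(256, 256, 2)
# exactly the pixels with x-coordinate 0, 1, 254, 255 are selected
```

The same mask transposed would cover the upper/lower-edge case.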
In other words, the predetermined width may be any natural number. However, to keep the generated stereoscopic image from losing too much information, the predetermined width is usually set to 1 pixel. FIG. 3 illustrates an embodiment with "predetermined width = 1", and FIG. 4 an embodiment with "predetermined width = 2".
The "uniformization processing" performed here by the processing unit 13 adjusts the depth information of the plural edge pixels of the first depth information map 32 to one common value. In the embodiment illustrated in FIG. 3, the processing unit 13 may adopt any one of the four implementations (A)~(D) to adjust the depth information of the plural edge pixels of the first depth information map 32. The depth information of all pixels of the first depth information map 32 other than the plural edge pixels is left unchanged. A portion of adjusted depth information plus another portion of unadjusted depth information thus together form the four second depth information maps 321A~321D described below.
Implementation (A): In implementation (A) (see FIG. 3(A)), the processing unit 13 may set the depth information of the 10 edge pixels of the first depth information map 32 to the maximum depth of field of the first depth information map 32, i.e., the preset value 10. The depth information of the 10 edge pixels of the second depth information map 321A is therefore all 10.
Implementation (B): In implementation (B) (see FIG. 3(B)), the processing unit 13 may set the depth information of the 10 edge pixels of the first depth information map 32 to the minimum depth of field of the first depth information map 32, i.e., the preset value 0. The depth information of the 10 edge pixels of the second depth information map 321B is therefore all 0.
Implementation (C): In implementation (C) (see FIG. 3(C)), the processing unit 13 may set the depth information of the 10 edge pixels of the first depth information map 32 to any constant between 0 and 10, for example 9. The depth information of the 10 edge pixels of the second depth information map 321C is therefore all 9.
Implementation (D): In implementation (D) (see FIG. 3(D)), the processing unit 13 may compute the arithmetic mean of the depth information of the 10 edge pixels of the first depth information map 32 and set them to that mean. In FIG. 3, the 2 groups of edge depth information of the 10 edge pixels of the first depth information map 32 are 7, 6, 5, 4, 3 and 9, 8, 7, 6, 5, so the computed arithmetic mean is 6. The depth information of the 10 edge pixels of the second depth information map 321D is therefore all 6.
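The four uniformization modes (A)~(D) can be sketched together on the FIG. 3 example values. The function and its keyword arguments are our illustration, assuming the 0~10 depth range of this specification:

```python
import numpy as np

def uniformize_edges(depth, mode, width=1, const=None, dmin=0, dmax=10):
    """Return a copy of `depth` whose left/right edge columns (within
    `width`) all carry one common value, per modes (A)-(D)."""
    out = depth.astype(float).copy()
    mask = np.zeros(depth.shape, dtype=bool)
    mask[:, :width] = True
    mask[:, depth.shape[1] - width:] = True
    if mode == "A":            # maximum depth of field (preset 10)
        value = dmax
    elif mode == "B":          # minimum depth of field (preset 0)
        value = dmin
    elif mode == "C":          # any chosen constant
        value = const
    else:                      # "D": arithmetic mean of the edge pixels
        value = depth[mask].mean()
    out[mask] = value
    return out

# FIG. 3 example: left edge column 7,6,5,4,3 and right edge column 9,8,7,6,5
d = np.full((5, 5), 5.0)
d[:, 0] = [7, 6, 5, 4, 3]
d[:, 4] = [9, 8, 7, 6, 5]
d2 = uniformize_edges(d, mode="D")  # edge depths all become the mean, 6
```

Only the masked edge columns change; the interior depth values pass through untouched, matching the description above.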
In the second depth information maps 321A~321D produced by the processing unit 13 according to any one of implementations (A)~(D), the depth information of the 10 edge pixels has already been uniformized. Therefore, when the processing unit 13 sets the pixel offsets 322A~322D for each pixel of the first image 31 based on the depth information of the corresponding pixels of the second depth information maps 321A~321D, it can guarantee that the 10 edge pixels (2 groups) of the first image 31 all receive the same pixel offset.
For example, if implementation (A) makes the depth information of the 10 edge pixels of the second depth information map 321A all equal to 10 (the maximum depth of field), then by the formula "pixel offset = 21 − depth information × 2" (depth information: 0~10), the corresponding pixel offsets 322A are all 1.
If implementation (B) makes the depth information of the 10 edge pixels of the second depth information map 321B all equal to 0 (the minimum depth of field), then by the same formula, the corresponding pixel offsets 322B are all 21.
If implementation (C) makes the depth information of the 10 edge pixels of the second depth information map 321C all equal to 9 (any constant between 0 and 10), then by the same formula, the corresponding pixel offsets 322C are all 3.
If implementation (D) makes the depth information of the 10 edge pixels of the second depth information map 321D equal to the arithmetic mean of the depth information of the 10 edge pixels of the first depth information map 32 (6 in this example), then by the same formula, the corresponding pixel offsets 322D are all 9.
Considering implementations (A)~(D), if the algorithm of the processing unit 13 is to be as lean as possible to save program resources, we may simply set the depth information of the 10 edge pixels to some constant (9). In that case, when the processing unit 13 processes consecutive frames of the first image 31 one after another, the pixel offsets it sets remain constant over time instead of fluctuating.
Alternatively, to keep the second image produced by pixel-offset processing of the first image 31 from being overly distorted (i.e., to keep the area of the uniform black image block from becoming too large), the processing unit 13 may simply set the depth information of the 10 edge pixels to the maximum depth of field (10). This guarantees the minimum corresponding pixel offset (1), so that the uniform black image block in the second image has the smallest possible area.
Therefore, the processing unit 13 according to the embodiments of the present invention performs pixel-offset processing on the first image 31 to generate the second image not according to the pixel offsets (23) corresponding to the first depth information map 32 (22), but according to the pixel offsets 322A~322D corresponding to the second depth information maps 321A~321D, whose plural edge pixels have undergone uniformization processing. Consequently, when the processing unit 13 displays the first image 31 and the second image of the stereoscopic image on the display unit 12, no uneven black image blocks appear at the edges of the stereoscopic image, the viewing user retains a good experience, and the effect intended by this application is achieved.
The description above uses FIG. 3 to explain the case where the "predetermined width" is set to 1. FIG. 4, in turn, explains the case where the "predetermined width" is set to 2, illustrating another embodiment of the operation of the stereoscopic image generating device of the present invention.
The difference between this embodiment and FIG. 3 is that the processing unit 13 must also include as processing targets the 5 edge pixels at a width of 2 from the left edge of the first depth information map 42, as well as the 5 edge pixels at a width of 2 from the right edge. The uniformization processing shown in FIG. 4 therefore has 20 edge pixels whose depth information needs to be adjusted.
Implementation (A): In implementation (A) (see FIG. 4(A)), the processing unit 13 may set the depth information of the 20 edge pixels of the first depth information map 42 to the maximum depth of field of the first depth information map 42, i.e., the preset value 10. The depth information of the 20 edge pixels of the second depth information map 421A is therefore all 10.
Implementation (B): In implementation (B) (see FIG. 4(B)), the processing unit 13 may set the depth information of the 20 edge pixels of the first depth information map 42 to the minimum depth of field of the first depth information map 42, i.e., the preset value 0. The depth information of the 20 edge pixels of the second depth information map 421B is therefore all 0.
Implementation (C): In implementation (C) (see FIG. 4(C)), the processing unit 13 may set the depth information of the 20 edge pixels of the first depth information map 42 to any constant between 0 and 10, for example 8. The depth information of the 20 edge pixels of the second depth information map 421C is therefore all 8.
Implementation (D): In implementation (D) (see FIG. 4(D)), the processing unit 13 may compute the arithmetic mean of the depth information of the 20 edge pixels of the first depth information map 42 and set them to that mean. In FIG. 4, the 4 groups of edge depth information of the 20 edge pixels of the first depth information map 42 are "5, 4, 3, 2, 1", "6, 5, 4, 3, 2", "8, 7, 6, 5, 4", and "9, 8, 7, 6, 5", so the computed arithmetic mean is 5. The depth information of the 20 edge pixels of the second depth information map 421D is therefore all 5.
In the second depth information maps 421A~421D produced according to any one of implementations (A)~(D), the depth information of the 20 (4 groups of) edge pixels has already been uniformized. Therefore, when the processing unit 13 sets the pixel offsets 422A~422D for each pixel of the first image 41 based on the depth information of the corresponding pixels of the second depth information maps 421A~421D, it can likewise guarantee that the 20 edge pixels of the first image 41 all receive the same pixel offset.
For example, if implementation (A) makes the depth information of the 20 edge pixels of the second depth information map 421A all equal to 10 (the maximum depth of field), then by the formula "pixel offset = 21 − depth information × 2" (depth information: 0~10), the corresponding pixel offsets 422A are all 1.
If implementation (B) makes the depth information of the 20 edge pixels of the second depth information map 421B all equal to 0 (the minimum depth of field), then by the same formula, the corresponding pixel offsets 422B are all 21.
If implementation (C) makes the depth information of the 20 edge pixels of the second depth information map 421C all equal to 8 (any constant between 0 and 10), then by the same formula, the corresponding pixel offsets 422C are all 5.
If implementation (D) makes the depth information of the 20 edge pixels of the second depth information map 421D equal to the arithmetic mean of the depth information of the 20 edge pixels of the first depth information map 42 (5 in this example), then by the same formula, the corresponding pixel offsets 422D are all 11.
Therefore, the processing unit 13 provided by this application performs pixel-offset processing on the first image 41 to generate the second image not according to the pixel offsets corresponding to the first depth information map 42, but according to the pixel offsets 422A~422D corresponding to the second depth information maps 421A~421D, whose plural edge pixels have undergone uniformization processing. Consequently, when the processing unit 13 displays the first image 41 and the second image of the stereoscopic image on the display unit 12, the user sees no uneven black image blocks at the edges of the stereoscopic image and retains a good viewing experience, so the effect intended by this application is achieved.
As in the embodiment of FIG. 3 (predetermined width of 1), in the embodiment of FIG. 4 (predetermined width of 2) the depth information may likewise be set directly to some constant, or to the maximum depth of field, to obtain the additional effects mentioned in the embodiment of FIG. 3.
Note that in the second depth information maps 321A~321D and 421A~421D, the pixels other than the plural edge pixels also each have corresponding depth information, and a pixel offset converted by the formula "pixel offset = 21 − depth information × 2" (depth information: 0~10). However, as described earlier in this specification, this application only requires the depth information of the edge pixels of the second depth information map to be uniformized; whether the depth information of the other pixels of the second depth information map is uniform lies outside the scope of consideration of this application (indeed, non-uniform depth information is the norm there). These values are therefore not specifically drawn in FIGS. 3 and 4, and their description is omitted.
Also, although this application expresses the negative correlation between depth information and pixel offset by the formula "pixel offset = 21 − depth information × 2" (depth information: 0~10), a pixel offset is not necessarily equivalent to shifting by that same number of pixels; it is merely a schematic way of conveying the degree of shift numerically. Whether the relationship is linear or nonlinear, its parameters can be tuned with reference to other known techniques.
In summary, whichever of the eight implementations described in FIGS. 3 and 4 the processing unit 13 adopts, when the processing unit 13 displays the first image and the second image of the stereoscopic image on the display unit 12, the user sees no uneven black image blocks at the edges of the stereoscopic image and retains a good viewing experience, so the effect intended by this application is achieved.
The stereoscopic image generating device of the present invention and its manner and method of operation have been described in detail above. Note that the embodiments above merely illustrate the principles and effects of the present invention and are not intended to limit its scope. Those of ordinary skill in the art may modify and appropriately vary the embodiments without departing from the technical principles and spirit of the present invention. The scope of protection of the present invention shall therefore be defined by the claims that follow.
1: stereoscopic image generating device
11: storage unit
12: display unit
13: processing unit
21: first image
22: first depth information map
23: pixel offset
31: first image
32: first depth information map
321A~321D: second depth information maps
322A~322D: pixel offsets
41: first image
42: first depth information map
421A~421D: second depth information maps
422A~422D: pixel offsets
FIG. 1 illustrates a conventional stereoscopic image generation method. FIG. 2A is a schematic functional block diagram of the stereoscopic image generating device of the present invention, and FIG. 2B is a hardware architecture diagram of the device. FIG. 3 illustrates one embodiment of the stereoscopic image generation method of the present invention. FIG. 4 illustrates another embodiment of the stereoscopic image generation method of the present invention.
Claims (9)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| TW111200961U | 2022-01-24 | 2022-01-24 | Stereo image generating device |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| TW111200961U | 2022-01-24 | 2022-01-24 | Stereo image generating device |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| TWM628629U | 2022-06-21 |
Family
ID=83063498
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| TW111200961U | Stereo image generating device | 2022-01-24 | 2022-01-24 |

Country Status (1)

| Country | Link |
| --- | --- |
| TW | TWM628629U (en) |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| TWI825566B (en) * | 2022-01-24 | 2023-12-11 | Acer Incorporated | Stereo image generating device and stereo image generating method |
| US12341942B2 | 2022-01-24 | 2025-06-24 | Acer Incorporated | Stereoscopic image generating device and stereoscopic image generating method |