TWM628629U - Stereo image generating device - Google Patents

Stereo image generating device

Info

Publication number
TWM628629U
Authority
TW
Taiwan
Prior art keywords
depth information
image
information map
pixel
depth
Prior art date
Application number
TW111200961U
Other languages
Chinese (zh)
Inventor
譚馳澔
徐文正
林士豪
佑 和
Original Assignee
宏碁股份有限公司
Priority date
Filing date
Publication date
Application filed by 宏碁股份有限公司
Priority to TW111200961U
Publication of TWM628629U

Landscapes

  • Eye Examination Apparatus (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A stereo image generating device is provided, which performs: obtaining a first depth information map of a first image, in which each pixel has corresponding depth information; performing a uniformization process on the edge pixels lying within a predetermined width of the edges of the first depth information map, so that the processed edge pixels have the same depth information, to establish a second depth information map; setting a pixel offset for each pixel of the first image based on the depth information of the corresponding pixel of the second depth information map; performing a pixel-shifting operation on the first image to generate a second image; and displaying the first image and the second image as a stereo image.

Description

Stereoscopic image generating device

The present utility model relates to an image processing device and an image processing method, and in particular to a stereoscopic image generating device.

Among conventional methods of generating stereoscopic images, one approach uses monocular depth estimation.

FIG. 1 illustrates the conventional stereoscopic image generation method mentioned above. First, the method uses a convolutional neural network model to estimate the depth information of every pixel of a first image 21 (the original image used as the first viewing angle), obtaining a first depth information map 22. The method then sets a pixel offset 23 for each pixel of the first image 21 according to the depth information of the corresponding pixel in the first depth information map 22. Finally, the method applies the pixel offsets 23 to the first image 21 in a pixel-shifting operation to generate a second image (the reference image used as the second viewing angle, not shown). For simplicity, the first image 21, the first depth information map 22, and the pixel offsets 23 in FIG. 1 use an image width of 5 pixels as an example; for brevity, FIG. 1 shows the depth information and pixel offsets of only 2 columns of pixels.

In the above method, if the first image 21 is shifted to the right, then in the generated second image the edge pixels in the one or more columns of pixels at the left edge must be filled with black dots, because no source pixel values exist for those positions. Likewise, if the first image 21 is shifted to the left, the same situation occurs at the right edge of the second image.
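The black-fill effect described above can be sketched with a minimal pixel-shift routine. This is a simplified single-channel model; the function name and the uniform offset value are illustrative, not taken from the patent:

```python
import numpy as np

def shift_right(image, offsets, fill=0):
    # Move every pixel of a single-channel image to the right by its
    # per-pixel offset; target positions that receive no source pixel
    # keep the black fill value, producing the band at the left edge.
    h, w = image.shape
    out = np.full((h, w), fill, dtype=image.dtype)
    for y in range(h):
        for x in range(w):
            nx = x + offsets[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

# A 1x5 row shifted right by 2 everywhere leaves 2 black pixels on the left.
row = np.array([[10, 20, 30, 40, 50]])
print(shift_right(row, np.full((1, 5), 2)))  # [[ 0  0 10 20 30]]
```

Shifting left instead would leave the black band at the right edge, mirroring the description above.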

Because the leftward or rightward pixel offset is determined by the depth information, the quality of the second image produced by the conventional technique is affected by the depth information of the first depth information map 22. For example, when the depth information of the edge pixels on the left and right sides of the first depth information map 22 is not uniform, the corresponding pixel offsets 23 are not uniform either, so uneven black image blocks appear at the left or right edge of the second image. As a result, the stereoscopic image produced from the first image 21 and the second image may degrade the user's viewing experience.

In view of the above problems of the prior art, the present utility model provides a stereoscopic image generation method and a stereoscopic image generating device, to solve the prior-art problem of uneven black image blocks appearing at the left or right edge when the second image is generated.

The stereoscopic image generating device provided by the present disclosure includes: a storage unit; a display unit including a display screen; and a processing unit connected to the storage unit and the display unit. The processing unit obtains a first image from the storage unit; processes the first image to obtain depth data of each pixel of the first image, arranged as a first depth information map in which each pixel has corresponding depth information; takes the edges of the first depth information map as references and performs a uniformization process on the edge pixels lying within a predetermined width of those edges, so that the processed edge pixels have the same corresponding depth information, thereby establishing a second depth information map; sets a pixel offset for each pixel of the first image based on the depth information of the corresponding pixel of the second depth information map; performs a pixel-shifting operation on the first image to generate a second image; and outputs the first image and the second image to the display unit to display a stereoscopic image.

In one embodiment, the edges are the upper edge and the lower edge of the first depth information map.

In one embodiment, the edges are the left edge and the right edge of the first depth information map.

In one embodiment, the predetermined width is 1 pixel.

In one embodiment, the depth information corresponding to each of the edge pixels of the second depth information map is the maximum depth of field in the first depth information map.

In one embodiment, the depth information corresponding to each of the edge pixels of the second depth information map is the minimum depth of field in the first depth information map.

In one embodiment, the depth information corresponding to each of the edge pixels of the second depth information map is a constant.

In one embodiment, the depth information corresponding to the edge pixels of the second depth information map is the arithmetic mean of the depth information corresponding to the edge pixels of the first depth information map.

In one embodiment, the larger the value of the depth information, the smaller the corresponding pixel offset; the smaller the value of the depth information, the larger the corresponding pixel offset.

The stereoscopic image generation method provided by the present disclosure includes: obtaining a first image and processing it to obtain depth data of each pixel of the first image, arranged as a first depth information map in which each pixel has corresponding depth information; taking the edges of the first depth information map as references and performing a uniformization process on the edge pixels lying within a predetermined width of those edges, so that the processed edge pixels have the same corresponding depth information, thereby establishing a second depth information map; setting a pixel offset for each pixel of the first image based on the depth information of the corresponding pixel of the second depth information map; performing a pixel-shifting operation on the first image to generate a second image; and outputting the first image and the second image to display a stereoscopic image.

According to the present utility model, because the pixel offsets of the first image are set according to the second depth information map, the second image generated by the present utility model exhibits no uneven black image blocks at its left or right edge.

The above and other objects and advantages of the present utility model will become more apparent from the detailed description below, taken in conjunction with the accompanying drawings.

FIG. 2A is a functional block diagram of the stereoscopic image generating device of the present utility model, and FIG. 2B is a hardware architecture diagram of the device.

In FIG. 2A, the stereoscopic image generating device 1 includes at least a storage unit 11, a display unit 12, and a processing unit 13. FIG. 2B shows an example in which the stereoscopic image generating device 1 is a notebook computer. The notebook computer, however, is only one illustrative example; the stereoscopic image generating device 1 may also be a desktop computer, tablet computer, smartphone, head-mounted display, server, portable electronic device, or another electronic device with comparable computing capability.

The storage unit 11 may be, for example, non-volatile or volatile semiconductor memory such as random access memory (RAM), read-only memory (ROM), flash memory, erasable programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM).

In addition, the storage unit 11 may also be a magnetic disk, floppy disk, optical disc, CD, MiniDisc, digital versatile disc (DVD), and so on.

In other words, the storage unit 11 may store any one or a combination of the "first image", "second image", "first depth information map", "second depth information map", and "pixel offsets" mentioned in this specification, as well as all the parameters, formulas, algorithms, and program code used by the processing unit 13, described later, when it performs processing.

The display unit 12 may be, for example, an output device having a display screen, such as a stereoscopic display, a system-integrated panel, a light-emitting diode display, or a touch screen. In other words, the display unit 12 may display on its screen, as required by the user, any one or a combination of the "first image", "second image", "first depth information map", "second depth information map", and "pixel offsets" mentioned in this specification. In some embodiments, the stereoscopic image generating device 1 may omit the display unit and output the generated stereoscopic image to an external display unit.

The processing unit 13 is connected to the storage unit 11 and the display unit 12 and interacts with them unidirectionally or bidirectionally. The processing unit 13 uses the parameters, formulas, algorithms, program code, and so on stored in the storage unit 11 to perform the various processes described below. The processing unit 13 may be implemented in hardware, software, or a combination of the two.

The processing unit 13 may be a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, a graphics processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of these, to realize the specific functions and processes described in this specification.

In addition, the processing unit 13 may realize the specific functions and processes described in this specification by reading and executing program code stored in the storage unit 11. In other words, the processing unit 13 may be used to implement the "stereoscopic image generation method" described in this specification.

FIG. 3 illustrates one embodiment of the operation of the stereoscopic image generating device 1 of the present utility model.

As shown in FIG. 3, the processing unit 13 obtains a first image 31 from the storage unit 11. The processing unit 13 then uses a known depth estimation model stored in the storage unit 11, for example (but not limited to) a convolutional neural network model, to obtain the depth data of every pixel of the first image 31 and generate a first depth information map 32. The first image 31 may be the same as or different from the first image 21 shown in FIG. 1. If the first image 31 is the same as the first image 21, then after the same depth estimation process, the first depth information map 32 (corresponding to the first image 31) is the same as or similar to the first depth information map 22 (corresponding to the first image 21) shown in FIG. 1. For simplicity, the description below assumes that the first image 31 is identical to the first image 21 and that the first depth information map 32 is identical to the first depth information map 22.

In addition, for simplicity, the first image 31, the first depth information map 32, the second depth information maps 321A~321D, and the pixel offsets 322A~322D in FIG. 3 use an image width of 5 pixels as an example. In practice, the first image 31 may also be an image of 256x256 pixels, 1920x1080 pixels, 1024x768 pixels, or any other size.

Each pixel of the first depth information map 32 has corresponding depth information. In this specification, depth information is a quantifiable value defined as the distance between a pixel's position in 3-dimensional space and the camera. A larger depth value means the captured position of that pixel is farther from the camera; conversely, a smaller depth value means it is closer to the camera.

In practice, the numerical range of the depth information may vary with different specifications. For simplicity, this specification bounds the depth information between 0 and 10: 0 represents the minimum depth of field detectable in the first depth information map 32 (closest to the camera), and 10 represents the maximum detectable depth of field (farthest from the camera).

The "depth information" has a definite correspondence with the "pixel offset" described later. Specifically, when generating a stereoscopic image, a pixel with larger depth information is given a smaller pixel offset, and a pixel with smaller depth information is given a larger pixel offset. The rationale for this setting is as follows: when an object placed before the eyes is translated laterally by a fixed amount, the eyes perceive a smaller lateral displacement if the object is far away, and a larger lateral displacement if the object is near. By exploiting this "negative correlation" between depth information and pixel offset, the generated stereoscopic image reflects what the human eye actually perceives when viewing a stereoscopic scene. This also explains, looking back at FIG. 1, the relationship between the depth information of each edge pixel of the first depth information map 22 and the corresponding pixel offset.

In addition, in FIG. 1 the relationship between the depth information and the pixel offset can be expressed by the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10). This formula is also used in FIGS. 3 and 4 below to help the reader follow the technical content of this specification, but the present utility model is not limited to this formula.
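The example formula can be written directly as a function. The name `pixel_offset` is ours; the relation itself is the one stated above and reproduces the offset values used in the examples of FIGS. 3 and 4:

```python
def pixel_offset(depth):
    # Example relation from the description: offset = 21 - 2 * depth,
    # valid for depth values in the simplified range 0..10.
    if not 0 <= depth <= 10:
        raise ValueError("depth must lie in [0, 10]")
    return 21 - 2 * depth

# Far pixels shift least, near pixels shift most (negative correlation).
print(pixel_offset(10), pixel_offset(9), pixel_offset(6), pixel_offset(0))
# 1 3 9 21
```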

Next, the processing unit 13 processes the depth information of the first depth information map 32. Taking the edges of the first depth information map 32 as references, the processing unit 13 performs a uniformization process on the edge pixels lying within a predetermined width of those edges, so that the processed edge pixels have the same depth information, thereby establishing a second depth information map.

The edges mentioned here may be the upper and lower edges of the image, or its left and right edges. Upper and lower edges imply that the pixel-shifting operation described later shifts the first image upward or downward; left and right edges imply that it shifts the first image leftward or rightward. Because shifting left or right is the more common practice in the prior art, the following description considers the edge pixels of the left and right edges.

From an algorithmic point of view, the "edges" referred to in this specification can be defined strictly in 2-dimensional coordinates. Taking a 256x256 image as an example, let the pixel at the lower-left corner be the origin O(0,0), with +x pointing right and +y pointing up. Then, for the line segment formed by all pixels with x = 0, the side that adjoins no other pixels is the "left edge"; for the segment formed by all pixels with x = 255, that side is the "right edge"; for the segment formed by all pixels with y = 255, that side is the "upper edge"; and for the segment formed by all pixels with y = 0, that side is the "lower edge".

The "predetermined width" measured from any edge is expressed in pixels. For example, if the predetermined width from the left and right edges is 2, then by the definition above, the uniformization process targets all edge pixels with x = 0, 1, 254, or 255. If the predetermined width is 1, it targets all edge pixels with x = 0 or 255, and so on.
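Under the coordinate definition above, selecting the target columns for a given predetermined width reduces to two ranges, one per side. This is a sketch; the helper name is ours:

```python
def edge_columns(predetermined_width, image_width):
    # x-coordinates of the edge pixels within the predetermined width
    # of the left edge (x = 0, 1, ...) and of the right edge
    # (..., image_width - 2, image_width - 1).
    left = list(range(predetermined_width))
    right = list(range(image_width - predetermined_width, image_width))
    return left + right

print(edge_columns(2, 256))  # [0, 1, 254, 255]
print(edge_columns(1, 256))  # [0, 255]
```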

In other words, the predetermined width may be any natural number. However, to prevent the generated stereoscopic image from losing too much information, the predetermined width is usually set to 1 pixel. FIG. 3 illustrates an embodiment with a predetermined width of 1, and FIG. 4 an embodiment with a predetermined width of 2.

The "uniformization process" performed here by the processing unit 13 adjusts the depth information of all the edge pixels of the first depth information map 32 to one identical value. In the embodiment of FIG. 3, the processing unit 13 may choose any of the four implementations (A)~(D) below to adjust the depth information of the edge pixels of the first depth information map 32, while the depth information of all pixels of the first depth information map 32 other than the edge pixels is left unchanged. The partly adjusted and partly unadjusted depth information together form the four second depth information maps 321A~321D described below.

Implementation (A). In implementation (A) (see FIG. 3(A)), the processing unit 13 may set the depth information of the 10 edge pixels of the first depth information map 32 to the maximum depth of field in the first depth information map 32, that is, the preset value of 10. The depth information of the 10 edge pixels of the second depth information map 321A is then 10 throughout.

Implementation (B). In implementation (B) (see FIG. 3(B)), the processing unit 13 may set the depth information of the 10 edge pixels of the first depth information map 32 to the minimum depth of field in the first depth information map 32, that is, the preset value of 0. The depth information of the 10 edge pixels of the second depth information map 321B is then 0 throughout.

Implementation (C). In implementation (C) (see FIG. 3(C)), the processing unit 13 may set the depth information of the 10 edge pixels of the first depth information map 32 to any constant between 0 and 10, for example 9. The depth information of the 10 edge pixels of the second depth information map 321C is then 9 throughout.

Implementation (D). In implementation (D) (see FIG. 3(D)), the processing unit 13 may compute the arithmetic mean of the depth information of the 10 edge pixels of the first depth information map 32 and set the edge pixels to that mean. In FIG. 3, the 2 groups of edge depth values of the 10 edge pixels of the first depth information map 32 are 7, 6, 5, 4, 3 and 9, 8, 7, 6, 5, so the computed arithmetic mean is 6. The depth information of the 10 edge pixels of the second depth information map 321D is then 6 throughout.
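The four implementations can be sketched in one routine. The 5x5 depth map below reuses the two edge columns from FIG. 3 (7, 6, 5, 4, 3 on the left and 9, 8, 7, 6, 5 on the right); the interior values are invented for illustration, chosen so that the map's maximum and minimum are the preset 10 and 0:

```python
import numpy as np

def uniformize_edges(depth_map, width=1, mode="max", constant=None):
    # Set the depth of all edge pixels within `width` of the left and
    # right edges to one common value; interior pixels stay unchanged.
    d = depth_map.astype(float).copy()
    w = d.shape[1]
    cols = list(range(width)) + list(range(w - width, w))
    if mode == "max":        # implementation (A): maximum depth of field
        value = d.max()
    elif mode == "min":      # implementation (B): minimum depth of field
        value = d.min()
    elif mode == "const":    # implementation (C): a chosen constant
        value = constant
    elif mode == "mean":     # implementation (D): mean of the edge depths
        value = d[:, cols].mean()
    else:
        raise ValueError(mode)
    d[:, cols] = value
    return d

first_map = np.array([[7, 10,  2,  1, 9],
                      [6,  3,  0,  2, 8],
                      [5,  4,  6,  5, 7],
                      [4,  2,  8,  3, 6],
                      [3,  1,  5,  4, 5]])
for mode, kw in [("max", {}), ("min", {}), ("const", {"constant": 9}), ("mean", {})]:
    second_map = uniformize_edges(first_map, mode=mode, **kw)
    print(mode, second_map[0, 0], second_map[4, 4])
# max -> 10.0, min -> 0.0, const -> 9.0, mean -> 6.0 on every edge pixel
```

For mode "mean", the edge depths sum to 25 + 35 = 60 over 10 pixels, reproducing the arithmetic mean of 6 from the example of implementation (D).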

In the second depth information maps 321A~321D produced by the processing unit 13 under any of implementations (A)~(D), the depth information of the 10 edge pixels has already been uniformized. Therefore, when the processing unit 13 sets the pixel offsets 322A~322D of each pixel of the first image 31 based on the depth information of the corresponding pixel of the second depth information maps 321A~321D, the 10 edge pixels (2 groups) of the first image 31 are guaranteed to have identical pixel offsets.

For example, if under implementation (A) the depth information of the 10 edge pixels of the second depth information map 321A is 10 (the maximum depth of field), then by the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10), the corresponding pixel offsets 322A are all 1.

For example, if under implementation (B) the depth information of the 10 edge pixels of the second depth information map 321B is 0 (the minimum depth of field), then by the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10), the corresponding pixel offsets 322B are all 21.

For example, if under implementation (C) the depth information of the 10 edge pixels of the second depth information map 321C is 9 (a constant between 0 and 10), then by the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10), the corresponding pixel offsets 322C are all 3.

For example, if under implementation (D) the depth information of the 10 edge pixels of the second depth information map 321D is the arithmetic mean of the depth information of the 10 edge pixels of the first depth information map 32 (6 in this example), then by the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10), the corresponding pixel offsets 322D are all 9.

Considering implementations (A)~(D), if the algorithm of the processing unit 13 is to be kept as simple as possible to save program resources, the depth information of the 10 edge pixels can simply be set to some constant (9). When the processing unit 13 processes successive frames of the first image 31 one after another, this guarantees that the pixel offsets set for each frame remain constant and do not fluctuate over time.

Alternatively, to prevent the second image produced by pixel-shifting the first image 31 from being excessively distorted (that is, to keep the area of the uniform black image block from becoming too large), the processing unit 13 may set the depth information of the 10 edge pixels to the maximum depth of field (10). This guarantees that the corresponding pixel offset takes its minimum value (1), so the area of the uniform black image block in the second image is minimized.

因此，依據本新型實施例的處理單元13，並不是根據第1深度資訊圖32(22)對應的像素偏移量(23)，而是根據複數個邊緣像素點已經過一致化處理的第2深度資訊圖321A~321D對應的像素偏移量322A~322D，對第1影像31進行像素偏移處理，以產生第2影像。藉此，當處理單元13將立體影像的第1影像31以及第2影像顯示於顯示單元12時，就不會在立體影像的邊緣出現不均勻的黑色影像區塊，而能讓觀看的使用者維持良好的體驗，故可以達成本案所欲實現的功效。Therefore, the processing unit 13 according to the embodiment of the present invention performs pixel-shift processing on the first image 31 not according to the pixel offsets (23) corresponding to the first depth information map 32 (22), but according to the pixel offsets 322A~322D corresponding to the second depth information maps 321A~321D, in which the plurality of edge pixels have undergone uniformization, to generate the second image. Thereby, when the processing unit 13 displays the first image 31 and the second image of the stereoscopic image on the display unit 12, no uneven black image blocks appear at the edges of the stereoscopic image, and the viewing user maintains a good experience, achieving the intended effect of the present application.

以上說明是使用第3圖，來說明「既定寬度」設定為1的情況。而在第4圖當中，則是說明「既定寬度」設定為2的情況。第4圖示意本新型的立體影像產生裝置運作的另一種實施例。The above description uses FIG. 3 to explain the case where the "predetermined width" is set to 1. FIG. 4, in turn, explains the case where the "predetermined width" is set to 2, and illustrates another embodiment of the operation of the stereoscopic image generating device of the present invention.

本實施例與第3圖之間的差異在於：處理單元13必須將第1深度資訊圖42當中，距離左邊緣寬度為2的5個邊緣像素點、以及距離右邊緣寬度為2的5個邊緣像素點也納入處理對象之內。因此，第4圖所示的一致化處理，將會有20個邊緣像素點對應的深度資訊需要進行調整。The difference between this embodiment and FIG. 3 is that the processing unit 13 must also include, as processing targets, the 5 edge pixels within a width of 2 from the left edge and the 5 edge pixels within a width of 2 from the right edge of the first depth information map 42. Therefore, in the uniformization shown in FIG. 4, the depth information corresponding to 20 edge pixels needs to be adjusted.

實施方式(A) 實施方式(A)當中(參照第4圖(A))，處理單元13可以將第1深度資訊圖42當中的20個邊緣像素點對應的深度資訊，都設定為第1深度資訊圖42當中的最大景深，也就是前面預設的10。藉此，第2深度資訊圖421A當中的20個邊緣像素點對應的深度資訊皆為10。Embodiment (A): In embodiment (A) (see FIG. 4(A)), the processing unit 13 can set the depth information corresponding to the 20 edge pixels in the first depth information map 42 to the maximum depth of field in the first depth information map 42, that is, the previously preset 10. Thereby, the depth information corresponding to the 20 edge pixels in the second depth information map 421A is all 10.

實施方式(B) 實施方式(B)當中(參照第4圖(B))，處理單元13可以將第1深度資訊圖42當中的20個邊緣像素點對應的深度資訊，都設定為第1深度資訊圖42當中的最小景深，也就是前面預設的0。藉此，第2深度資訊圖421B當中的20個邊緣像素點對應的深度資訊皆為0。Embodiment (B): In embodiment (B) (see FIG. 4(B)), the processing unit 13 can set the depth information corresponding to the 20 edge pixels in the first depth information map 42 to the minimum depth of field in the first depth information map 42, that is, the previously preset 0. Thereby, the depth information corresponding to the 20 edge pixels in the second depth information map 421B is all 0.

實施方式(C) 實施方式(C)當中(參照第4圖(C))，處理單元13可以將第1深度資訊圖42當中的20個邊緣像素點對應的深度資訊，都設定為0~10之間的任一常數，例如設定為8。藉此，第2深度資訊圖421C當中的20個邊緣像素點對應的深度資訊皆為8。Embodiment (C): In embodiment (C) (see FIG. 4(C)), the processing unit 13 can set the depth information corresponding to the 20 edge pixels in the first depth information map 42 to any constant between 0 and 10, for example 8. Thereby, the depth information corresponding to the 20 edge pixels in the second depth information map 421C is all 8.

實施方式(D) 實施方式(D)當中(參照第4圖(D))，處理單元13可以對第1深度資訊圖42當中的20個邊緣像素點對應的深度資訊計算算術平均值，並設定為該算術平均值。第4圖當中，由於第1深度資訊圖42的20個邊緣像素點對應的4組邊緣深度資訊分別為「5、4、3、2、1」、「6、5、4、3、2」、「8、7、6、5、4」以及「9、8、7、6、5」，因此算出的算術平均值為5。藉此，第2深度資訊圖421D當中的20個邊緣像素點對應的深度資訊皆為5。Embodiment (D): In embodiment (D) (see FIG. 4(D)), the processing unit 13 can calculate the arithmetic mean of the depth information corresponding to the 20 edge pixels in the first depth information map 42 and set the depth information to that mean. In FIG. 4, since the 4 groups of edge depth information corresponding to the 20 edge pixels of the first depth information map 42 are "5, 4, 3, 2, 1", "6, 5, 4, 3, 2", "8, 7, 6, 5, 4" and "9, 8, 7, 6, 5", respectively, the calculated arithmetic mean is 5. Thereby, the depth information corresponding to the 20 edge pixels in the second depth information map 421D is all 5.
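The four uniformization strategies (A)~(D) above can be sketched as follows. The small NumPy array stands in for the first depth information map 42: only its two edge columns per side reproduce the four groups quoted in the text, while the interior values are invented for illustration; the helper name is ours, not from the patent.

```python
import numpy as np

def uniformize_edges(depth_map, width, mode="mean", constant=8):
    """Return a copy whose left/right edge columns (within `width`)
    all carry the same depth information, per embodiments (A)~(D)."""
    out = depth_map.astype(float).copy()
    edges = np.concatenate([out[:, :width].ravel(), out[:, -width:].ravel()])
    if mode == "max":         # (A): maximum depth of field of the map
        value = out.max()
    elif mode == "min":       # (B): minimum depth of field of the map
        value = out.min()
    elif mode == "constant":  # (C): any constant between 0 and 10
        value = constant
    elif mode == "mean":      # (D): arithmetic mean of the edge depths
        value = edges.mean()
    else:
        raise ValueError(mode)
    out[:, :width] = value
    out[:, -width:] = value
    return out

# Edge columns reproduce the four groups "5,4,3,2,1" / "6,5,4,3,2" /
# "8,7,6,5,4" / "9,8,7,6,5"; the middle column is arbitrary (0~10).
depth = np.array([[5, 6, 10, 8, 9],
                  [4, 5,  7, 7, 8],
                  [3, 4,  2, 6, 7],
                  [2, 3,  0, 5, 6],
                  [1, 2,  5, 4, 5]])
print(uniformize_edges(depth, width=2, mode="mean")[0, 0])  # -> 5.0
```

With this toy map, mode `"mean"` yields the value 5 computed in embodiment (D), while the interior pixels keep their original depth information, as the text notes later.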

依照上述實施方式(A)~(D)之中任何一種方式所產生的第2深度資訊圖421A~421D，由於20個(4組)邊緣像素點對應的深度資訊都已經一致化，因此，當處理單元13基於第2深度資訊圖421A~421D的每一像素點對應的深度資訊，來設定第1影像41的每一個像素點對應的像素偏移量422A~422D時，同樣也可以確保第1影像41的20個邊緣像素點，其對應的像素偏移量皆為相同。For the second depth information maps 421A~421D generated by any of the above embodiments (A)~(D), since the depth information corresponding to the 20 edge pixels (4 groups) has been made uniform, when the processing unit 13 sets the pixel offsets 422A~422D corresponding to each pixel of the first image 41 based on the depth information corresponding to each pixel of the second depth information maps 421A~421D, it is likewise guaranteed that the 20 edge pixels of the first image 41 all have the same corresponding pixel offset.

舉例來說，若根據實施方式(A)，使得第2深度資訊圖421A的20個邊緣像素點對應的深度資訊皆為10(最大景深)，則依照「像素偏移量=21-深度資訊x2」(深度資訊:0~10)之數學式，其對應的像素偏移量422A皆為1。For example, if, according to embodiment (A), the depth information corresponding to the 20 edge pixels of the second depth information map 421A is set to 10 (maximum depth of field), then according to the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10), the corresponding pixel offsets 422A are all 1.

舉例來說，若根據實施方式(B)，使得第2深度資訊圖421B的20個邊緣像素點對應的深度資訊皆為0(最小景深)，則依照「像素偏移量=21-深度資訊x2」(深度資訊:0~10)之數學式，其對應的像素偏移量422B皆為21。For example, if, according to embodiment (B), the depth information corresponding to the 20 edge pixels of the second depth information map 421B is set to 0 (minimum depth of field), then according to the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10), the corresponding pixel offsets 422B are all 21.

舉例來說，若根據實施方式(C)，使得第2深度資訊圖421C的20個邊緣像素點對應的深度資訊皆為8(0~10之間的任一常數)，則依照「像素偏移量=21-深度資訊x2」(深度資訊:0~10)之數學式，其對應的像素偏移量422C皆為5。For example, if, according to embodiment (C), the depth information corresponding to the 20 edge pixels of the second depth information map 421C is set to 8 (any constant between 0 and 10), then according to the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10), the corresponding pixel offsets 422C are all 5.

舉例來說，若根據實施方式(D)，使得第2深度資訊圖421D的20個邊緣像素點對應的深度資訊，皆為第1深度資訊圖42的20個邊緣像素點對應的深度資訊之算術平均值(本例為5)，則依照「像素偏移量=21-深度資訊x2」(深度資訊:0~10)之數學式，其對應的像素偏移量422D皆為11。For example, if, according to embodiment (D), the depth information corresponding to the 20 edge pixels of the second depth information map 421D is set to the arithmetic mean (5 in this example) of the depth information corresponding to the 20 edge pixels of the first depth information map 42, then according to the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10), the corresponding pixel offsets 422D are all 11.

因此，本案提供的處理單元13，並不是根據第1深度資訊圖42對應的像素偏移量，而是根據複數個邊緣像素點已經過一致化處理的第2深度資訊圖421A~421D對應的像素偏移量422A~422D，對第1影像41進行像素偏移處理，以產生第2影像。藉此，當處理單元13將立體影像的第1影像41以及第2影像顯示於顯示單元12時，使用者就不會在立體影像的邊緣看到不均勻的黑色影像區塊，而能讓使用者維持良好的觀看體驗，故可以達成本案所欲實現的功效。Therefore, the processing unit 13 provided in the present application performs pixel-shift processing on the first image 41 not according to the pixel offsets corresponding to the first depth information map 42, but according to the pixel offsets 422A~422D corresponding to the second depth information maps 421A~421D, in which the plurality of edge pixels have undergone uniformization, to generate the second image. Thereby, when the processing unit 13 displays the first image 41 and the second image of the stereoscopic image on the display unit 12, the user does not see uneven black image blocks at the edges of the stereoscopic image and maintains a good viewing experience, achieving the intended effect of the present application.

另外，與第3圖的實施例(既定寬度為1)相同，第4圖的實施例(既定寬度為2)當中，同樣也可以將深度資訊直接設定為某個常數，或是設定為最大景深，以達成第3圖的實施例當中提到的額外功效。In addition, as in the embodiment of FIG. 3 (predetermined width of 1), in the embodiment of FIG. 4 (predetermined width of 2) the depth information can likewise be directly set to a certain constant, or to the maximum depth of field, to achieve the additional effects mentioned in the embodiment of FIG. 3.

需注意的是，第2深度資訊圖321A~321D、421A~421D當中，複數個邊緣像素點以外的像素點，也都具有各自對應的深度資訊、以及經過數學式「像素偏移量=21-深度資訊x2」(深度資訊:0~10)所轉換的像素偏移量。但如同本說明書前面所描述，由於本案只需要讓第2深度資訊圖的邊緣像素點對應的深度資訊一致化即可，因此，第2深度資訊圖的邊緣像素點以外的其他像素點，其對應的深度資訊一致與否，就不在本案的考量範圍之內了(反過來說，深度資訊不一致才是符合常態)。故於第3圖至第4圖當中並未特別繪製，並省略相關的說明。Note that, in the second depth information maps 321A~321D and 421A~421D, the pixels other than the plurality of edge pixels also each have their own corresponding depth information, as well as pixel offsets converted by the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10). However, as described earlier in this specification, the present application only needs the depth information corresponding to the edge pixels of the second depth information map to be made uniform; whether the depth information corresponding to the other pixels of the second depth information map is uniform or not is outside the scope of consideration here (conversely, non-uniform depth information is the norm). Therefore, these pixels are not specifically drawn in FIG. 3 to FIG. 4, and the related description is omitted.

另外，雖然本案是以「像素偏移量=21-深度資訊x2」(深度資訊:0~10)之數學式來表達深度資訊與像素偏移量之間的負相關性，但像素偏移量也不必然等同於偏移了同等數量的像素點，僅為一種以數值來傳達偏移程度的示意方式。無論是線性或非線性關係，其中的參數都可以參考其他習知技術進行調整。In addition, although the present application expresses the negative correlation between depth information and pixel offset with the formula "pixel offset = 21 - depth information x 2" (depth information: 0~10), the pixel offset is not necessarily equivalent to shifting by the same number of pixels; it is merely a schematic way of conveying the degree of shift with a numeric value. Whether the relationship is linear or non-linear, its parameters can be adjusted with reference to other known techniques.
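The point above — the linear formula is only one way to encode the negative correlation — can be illustrated by placing a linear and a non-linear mapping side by side. The parameter values here are illustrative assumptions, not taken from the patent:

```python
def linear_offset(depth, a=21, b=2):
    """The linear mapping used in the examples: offset = a - b * depth."""
    return a - b * depth

def nonlinear_offset(depth, max_extra=20, gamma=2.0):
    """A non-linear alternative (invented here): still monotonically
    decreasing in depth, with the same endpoints 21 (depth 0) and 1 (depth 10)."""
    return 1 + max_extra * (1 - depth / 10) ** gamma

# Both mappings shrink the offset as depth (0~10) grows:
for d in range(10):
    assert linear_offset(d) > linear_offset(d + 1)
    assert nonlinear_offset(d) > nonlinear_offset(d + 1)
```

Either shape preserves the property the embodiments need: equal edge depths map to equal edge offsets, and deeper pixels shift less.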

綜上說明，無論處理單元13是採用第3圖至第4圖描述的總共8個實施方式之任何一者，由處理單元13將立體影像的第1影像以及第2影像顯示於顯示單元12時，使用者就不會在立體影像的邊緣看到不均勻的黑色影像區塊，而能讓使用者維持良好的觀看體驗，故可以達成本案所欲實現的功效。In summary, no matter which of the eight embodiments described in FIG. 3 to FIG. 4 the processing unit 13 adopts, when the processing unit 13 displays the first image and the second image of the stereoscopic image on the display unit 12, the user does not see uneven black image blocks at the edges of the stereoscopic image and maintains a good viewing experience, achieving the intended effect of the present application.

以上已詳述本新型的立體影像產生裝置及其運作的方式與方法。需注意的是，上述的實施方式僅為例示性說明本新型的原理及其功效，而並非用於限制本新型的範圍。本領域具通常知識者在不違背本新型的技術原理及精神下，均可以對實施例進行修改與適當變更。因此，本新型的權利保護範圍，應以後面的申請專利範圍為準。The stereoscopic image generating device of the present invention and its manner and method of operation have been described in detail above. Note that the above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit its scope. Those with ordinary knowledge in the art may modify and appropriately alter the embodiments without departing from the technical principles and spirit of the present invention. Therefore, the scope of protection of the present invention shall be defined by the following claims.

1:立體影像產生裝置 11:儲存單元 12:顯示單元 13:處理單元 21:第1影像 22:第1深度資訊圖 23:像素偏移量 31:第1影像 32:第1深度資訊圖 321A~321D:第2深度資訊圖 322A~322D:像素偏移量 41:第1影像 42:第1深度資訊圖 421A~421D:第2深度資訊圖 422A~422D:像素偏移量1: Stereoscopic image generation device 11: Storage unit 12: Display unit 13: Processing unit 21: 1st image 22: The first in-depth infographic 23: Pixel offset 31: 1st image 32: 1st in-depth infographic 321A~321D: 2nd depth infographic 322A~322D: Pixel offset 41: 1st image 42: 1st in-depth infographic 421A~421D: 2nd depth infographic 422A~422D: Pixel offset

第1圖示意習知的立體影像產生方法。 第2A圖示意本新型的立體影像產生裝置的功能方塊概要圖，第2B圖示意本新型的立體影像產生裝置的硬體架構圖。 第3圖示意本新型的立體影像產生方法的其中一種實施例。 第4圖示意本新型的立體影像產生方法的其中一種實施例。FIG. 1 illustrates a conventional stereoscopic image generation method. FIG. 2A is a schematic functional block diagram of the stereoscopic image generating device of the present invention, and FIG. 2B is a hardware architecture diagram of the stereoscopic image generating device of the present invention. FIG. 3 illustrates one embodiment of the stereoscopic image generation method of the present invention. FIG. 4 illustrates another embodiment of the stereoscopic image generation method of the present invention.

1:立體影像產生裝置 1: Stereoscopic image generation device

11:儲存單元 11: Storage unit

12:顯示單元 12: Display unit

13:處理單元 13: Processing unit

Claims (9)

一種立體影像產生裝置，包含： 一儲存單元； 一顯示單元，包含一顯示螢幕；以及 一處理單元，與該儲存單元以及該顯示單元連接，該處理單元從該儲存單元取得一第1影像； 該處理單元，處理該第1影像，以取得該第1影像每一像素點的深度資料，且配置為一第1深度資訊圖，該第1深度資訊圖具有該每一像素點所對應的一深度資訊； 該處理單元，以該第1深度資訊圖的複數個邊緣為基準，且以距離該複數個邊緣一既定寬度以內的複數個邊緣像素點為對象，進行一致化處理，使得處理後的該複數個邊緣像素點具有相同的對應深度資訊，以建立一第2深度資訊圖； 該處理單元，基於該第2深度資訊圖的每一像素點對應的該深度資訊，來設定該第1影像的每一像素點對應的一像素偏移量； 該處理單元，對該第1影像進行像素偏移處理，以產生一第2影像；以及 該處理單元，輸出該第1影像以及該第2影像至該顯示單元以顯示一立體影像。 A stereoscopic image generating device, comprising: a storage unit; a display unit including a display screen; and a processing unit connected to the storage unit and the display unit, the processing unit obtaining a first image from the storage unit; the processing unit processing the first image to obtain depth data of each pixel of the first image, configured as a first depth information map, the first depth information map having depth information corresponding to each pixel; the processing unit, taking the plurality of edges of the first depth information map as a reference and taking the plurality of edge pixels within a predetermined width from the plurality of edges as targets, performing uniformization so that the processed plurality of edge pixels have the same corresponding depth information, to establish a second depth information map; the processing unit setting a pixel offset corresponding to each pixel of the first image based on the depth information corresponding to each pixel of the second depth information map; the processing unit performing pixel-shift processing on the first image to generate a second image; and the processing unit outputting the first image and the second image to the display unit to display a stereoscopic image. 如請求項1之立體影像產生裝置， 其中，該複數個邊緣為該第1深度資訊圖的上邊緣以及下邊緣。 The stereoscopic image generating device of claim 1, wherein the plurality of edges are the upper edge and the lower edge of the first depth information map.
如請求項1之立體影像產生裝置， 其中，該複數個邊緣為該第1深度資訊圖的左邊緣以及右邊緣。 The stereoscopic image generating device of claim 1, wherein the plurality of edges are the left edge and the right edge of the first depth information map. 如請求項3之立體影像產生裝置， 其中，該既定寬度為1個像素點。 The stereoscopic image generating device of claim 3, wherein the predetermined width is 1 pixel. 如請求項4之立體影像產生裝置， 其中，該第2深度資訊圖的該複數個邊緣像素點各自對應的該深度資訊，為該第1深度資訊圖當中的最大景深。 The stereoscopic image generating device of claim 4, wherein the depth information corresponding to each of the plurality of edge pixels of the second depth information map is the maximum depth of field in the first depth information map. 如請求項4之立體影像產生裝置， 其中，該第2深度資訊圖的該複數個邊緣像素點各自對應的該深度資訊，為該第1深度資訊圖當中的最小景深。 The stereoscopic image generating device of claim 4, wherein the depth information corresponding to each of the plurality of edge pixels of the second depth information map is the minimum depth of field in the first depth information map. 如請求項4之立體影像產生裝置， 其中，該第2深度資訊圖的該複數個邊緣像素點各自對應的該深度資訊為一常數。 The stereoscopic image generating device of claim 4, wherein the depth information corresponding to each of the plurality of edge pixels of the second depth information map is a constant. 如請求項4之立體影像產生裝置， 其中，該第2深度資訊圖的該複數個邊緣像素點對應的該深度資訊，為該第1深度資訊圖的該複數個邊緣像素點對應的該深度資訊之算術平均值。 The stereoscopic image generating device of claim 4, wherein the depth information corresponding to the plurality of edge pixels of the second depth information map is the arithmetic mean of the depth information corresponding to the plurality of edge pixels of the first depth information map. 如請求項1至8任一項之立體影像產生裝置， 其中，若該深度資訊的值越大，則對應的該像素偏移量就越小； 其中，若該深度資訊的值越小，則對應的該像素偏移量就越大。 The stereoscopic image generating device of any one of claims 1 to 8, wherein the larger the value of the depth information, the smaller the corresponding pixel offset; and the smaller the value of the depth information, the larger the corresponding pixel offset.
TW111200961U 2022-01-24 2022-01-24 Stereo image generating device TWM628629U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111200961U TWM628629U (en) 2022-01-24 2022-01-24 Stereo image generating device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW111200961U TWM628629U (en) 2022-01-24 2022-01-24 Stereo image generating device

Publications (1)

Publication Number Publication Date
TWM628629U true TWM628629U (en) 2022-06-21

Family

ID=83063498

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111200961U TWM628629U (en) 2022-01-24 2022-01-24 Stereo image generating device

Country Status (1)

Country Link
TW (1) TWM628629U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI825566B (en) * 2022-01-24 2023-12-11 宏碁股份有限公司 Stereo image generating device and stereo image generating method
US12341942B2 (en) 2022-01-24 2025-06-24 Acer Incorporated Stereoscopic image generating device and stereoscopic image generating method

Similar Documents

Publication Publication Date Title
JP7403528B2 (en) Method and system for reconstructing color and depth information of a scene
US10499046B2 (en) Generating depth maps for panoramic camera systems
US9661296B2 (en) Image processing apparatus and method
JP5536115B2 (en) Rendering of 3D video images on stereoscopic display
US11256328B2 (en) Three-dimensional (3D) rendering method and apparatus for user' eyes
US9554114B2 (en) Depth range adjustment for three-dimensional images
Yan et al. Depth mapping for stereoscopic videos
US20210012561A1 (en) Deep novel view and lighting synthesis from sparse images
US20130106841A1 (en) Dynamic depth image adjusting device and method thereof
TW201505420A (en) Content-aware display adaptation methods
CN101729791A (en) Apparatus and method for image processing
CN101605270A (en) Method and device for generating depth map
KR20230022153A (en) Single-image 3D photo with soft layering and depth-aware restoration
CN114929331A (en) Salient object detection for artificial vision
TWM628629U (en) Stereo image generating device
TWI825566B (en) Stereo image generating device and stereo image generating method
CN108154549A (en) Three-dimensional image processing method
CN102307310B (en) Image depth estimation method and device
CN118158375A (en) Stereoscopic video quality evaluation method, stereoscopic video quality evaluation device and computer equipment
CN116708736A (en) Stereoscopic image generating device and stereoscopic image generating method
CN115176459B (en) Virtual viewpoint synthesis method, electronic device, and computer-readable medium
CN114879377B (en) Method, device and equipment for determining parameters of horizontal parallax three-dimensional light field display system
CN102857772B (en) Image treatment method and image processor
CN112188186B (en) A method for obtaining naked-eye 3D synthetic images based on normalized infinite viewpoints
TWI736335B (en) Depth image based rendering method, electrical device and computer program product