TWM626646U - Electronic apparatus - Google Patents

Electronic apparatus

Info

Publication number
TWM626646U
Authority
TW
Taiwan
Prior art keywords
image
pixel
processor
interpupillary distance
original image
Prior art date
Application number
TW111200902U
Other languages
Chinese (zh)
Inventor
譚馳澔
徐文正
黃志文
林士豪
佑 和
Original Assignee
宏碁股份有限公司
Priority date
Filing date
Publication date
Application filed by 宏碁股份有限公司
Priority to TW111200902U
Publication of TWM626646U

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

An electronic apparatus is provided. The electronic apparatus includes an interpupillary distance detection device, a storage device, and a processor. A two-dimensional (2D) original image corresponding to a first viewing angle is obtained by the processor, and a depth map of the 2D original image is estimated by the processor. Interpupillary distance information of a user is detected by the interpupillary distance detection device. Pixel shift processing is performed on the 2D original image by the processor according to the interpupillary distance information and the depth map to generate a reference image corresponding to a second viewing angle. Image inpainting processing is performed on the reference image by the processor to obtain a restored image. The restored image and the 2D original image are merged by the processor to generate a three-dimensional image that conforms to a three-dimensional image format and includes image content corresponding to different viewing angles.

Description

Electronic apparatus

The present utility model relates to an electronic apparatus, and in particular to an electronic apparatus that provides stereoscopic images.

With advances in display technology, displays that support three-dimensional (3D) image playback have become increasingly common. The difference between 3D and two-dimensional (2D) display is that 3D display technology lets the viewer perceive a sense of depth in the picture, such as the three-dimensional facial features of a person and the depth of field of a scene, an effect that conventional 2D images cannot convey. The principle of 3D display technology is to present a left-eye image to the viewer's left eye and a right-eye image to the viewer's right eye, so that the viewer perceives a 3D visual effect. With the rapid development of 3D stereoscopic display technology, viewers can be given a visually immersive experience. However, a 3D display must play images of a specific 3D image format with the corresponding 3D display technique; otherwise the display cannot present the images correctly. In addition, the 3D image content that users can obtain on their own is currently limited, so even a user who owns an autostereoscopic (glasses-free) 3D display cannot fully and freely enjoy the display effects it offers.

In view of this, the present utility model provides an electronic apparatus for providing stereoscopic images, which can convert a two-dimensional image into a stereo image conforming to a stereoscopic image format according to the user's real interpupillary distance.

An embodiment of the present utility model provides an electronic apparatus that includes an interpupillary distance tracking device, a storage device, and a processor. The processor is connected to the interpupillary distance tracking device and the storage device and is configured to perform the following steps. A 2D original image corresponding to a first viewing angle is obtained, and a depth map of the 2D original image is estimated. Interpupillary distance information of a user is detected through the interpupillary distance tracking device. A pixel shift process is performed on the 2D original image according to the interpupillary distance information and the depth map to generate a reference image corresponding to a second viewing angle. An image inpainting process is performed on the reference image to obtain a restored image. The restored image and the 2D original image are merged to generate a stereoscopic image conforming to a stereoscopic image format, wherein the stereoscopic image includes image content corresponding to different viewing angles.

Based on the above, in the embodiments of the present utility model, a depth map corresponding to a single 2D original image can first be generated, and a stereoscopic image conforming to a stereoscopic image format can then be generated according to the viewer's interpupillary distance and the depth map. Accordingly, the embodiments of the present utility model can greatly expand the 3D content that a 3D display can show and provide a more comfortable stereoscopic viewing experience.

To make the above features and advantages of the present utility model more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

Some embodiments of the present utility model are described in detail below with reference to the accompanying drawings. Where the same reference numerals appear in different drawings, they are regarded as the same or similar elements. These embodiments are only a part of the present utility model and do not disclose all possible implementations; rather, they are merely examples of apparatuses within the scope of the claims of the present utility model.

FIG. 1 is a schematic diagram of an electronic apparatus according to an embodiment of the present utility model. Referring to FIG. 1, the electronic apparatus 10 may include an interpupillary distance detection device 110, a storage device 120, and a processor 130. The processor 130 is coupled to the storage device 120. In one embodiment, the electronic apparatus 10 and a stereoscopic (3D) display 20 may form a 3D display system. The 3D display 20 may be an autostereoscopic (glasses-free) 3D display or a glasses-type 3D display. In other words, the 3D display may be a head-mounted display device, or a computer monitor, desktop monitor, or television that provides a 3D image display function. The 3D display system may be a single integrated system or a separate system. Specifically, the 3D display 20, the interpupillary distance detection device 110, the storage device 120, and the processor 130 of the 3D display system may be implemented as an all-in-one (AIO) electronic apparatus, such as a head-mounted display device, a notebook computer, a smartphone, a tablet computer, or a game console. Alternatively, the 3D display 20 may be connected to the processor 130 of a computer system through a wired or wireless transmission interface, for example as a head-mounted display device, desktop monitor, or electronic signboard connected to the computer system.

The interpupillary distance detection device 110 can be used to detect interpupillary distance (IPD) information of the user, and may be, for example, an eye-tracking device, an eye tracker, an image capture device, or another device capable of obtaining eye information. In some embodiments, a device with computing capability (for example, the processor 130) may receive a user image captured by the image capture device and calculate the user's interpupillary distance information by performing image processing such as face recognition and eye recognition. Alternatively, in some embodiments, the eye-tracking device may obtain the pupil positions by emitting infrared beams and calculate the interpupillary distance information accordingly.

The storage device 120 is used to store data such as images, data, and program code (for example, an operating system, application programs, and drivers) to be accessed by the processor 130, and may be, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, or a combination thereof.

The processor 130 is coupled to the interpupillary distance detection device 110 and the storage device 120, and is, for example, a central processing unit (CPU), an application processor (AP), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), image signal processor (ISP), graphics processing unit (GPU), or other similar device, integrated circuit, or a combination thereof. The processor 130 can access and execute the program code and software modules recorded in the storage device 120 to implement the stereoscopic image generation method of the embodiments of the present utility model.

To let the user experience a 3D visual effect through the 3D display 20, the 3D display 20 can, based on various designs, let the user's left eye and right eye view image content corresponding to different viewing angles (that is, a left-eye image and a right-eye image). In the embodiments of the present utility model, the electronic apparatus 10 can generate a stereoscopic image conforming to a stereoscopic image format from a single 2D planar image corresponding to a single viewing angle, and this stereoscopic image can include image content corresponding to different viewing angles. Accordingly, the 3D display 20 can display the stereoscopic image conforming to the stereoscopic image format based on its 3D display technology, so that the user can view stereoscopic image content.

FIG. 2 is a flowchart of a stereoscopic image generation method according to an embodiment of the present utility model. Referring to FIG. 2, the method of this embodiment is applicable to the electronic apparatus 10 of the above embodiment. The detailed steps of this embodiment are described below with reference to the elements of the electronic apparatus 10.

In step S210, the processor 130 obtains a 2D original image corresponding to a first viewing angle and estimates a depth map of the 2D original image. The 2D original image may be a photograph taken by an ordinary camera from a single viewing angle, or image content generated by drawing software. Alternatively, the 2D original image may be the image content provided by an application running in full-screen mode, the image content that an application displays on the screen, or a single frame of a video stream. The processor 130 may obtain the 2D original image using a screen capture technique such as the Desktop Duplication API of the Windows operating system, or through any image transmission technique. It should be noted that the 2D original image is image data suitable for display using 2D display technology.
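
As an illustration of the screen-capture step, the following is a minimal sketch that grabs one desktop frame to serve as the 2D original image. It uses the third-party Python libraries mss and NumPy rather than the Desktop Duplication API mentioned above, and the monitor index and overall flow are assumptions made for the example, not the patent's implementation.

```python
# Minimal sketch: capture one desktop frame as the 2D original image.
# Assumes the third-party packages `mss` and `numpy` are installed;
# this is an illustrative substitute for the Desktop Duplication API.
import mss
import numpy as np

def grab_desktop_frame(monitor_index: int = 1) -> np.ndarray:
    """Return the current desktop content as an H x W x 3 BGR array."""
    with mss.mss() as sct:
        monitor = sct.monitors[monitor_index]  # index 0 is the combined virtual screen
        shot = sct.grab(monitor)               # raw BGRA screenshot
        frame = np.array(shot)[:, :, :3]       # drop the alpha channel
    return frame

img_2d = grab_desktop_frame()
print(img_2d.shape)  # e.g. (1080, 1920, 3)
```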

In some embodiments, the processor 130 may use various monocular depth estimation techniques to estimate the depth map of the 2D original image. Monocular depth estimation may use a convolutional neural network architecture such as a DenseNet, SENet, or MiDaS model, or a generative adversarial network architecture such as a MegaDepth or CycleGAN model, to estimate the depth information of the 2D original image. In other words, the processor 130 can feed the 2D original image into a trained deep learning model, and the deep learning model generates the depth map of the 2D original image accordingly. The model parameters of the trained deep learning model (for example, the number of neural network layers and the weights of each layer) are determined by prior training and stored in the storage device 120. Alternatively, in some embodiments, the processor 130 may estimate the depth map of the 2D original image using techniques such as structure from motion (SfM).
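
As one concrete possibility, the sketch below estimates a depth map with the publicly available MiDaS model loaded through torch.hub. The specific model variant, the transform attribute, the input file name, and the use of OpenCV for image loading are assumptions for illustration; the patent only names MiDaS as one of several usable models.

```python
# Sketch: monocular depth estimation with MiDaS via torch.hub (assumed setup).
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

img_bgr = cv2.imread("original_2d.png")            # hypothetical input file
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

with torch.no_grad():
    batch = transform(img_rgb)                      # 1 x 3 x h x w
    pred = midas(batch)                             # 1 x h' x w' relative (inverse) depth
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img_rgb.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().cpu().numpy()                       # H x W depth map, same size as input
```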

In step S220, the processor 130 detects the interpupillary distance information of the user through the interpupillary distance detection device 110. Interpupillary distance varies from person to person and is mainly related to ethnicity, gender, and age; in other words, different users have different interpupillary distances. In general, a 3D display system performs stereoscopic display according to the user's interpupillary distance information. For example, the processor 130 may control the hardware configuration of the 3D display 20 or perform corresponding image processing according to the user's interpupillary distance information in order to carry out stereoscopic display.
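
For illustration only, the following sketch shows one way a camera-based detector could convert two detected pupil positions into an IPD estimate using a pinhole-camera model. The pupil coordinates, the camera focal length, and the distance to the face are all assumed inputs; the patent does not specify how the detection device computes the value.

```python
# Sketch: estimate IPD (in millimetres) from pupil pixel positions.
# Assumes an eye/face detector already provides the two pupil coordinates,
# and that the camera focal length (in pixels) and face distance are known.
import math

def estimate_ipd_mm(left_pupil_px, right_pupil_px,
                    focal_length_px: float, face_distance_mm: float) -> float:
    # Pixel distance between the two pupils in the captured image.
    dx = right_pupil_px[0] - left_pupil_px[0]
    dy = right_pupil_px[1] - left_pupil_px[1]
    pixel_dist = math.hypot(dx, dy)
    # Pinhole model: real size = pixel size * distance / focal length.
    return pixel_dist * face_distance_mm / focal_length_px

ipd = estimate_ipd_mm((812, 540), (959, 543),
                      focal_length_px=1400.0, face_distance_mm=600.0)
print(round(ipd, 1))  # ≈ 63.0 mm for these assumed values
```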

In step S230, the processor 130 performs pixel shift processing on the 2D original image according to the interpupillary distance information and the depth map to generate a reference image corresponding to a second viewing angle. The depth map generated from the 2D original image includes multiple depth values that correspond one-to-one to the pixels of the 2D original image. To generate image data corresponding to the second viewing angle, the processor 130 can refer to the depth map and the user's interpupillary distance information to create the reference image corresponding to the second viewing angle, where the first viewing angle differs from the second viewing angle. More specifically, the processor 130 can determine the pixel offset (that is, the disparity information) of each pixel in the 2D original image according to the interpupillary distance information and each depth value in the depth map, and extract image data of the 2D original image accordingly to build the reference image corresponding to the second viewing angle. As can be seen, a greater depth corresponds to a smaller pixel offset, and a shallower depth corresponds to a larger pixel offset.

In step S240, the processor 130 performs image inpainting processing on the reference image to obtain a restored image. In detail, because the reference image contains image content corresponding to the new second viewing angle, some scene information that was originally occluded in the 2D original image may become visible in the reference image, and this scene information cannot be obtained from the 2D original image corresponding to the first viewing angle. In addition, the image edges of the reference image corresponding to the second viewing angle also contain scene information that never existed in the 2D original image corresponding to the first viewing angle. Therefore, the reference image created from the depth map and the 2D original image contains missing image regions. In some embodiments, the processor 130 may perform image inpainting processing to fill the missing image regions (also called holes) in the reference image. In some embodiments, the processor 130 may use the pixel information around a missing image region to fill it in. Alternatively, in some embodiments, the processor 130 may use a convolutional neural network model for image inpainting. For example, the processor 130 may perform the image inpainting processing on the reference image by constant color filling, horizontal extrapolation using depth information, variational inpainting using depth information, or other related algorithms.
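
As a simple neighborhood-based variant of this step, the sketch below fills the holes left by the pixel shift with OpenCV's cv2.inpaint. Treating zero-valued pixels as the hole mask is an assumption made for this example; the patent lists several other filling strategies.

```python
# Sketch: fill disocclusion holes in the reference image with OpenCV inpainting.
# Assumes holes were left as zero-valued pixels by the pixel-shift step.
import cv2
import numpy as np

def inpaint_reference(img_ref: np.ndarray) -> np.ndarray:
    # Hole mask: pixels that received no source pixel during the shift.
    hole_mask = np.all(img_ref == 0, axis=2).astype(np.uint8) * 255
    # Telea's fast-marching method propagates surrounding pixel information inward.
    return cv2.inpaint(img_ref, hole_mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```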

In step S250, the processor 130 merges the restored image and the 2D original image to generate a stereoscopic image conforming to a stereoscopic image format. As can be seen, the stereoscopic image includes the 2D original image corresponding to the first viewing angle and the restored image corresponding to the second viewing angle; in other words, the 2D original image and the restored image can serve as the left-eye image and the right-eye image, respectively. The stereoscopic image format includes a side-by-side (SBS) format or a top-and-bottom (TB) format. As noted above, the restored image corresponding to the second viewing angle is generated according to the interpupillary distance information detected by the interpupillary distance detection device 110, so the electronic apparatus 10 of the embodiments of the present utility model can generate different stereoscopic images according to the interpupillary distance information of different users. In other words, the electronic apparatus 10 of the embodiments can generate, from a 2D original image, a stereoscopic image that matches the user's real interpupillary distance, thereby improving the stereoscopic viewing experience.
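
A minimal sketch of the merge step follows, assuming the two views already share the same resolution; NumPy concatenation along the width or height axis yields the side-by-side or top-and-bottom frame.

```python
# Sketch: merge the two views into a side-by-side or top-and-bottom stereo frame.
# Assumes img_left and img_right are H x W x 3 arrays of the same size.
import numpy as np

def merge_stereo(img_left: np.ndarray, img_right: np.ndarray,
                 layout: str = "sbs") -> np.ndarray:
    if layout == "sbs":                  # left | right, width doubles
        return np.hstack((img_left, img_right))
    elif layout == "tb":                 # left on top of right, height doubles
        return np.vstack((img_left, img_right))
    raise ValueError("layout must be 'sbs' or 'tb'")
```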

It should also be noted that, in some embodiments, the processor 130 needs to perform further image processing on the stereoscopic image conforming to the stereoscopic image format according to the user's interpupillary distance information, so as to produce image data suitable for playback on the 3D display 20. For example, when the 3D display 20 is an autostereoscopic (glasses-free) 3D display, it provides two images with parallax to the left eye and the right eye through lens refraction or barrier/grating technology, so that the viewer experiences a stereoscopic display effect. Therefore, the processor 130 performs image weaving on the stereoscopic image, interleaving the pixel data of the left-eye image and the pixel data of the right-eye image to produce a single frame suitable for playback by the autostereoscopic 3D display. To deliver the left-eye image and the right-eye image accurately to the left eye and the right eye, respectively, the processor 130 needs to perform the image weaving according to the user's interpupillary distance information, in order to decide how to interleave the pixel data of the left-eye image and the pixel data of the right-eye image. Accordingly, because the stereoscopic image produced by the embodiments of the present utility model is generated from the user's real interpupillary distance information, and the processor 130 performs the image weaving with consistent interpupillary distance information, the viewing comfort of the stereoscopic content is improved and the user can enjoy a better 3D visual effect.
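
The weaving rule itself depends on the panel's lens or barrier layout and on the tracked IPD, neither of which the patent spells out. The sketch below therefore shows only the simplest column-interleaving variant, with the IPD-dependent adjustment reduced to a single assumed phase parameter.

```python
# Sketch: simple column-interleaved weaving of a left/right pair into one frame.
# Real autostereoscopic panels use a panel-specific mapping; the `phase` offset
# standing in for the IPD-dependent adjustment is an assumption.
import numpy as np

def weave_columns(img_left: np.ndarray, img_right: np.ndarray,
                  phase: int = 0) -> np.ndarray:
    woven = np.empty_like(img_left)
    cols = np.arange(img_left.shape[1])
    take_left = ((cols + phase) % 2) == 0     # even columns (after phase) from left eye
    woven[:, take_left] = img_left[:, take_left]
    woven[:, ~take_left] = img_right[:, ~take_left]
    return woven
```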

FIG. 3 is a flowchart of a stereoscopic image generation method according to an embodiment of the present utility model. FIG. 4 is a schematic diagram of a stereoscopic image generation method according to an embodiment of the present utility model. Referring to FIG. 3 and FIG. 4, the method of this embodiment is applicable to the electronic apparatus 10 of the above embodiment. The detailed steps of this embodiment are described below with reference to the elements of the electronic apparatus 10.

In step S310, the processor 130 obtains an initial image Img_int. For example, the processor 130 may capture the desktop image of the electronic apparatus 10 to obtain the initial image Img_int. In step S320, the processor 130 detects the user's interpupillary distance information IPD_1 through the interpupillary distance detection device 110. In step S330, the processor 130 determines whether the initial image Img_int includes a stereoscopic image Img_3D1 conforming to a stereoscopic image format. In some embodiments, the stereoscopic image Img_3D1 conforming to the stereoscopic image format is the initial image Img_int itself; in other embodiments, it is a partial image block within the initial image Img_int.
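
The patent does not explain how this check is performed. One simple heuristic, shown below purely as an assumption and not as the claimed detection method, is to compare the left and right halves of the frame, since a side-by-side stereo pair consists of two nearly identical views.

```python
# Sketch: assumed heuristic for spotting a side-by-side stereo image in a frame.
# The two halves of an SBS pair are near-identical apart from small disparities.
import numpy as np

def looks_like_sbs(img: np.ndarray, threshold: float = 0.9) -> bool:
    h, w = img.shape[:2]
    left = img[:, : w // 2].astype(np.float32).ravel()
    right = img[:, w // 2 : 2 * (w // 2)].astype(np.float32).ravel()
    left -= left.mean()
    right -= right.mean()
    denom = np.linalg.norm(left) * np.linalg.norm(right) + 1e-6
    correlation = float(np.dot(left, right) / denom)   # 1.0 means identical halves
    return correlation > threshold
```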

If the determination in step S330 is yes, then in step S380 the processor 130 performs image weaving on the stereoscopic image Img_3D1 according to the interpupillary distance information IPD_1 to obtain image data suitable for playback by the 3D display 20. As described above, the processor 130 re-interleaves the pixel data of the left-eye image and the pixel data of the right-eye image in the stereoscopic image Img_3D1 to produce image data that the 3D display 20 can play.

If the determination in step S330 is no, then in step S340 the processor 130 obtains a 2D original image Img_2D corresponding to the first viewing angle and estimates a depth map d_m of the 2D original image Img_2D. The processor 130 may use the initial image Img_int directly as the 2D original image Img_2D, or extract a partial image block from the initial image Img_int to obtain the 2D original image Img_2D. The depth map d_m includes depth values corresponding to the individual pixels of the 2D original image Img_2D, and these depth values may lie within a preset initial range, for example 0 to 255.

In step S350, the processor 130 performs pixel shift processing on the 2D original image Img_2D according to the interpupillary distance information IPD_1 and the depth map d_m to generate a reference image Img_ref corresponding to the second viewing angle. Step S350 may include sub-steps S351 and S352.

In step S351, the processor 130 obtains a pixel offset according to the interpupillary distance information IPD_1 and the depth value in the depth map d_m corresponding to a first pixel in the 2D original image Img_2D. Specifically, the processor 130 may determine the pixel offset of each first pixel in the 2D original image Img_2D according to the interpupillary distance information IPD_1 and the corresponding depth value in the depth map d_m.

In some embodiments, the processor 130 may normalize the depth values in the depth map d_m to a preset value range, for example 0 to 1, to produce depth information suitable for computing pixel offsets. That is, the processor 130 can normalize depth values lying within a preset initial range to a preset value range.

In some embodiments, the pixel offset may be the product of the interpupillary distance information IPD_1 and the depth value in the depth map d_m. That is, the processor 130 may obtain the pixel offset of each first pixel in the 2D original image Img_2D by multiplying each depth value in the depth map d_m by the user's interpupillary distance information IPD_1. In some embodiments, when the depth values in the depth map d_m are normalized to between 0 and 1, the processor 130 may determine the pixel offset by an integer-rounding operation such as rounding to the nearest integer, rounding down, or rounding up.

In some embodiments, the pixel offset may be the product of the interpupillary distance information IPD_1 and the output value of a function of the depth value in the depth map d_m. In other words, the processor 130 may first input a depth value into a function to produce a function output value for that depth value, and then multiply the function output value by the interpupillary distance information IPD_1 to determine the corresponding pixel offset. The function may be, for example, a first-order linear function. That is, the pixel offset can be produced by formula (1):

pixel offset = IPD × f(d)           (1)

where f(·) is a function that takes the depth value d as its input. Similarly, when the pixel offset produced by formula (1) is not an integer, the processor 130 may determine the pixel offset by an integer-rounding operation such as rounding to the nearest integer, rounding down, or rounding up.
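
Putting the normalization and formula (1) together, a vectorized sketch of the per-pixel offset computation might look as follows. Treating f as the identity on the normalized depth and expressing the IPD as a maximum disparity already converted into pixels are assumptions made for this example.

```python
# Sketch: per-pixel offsets from a depth map and the IPD, following formula (1).
# Assumes f(d) is the identity on the normalized depth and that the IPD has
# already been converted into a maximum disparity measured in pixels.
import numpy as np

def pixel_offsets(depth_map: np.ndarray, ipd_disparity_px: float) -> np.ndarray:
    d = depth_map.astype(np.float32)
    # Normalize the 0..255 depth values to the preset range 0..1.
    d_norm = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)
    # Formula (1): offset = IPD * f(d), rounded to whole pixels.
    return np.rint(ipd_disparity_px * d_norm).astype(np.int32)
```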

In step S352, the processor 130 translates the first pixel in the 2D original image Img_2D along a preset axis by the pixel offset to obtain a second pixel in the reference image Img_ref. The preset axis may be the positive X direction or the negative X direction. That is, the processor 130 may shift the first pixel in the 2D original image Img_2D to the right by the pixel offset to obtain the second pixel in the reference image Img_ref; in this case, the 2D original image Img_2D serves as the right-eye image and a left-eye image is created. Alternatively, the processor 130 may shift the first pixel in the 2D original image Img_2D to the left by the pixel offset to obtain the second pixel in the reference image Img_ref; in this case, the 2D original image Img_2D serves as the left-eye image and a right-eye image is created.

In some embodiments, after translating a first pixel in the 2D original image Img_2D to obtain a second pixel, the processor 130 may determine whether the pixel coordinates of the second pixel fall within the reference image Img_ref. In response to the pixel coordinates of the second pixel not falling within the reference image Img_ref, the processor 130 may discard the second pixel. For example, suppose the pixel coordinates of the first pixel are (0, 0) and the pixel offset is ∆s; if the processor 130 translates the first pixel along the negative X axis, it obtains a second pixel with pixel coordinates (-∆s, 0). On this basis, the processor 130 can determine that this second pixel does not fall within the reference image Img_ref and discard the second pixel at (-∆s, 0).

In some embodiments, the processor 130 may translate a first pixel in the 2D original image Img_2D along the preset axis by its pixel offset, and may also translate another first pixel in the 2D original image Img_2D along the preset axis by another pixel offset. In response to both the first pixel and the other first pixel corresponding to the pixel coordinates of the same second pixel, the processor 130 may choose to set the first pixel as the second pixel in the reference image Img_ref. In other words, if multiple first pixels map to the same pixel coordinates after being shifted by their respective pixel offsets, the processor 130 may select one of these first pixels as the second pixel in the reference image Img_ref. In some embodiments, the processor 130 may determine the second pixel in the reference image Img_ref according to the depth value corresponding to each first pixel. Alternatively, in some embodiments, the processor 130 may determine the second pixel in the reference image Img_ref according to the computation order of the first pixels. A combined sketch of sub-step S352 and these two rules is given below.
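
The sketch below combines sub-step S352 with the two rules above: targets outside the image are discarded, and collisions are resolved by keeping the pixel whose depth value indicates it is closer. Resolving collisions in favor of the closer pixel (assuming larger depth values mean closer, as with inverse-depth models), and using zero as the hole marker, are assumptions for this example.

```python
# Sketch: forward-warp the original view into the reference view (sub-step S352),
# discarding out-of-range targets and resolving collisions by keeping the pixel
# assumed to be closer. Zero marks holes to be inpainted later (step S360).
import numpy as np

def shift_view(img_2d: np.ndarray, offsets: np.ndarray, depth: np.ndarray,
               direction: int = -1) -> np.ndarray:
    """direction = -1 shifts left (builds a right-eye view), +1 shifts right."""
    h, w = offsets.shape
    img_ref = np.zeros_like(img_2d)                    # holes stay zero
    best_depth = np.full((h, w), -np.inf)              # collision bookkeeping
    for y in range(h):
        for x in range(w):
            x_new = x + direction * int(offsets[y, x])
            if x_new < 0 or x_new >= w:                # target outside the image:
                continue                               # discard this second pixel
            if depth[y, x] > best_depth[y, x_new]:     # assumed-closer pixel wins
                best_depth[y, x_new] = depth[y, x]
                img_ref[y, x_new] = img_2d[y, x]
    return img_ref
```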

In step S360, the processor 130 performs image inpainting processing on the reference image Img_ref to obtain a restored image Img_rec. In step S370, the processor 130 merges the restored image Img_rec with the 2D original image Img_2D to generate a stereoscopic image Img_3D2 conforming to a stereoscopic image format. FIG. 5 is a schematic diagram of an example of generating the stereoscopic image Img_3D2 conforming to a stereoscopic image format according to an embodiment of the present utility model. Referring to FIG. 5, the processor 130 can place the restored image Img_rec and the 2D original image Img_2D side by side to obtain a stereoscopic image Img_3D2 conforming to the side-by-side format. In some embodiments, the processor 130 may also first perform image scaling on the restored image Img_rec and the 2D original image Img_2D and then merge the two scaled images. Afterwards, in step S380, the processor 130 performs image weaving on the stereoscopic image Img_3D2 according to the interpupillary distance information IPD_1 to obtain image data suitable for playback by the 3D display 20.

It should be noted that, in the embodiments of the present utility model, the pixel offsets are generated according to the user's real interpupillary distance information, so the disparity information between the reference image and the 2D original image established from these pixel offsets can match the real distance between the user's eyes. Therefore, when the 3D display system performs the hardware configuration or other subsequent image processing required for 3D display according to the real interpupillary distance information, the stereoscopic image conforming to the stereoscopic image format has also been generated from the same real interpupillary distance information, so viewing comfort can be greatly improved and a better stereoscopic effect can be perceived.

In summary, in the embodiments of the present utility model, a user can convert a 2D planar image into a stereoscopic image conforming to a stereoscopic image format, enriching the 3D content that a 3D display can show. Moreover, because the stereoscopic image is generated according to the user's real interpupillary distance information, the interpupillary distance information used in the image weaving of autostereoscopic 3D display technology is guaranteed to be consistent with the interpupillary distance information used to generate the stereoscopic image. Accordingly, viewing comfort can be greatly improved and the user can perceive a better stereoscopic effect.

Although the present utility model has been disclosed above by way of embodiments, they are not intended to limit the present utility model. Anyone with ordinary skill in the art may make some changes and modifications without departing from the spirit and scope of the present utility model. Therefore, the scope of protection of the present utility model shall be defined by the appended claims.

10: electronic apparatus
110: interpupillary distance detection device
120: storage device
130: processor
20: 3D display
Img_int: initial image
IPD_1: interpupillary distance information
Img_3D1, Img_3D2: stereoscopic image
Img_2D: 2D original image
d_m: depth map
Img_ref: reference image
Img_rec: restored image
S210~S250, S310~S380: steps

FIG. 1 is a schematic diagram of an electronic apparatus according to an embodiment of the present utility model.
FIG. 2 is a flowchart of a stereoscopic image generation method according to an embodiment of the present utility model.
FIG. 3 is a flowchart of a stereoscopic image generation method according to an embodiment of the present utility model.
FIG. 4 is a schematic diagram of a stereoscopic image generation method according to an embodiment of the present utility model.
FIG. 5 is a schematic diagram of an example of generating a stereoscopic image conforming to a stereoscopic image format according to an embodiment of the present utility model.

10: electronic apparatus

110: interpupillary distance detection device

120: storage device

130: processor

20: 3D display

Claims (9)

1. An electronic apparatus, comprising:
an interpupillary distance detection device;
a storage device, recording a plurality of modules; and
a processor, connected to the interpupillary distance detection device and the storage device and configured to:
obtain a two-dimensional (2D) original image corresponding to a first viewing angle, and estimate a depth map of the 2D original image;
detect interpupillary distance information of a user through the interpupillary distance detection device;
perform a pixel shift process on the 2D original image according to the interpupillary distance information and the depth map to generate a reference image corresponding to a second viewing angle;
perform an image inpainting process on the reference image to obtain a restored image; and
merge the restored image and the 2D original image to generate a stereoscopic image conforming to a stereoscopic image format, wherein the stereoscopic image comprises image content corresponding to different viewing angles.

2. The electronic apparatus according to claim 1, wherein the processor is further configured to: perform an image weaving process on the stereoscopic image according to the interpupillary distance information to obtain image data suitable for playback by a stereoscopic display device.

3. The electronic apparatus according to claim 2, wherein the stereoscopic image format comprises a side-by-side format or a top-and-bottom format.

4. The electronic apparatus according to claim 2, wherein the processor is further configured to:
obtain a pixel offset according to the interpupillary distance information and a depth value in the depth map corresponding to a first pixel in the 2D original image; and
translate the first pixel in the 2D original image along a preset axis by the pixel offset to obtain a second pixel in the reference image.

5. The electronic apparatus according to claim 4, wherein the pixel offset is a product of the interpupillary distance information and the depth value in the depth map.

6. The electronic apparatus according to claim 4, wherein the pixel offset is a product of the interpupillary distance information and an output value of a function of the depth value in the depth map.

7. The electronic apparatus according to claim 5, wherein the processor is further configured to: normalize each depth value in the depth map to a preset value range.

8. The electronic apparatus according to claim 4, wherein the processor is further configured to:
determine whether pixel coordinates of the second pixel fall within the reference image; and
in response to the pixel coordinates of the second pixel not falling within the reference image, discard the second pixel.

9. The electronic apparatus according to claim 4, wherein the processor is further configured to:
translate the first pixel in the 2D original image along the preset axis by the pixel offset;
translate another first pixel in the 2D original image along the preset axis by another pixel offset; and
in response to the first pixel and the other first pixel both corresponding to the pixel coordinates of the second pixel, select the first pixel as the second pixel in the reference image.
TW111200902U  2021-03-03  Electronic apparatus  TWM626646U (en)

Priority Applications (1)

Application Number: TW111200902U
Priority Date: 2021-03-03
Filing Date: 2021-03-03
Title: Electronic apparatus

Publications (1)

Publication Number: TWM626646U
Publication Date: 2022-05-01

Family

ID=82559350

Country Status (1)

Country: TW
Link: TWM626646U (en)
