TW202411936A - Method and program for providing augmented reality image by using depth data - Google Patents


Info

Publication number
TW202411936A
Authority
TW
Taiwan
Prior art keywords
data
image data
virtual image
depth
client
Prior art date
Application number
TW111133622A
Other languages
Chinese (zh)
Inventor
金泰郁
李贺冉
丁德榮
Original Assignee
南韓商科理特股份有限公司
Filing date
Publication date
Application filed by 南韓商科理特股份有限公司
Publication of TW202411936A


Abstract

The present invention relates to a method and a program for providing an augmented reality image using depth data. According to an embodiment of the present invention, the method for providing an augmented reality image using depth data includes: a virtual image data receiving step, in which the client receives virtual image data from a server; a display position determining step, in which the client determines, based on the depth data, the position at which each pixel should be displayed in real space; and a virtual image data displaying step, in which the virtual image data is displayed on the display space based on the determined positions.

Description

Method and program for providing augmented reality image using depth data

The present invention relates to a method and a program for providing an augmented reality image using depth data.

Augmented reality is a technology that superimposes virtual objects, such as text or images, on the real world so that they are presented as a single image. Until the mid-2000s, augmented reality technology remained at the stage of research, development, and experimental application, but it has recently entered the stage of practical use as the surrounding technical environment has matured. In particular, with the recent emergence of smartphones and advances in network technology, augmented reality has begun to attract attention.

The most common way to realize augmented reality is to capture the real world with a smartphone camera and overlay pre-generated computer imagery on the captured view, so that the virtual and the real appear fused to the user. Because users can easily obtain real-world images with a smartphone camera, and the computer imagery can likewise be rendered with the smartphone's computing capability, most augmented reality applications are implemented on smartphones. Moreover, with the recent appearance of glasses-type wearable devices, interest in augmented reality technology is increasing.

An augmented reality image can give the user a sense of reality only when it is anchored at the exact position in real space. However, when the spatial arrangement of the object (for example, a marker) to which the augmented reality image is bound changes due to the user's movement, and an image frame at a given time point is missed, the augmented reality image may be placed at an inappropriate position, appear unnatural in the display space, and lose realism due to shaking. Accordingly, the present invention seeks to provide a method and a program for providing an augmented reality image using depth data in which virtual image data, which is a two-dimensional image, is placed at an appropriate position in real space by using depth data, so that a realistic, shake-free augmented reality image can be provided even when the user moves or a frame is missed.

The present invention also seeks to provide a method and a program for providing an augmented reality image using depth data in which the region that requires transparency processing can be specified based on depth data attached to each frame of a virtual reality image, so that an image for augmented reality can be generated without separate mask data.

The present invention further seeks to provide a method and a program for providing an augmented reality image using depth data in which virtual reality content is transformed and applied as augmented reality content, without having to produce, separately from the virtual reality image, a dedicated augmented reality image to be augmented and displayed in real space.

The technical problems to be solved by the present invention are not limited to those mentioned above, and other unmentioned technical problems will be clearly understood by those skilled in the art from the description below.

A method for providing an augmented reality image using depth data according to an embodiment of the present invention includes: a virtual image data receiving step, in which the client receives virtual image data from a server, the virtual image data including color data and depth data; a display position determining step, in which the client determines, based on the depth data, the position at which each pixel should be displayed in real space; and a virtual image data displaying step, in which the virtual image data is displayed on the display space based on the determined positions.

In another embodiment, the depth data is carried, for each pixel, in a separate channel distinct from the color data channel, and the depth data and the color data are synchronized and transmitted.

In another embodiment, the virtual image data is a two-dimensional image in which depth data for each point, acquired at the time of capture or generation, is stored per pixel.

In another embodiment, the display position determining step includes: the client determining a specific depth as a transparency adjustment reference; and dividing the depth range based on the transparency adjustment reference to determine whether to apply transparency processing, wherein the transparency adjustment reference is a reference for setting the boundary of the content to be displayed on the screen.

In another embodiment, the depth range consists of a plurality of regions delimited by a plurality of depths that are set as transparency adjustment references.

In another embodiment, the virtual image data further includes acquisition position data and image direction data, and the display position determining step includes: comparing current position data acquired by the client with the acquisition position data, and comparing reproduction direction data acquired by the client with the image direction data; and adjusting the positions of pixels within the virtual image data based on the comparison results.

In another embodiment, the display position determining step further includes: the client adjusting the color or chromaticity of each pixel within the virtual image data based on the direction of light illumination in real space.

Another embodiment further includes the following step: when the client is a device that outputs combined image data obtained by combining real image data captured by a camera with the virtual image data, the client corrects the real image data based on an output delay time, wherein the output delay time is the time required from the capture of the real image data until it is output on the screen.

A program for providing an augmented reality image using depth data according to another embodiment of the present invention is combined with a computer, which is hardware, to execute the above-mentioned method for providing an augmented reality image using depth data, and is stored in a medium.

An apparatus for providing an augmented reality image using depth data according to another embodiment of the present invention includes: a virtual image data receiving unit that receives virtual image data from a server, the virtual image data including color data and depth data; a control unit that determines, based on the depth data, the position at which each pixel should be displayed in real space; and an image output unit that displays the virtual image data on the display space based on the determined positions.

A method for providing an augmented reality image using depth data according to another embodiment of the present invention includes the following steps: the server acquiring per-pixel color data of the virtual image data to be provided to the client at a predetermined time point; acquiring and storing per-pixel depth data of the virtual image data; and the server synchronizing the color data and the depth data and transmitting them to the client, wherein the depth data is data used for correction based on first virtual image data of a first time point when second virtual image data of a second time point is not received, the second time point being the time point at which a virtual image data transmission period has elapsed from the first time point.

In another embodiment, the virtual image data further includes acquisition position data and image direction data; the client compares current position data with the acquisition position data, compares reproduction direction data with the image direction data, and adjusts the positions of pixels within the virtual image data based on the comparison results, the current position data and the reproduction direction data being data acquired by the client in real time or per unit time.

According to the present invention as described above, the following various effects are obtained:

First, when the spatial arrangement of the object (for example, a marker) to which the augmented reality image is bound changes due to the user's movement, corrected image data that can substitute for an image frame missed at a specific time point is provided, so that the augmented reality image is displayed at the exact position in real space and can be reproduced naturally, without shaking.

Second, by combining depth data with two-dimensional virtual image data composed of color data, the two-dimensional virtual image data can be displayed naturally in three-dimensional real space. That is, the augmented reality image reproduction device recognizes the three-dimensional real space and displays each pixel of the two-dimensional image at an appropriate depth, thereby achieving a three-dimensional augmented reality effect.

Third, since the level of transparency processing can be determined for virtual image data produced for virtual reality based on the depth data assigned to each pixel, a virtual reality image can be applied directly for augmented reality based on the depth data obtained when the virtual image data was acquired, without producing a separate augmented reality image.

In order to make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. The advantages and features of the present invention, and the methods for achieving them, will become clear by referring to the embodiments described in detail below together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various different forms; the present embodiments are provided only so that the disclosure of the present invention is complete and fully informs those with ordinary knowledge in the technical field to which the present invention belongs of the scope of the invention, and the present invention is defined only by the scope of the claims. Throughout the specification, like reference numerals refer to like components.

Unless otherwise defined, all terms (including technical and scientific terms) used in this specification may be used with meanings commonly understood by those with ordinary knowledge in the technical field to which the present invention belongs. In addition, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless otherwise clearly and specifically defined.

The terms used in this specification are for describing the embodiments and are not intended to limit the present invention. In this specification, the singular also includes the plural unless otherwise specified. The terms "comprises" and/or "comprising" as used in the specification do not exclude the presence or addition of one or more components other than the mentioned components.

In this specification, "virtual image data" means image data produced to realize virtual reality or augmented reality. "Virtual image data" may be generated by capturing real space with a camera, or may be produced through a modeling process.

In this specification, "first virtual image data" means virtual image data provided from the server to the client at a first time point. "Second virtual image data" means virtual image data provided from the server to the client at a second time point (that is, the time point at which a unit time corresponding to the image reception period has elapsed from the first time point).

In this specification, "depth data" is a value for depth in three-dimensional space, assigned to each of the subdivided units (for example, each pixel) within particular virtual image data.

In this specification, "color data" is data on the color with which the virtual image data is displayed on the screen. For example, color data may be included per pixel of the virtual image data. "Color data" may be expressed in a predetermined color model capable of representing colors, such as the RGB (Red-Green-Blue) color model.

In this specification, "real image data" means image data obtained by capturing real space.

In this specification, "client" means an augmented reality image reproduction device that receives virtual image data from the server and reproduces it. That is, "client" encompasses any device that can present a real image delivered directly to the user's eyes, or that can simultaneously display real image data obtained by capturing real space together with augmented reality content.

Hereinafter, a method and a program for providing an augmented reality image using depth data according to embodiments of the present invention are described in detail with reference to the accompanying drawings.

FIG. 1 is a structural diagram of an augmented reality image system according to an embodiment of the present invention.

FIG. 2 is a flowchart of a method for providing an augmented reality image using depth data according to an embodiment of the present invention.

Referring to FIG. 1 and FIG. 2, the method for providing an augmented reality image using depth data according to an embodiment of the present invention includes the following steps: the client 200 receives virtual image data from the server 100 (S120: virtual image data receiving step); the client 200 determines, based on the depth data, the position at which each pixel is to be displayed in real space (S140: display position determining step); and the virtual image data is displayed in real space based on the determined positions (S160: virtual image data displaying step). A detailed description of each step follows.

The client 200 receives virtual image data from the server 100 (S120: virtual image receiving step). The virtual image data includes color data and depth data.

In one embodiment, the virtual image data is a two-dimensional image in which depth data for each point, acquired at the time of capture or generation, is stored per pixel. That is, the client 200 receives virtual image data that is a two-dimensional image containing depth data, and renders it as an augmented reality image within three-dimensional real space. In this way, the client 200 can realize realistic augmented reality in three-dimensional real space without receiving high-volume three-dimensional image data modeled in three dimensions from the server 100.

In one embodiment, the depth data is carried, for each pixel, in a separate channel distinct from the color data channel. That is, the server 100 transmits the color data and the depth data for each pixel of the virtual image data to the client 200 through a color data channel and a depth data channel. At this time, the server 100 synchronizes the depth data transmitted through the depth data channel with the color data transmitted through the color data channel and transmits them to the client 200. The client 200 takes the per-pixel color data and depth data received simultaneously through the color data channel and the depth data channel as the virtual image data for that time point.
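The dual-channel transmission described above can be sketched roughly as follows. This is a minimal illustration, not the patent's actual wire format: the `VirtualImageFrame` type, the timestamp-based pairing, and the payload layout are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class VirtualImageFrame:
    """One synchronized unit of virtual image data (hypothetical layout)."""
    timestamp: int                      # common timestamp pairing the two channels
    color: list                         # per-pixel RGB tuples, row-major
    depth: list                         # per-pixel depth, same length/order as color

def synchronize(color_channel: dict, depth_channel: dict) -> list:
    """Pair color and depth payloads that share a timestamp, as the server
    is described to do before a frame reaches the client."""
    frames = []
    for ts in sorted(color_channel.keys() & depth_channel.keys()):
        color, depth = color_channel[ts], depth_channel[ts]
        assert len(color) == len(depth), "channels must cover the same pixels"
        frames.append(VirtualImageFrame(ts, color, depth))
    return frames

# Two payloads arriving on separate channels with the same timestamp
color_ch = {100: [(255, 0, 0), (0, 255, 0)]}
depth_ch = {100: [1.5, 3.2]}
frames = synchronize(color_ch, depth_ch)
```

The client would then treat each `VirtualImageFrame` as the virtual image data for that time point.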

The client 200 determines, based on the depth data, the position at which each pixel is to be displayed in real space (S140: display position determining step). That is, for the real-space position corresponding to each pixel position of the virtual image data as seen by the user through the client 200, the client 200 determines that the color data is to be displayed at the depth corresponding to the depth data. Specifically, when the client 200 is a device that outputs virtual image data on a transparent screen (for example, a glasses-type wearable device), the virtual image data corresponds to the screen size of the device, and the client 200 outputs each pixel of the virtual image data at the corresponding location on the screen based on the depth data.

In another embodiment, as shown in FIG. 3, the display position determining step (S140) includes: the client 200 determining a specific depth as a transparency adjustment reference (S141); and dividing the depth range based on the transparency adjustment reference to determine whether to apply transparency processing (S142). That is, in order to output only the desired portion of the desired virtual image data in real space, the client 200 designates the output region based on the depth data.

First, the client 200 determines a specific depth as the transparency adjustment reference (S141). The transparency adjustment reference may be a depth value corresponding to the boundary line of the content displayed on the screen. For example, as shown in FIG. 4, when the client 200 wishes to display only a character contained in the virtual image data as augmented reality content, the client 200 determines the depth data of the pixels corresponding to the character's boundary line as the transparency adjustment reference. In this way, the client 200 can output only the character (for example, object 2) in the virtual image data to the screen and render the remaining region transparent.

Next, the client 200 divides the depth range based on the transparency adjustment reference (S142). That is, the client 200 divides the area into a region nearer than the predetermined depth and a region farther than it. The client 200 may also set a plurality of depths as transparency adjustment references. For example, as shown in FIG. 4, if two depth values (for example, depth value A and depth value B, where B is greater than A) are set as transparency adjustment references, the client 200 divides the depth range into a first region nearer than depth A, a second region between depth A and depth B, and a third region farther than depth B.

Next, the client 200 applies, for each of the divided depth ranges, a decision on whether to perform transparency processing (S142). In one embodiment, when the depth range is divided based on a predetermined depth value (for example, depth value A), the client 200 determines the region farther than depth A as the region to be made transparent, so that only the image within the range nearer than depth A is displayed on the screen. In another embodiment, when three or more depth ranges are formed because two or more transparency adjustment references are set, the client 200 determines, for each depth range, whether to render it transparent. The client 200 may also determine a transparency value for each depth range and render the image content contained in a particular depth range semi-transparent so that it is seen simultaneously with the real space.
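The range-based transparency decision in the steps above can be sketched as follows. The per-range opacity list and the specific cutoff values A and B are illustrative assumptions, following the FIG. 4 example of two references producing three regions.

```python
def alpha_for_depth(depth, cutoffs, alphas):
    """Map a per-pixel depth to an opacity value.

    cutoffs: ascending depth values (the 'transparency adjustment references');
    alphas:  one opacity per resulting range, len(cutoffs) + 1 entries.
    Both are example parameters; the description only requires that the depth
    ranges delimited by the chosen depths can each be shown, hidden, or blended.
    """
    for i, cut in enumerate(cutoffs):
        if depth < cut:
            return alphas[i]
    return alphas[-1]

# Two references A=2.0 and B=5.0 yield three ranges: nearer than A (opaque),
# between A and B (semi-transparent), farther than B (fully transparent).
A, B = 2.0, 5.0
mask = [alpha_for_depth(d, [A, B], [1.0, 0.5, 0.0]) for d in [1.0, 3.0, 9.0]]
```

A single cutoff with alphas `[1.0, 0.0]` reproduces the simpler case in which everything farther than depth A is hidden.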

In this way, the client 200 can use the depth data to extract only the desired content (for example, an object) from the virtual image data and display it over the real image data. That is, the client sets the mask region using the simultaneously received depth data in order to display the virtual image at the appropriate position in real space, so the client 200 need not separately receive from the server 100 mask data for masking a partial region of the virtual image data. For example, when depth data alone is added to an image generated for an existing virtual reality service and the result is applied to realize an augmented reality image, the service provider only has to generate per-pixel depth data synchronized with the existing virtual reality image and set, in the virtual image data, a specific depth corresponding to a specific object boundary, so that the virtual reality image can be realized as an augmented reality image in which a partial region is masked.

In another embodiment, as shown in FIG. 5, the client 200 applies region designation data together with specific depth data as the transparency adjustment reference. The virtual image data may include another object (for example, the second object in FIG. 5) that has the same depth value as the object intended to be augmented in real space (for example, the first object in FIG. 5). In that case, if the region exposed to real space without transparency processing is set by depth data alone, an object that should not be displayed may appear in real space as well. Therefore, the client 200 receives from the server 100 data on the region within the two-dimensional screen that should be rendered transparent (that is, region designation data), and can display on the screen, among the image content of the designated two-dimensional region, only the image content that falls within the depth range designated for display. Since the depth data makes it possible to distinguish in detail which portion within a particular designated region should be rendered transparent, the server 100 can set the region designation data as a rough region shape (for example, a rectangle) and transmit it to the client 200.
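Combining region designation data with a depth range, as described here, might be checked per pixel like this. The rectangle encoding of the region designation data and the inclusive bounds are assumptions made for the sketch.

```python
def visible(x, y, depth, region, near, far):
    """A pixel is kept only if it lies inside the designated 2D region
    AND within the depth range chosen for display; everything else is
    treated as transparent. `region` is assumed to be (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    in_region = x0 <= x <= x1 and y0 <= y <= y1
    return in_region and near <= depth <= far

# A first object at depth 3 inside the region is shown; a second object at
# the SAME depth but outside the region is suppressed, which depth data
# alone could not achieve.
region = (10, 10, 50, 50)
shown = visible(20, 20, 3.0, region, near=2.0, far=4.0)
hidden = visible(80, 20, 3.0, region, near=2.0, far=4.0)
```

This mirrors the FIG. 5 scenario: the rough rectangle rules out the second object, and the depth range trims the remainder precisely.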

In another embodiment, in the display position determining step (S140), when the client 200 determines that the direction in which the user is gazing does not coincide with the direction in which the virtual image data was captured, the client 200 may adjust the virtual image data into a form suitable for display on the screen.

To this end, the virtual image data may further include acquisition position data and image direction data. The acquisition position data indicates the camera position at the time the virtual image data was captured or generated (that is, modeled). The image direction data indicates the direction in which the virtual image data was acquired at the acquisition position. For example, when the virtual image data is captured with a camera device, the image direction data is the direction the camera faced at the particular acquisition position and can be obtained from a sensor included in the camera device. Also, for example, when the external server 100 generates the virtual image data through a modeling process, the image direction data may be the direction defined for that virtual image data at modeling time with respect to the acquisition position.

As shown in FIG. 6, one embodiment of the display position determining step (S140) that uses the acquisition position data and the image direction data includes: comparing the current position data acquired by the client 200 with the acquisition position data, and comparing the reproduction direction data acquired by the client 200 with the image direction data (S143); and adjusting the positions of pixels within the virtual image data based on the comparison results (S144).

First, the client 200 compares the current position data with the acquisition position data, and compares the reproduction direction data acquired by the client 200 with the image direction data (S143). The current position data is position data acquired by the client 200 based on at least one of various position measurement methods. The reproduction direction data is data obtained from a motion sensor inside the client 200 and indicates the real-time direction the client 200 is facing. The client 200 compares the position and capture direction (that is, image direction) at which the virtual image data was acquired with the current position and facing direction (that is, reproduction direction) to calculate difference values. That is, the client 200 calculates the difference between the position at which the image was acquired and the current position (for example, the spatial position change), and the difference between the direction the client 200 faces and the direction in which the virtual image data was acquired (for example, the difference in azimuth).

隨後,客戶機200基於所述比較結果調節所述虛擬影像數據內的像素的位置(S144)。客戶機200將虛擬影像數據的每個像素的位置調節為符合當前位置數據以及再現方向數據。客戶機200可以將比較結果不僅反映在每個像素的二維畫面上的位置,而且還反映在每個像素的深度數據,來調節在現實空間內顯示的深度。Then, the client 200 adjusts the position of the pixel in the virtual image data based on the comparison result (S144). The client 200 adjusts the position of each pixel of the virtual image data to conform to the current position data and the reproduction direction data. The client 200 can reflect the comparison result not only in the position of each pixel on the two-dimensional screen, but also in the depth data of each pixel to adjust the depth displayed in the real space.

通過此，即使用戶在現實空間利用客戶機200看向增強現實影像的位置以及方向與獲取虛擬影像數據的位置以及方向並不準確地一致，客戶機200也可以調節虛擬影像數據內的每個像素在畫面內的顯示位置而在準確的位置顯示增強現實內容。Through this, even if the position and direction at which the user uses the client 200 to view the augmented reality image in real space are not exactly consistent with the position and direction at which the virtual image data was acquired, the client 200 can adjust the display position of each pixel of the virtual image data within the screen and display the augmented reality content at the accurate position.
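As an illustrative aside that is not part of the disclosure, the per-pixel repositioning of steps S143 and S144 can be sketched in Python under a simplified, horizontal-only pinhole-camera model; the function name, its parameters, and the small-motion simplifications are all hypothetical assumptions made for the sketch.

```python
import math

def reproject_pixel(x, y, depth, focal_px, yaw_diff_rad, lateral_shift_m):
    """Shift one pixel of a captured frame toward the current viewpoint.

    Simplified, horizontal-only sketch (hypothetical):
    - the rotation between capture pose and playback pose moves every
      pixel by the same angular offset, independent of depth;
    - the translation produces parallax that scales inversely with the
      pixel's depth, so near pixels move farther than far pixels.
    """
    dx_rotation = focal_px * math.tan(yaw_diff_rad)   # depth-independent
    dx_parallax = focal_px * lateral_shift_m / depth  # depth-dependent
    return x + dx_rotation + dx_parallax, y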

並且，如圖7，在虛擬影像數據內，以單位時間間距連續地提供的幀中的一部分遺漏的情況（例如，在第一時間點接收第一虛擬影像數據（即，第一幀）之後接著在第二時間點應被提供的第二虛擬影像數據（即，第二幀）遺漏的情況）下，客戶機200比較第一虛擬影像數據所包含的獲取位置數據以及影像方向數據和在第二時間點由客戶機200獲取的當前位置數據和再現方向數據，並基於比較結果（即，空間上的位置差異以及注視方向差異）來調節第一虛擬影像數據內的每個像素的位置。所述第二時間點是從所述第一時間點經過虛擬影像數據傳送週期的時間點。即，客戶機200基於包含於第一幀的獲取位置數據以及影像方向數據和在第二時間點由客戶機200獲取的當前位置數據以及再現方向數據的比較結果移動包含於第一幀（即，第一虛擬影像數據）的具有顏色數據和深度數據的每個像素，來生成代替遺漏的第二幀的第二補正影像數據並將其提供。通過此，在由伺服器100提供的虛擬影像數據遺漏的情況下，通過對先前虛擬影像數據進行補正而代替，可以自然地提供增強現實內容。Furthermore, as shown in FIG. 7, in a case where a portion of the frames continuously provided at unit time intervals within the virtual image data is missing (for example, a case where the second virtual image data (i.e., the second frame) that should be provided at a second time point after receiving the first virtual image data (i.e., the first frame) at a first time point is missing), the client 200 compares the acquired position data and the image direction data contained in the first virtual image data with the current position data and the reproduction direction data acquired by the client 200 at the second time point, and adjusts the position of each pixel within the first virtual image data based on the comparison result (i.e., the spatial position difference and the gaze direction difference). The second time point is a time point one virtual image data transmission cycle after the first time point. That is, the client 200 moves each pixel having color data and depth data included in the first frame (i.e., the first virtual image data) based on a comparison result of the acquired position data and image direction data included in the first frame and the current position data and reproduction direction data acquired by the client 200 at the second time point, to generate and provide the second corrected image data that replaces the missing second frame. By this, in the case where the virtual image data provided by the server 100 is missing, augmented reality content can be naturally provided by substituting a correction of the previous virtual image data.
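The frame-substitution logic described above can be condensed into a short Python sketch; it is illustrative only, and the frame/pose representation as well as the `warp` callback (which stands in for the pixel repositioning of S143/S144) are assumptions, not details from the disclosure.

```python
def select_display_frame(t, received, current_pose, warp):
    """Return the frame to display at time t.

    If the frame for t arrived, use it unchanged.  If it was dropped,
    take the most recent earlier frame and warp it to the client's
    current pose, instead of reusing it as-is or skipping ahead.
    """
    if t in received:
        return received[t]
    prev_t = max(k for k in received if k < t)  # latest frame before the gap
    prev = received[prev_t]
    return warp(prev, prev["pose"], current_pose)
```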

具體地，若特定的再現時間點（即，第二時間點）的虛擬影像數據（即，第二虛擬影像數據）遺漏，則增強現實內容無法顯示在現實空間的準確的位置，並且在單位時間經過後（即，第三時間點）接收到虛擬影像數據時，增強現實內容會突然移動位置而使影像發生晃動。即，若因第二幀遺漏而從第一虛擬影像數據（即，第一幀）直接變更為第三虛擬影像數據（即，第三幀），則顯示為對象體突然移動，從而可能誘發用戶頭暈。隨著利用第一時間點的第一虛擬影像數據為第二時間點提供補正影像數據，可以將未接收的影像幀（即，第二虛擬影像數據）代替為代替幀（即，基於第一虛擬影像數據的第二補正影像數據），從而為使用者提供不晃動且佈置於現實空間上的準確位置的增強現實內容。尤其，在客戶機200通過無線通訊從伺服器100接收虛擬影像數據而容易發生遺漏的情況下，此效果更為有用。Specifically, if the virtual image data (i.e., the second virtual image data) for a specific reproduction time point (i.e., the second time point) is missing, the augmented reality content cannot be displayed at the accurate position in the real space, and when virtual image data is received after the unit time has passed (i.e., at the third time point), the augmented reality content suddenly changes position and the image shakes. That is, if the display changes directly from the first virtual image data (i.e., the first frame) to the third virtual image data (i.e., the third frame) because the second frame is missing, the object appears to move abruptly, which may induce dizziness in the user. By using the first virtual image data of the first time point to provide corrected image data for the second time point, the unreceived image frame (i.e., the second virtual image data) can be replaced with a substitute frame (i.e., the second corrected image data based on the first virtual image data), providing the user with augmented reality content that does not shake and is placed at the accurate position in the real space. This is particularly useful when the client 200 receives the virtual image data from the server 100 through wireless communication, where frame drops occur frequently.

並且，作為另一實施例，如圖9所示，客戶機200可以生成填充由伺服器100生成而提供的幀（即，虛擬影像數據）之間的附加幀並將其提供。例如，在由伺服器100每秒提供60幀的情況下，客戶機200可以生成與伺服器100所提供的幀之間的時間點對應的附加幀，從而每秒輸出120幀。即，由於伺服器性能或者互聯網頻寬限制等的各種因素，伺服器100可能為客戶機200提供有限數量的幀（例如，每秒60幀），此時，客戶機200可以自行增加每秒的幀數而生成更加自然的影像。Furthermore, as another embodiment, as shown in FIG. 9, the client 200 may generate and provide additional frames to fill in between frames (i.e., virtual image data) generated and provided by the server 100. For example, in the case where 60 frames per second are provided by the server 100, the client 200 may generate additional frames corresponding to the time points between the frames provided by the server 100 to output 120 frames per second. That is, due to various factors such as server performance or Internet bandwidth limitations, the server 100 may provide a limited number of frames (e.g., 60 frames per second) to the client 200. At this time, the client 200 may increase the number of frames per second by itself to generate a more natural image.

第一幀和第二幀之間的附加幀（即，第1.5幀）通過第一幀的補正而生成。即，客戶機200比較包含於第一幀（即，第一虛擬影像數據）的獲取位置數據以及影像方向數據和在期望添加幀的第1.5時間點由客戶機200所獲取的當前位置數據以及再現方向數據而生成比較結果（即，空間上的位置差異及注視方向差異）。隨後，客戶機200基於所述比較結果來調節第一幀（即，第一虛擬影像數據）內的每個像素的位置而生成第1.5幀。並且，作為另一實施例，所述顯示位置確定步驟（S140）還包括如下步驟：所述客戶機200調節所述虛擬影像數據內的每個像素的顏色或者色度。即，客戶機200基於現實空間的光照射方向或者每個像素的佈置位置來調節每個像素的顏色或者色度。通過此，客戶機200可以將由伺服器100提供的虛擬影像數據自然地顯示於現實空間。The additional frame between the first frame and the second frame (i.e., the 1.5th frame) is generated by correcting the first frame. That is, the client 200 compares the acquired position data and image direction data contained in the first frame (i.e., the first virtual image data) with the current position data and reproduction direction data acquired by the client 200 at the 1.5th time point at which the frame is to be added, and generates a comparison result (i.e., a spatial position difference and a gaze direction difference). Subsequently, the client 200 adjusts the position of each pixel in the first frame (i.e., the first virtual image data) based on the comparison result to generate the 1.5th frame. Furthermore, as another embodiment, the display position determination step (S140) further includes the following step: the client 200 adjusts the color or chromaticity of each pixel in the virtual image data. That is, the client 200 adjusts the color or chromaticity of each pixel based on the light irradiation direction of the real space or the layout position of each pixel. Through this, the client 200 can naturally display the virtual image data provided by the server 100 in the real space.
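The frame-rate doubling described above (inserting a "1.5th frame" between received frames) can be sketched as follows; this is a hypothetical Python illustration, where `warp` and `pose_at` are stand-in callbacks for the pixel correction and sensor sampling described in the text.

```python
def insert_midpoint_frames(frames, warp, pose_at):
    """Double the frame rate of a list of (time, frame) pairs.

    Each inserted frame is the preceding received frame warped to the
    client pose sampled at the midpoint time; it is synthesized locally,
    not a frame the server actually sent.
    """
    out = []
    for i, (t, frame) in enumerate(frames):
        out.append((t, frame))
        if i + 1 < len(frames):
            mid = (t + frames[i + 1][0]) / 2
            out.append((mid, warp(frame, pose_at(mid))))
    return out
```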

客戶機200基於所述確定的位置在現實空間上顯示所述虛擬影像數據（S160：虛擬影像數據顯示步驟）。作為一實施例，在客戶機200為通過透明的顯示器直接觀看現實空間的同時看到顯示於透明的顯示器的虛擬影像數據的裝置（例如，玻璃型可穿戴設備）的情況下，客戶機200將已確定每個像素將被顯示的位置的增強現實內容（例如，基於深度數據針對虛擬影像數據執行了遮罩處理的影像內容）顯示於透明顯示器上。The client 200 displays the virtual image data in the real space based on the determined position (S160: virtual image data display step). As an embodiment, when the client 200 is a device (e.g., a glass-type wearable device) that directly views the real space through a transparent display while seeing the virtual image data displayed on the transparent display, the client 200 displays the augmented reality content whose per-pixel display positions have been determined (e.g., image content obtained by masking the virtual image data based on the depth data) on the transparent display.

並且，作為另一實施例，在所述客戶機200是輸出結合借由相機獲取的現實影像數據和所述虛擬影像數據的結合影像數據的裝置的情況（例如，智慧手機或者平板電腦的情況）下，還包括如下步驟：所述客戶機200基於輸出延遲時間對現實影像數據進行補正。所述輸出延遲時間是所述現實影像數據被拍攝之後至輸出於畫面上所需的時間。即，客戶機200反映客戶機200的移動來補正現實影像數據，使得在同一時間點客戶機200注視的現實空間和顯示於畫面上的現實影像數據一致。例如，在客戶機200通過相機獲取與畫面尺寸相同的現實影像數據的情況下，基於客戶機200的移動來使現實影像數據移動（shift）而顯示於畫面上。並且，例如，在客戶機200獲取比畫面尺寸大的現實影像數據的情況下，客戶機200從獲取的現實影像數據內提取需要反映客戶機200的移動而輸出於畫面的區域並將其顯示。Furthermore, as another embodiment, when the client 200 is a device that outputs combined image data combining real image data acquired by a camera and the virtual image data (for example, in the case of a smart phone or a tablet computer), the following step is further included: the client 200 corrects the real image data based on the output delay time. The output delay time is the time required from when the real image data is captured to when it is output on the screen. That is, the client 200 reflects the movement of the client 200 to correct the real image data, so that the real space that the client 200 is looking at at the same time point is consistent with the real image data displayed on the screen. For example, when the client 200 acquires real image data of the same size as the screen through a camera, the real image data is shifted and displayed on the screen based on the movement of the client 200. Also, for example, when the client 200 acquires real image data larger than the screen size, the client 200 extracts from the acquired real image data the area to be output on the screen, reflecting the movement of the client 200, and displays it.
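The second case above (camera frames larger than the screen, cropped according to the device's latest motion) can be sketched like this; it is a hypothetical, horizontal-only Python illustration, and the pixels-per-degree conversion factor is an assumed calibration value.

```python
def crop_window(full_w, view_w, yaw_now_deg, yaw_at_capture_deg, px_per_deg):
    """Choose the horizontal sub-window of an oversized camera frame.

    The window is centered where the device points *now*, not where it
    pointed when the frame was captured, compensating for the
    capture-to-display output delay; the window is clamped to the frame.
    """
    center = full_w / 2 + (yaw_now_deg - yaw_at_capture_deg) * px_per_deg
    left = max(0.0, min(full_w - view_w, center - view_w / 2))
    return int(left), int(left + view_w)
```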

參照圖1，根據本發明的另一實施例的利用深度數據的增強現實影像提供裝置包括虛擬影像數據接收部210、控制部220、影像輸出部230。根據本發明的一實施例的增強現實影像提供裝置對應於根據本發明的實施例的利用深度數據的增強現實影像提供方法中的客戶機200。以下，省略針對已說明的構成的詳細說明。Referring to FIG. 1, an apparatus for providing an enhanced reality image using depth data according to another embodiment of the present invention includes a virtual image data receiving unit 210, a control unit 220, and an image output unit 230. The apparatus for providing an enhanced reality image according to an embodiment of the present invention corresponds to the client 200 in the method for providing an enhanced reality image using depth data according to an embodiment of the present invention. Hereinafter, detailed descriptions of the already described components are omitted.

虛擬影像數據接收部210使所述客戶機200從伺服器100接收虛擬影像數據。所述虛擬影像數據是包括顏色數據以及深度數據的數據。虛擬影像數據接收部210可以通過單獨通道接收每個像素的顏色數據以及深度數據。即,虛擬影像數據接收部210通過顏色數據通道接收每個像素的顏色數據(例如,RGB值),通過深度數據通道接收每個像素的深度數據。The virtual image data receiving unit 210 enables the client 200 to receive virtual image data from the server 100. The virtual image data is data including color data and depth data. The virtual image data receiving unit 210 may receive the color data and depth data of each pixel through separate channels. That is, the virtual image data receiving unit 210 receives the color data (e.g., RGB value) of each pixel through the color data channel, and receives the depth data of each pixel through the depth data channel.
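One way the two channels could be re-associated on the client is sketched below; the message layout and the `frame_id` synchronization key are assumptions made for illustration, not a format specified by the disclosure.

```python
def merge_channels(color_msgs, depth_msgs):
    """Pair color and depth payloads that arrived on separate channels.

    Messages sharing a frame_id belong to the same frame; the result is
    a per-pixel (rgb, depth) pairing for each completed frame.
    """
    depth_by_id = {m["frame_id"]: m["pixels"] for m in depth_msgs}
    for m in color_msgs:
        depth = depth_by_id.get(m["frame_id"])
        if depth is None:
            continue  # depth channel lagging; hold this color frame
        yield m["frame_id"], list(zip(m["pixels"], depth))
```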

虛擬影像數據接收部210通過有線或者無線通訊接收虛擬影像數據。在通過無線通訊接收的情況下,虛擬影像數據接收部210可以對應於無線互聯網模組或者近距離通信模組。The virtual image data receiving unit 210 receives the virtual image data through wired or wireless communication. In the case of receiving through wireless communication, the virtual image data receiving unit 210 may correspond to a wireless Internet module or a short distance communication module.

無線互聯網模組是指用於無線互聯網連接的模組,可以內置或者外置於移動終端。作為無線互聯網技術可以利用無線區域網(WLAN:Wireless LAN)(Wi-Fi)、無線寬頻(Wibro)、世界微波接入互通性(Wimax:World Interoperability for Microwave Access)、高速下行鏈路分組接入(HSDPA:High Speed Downlink Packet Access)、長期演進(LTE:Long Term Evolution)、LTE-A(Long Term Evolution-Advanced)等。Wireless Internet module refers to a module used for wireless Internet connection, which can be built-in or external to mobile terminals. Wireless Internet technologies include Wireless LAN (WLAN) (Wi-Fi), Wireless Broadband (Wibro), World Interoperability for Microwave Access (Wimax), High Speed Downlink Packet Access (HSDPA), Long Term Evolution (LTE), and LTE-A (Long Term Evolution-Advanced).

近距離通信模組是指用於近距離通信的模組。作為近距離通信(short range communication)技術可以利用藍牙(Bluetooth)、藍牙低能耗(BLE:Bluetooth Low Energy)、信標(Beacon)、紅外線通信(IrDA:Infrared Data Association)、超寬頻(UWB:Ultra Wideband)、ZigBee等。A short range communication module is a module used for short range communication. Short range communication technologies include Bluetooth, Bluetooth Low Energy (BLE), Beacon, Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, etc.

控制部220起到處理通過虛擬影像數據接收部210接收的虛擬影像數據的作用。作為一實施例,控制部220確定所述客戶機200基於所述深度數據而使每個像素應顯示在現實空間內的位置。The control unit 220 plays a role in processing the virtual image data received by the virtual image data receiving unit 210. As an embodiment, the control unit 220 determines the position of each pixel that the client 200 should display in the real space based on the depth data.

並且,控制部220基於深度數據來確定是否進行與每個深度對應的像素的透明處理。即,控制部220將除了位於特定的深度範圍的對象體之外的與剩餘深度範圍對應的像素處理為透明,使得僅特定的對象體顯示於畫面上。Furthermore, the control unit 220 determines whether to perform transparent processing on pixels corresponding to each depth based on the depth data. That is, the control unit 220 processes pixels corresponding to the remaining depth range except for the object located in the specific depth range as transparent, so that only the specific object is displayed on the screen.
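The depth-based transparency decision can be sketched in a few lines of Python; the alpha convention (0 = fully transparent) and the inclusive depth range are illustrative assumptions.

```python
def apply_depth_mask(pixels, near, far):
    """Keep only pixels whose depth falls inside [near, far].

    Pixels outside the range get alpha 0 (fully transparent), so only
    the object lying in the selected depth range remains visible over
    the real scene.
    """
    masked = []
    for rgb, depth in pixels:
        alpha = 255 if near <= depth <= far else 0
        masked.append((rgb, depth, alpha))
    return masked
```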

並且,若特定的再現時間點的虛擬影像數據(即,第二時間點的第二虛擬影像數據)遺漏,則控制部220利用先前的虛擬影像數據(即,第一時間點的第一虛擬影像數據)來生成代替遺漏的影像數據的補正影像數據。並且,在虛擬影像數據的獲取位置或者影像方向(即,獲取影像的方向)與客戶機200的當前位置以及借由客戶機200而注視的方向(即,再現方向)不一致的情況下,控制部220生成進行補正而在顯示空間上的準確的位置佈置的補正影像數據。Furthermore, if the virtual image data of a specific reproduction time point (i.e., the second virtual image data of the second time point) is missing, the control unit 220 generates the correction image data to replace the missing image data using the previous virtual image data (i.e., the first virtual image data of the first time point). Furthermore, when the acquisition position or image direction of the virtual image data (i.e., the image acquisition direction) is inconsistent with the current position of the client 200 and the direction of the client 200 (i.e., the reproduction direction), the control unit 220 generates the correction image data to be corrected and arranged at an accurate position on the display space.

影像輸出部230基於所述確定的位置將所述虛擬影像數據顯示在顯示空間上。例如，在客戶機200為通過透明的顯示器直接觀看現實空間的同時看到顯示於透明的顯示器的虛擬影像數據的裝置（例如，玻璃型可穿戴設備）的情況下，影像輸出部230可以是顯示基於虛擬影像數據而生成的增強現實內容的透明的顯示器。The image output unit 230 displays the virtual image data on the display space based on the determined position. For example, when the client 200 is a device (e.g., a glass-type wearable device) that directly views the real space through a transparent display and sees the virtual image data displayed on the transparent display, the image output unit 230 can be a transparent display that displays the augmented reality content generated based on the virtual image data.

圖8是根據本發明的另一實施例的針對伺服器100為客戶機生成包含有深度數據的增強現實影像而提供的過程的流程圖。FIG8 is a flow chart of a process for the server 100 to generate an augmented reality image including depth data for a client according to another embodiment of the present invention.

參照圖8，根據本發明的另一實施例的利用深度數據的增強現實影像提供方法包括如下步驟：所述伺服器100獲取向所述客戶機200在特定時間點提供的所述虛擬影像數據的按照每個劃分單位的顏色數據（S220）；獲取所述虛擬影像數據的按照每個劃分單位的深度數據而儲存（S240）；以及所述伺服器100將所述顏色數據以及所述深度數據同步化而傳送至客戶機200（S260）。Referring to FIG. 8, a method for providing an augmented reality image using depth data according to another embodiment of the present invention includes the following steps: the server 100 obtains color data of each division unit of the virtual image data provided to the client 200 at a specific time point (S220); obtains and stores depth data of each division unit of the virtual image data (S240); and the server 100 synchronizes the color data and the depth data and transmits them to the client 200 (S260).

伺服器100生成將向客戶機200提供的虛擬影像數據。虛擬影像數據包括多個劃分單位,並且按照每個劃分單位包括顏色數據和深度數據。所述劃分單位可以對應於影像數據的像素。所述深度數據是若沒有接收到第二時間點的第二虛擬影像數據則基於第一時間點的第一虛擬影像數據來進行補正時利用的數據,所述第二時間點是從第一時間點經過虛擬影像數據傳送週期的時間點。The server 100 generates virtual image data to be provided to the client 200. The virtual image data includes a plurality of division units, and includes color data and depth data according to each division unit. The division unit may correspond to a pixel of the image data. The depth data is data used when correcting based on the first virtual image data at the first time point if the second virtual image data at the second time point is not received, and the second time point is a time point after the virtual image data transmission cycle from the first time point.

伺服器100獲取按照每個劃分單位(例如,像素)的顏色數據和深度數據(S220以及S240)。伺服器100將顏色數據和深度數據同步化而傳送至客戶機200(S260)。伺服器100通過單獨的通道將同步化的顏色數據和深度數據傳送至客戶機200。The server 100 obtains color data and depth data for each division unit (eg, pixel) (S220 and S240). The server 100 synchronizes the color data and the depth data and transmits them to the client 200 (S260). The server 100 transmits the synchronized color data and depth data to the client 200 through a separate channel.
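A minimal server-side sketch of steps S220 to S260 follows; the JSON-plus-zlib encoding and the shared `frame_id` used as the synchronization key are arbitrary illustrative choices, not the wire format of the disclosure.

```python
import json
import zlib

def pack_frame(frame_id, colors, depths):
    """Serialize one frame into two channel payloads sharing a frame_id.

    colors and depths are parallel per-pixel lists; the common frame_id
    lets the client re-synchronize the two channels on arrival.
    """
    assert len(colors) == len(depths), "one depth value per pixel"
    color_msg = {"frame_id": frame_id, "channel": "color", "pixels": colors}
    depth_msg = {"frame_id": frame_id, "channel": "depth", "pixels": depths}
    return (zlib.compress(json.dumps(color_msg).encode("utf-8")),
            zlib.compress(json.dumps(depth_msg).encode("utf-8")))
```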

並且,作為另一實施例,所述虛擬影像數據還包括獲取位置數據以及影像方向數據,所述客戶機200的特徵在於比較當前位置數據和所述獲取位置數據,比較再現方向數據和所述影像方向數據,並基於所述比較結果調節在所述虛擬影像數據內的像素的位置,所述當前位置數據以及所述再現方向數據是從所述客戶機200即時地或者按單位時間獲取的數據。Furthermore, as another embodiment, the virtual image data further includes acquired position data and image direction data, and the client machine 200 is characterized by comparing the current position data with the acquired position data, comparing the reproduced direction data with the image direction data, and adjusting the position of the pixel in the virtual image data based on the comparison result, and the current position data and the reproduced direction data are data acquired from the client machine 200 in real time or per unit time.

以上,根據上述的本發明的一實施例的增強現實影像提供方法可以實現為程式而儲存於介質,以與作為硬體的電腦結合而被執行。As described above, the method for providing an augmented reality image according to an embodiment of the present invention can be implemented as a program and stored in a medium to be executed in combination with a computer as hardware.

前述的程式可以包括電腦的處理器（CPU）通過所述電腦的設備介面可讀取的諸如C、C++、JAVA、機器語言等的電腦語言編碼的代碼（Code），以使所述電腦讀取程式來執行實現為程式的所述方法。這樣的代碼可以包括與定義執行所述方法所需的功能的函數等相關的功能性的代碼（Functional Code），所述功能可以包括所述電腦的處理器按預定的順序執行所需的執行步驟相關控制代碼。並且，這樣的代碼還可以包括所述電腦的處理器執行所述功能所需的附加資訊或媒體需要參照在所述電腦的內部或者外部記憶體的哪一位置的記憶體參照相關代碼。並且，在所述電腦的處理器為了執行所述功能而需要與位於遠端（Remote）的任意其他電腦或者伺服器100等進行通信的情況下，代碼還可以包括針對如下的通信相關代碼：利用所述電腦的通信模組來與遠端的任意其他電腦或者伺服器100等如何進行通信；當通信時需要傳送何種資訊或媒體等。The aforementioned program may include code encoded in a computer language such as C, C++, JAVA, or machine language that the computer's processor (CPU) can read through the device interface of the computer, so that the computer reads the program to execute the method implemented as a program. Such code may include functional code related to functions that define the operations required to execute the method, and may include control code related to the execution steps required for the computer's processor to execute in a predetermined order. In addition, such code may also include memory-reference code indicating which location in the internal or external memory of the computer should be referenced for the additional information or media required for the computer's processor to execute the functions. Furthermore, when the processor of the computer needs to communicate with any other remote computer or server 100 in order to execute the functions, the code may also include communication-related code for how to communicate with any other remote computer or server 100 using the communication module of the computer, and what information or media needs to be transmitted during communication.

所述儲存的介質不是儲存諸如寄存器、快取記憶體、記憶體等的短時間儲存數據的介質,而是表示半永久性地儲存數據並且能夠被設備讀取的介質。具體地,所述儲存的介質的示例有ROM、RAM、CD-ROM、磁帶、軟碟和光學數據存放裝置等,然而並不局限於此。即,所述程式可以儲存在電腦可以訪問的各種伺服器100上的各種記錄介質中或使用者的所述電腦上的各種記錄介質中。並且,所述介質可以分佈在通過網路連接的電腦系統上,以分佈的方式按照電腦可讀代碼被儲存。The storage medium is not a medium for storing data for a short time such as a register, a cache memory, a memory, etc., but a medium that stores data semi-permanently and can be read by a device. Specifically, examples of the storage medium include ROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage device, but are not limited to this. That is, the program can be stored in various recording media on various servers 100 that can be accessed by the computer or in various recording media on the user's computer. In addition, the medium can be distributed on a computer system connected via a network and stored in a distributed manner according to computer-readable code.

根據如上的本發明,具有如下的多種效果。According to the present invention as described above, there are the following various effects.

第一，在因用戶移動而使結合增強現實影像的對象體（例如，標記）在空間上的佈置改變的情況下，提供能夠代替在特定時間點遺漏的影像幀的補正影像數據，使得增強現實影像顯示在現實空間的準確的位置，從而能夠自然地再現而沒有晃動。First, in the case where the spatial arrangement of an object (e.g., a marker) to be combined with the augmented reality image changes due to user movement, corrected image data that can replace an image frame missing at a specific time point is provided, so that the augmented reality image is displayed at the accurate position in the real space and can be reproduced naturally without shaking.

第二,將深度數據結合於以顏色數據構成的二維虛擬影像數據,從而可以將二維的虛擬影像數據自然地顯示在三維的現實空間。即,增強現實影像再現裝置識別三維的現實空間而將二維的影像的每個像素顯示在適當的深度,從而實現三維增強現實的效果。Second, the depth data is combined with the two-dimensional virtual image data composed of color data, so that the two-dimensional virtual image data can be naturally displayed in the three-dimensional real space. That is, the enhanced reality image reproduction device recognizes the three-dimensional real space and displays each pixel of the two-dimensional image at an appropriate depth, thereby achieving a three-dimensional enhanced reality effect.

第三，由於在製作為虛擬實境用的虛擬影像數據中，可以基於適用於每個像素的深度數據來確定透明處理水準，從而基於當獲取虛擬影像數據時所獲得的深度數據可以直接將虛擬實境用影像應用為增強現實用，而無需製作單獨的增強現實影像。Third, since the level of transparency processing can be determined based on the depth data applied to each pixel in producing virtual image data for virtual reality, a virtual reality image can be directly applied as augmented reality based on the depth data obtained when the virtual image data was acquired, without producing a separate augmented reality image.

以上參照附圖說明了本發明的實施例,但在本發明所屬技術領域中具有普通知識的人員可以理解的是,可以在不改變本發明的技術思想或者必要特徵的情況下以其他具體形態實施。因此,以上記載的實施例應當理解為在所有方面均為示例性的,而不是限定性的。The embodiments of the present invention are described above with reference to the attached drawings, but it is understood by those with ordinary knowledge in the technical field to which the present invention belongs that the present invention can be implemented in other specific forms without changing the technical ideas or essential features of the present invention. Therefore, the embodiments described above should be understood as being exemplary in all aspects, rather than restrictive.

100:伺服器 200:客戶機 210:虛擬影像數據接收部 220:控制部 230:影像輸出部 A、B:深度值 S120、S140、S160、S141、S142、S143、S144、S220、S240、S260:步驟 100: Server 200: Client 210: Virtual image data receiving unit 220: Control unit 230: Image output unit A, B: Depth value S120, S140, S160, S141, S142, S143, S144, S220, S240, S260: Steps

圖1是根據本發明的一實施例的增強現實影像系統的結構圖。 圖2是根據本發明的一實施例的客戶機提供利用深度數據的增強現實影像的方法的流程圖。 圖3是根據本發明的一實施例的針對基於深度數據來調節虛擬影像數據的透明度的過程的流程圖。 圖4是根據本發明的一實施例的基於深度數據來設定透明度調節基準並劃分區域的示例圖。 圖5是根據本發明的一實施例的輸入區域指定數據的示例圖。 圖6是根據本發明的一實施例的針對調整每個像素在空間上的位置的過程的流程圖。 圖7是根據本發明的一實施例的基於第一幀的顏色數據以及深度數據來生成遺漏的第二幀的示例圖。 圖8是根據本發明的一實施例的伺服器生成並提供包括深度數據的虛擬影像數據的方法的流程圖。 圖9是示出根據本發明的一實施例的在第一幀和第二幀之間添加第1.5幀的過程的示例圖。 FIG. 1 is a structural diagram of an enhanced reality image system according to an embodiment of the present invention. FIG. 2 is a flow chart of a method for providing an enhanced reality image using depth data by a client according to an embodiment of the present invention. FIG. 3 is a flow chart of a process for adjusting the transparency of virtual image data based on depth data according to an embodiment of the present invention. FIG. 4 is an example diagram of setting a transparency adjustment benchmark and dividing regions based on depth data according to an embodiment of the present invention. FIG. 5 is an example diagram of inputting region designation data according to an embodiment of the present invention. FIG. 6 is a flow chart of a process for adjusting the spatial position of each pixel according to an embodiment of the present invention. FIG. 7 is an example diagram of generating a missing second frame based on color data and depth data of a first frame according to an embodiment of the present invention. FIG8 is a flow chart of a method for a server to generate and provide virtual image data including depth data according to an embodiment of the present invention. FIG9 is an example diagram showing a process of adding a 1.5th frame between a first frame and a second frame according to an embodiment of the present invention.

S120~S160:步驟 S120~S160: Steps

Claims (12)

一種利用深度數據的增強現實影像提供方法,作為客戶機提供增強現實影像的方法,包括: 虛擬影像數據接收步驟,所述客戶機從伺服器接收虛擬影像數據,其中所述虛擬影像數據包括顏色數據以及深度數據; 顯示位置確定步驟,所述客戶機基於所述深度數據來確定每個像素應顯示在現實空間內的位置;以及 虛擬影像數據顯示步驟,基於所述確定的位置來將所述虛擬影像數據顯示在顯示空間上。 A method for providing an enhanced reality image using depth data, as a method for providing an enhanced reality image for a client, comprises: a virtual image data receiving step, wherein the client receives virtual image data from a server, wherein the virtual image data includes color data and depth data; a display position determining step, wherein the client determines the position where each pixel should be displayed in the real space based on the depth data; and a virtual image data displaying step, wherein the virtual image data is displayed on the display space based on the determined position. 如請求項1所述的利用深度數據的增強現實影像提供方法,其中 所述深度數據按照每個像素包括於與顏色數據通道不同的單獨的通道, 所述深度數據以及所述顏色數據被同步化而傳送。 A method for providing an enhanced reality image using depth data as described in claim 1, wherein the depth data is included in a separate channel different from the color data channel for each pixel, and the depth data and the color data are transmitted synchronously. 如請求項1所述的利用深度數據的增強現實影像提供方法,其中 所述虛擬影像數據是當拍攝或者生成時獲取每個地點的深度數據而按照每個像素儲存的二維圖像。 A method for providing an augmented reality image using depth data as described in claim 1, wherein the virtual image data is a two-dimensional image stored per pixel by obtaining the depth data of each location when shooting or generating. 
如請求項1所述的利用深度數據的增強現實影像提供方法,其中所述顯示位置確定步驟包括: 所述客戶機將特定的深度確定為透明度調節基準;以及 基於所述透明度調節基準區分深度範圍,來確定是否進行透明處理, 其中,所述透明度調節基準是設定將被顯示在畫面上的內容的邊界線的基準。 The method for providing an augmented reality image using depth data as described in claim 1, wherein the display position determination step includes: The client determines a specific depth as a transparency adjustment benchmark; and Based on the transparency adjustment benchmark, the depth range is distinguished to determine whether to perform transparent processing, Wherein, the transparency adjustment benchmark is a benchmark for setting the boundary of the content to be displayed on the screen. 如請求項4所述的利用深度數據的增強現實影像提供方法,其中, 所述深度範圍是按照所述透明度調節基準設定多個深度使得以所述多個深度為基準而被劃分的多個區域。 A method for providing an enhanced reality image using depth data as described in claim 4, wherein the depth range is a plurality of areas divided based on the plurality of depths set according to the transparency adjustment standard. 如請求項1所述的利用深度數據的增強現實影像提供方法,其中 所述虛擬影像數據還包括獲取位置數據以及影像方向數據, 其中所述顯示位置確定步驟包括: 比較由所述客戶機獲取的當前位置數據和所述獲取位置數據,比較由所述客戶機獲取的再現方向數據和所述影像方向數據;以及 基於比較結果來調節所述虛擬影像數據內的像素的位置。 A method for providing an enhanced reality image using depth data as described in claim 1, wherein the virtual image data further includes acquired position data and image direction data, wherein the display position determination step includes: comparing the current position data acquired by the client and the acquired position data, and comparing the reproduction direction data acquired by the client and the image direction data; and adjusting the position of the pixel in the virtual image data based on the comparison result. 
如請求項6所述的利用深度數據的增強現實影像提供方法,其中所述顯示位置確定步驟還包括: 所述客戶機基於現實空間的光照射方向來調節所述虛擬影像數據內的每個像素的顏色或者色度。 The method for providing an enhanced reality image using depth data as described in claim 6, wherein the display position determination step further includes: The client adjusts the color or chromaticity of each pixel in the virtual image data based on the light illumination direction in the real space. 如請求項1所述的利用深度數據的增強現實影像提供方法,還包括: 在所述客戶機是輸出將借由相機獲取的現實影像數據和所述虛擬影像數據結合的結合影像數據的裝置的情況下,所述客戶機基於輸出延遲時間來補正現實影像數據, 其中,所述輸出延遲時間是所述現實影像數據被拍攝之後至輸出於畫面上為止所需的時間。 The method for providing an enhanced real image using depth data as described in claim 1 further includes: In the case where the client is a device that outputs combined image data that combines real image data acquired by a camera and the virtual image data, the client corrects the real image data based on an output delay time, wherein the output delay time is the time required from when the real image data is captured to when it is output on the screen. 一種利用深度數據的增強現實影像提供的程式,其中, 與作為硬體的電腦結合,並為了執行如請求項1至請求項8中的任意一項的方法而儲存於介質。 A program for providing an augmented reality image using depth data, wherein, is combined with a computer as hardware and stored in a medium for executing the method of any one of claim 1 to claim 8. 一種利用深度數據的增強現實影像提供裝置,作為提供增強現實影像的裝置,包括: 虛擬影像數據接收部,從伺服器接收虛擬影像數據,所述虛擬影像數據包括顏色數據以及深度數據; 控制部,基於所述深度數據來確定每個像素應顯示在現實空間內的位置;以及 影像輸出部,基於所述確定的位置來將所述虛擬影像數據顯示在顯示空間上。 An enhanced reality image providing device using depth data, as a device for providing enhanced reality images, includes: A virtual image data receiving unit, receiving virtual image data from a server, wherein the virtual image data includes color data and depth data; A control unit, determining the position of each pixel to be displayed in the real space based on the depth data; and An image output unit, displaying the virtual image data on the display space based on the determined position. 
一種利用深度數據的增強現實影像提供方法,作為伺服器生成用於在客戶機實現增強現實的虛擬影像數據的方法,包括: 所述伺服器獲取向所述客戶機在預定時間點提供的所述虛擬影像數據的按照每個像素的顏色數據; 獲取所述虛擬影像數據的按照每個像素的深度數據而儲存;以及 所述伺服器將所述顏色數據以及所述深度數據同步化而傳送至所述客戶機, 其中,所述深度數據是若沒有接收第二時間點的第二虛擬影像數據則基於第一時間點的第一虛擬影像數據來進行補正時所利用的數據, 其中,所述第二時間點是從所述第一時間點經過虛擬影像數據傳送週期的時間點。 A method for providing an augmented reality image using depth data, as a method for a server to generate virtual image data for realizing augmented reality on a client, comprising: The server obtains color data of each pixel of the virtual image data provided to the client at a predetermined time point; The depth data of each pixel of the virtual image data is obtained and stored; and The server synchronizes the color data and the depth data and transmits them to the client, Wherein, the depth data is data used when correcting based on the first virtual image data at the first time point if the second virtual image data at the second time point is not received, Wherein, the second time point is a time point after the virtual image data transmission cycle from the first time point. 如請求項11所述的利用深度數據的增強現實影像提供方法,其中, 所述虛擬影像數據還包括獲取位置數據以及影像方向數據, 所述客戶機比較當前位置數據和所述獲取位置數據,比較再現方向數據和所述影像方向數據,並基於比較結果來調節所述虛擬影像數據內的像素的位置, 所述當前位置數據以及所述再現方向數據是所述客戶機即時地或者按照單位時間獲取的數據。 A method for providing an enhanced reality image using depth data as described in claim 11, wherein, the virtual image data further includes acquisition position data and image direction data, the client compares the current position data with the acquisition position data, compares the reproduction direction data with the image direction data, and adjusts the position of the pixel in the virtual image data based on the comparison result, the current position data and the reproduction direction data are data acquired by the client in real time or per unit time.
TW111133622A 2022-09-06 Method and program for providing augmented reality image by using depth data TW202411936A (en)

Publications (1)

Publication Number Publication Date
TW202411936A (en) 2024-03-16


Similar Documents

Publication Publication Date Title
US11076142B2 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
US20230283762A1 (en) Method and system for near-eye focal plane overlays for 3d perception of content on 2d displays
US10382680B2 (en) Methods and systems for generating stitched video content from multiple overlapping and concurrently-generated video instances
CN107862718B (en) 4D holographic video capture method
JP2020522926A (en) Method and system for providing virtual reality content using captured 2D landscape images
JP2014176024A (en) Imaging apparatus, image processing method and image processing program
WO2018176927A1 (en) Binocular rendering method and system for virtual active parallax computation compensation
US11328487B2 (en) Method and program for providing augmented reality image by using depth data
CN108632538B (en) CG animation and camera array combined bullet time shooting system and method
KR101665988B1 (en) Image generation method
TW202411936A (en) Method and program for providing augmented reality image by using depth data
CN108898650B (en) Human-shaped material creating method and related device
KR100422470B1 (en) Method and apparatus for replacing a model face of moving image
KR102291682B1 (en) Method and program for providing augmented reality by using depth data
US20180365865A1 (en) Subtitle beat generation method, image processing method, terminal, and server
KR20170044319A (en) Method for extending field of view of head mounted display
US20210297649A1 (en) Image data output device, content creation device, content reproduction device, image data output method, content creation method, and content reproduction method
KR20170059310A (en) Device for transmitting tele-presence image, device for receiving tele-presence image and system for providing tele-presence image
WO2023026543A1 (en) Information processing device, information processing method, and program
CN109089105B (en) Model generation device and method based on depth perception coding
CN109379511B (en) 3D data security encryption algorithm and device
CN110876050B (en) Data processing device and method based on 3D camera
WO2019161717A1 (en) Method and device for generating raster image, and storage medium
Ye et al. When Green Screen Meets Panoramic Videos: An Interesting Video Combination Framework for Virtual Studio and Cellphone Applications
JP2022123941A (en) Motion capture display system and program thereof