TWI747294B - Remote virtual scene building system, method and non-transitory computer-readable storage medium - Google Patents

Remote virtual scene building system, method and non-transitory computer-readable storage medium

Info

Publication number
TWI747294B
TWI747294B
Authority
TW
Taiwan
Prior art keywords
optical label
scene
optical
information
scene information
Prior art date
Application number
TW109116842A
Other languages
Chinese (zh)
Other versions
TW202145061A (en)
Inventor
林翰霙
Original Assignee
光時代科技有限公司
Priority date
Filing date
Publication date
Application filed by 光時代科技有限公司 filed Critical 光時代科技有限公司
Priority to TW109116842A priority Critical patent/TWI747294B/en
Application granted granted Critical
Publication of TWI747294B publication Critical patent/TWI747294B/en
Publication of TW202145061A publication Critical patent/TW202145061A/en


Abstract

The present invention provides a remote virtual scene building system comprising a first optical label device, a second optical label device, and an optical label recognition device. The first optical label device comprises a first optical label corresponding to device pose information. The second optical label device comprises a second optical label. The optical label recognition device comprises a camera unit, a display, and a processor. The optical label recognition device captures an image of the second optical label with the camera unit and computes pose information with the processor. Based on this pose information and the first optical label device chosen by the user, the processor constructs scene information relative to the device pose information of the first optical label device and outputs the scene information to the display to provide a virtual object placement function.

Description

System and method for remotely constructing a virtual scene, and non-transitory computer-readable recording medium

The present invention relates to a system, a method, and a non-transitory computer-readable recording medium for constructing a virtual scene, and in particular to a system, a method, and a non-transitory computer-readable recording medium for remotely constructing a virtual scene.

Augmented Reality (AR), sometimes also called mixed or expanded reality, refers to technology that computes the position and angle of camera images and applies image analysis so that the virtual world shown on the screen can be combined with, and interact with, the real-world scene.

When discussing a purely virtual world, all objects in it (for example, the sky, the ground, people, animals, tables and chairs, buildings, and so on) can be called virtual objects. The key to an augmented reality system is how to combine the virtual objects of the virtual world with the actual environment: the augmented reality software must first obtain real-world coordinates and then superimpose the virtual objects onto those coordinates.

To obtain real-world coordinates and establish accurate coordinate positions, a dedicated photographer usually has to take pictures with a dedicated camera on site in the real-world environment. Only then can accurate real-world coordinates be obtained and virtual objects combined with them to meet the requirements of augmented reality.

The present invention provides a system for remotely constructing a virtual scene, comprising at least one first optical label device, a second optical label device, and an optical label recognition device. The first optical label device has a first optical label, which corresponds at least to device pose information. The second optical label device has a second optical label. The optical label recognition device includes a camera unit, a display, and a processor. The optical label recognition device captures an image of the second optical label through the camera unit and computes pose information with the processor. Based on this pose information and the first optical label device selected by the user, it establishes scene information relative to the device pose information of the first optical label device, displays the scene information on the display, and provides a virtual object placement function.

The present invention further provides a method for remotely constructing a virtual scene. A camera unit of an optical label recognition device captures an image of a second optical label device. A processor of the optical label recognition device computes pose information from the image. Based on the pose information and a first optical label device selected by the user, the processor establishes scene information relative to device pose information of the first optical label device. The scene information is displayed on a display of the optical label recognition device, and the processor places virtual objects in the scene information.

The present invention further provides a non-transitory computer-readable recording medium for storing a program; when the program is loaded into a processing chip, the aforementioned method can be executed.

Therefore, compared with the prior art, the present invention does not require going to the real environment in person to obtain images; the position in three-dimensional space can be obtained through the optical label, and virtual objects can be placed remotely.

The detailed description and technical content of the present invention are described below in conjunction with the drawings. For convenience of description, the drawings are not necessarily drawn to scale; the drawings and their proportions are not intended to limit the scope of the present invention, which is stated here in advance.

Please refer to FIG. 1, a block diagram of the system for remotely constructing a virtual scene according to the present invention:

This embodiment provides a system 100 for remotely constructing a virtual scene, which mainly includes at least one first optical label device 10, a second optical label device 20 having a second optical label, and an optical label recognition device 30. The dotted line in FIG. 1 indicates the photographing relationship between the optical label recognition device 30 and the second optical label device 20, which is stated here first.

The first optical label device 10 has a first optical label, and the first optical label corresponds at least to device pose information.

The aforementioned first optical label device 10 and second optical label device 20 are devices that can transmit information through different light-emitting patterns. Unlike a traditional two-dimensional code, an optical label has a long recognition distance and strong directivity and is not restricted by visible-light conditions, so it can provide a longer recognition distance and a stronger information exchange capability. An optical label usually includes a controller and at least one light source; the controller can drive the light source in different modes so that the optical label transmits different information outward. Each optical label can be assigned identification information, through which the device pose information of the optical label can be obtained. Note that although only two optical label devices (the first optical label device 10 and the second optical label device 20) are disclosed in this embodiment, in practice three or more optical label devices may be used; the user can select the corresponding optical label device through a service provided by an application and edit the scene information corresponding to that device, which is not limited in the present invention.
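
As a rough, non-authoritative sketch of the kind of light-pattern signalling described above (the frame layout, bit count, and timing are assumptions for illustration only, not the encoding used by the patented devices), a controller might drive its light source like this:

```python
# Hypothetical sketch: encode an optical label's identification information as an
# on/off blink sequence. Preamble, bit count, and timing are illustrative assumptions.
import time

def set_light(state: int) -> None:
    """Stand-in for real hardware control of the light source (e.g., a GPIO call)."""
    print("ON" if state else "OFF")

def id_to_blink_pattern(tag_id: int, id_bits: int = 16) -> list:
    """Return a list of 1/0 light states: a fixed preamble followed by the ID bits."""
    preamble = [1, 1, 1, 0]  # marks the start of a frame
    payload = [(tag_id >> i) & 1 for i in range(id_bits - 1, -1, -1)]
    return preamble + payload

def drive_light_source(pattern: list, bit_duration_s: float = 0.02) -> None:
    """Emit the pattern by switching the light source at a fixed bit rate."""
    for state in pattern:
        set_light(state)
        time.sleep(bit_duration_s)

drive_light_source(id_to_blink_pattern(0x2A5C))
```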

The aforementioned device pose information includes position information (coordinates) and orientation information (the orientation information refers, for example, to the direction the optical label faces, such as due north). The orientation information refers to the orientation of the first optical label device 10 or the second optical label device 20 in a certain coordinate system (for example, the world coordinate system or the optical label coordinate system). When the first optical label device 10 or the second optical label device 20 is translated without rotating, its position information (coordinates) changes but its orientation information remains unchanged; when the device only rotates without translating, its position information remains unchanged but its orientation information changes.
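
To make the translation/rotation distinction above concrete, a device pose can be pictured as a position plus an orientation. The representation below (a 3-vector and a 3×3 rotation matrix) is only an illustrative assumption, not a structure defined by the patent:

```python
# Illustrative only: one common way to represent pose information as
# position (a 3-vector) plus orientation (a 3x3 rotation matrix).
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    position: np.ndarray  # shape (3,), coordinates in the chosen coordinate system
    rotation: np.ndarray  # shape (3, 3), orientation of the device

    def translate(self, offset: np.ndarray) -> "Pose":
        # Pure translation: position changes, orientation stays the same.
        return Pose(self.position + offset, self.rotation.copy())

    def rotate(self, delta_rotation: np.ndarray) -> "Pose":
        # Pure rotation: orientation changes, position stays the same.
        return Pose(self.position.copy(), delta_rotation @ self.rotation)
```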

In another embodiment, the aforementioned position information may be the address of the device itself in world coordinates (for example, the World Geodetic System (WGS), a latitude-longitude coordinate system, and so on). In other feasible embodiments, the device position information may also be a spatial coordinate system established in advance based on a relatively stable anchor point set by the user, or any other spatial coordinate information that can serve as an absolute or relative position reference; this is not limited in the present invention.

Please refer to FIG. 2. The optical label recognition device 30 includes a camera unit 32, a display 34, and a processor 36. The optical label recognition device 30 may be, but is not limited to, a smart phone, a tablet, smart glasses, a wearable device, or another device with sensors, a camera lens, a display screen, a computing processor, and networking functions capable of transmitting captured images and/or their information over the Internet to other devices (for example, a server); the choice of the optical label recognition device 30 is not limited in the present invention. In one embodiment, the processor 36 may further operate together with a storage unit that stores a program, so that corresponding functions are executed by loading the program from the storage unit; in another feasible embodiment, the processor 36 may be integrated with the storage unit and implemented as a single chip, which is not limited in the present invention. The storage unit may be, but is not limited to, cache memory, dynamic random access memory (DRAM), persistent memory, or any other device or combination of devices that can store and retrieve data; this is not limited in the present invention.

In one embodiment, the present invention can work with a server that has a storage unit. The server includes a central processing unit, a hard disk, memory (storage unit), and so on, and these hardware components cooperate to execute the corresponding software to realize the functions and algorithms described in the present invention; the cooperative relationship of the software and hardware at the electrical-signal level is not within the scope the present invention intends to limit. In one embodiment of the present invention, part of the procedure may be executed by the optical label recognition device 30 or by the server, which is not limited in the present invention.

The hardware architecture of the present invention has been roughly described above. The algorithms and functions executed by the present invention in conjunction with this hardware are further described below. Please also refer to FIG. 3, a flow chart of the method of the present invention for remotely placing virtual objects:

The scene information for each optical label device (for example, the first optical label device 10 and the second optical label device 20) is stored in advance in the form of a database, indexed by the corresponding optical label code (or coordinate position), in the optical label recognition device 30 or in a server connected or coupled to the optical label recognition device 30.
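
Informally, the database described here can be thought of as a mapping from optical label code (or coordinate position) to scene records. The record fields and time keys below are illustrative assumptions, not the patent's data model:

```python
# Hypothetical sketch of the scene-information database keyed by optical label code.
from typing import Optional

scene_database = {
    "TAG-0001": {  # code of a first optical label device
        "device_pose": {"position": (121.5, 25.0, 10.0), "orientation": "north"},
        "scenes_by_time": {  # optional: one scene record per time point
            "09:00": {"objects": []},
            "20:00": {"objects": []},
        },
    },
}

def lookup_scene(tag_code: str, time_key: Optional[str] = None) -> Optional[dict]:
    """Return the scene record for a tag code, optionally narrowed to one time point."""
    record = scene_database.get(tag_code)
    if record is None:
        return None
    if time_key is not None:
        return record["scenes_by_time"].get(time_key)
    return record
```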

When the optical label recognition device 30 starts the service provided by the application, the user first operates the optical label recognition device 30 and captures an image of the second optical label on the second optical label device 20 through the camera unit 32 of the optical label recognition device 30 (step S201).

Next, after obtaining the image of the second optical label, the processor 36 computes the pose information of the optical label recognition device 30 from the image (step S202).

The pose information (including position information and orientation information) refers to the relative position and relative orientation between the optical label recognition device 30 and the second optical label device 20.

In one embodiment, the pose information (position information and orientation information) of the optical label recognition device 30 relative to the second optical label device 20 can be determined as follows. First, a coordinate system is established based on the optical label of the second optical label device 20; this coordinate system may be called the optical label coordinate system. Certain points on the optical label can be taken as spatial points in the optical label coordinate system, and their coordinates in that coordinate system can be determined from the physical size information and/or physical shape information of the optical label. These points may be, for example, the corners of the optical label's housing, the ends of the light sources in the optical label, identification marks on the optical label, and so on. Based on the physical or geometric structure of the optical label, the image points corresponding to these spatial points can be found in the image captured by the camera unit 32, and the position of each image point in the image can be determined. From the coordinates of each spatial point in the optical label coordinate system, the position of each corresponding image point in the image, and the intrinsic parameters of the optical label recognition device 30, the pose information (R, t) of the optical label recognition device 30 in the optical label coordinate system at the time the image was captured can be computed, where R is a rotation matrix representing the orientation information of the optical label recognition device 30 in the optical label coordinate system, and t is a translation vector representing its position information. R and t can be computed using known existing techniques, for example the PnP (Perspective-n-Point) method for 3D-2D correspondences.
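
A minimal sketch of this 3D-2D computation using OpenCV's solvePnP is shown below. The tag corner coordinates, detected image points, and camera intrinsics are placeholder values, and the patent does not prescribe a particular library; solvePnP is simply one known implementation of the PnP method mentioned above:

```python
# Sketch: recover the camera pose relative to the optical label from known 3D tag
# points and their detected 2D image points (Perspective-n-Point).
import cv2
import numpy as np

# 3D coordinates of known points on the optical label, in the optical label
# coordinate system (here: the four corners of a 10 cm square housing).
object_points = np.array([
    [-0.05, -0.05, 0.0],
    [ 0.05, -0.05, 0.0],
    [ 0.05,  0.05, 0.0],
    [-0.05,  0.05, 0.0],
], dtype=np.float64)

# Corresponding 2D pixel positions detected in the captured image (placeholder values).
image_points = np.array([
    [612.0, 341.0],
    [718.0, 338.0],
    [721.0, 446.0],
    [609.0, 449.0],
], dtype=np.float64)

# Intrinsic parameters of the recognition device's camera (placeholder values).
camera_matrix = np.array([
    [1000.0,    0.0, 640.0],
    [   0.0, 1000.0, 360.0],
    [   0.0,    0.0,   1.0],
], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R_tag_to_cam, _ = cv2.Rodrigues(rvec)

# solvePnP yields the transform from tag coordinates to camera coordinates; the
# device's pose (R, t) in the optical label coordinate system is its inverse.
R = R_tag_to_cam.T
t = -R_tag_to_cam.T @ tvec
print("R =\n", R, "\nt =", t.ravel())
```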

Next, after obtaining the pose information, the processor 36 further establishes, based on the pose information and the first optical label device 10 selected by the user, scene information relative to the device pose information of the first optical label device 10 (step S203).

Specifically, once the processor 36 has obtained the pose information, the user can select the desired first optical label device 10 through software built into the processor 36. Using the user's selection as an index, the processor 36 finds the scene information corresponding to the first optical label device 10 in the database and then anchors the previously obtained pose of the optical label recognition device 30 relative to the second optical label device 20 to the coordinates corresponding to the first optical label device 10. In this way, the user's position and orientation in the space around the first optical label device 10 are set virtually, and the user's scene information is established based on this information.
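
The anchoring step can be pictured as re-expressing the pose measured relative to the second optical label as a pose relative to the first one. The sketch below assumes the database stores each tag's pose in a common world frame; it is an illustrative reading of this step, not the patent's prescribed computation:

```python
# Sketch: anchor the measured pose (relative to the second tag) to the first tag,
# assuming the world-frame poses of both tags are stored in the database.
import numpy as np

def compose(R_ab, t_ab, R_bc, t_bc):
    """Compose rigid transforms: points in frame c -> frame b -> frame a."""
    return R_ab @ R_bc, R_ab @ t_bc + t_ab

def invert(R, t):
    """Invert a rigid transform."""
    return R.T, -R.T @ t

def anchor_to_first_tag(R_dev_in_tag2, t_dev_in_tag2,
                        R_tag2_world, t_tag2_world,
                        R_tag1_world, t_tag1_world):
    """Return the device pose expressed in the first tag's coordinate system."""
    # Device pose in world coordinates: world <- tag2 <- device.
    R_dev_world, t_dev_world = compose(R_tag2_world, t_tag2_world,
                                       R_dev_in_tag2, t_dev_in_tag2)
    # Re-express relative to the first tag: tag1 <- world <- device.
    R_world_tag1, t_world_tag1 = invert(R_tag1_world, t_tag1_world)
    return compose(R_world_tag1, t_world_tag1, R_dev_world, t_dev_world)
```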

A specific example of the foregoing is given below; please refer to FIG. 4. The user P holds the optical label recognition device 30A and photographs the second optical label device 20A located at coordinates (x₂, y₂, z₂). The vector V between the optical label recognition device 30A and the second optical label device 20A, together with the roll angle (not shown), pitch angle (not shown), and yaw angle (not shown), is computed, and the result is then anchored to the coordinates (x₁, y₁, z₁) corresponding to the first optical label device 10A. For the vector V, the scalar change along the x axis is Δx, the scalar change along the y axis is Δy, and the scalar change along the z axis is Δz. The roll angle is the counterclockwise rotation in the yz plane, the pitch angle is the counterclockwise rotation in the zx plane, and the yaw angle is the counterclockwise rotation in the xy plane. The first optical label device 10A is drawn with a dashed line in FIG. 4 to indicate that, for the user P, the first optical label device 10A is not actually present in the user's surroundings.

In addition to the above, the scene information of the first optical label device 10 can also be obtained by the processor 36 by downloading it from a server connected via the Internet, a local area network, or a physical connection; the present invention does not limit the medium in which the scene information is stored. Besides the three-dimensional information corresponding to the scene space in which the optical label device is located, the scene information may also include virtual objects already established for that optical label device. The virtual objects here may be, for example, 3D models, 2D plane images, videos with transparency masks, videos, or virtual text messages, which are not limited in the present invention.

Furthermore, in a preferred embodiment, the scene information around the first optical label device 10 includes a plurality of scene information entries corresponding to individual time points. Please refer to FIG. 5: for example, the scene information includes scene information at nine o'clock in the morning (see FIG. 5(A)) and scene information at eight o'clock in the evening (see FIG. 5(B)). The specific time points of the scene information are not within the scope the present invention intends to limit.

In other embodiments, when the scene information around the first optical label device 10 includes a plurality of scene information entries corresponding to individual time points, the processor 36 can establish the scene information for the time point selected by the user P. In one feasible application embodiment, for example, information about different stores can be displayed according to their business hours, or the ambient brightness (daytime or night) can be established according to the time point, as shown in FIG. 6. FIG. 6(A) shows the scene information at nine o'clock in the morning, when the restaurant and the bank are open; FIG. 6(B) shows the scene information at eight o'clock in the evening, when the restaurant has turned into a bar and the bank is closed. The way such scene information is established is not within the scope the present invention intends to limit. The hatched area in the figure indicates the night sky.
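
A small sketch of such time-point selection is given below; it reuses the hypothetical database layout sketched earlier, and the "HH:MM" time keys are assumptions for illustration:

```python
# Sketch: pick the scene record whose time point best matches the chosen time.
from datetime import datetime

def select_scene_for_time(scenes_by_time: dict, chosen: str) -> dict:
    """Return the scene whose time key is closest to the chosen time, e.g. '20:00'."""
    target = datetime.strptime(chosen, "%H:%M")

    def distance(key: str) -> float:
        return abs((datetime.strptime(key, "%H:%M") - target).total_seconds())

    best_key = min(scenes_by_time, key=distance)
    return scenes_by_time[best_key]

# Example: a user choosing 20:00 would get the evening version of the scene
# (e.g., night-time ambient lighting, only the stores open at that hour).
```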

After the processor 36 confirms the scene information, the processor 36 displays the scene information on the display 34 of the optical label recognition device 30 (step S204).

In this step, the scene information constructed by the processor 36 is based on the device pose information of the first optical label device 10. The image shown on the display 34 therefore corresponds to the scene information of the first optical label device 10, and the orientation and distance of the user relative to the first optical label device 10, as seen by the user P on the display 34, are the same as the orientation and distance between the optical label recognition device 30 and the second optical label device 20.

In other embodiments, the database may further include a plurality of scene information entries corresponding to individual time points, and the user P can, through the service provided by the application, select the scene information for a desired time point (or for the current time point) and display it on the display 34 of the optical label recognition device 30. As shown in FIG. 7, when the optical label recognition device 30 is a smart phone 30B and the user selects the scene information for eight o'clock in the evening, the display of the smart phone 30B shows the scene information for eight o'clock in the evening selected by the user P (FIG. 6(B)).

Finally, the processor 36 places virtual objects in the scene information (step S205).

In this step, the virtual object placement function of the processor 36 is a scene-building tool executed by an application provided through the processor 36. The scene-building tool can, for example, mark coordinates in three-dimensional space and create three-dimensional objects (for example, augmented reality objects, dialog boxes, images, and so on) at the marked coordinates, modifying the scene information by creating these objects. The edited scene information can be stored in the storage unit of the optical label recognition device 30 or uploaded directly to the server to update the scene information, so that a third party can access the server and download the edited scene information.
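
As a loose illustration of this scene-building step (the object fields and the upload endpoint are assumptions, not an API defined by the patent), placing a virtual object and uploading the edited scene could look like this:

```python
# Sketch: add a virtual object at marked coordinates and persist the edited scene.
# Field names and the upload endpoint are illustrative assumptions.
import json
import urllib.request

def place_virtual_object(scene: dict, kind: str, position: tuple, content: str) -> dict:
    """Append a virtual object (e.g. '3d_model', 'dialog_box', 'image') to the scene."""
    scene.setdefault("objects", []).append({
        "kind": kind,
        "position": position,  # coordinates marked in the scene's 3D space
        "content": content,
    })
    return scene

def upload_scene(scene: dict, tag_code: str, server_url: str) -> None:
    """Upload the edited scene so third parties can later download it from the server."""
    req = urllib.request.Request(
        f"{server_url}/scenes/{tag_code}",  # hypothetical endpoint
        data=json.dumps(scene).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)
```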

In this embodiment, the processor 36 has software capable of adding virtual objects to the scene information shown on the display 34, so that the user P can place virtual objects in the scene information. Such software includes, but is not limited to, software development kits (SDKs) such as ARKit, ARCore, Unity, Vuforia, HP Reveal, and MAKAR, or other algorithmic software capable of combining virtual objects with the actual environment, which is not limited in the present invention.

In other embodiments, the processor 36 can edit scene information shown on the display 34 that already contains virtual objects, so that the user P can adjust the virtual objects in the scene information. The processor 36 has software capable of adjusting virtual objects; this software is the same as the software for placing virtual objects and is therefore not described again.

The method of the present invention can also be recorded on a computer-readable recording medium. The term "computer-readable recording medium" includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other media capable of storing, containing, or carrying instructions and/or data. A computer-readable recording medium may include a non-transitory medium in which data can be stored, and does not include carrier waves and/or transitory electronic signals propagated wirelessly or over wired connections.

Examples of non-transitory media may include, but are not limited to, magnetic disks or tapes, optical storage media such as compact discs (CDs) or digital versatile discs (DVDs), flash memory, memory, or memory devices.

A computer-readable recording medium may have stored thereon code and/or machine-executable instructions that may represent procedures, functions, subprograms, programs, routines, subroutines, modules, software packages, classes, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, and the like may be passed, forwarded, or transmitted via any suitable means, including memory sharing, message passing, token passing, network transmission, or the like.

In addition, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments (for example, a computer program product) for performing the necessary tasks may be stored in a computer-readable or machine-readable medium, and a processor may perform the necessary tasks.

In summary, the present invention does not require going to the real environment in person to obtain images; the position in three-dimensional space can be obtained through the optical label, and virtual objects can be placed remotely.

The present invention has been described in detail above. However, the above is only a preferred embodiment of the present invention and should not be used to limit the scope of its implementation; all equivalent changes and modifications made in accordance with the scope of the patent claims of the present invention shall still fall within the scope covered by the patent of the present invention.

100: system for remotely constructing a virtual scene
10: first optical label device
10A: first optical label device
20: second optical label device
20A: second optical label device
30: optical label recognition device
30A: optical label recognition device
30B: smart phone
32: camera unit
34: display
36: processor
P: user
V: vector
(x₁, y₁, z₁): coordinates of the first optical label device
(x₂, y₂, z₂): coordinates of the second optical label device
Δx, Δy, Δz: scalar changes along the x, y, and z axes
S201–S205: steps

FIG. 1 is a block diagram of the system for remotely constructing a virtual scene according to the present invention.

FIG. 2 is a block diagram of the optical label recognition device of the present invention.

FIG. 3 is a flow chart of the method for remotely constructing a virtual scene of the present invention.

FIG. 4 is a block diagram of anchoring by means of the optical label recognition device of the present invention.

FIG. 5 is a schematic diagram of the scene information of the present invention at different time points.

FIG. 6 is a schematic diagram of store business status at different time points in the scene information of the present invention.

FIG. 7 is a schematic diagram of the scene information at eight o'clock in the evening of the present invention displayed on a smart phone.

100: system for remotely constructing a virtual scene

10: first optical label device

20: second optical label device

30: optical label recognition device

Claims (11)

1. A system for remotely constructing a virtual scene, comprising: at least one first optical label device having a first optical label, the first optical label corresponding at least to device pose information; a second optical label device having a second optical label; and an optical label recognition device comprising a camera unit, a display, and a processor, wherein the optical label recognition device captures an image of the second optical label through the camera unit and computes pose information with the processor, establishes, according to the pose information and the first optical label device selected by a user, scene information relative to the device pose information of the first optical label device, displays the scene information on the display, and provides a virtual object placement function, the scene information comprising three-dimensional information corresponding to the scene space in which the first optical label device is located, and the orientation and distance of the user relative to the first optical label device as seen on the display being the same as the orientation and distance between the optical label recognition device and the second optical label device.

2. The system for remotely constructing a virtual scene of claim 1, wherein the scene information around the first optical label device is obtained from a server or from a storage unit in the processor.

3. The system for remotely constructing a virtual scene of claim 1 or claim 2, wherein the scene information around the first optical label device contains an already established virtual object, and the processor can adjust the established virtual object.

4. The system for remotely constructing a virtual scene of claim 3, wherein the scene information around the first optical label device in the server or the storage unit includes a plurality of scene information entries corresponding to individual time points, and the scene information is displayed on the display for the user to select.

5. The system for remotely constructing a virtual scene of claim 4, wherein the processor establishes the scene information corresponding to a selected time point according to that time point.

6. A method for remotely constructing a virtual scene, comprising: capturing an image of a second optical label device with a camera unit of an optical label recognition device; computing pose information from the image with a processor of the optical label recognition device; establishing, with the processor, according to the pose information and a first optical label device selected by a user, scene information relative to device pose information of the first optical label device; displaying the scene information on a display of the optical label recognition device; and placing, with the processor, a virtual object in the scene information, the scene information comprising three-dimensional information corresponding to the scene space in which the first optical label device is located, and the orientation and distance of the user relative to the first optical label device as seen on the display being the same as the orientation and distance between the optical label recognition device and the second optical label device.

7. The method for remotely constructing a virtual scene of claim 6, wherein the scene information around the first optical label device is obtained from a server or from a storage unit in the processor.

8. The method for remotely constructing a virtual scene of claim 6 or claim 7, wherein the processor can adjust a virtual object already established in the scene information around the first optical label device.

9. The method for remotely constructing a virtual scene of claim 8, wherein the display can display scene information around the first optical label device that includes a plurality of scene information entries corresponding to individual time points.

10. The method for remotely constructing a virtual scene of claim 9, wherein the processor can select a time point and establish the scene information corresponding to that time point.

11. A non-transitory computer-readable recording medium for storing a program, wherein when the program is loaded into a processing chip, the method of any one of claims 6 to 10 can be executed.
TW109116842A 2020-05-21 2020-05-21 Remote virtual scene building system, method and non-transitory computer-readable storage medium TWI747294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109116842A TWI747294B (en) 2020-05-21 2020-05-21 Remote virtual scene building system, method and non-transitory computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109116842A TWI747294B (en) 2020-05-21 2020-05-21 Remote virtual scene building system, method and non-transitory computer-readable storage medium

Publications (2)

Publication Number Publication Date
TWI747294B true TWI747294B (en) 2021-11-21
TW202145061A TW202145061A (en) 2021-12-01

Family

ID=79907520

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109116842A TWI747294B (en) 2020-05-21 2020-05-21 Remote virtual scene building system, method and non-transitory computer-readable storage medium

Country Status (1)

Country Link
TW (1) TWI747294B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWM482796U (en) * 2013-12-12 2014-07-21 Univ Nan Kai Technology Multi-functional augmentation real environment manual tag
TW201539305A (en) * 2014-02-28 2015-10-16 Microsoft Corp Controlling a computing-based device using gestures


Also Published As

Publication number Publication date
TW202145061A (en) 2021-12-01
