TWI799195B - Method and system for implementing third-person perspective with a virtual object - Google Patents
- Publication number: TWI799195B (application TW111110010A)
- Authority
- TW
- Taiwan
- Classification: Processing Or Creating Images (AREA)
Abstract
Description
說明書公開一種以第三人稱視角瀏覽三維空間的方法,特別是提供使用者通過瀏覽器操作虛擬物件瀏覽三維空間的一種方法與系統。The description discloses a method for browsing three-dimensional space from a third-person perspective, and in particular provides a method and system for users to browse three-dimensional space by operating virtual objects through a browser.
隨著逐漸成熟的立體影像形成與立體模型建模技術，發展出許多利用三維立體影像呈現物品的方式，舉例來說，可以使用照相機拍攝一場景的影像，並利用立體建模的軟體掃描空間，就可以形成一個三維空間的影像。As stereo image formation and 3D modeling technologies have matured, many ways of presenting objects with three-dimensional images have been developed. For example, a camera can capture images of a scene, and stereo modeling software can scan the space to form a three-dimensional image of it.
進一步地，當建立一個場景的三維空間模型後，可以利用已經準備好針對此場景的圖像進行貼圖，其中技術例如將三維空間模型解構後，得出每個方位角度的面積、角度與頂點，並根據每個區塊生成對應的影像，每個區塊影像同樣具有三維的空間座標資訊，並儲存成檔案。之後要生成三維空間影像時，選取三維空間模型，將對應的區塊影像逐畫素執行立體空間的映射，貼圖生成三維空間影像。Furthermore, once a three-dimensional model of a scene has been established, images prepared for the scene can be used for texture mapping. For example, the three-dimensional model is deconstructed to obtain the area, angle, and vertices of each azimuth, and a corresponding image is generated for each block; each block image likewise carries three-dimensional coordinate information and is stored as a file. When a 3D space image is to be generated later, a 3D space model is selected, the corresponding block images are mapped pixel by pixel into the 3D space, and the textures are applied to produce the 3D space image.
當以立體建模技術建立一個空間的三維影像後，可以實現一個三維虛擬的展示空間，並提供使用者利用電腦輸入裝置(如滑鼠、鍵盤、頭戴式顯示器(Head Mounted Display,HMD)等)瀏覽此展示空間。Once a three-dimensional image of a space is built with stereo modeling technology, a three-dimensional virtual display space can be realized, and users can browse it with computer input devices (such as a mouse, keyboard, or Head Mounted Display (HMD)).
瀏覽三維虛擬空間講究的是一種虛擬實境的沉浸感，然而，習知技術除了採用頭戴式顯示器讓使用者可以體驗較佳的沉浸感外，利用一般平面瀏覽器並不容易有太好的沉浸感，並且實用性仍然受限於電腦設備的三維影像處理能力。Browsing a three-dimensional virtual space is about the immersive feel of virtual reality. In conventional techniques, however, apart from using a head-mounted display to give the user better immersion, it is difficult to achieve good immersion with an ordinary flat browser, and practicality remains limited by the 3D image processing capability of the computer equipment.
為提供較佳瀏覽三維虛擬空間的體驗，揭露書提出一種利用虛擬物件實現第三人稱視角的方法與系統，其中特別地以不同處理程序處理三維空間影像的顯示以及提供使用者操作第三人稱視角的虛擬物件。To provide a better experience of browsing a three-dimensional virtual space, this disclosure proposes a method and system for implementing a third-person perspective with a virtual object, in which, notably, the display of the 3D space image and the virtual object the user operates from the third-person perspective are handled by different processing programs.
所述系統主要設有一伺服器,其中的資料庫儲存一或多個場景的三維空間影像數據,以及提供執行於使用者裝置的三維空間瀏覽器,三維空間瀏覽器自伺服器下載一場景的三維空間影像數據,可經第一處理程序重構後顯示三維空間,以第二處理程序運作虛擬物件。The system mainly includes a server, wherein the database stores the 3D space image data of one or more scenes, and provides a 3D space browser executed on the user device, and the 3D space browser downloads the 3D space image data of a scene from the server. The spatial image data can be reconstructed by the first processing program to display the three-dimensional space, and the virtual object can be operated by the second processing program.
根據利用虛擬物件實現第三人稱視角的方法的實施例，在使用者裝置中執行三維空間瀏覽器，經連線伺服器後，自伺服器載入三維空間影像數據與虛擬物件影像，經重構後形成顯示在此三維空間瀏覽器的三維空間影像與設於特定位置的虛擬物件。其中以第一處理程序處理三維空間影像，以第二處理程序運作虛擬物件。當使用者操作虛擬物件時，第二處理程序根據使用者操作虛擬物件產生操作指令，第一處理程序於是接收到第二處理程序的操作指令，即可依據操作指令更新三維空間影像，例如自伺服器載入對應某一視角與一站位的三維空間影像數據，此三維空間影像數據可為一全景圖數據。According to an embodiment of the method for implementing a third-person perspective with a virtual object, a 3D space browser is executed on the user device. After connecting to the server, it loads 3D space image data and a virtual object image from the server, which after reconstruction form the 3D space image displayed in the browser and a virtual object placed at a specific position. A first processing program handles the 3D space image, and a second processing program operates the virtual object. When the user operates the virtual object, the second processing program generates an operation command accordingly; the first processing program receives this command and updates the 3D space image, for example by loading from the server the 3D space image data corresponding to a certain viewing angle and station, which may be panorama data.
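The command flow above — the second processing program turning user input into an operation command, and the first processing program updating the 3D space from it — can be sketched in browser-style JavaScript. This is a minimal illustration, not the patent's implementation; the key names, step sizes, and panorama naming scheme are all assumptions.

```javascript
// Second processing program (sketch): interpret a key press as an operation command.
function makeOperationCommand(state, key) {
  const step = { ArrowUp: +1, ArrowDown: -1 };   // move between stations
  const turn = { ArrowLeft: -90, ArrowRight: +90 }; // turn in 90-degree steps
  if (key in step) {
    return { action: 'move', station: state.station + step[key], heading: state.heading };
  }
  if (key in turn) {
    return { action: 'turn', station: state.station, heading: (state.heading + turn[key] + 360) % 360 };
  }
  return null; // unrecognized input produces no command
}

// First processing program (sketch): apply the command and name the panorama
// tile that would be requested from the server for this angle and station.
function applyCommand(state, cmd) {
  if (!cmd) return state;
  return { station: cmd.station, heading: cmd.heading,
           panorama: `pano_s${cmd.station}_h${cmd.heading}` }; // hypothetical tile id
}

let state = { station: 0, heading: 0 };
state = applyCommand(state, makeOperationCommand(state, 'ArrowUp'));    // walk forward
state = applyCommand(state, makeOperationCommand(state, 'ArrowRight')); // turn right
```

In a real browser the two functions would live in separate processing programs, with the command object as the only thing passed between them, which is the communication mechanism the disclosure describes.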
優選地，所述方法通過執行於三維空間瀏覽器的一瀏覽器程式語言處理顯示在三維空間瀏覽器的三維空間與該虛擬物件，並以一層疊樣式表設定顯示畫面的樣式及佈局。Preferably, the method processes the 3D space and the virtual object displayed in the 3D space browser with a browser programming language executed in the browser, and uses a cascading style sheet (CSS) to set the style and layout of the displayed screen.
進一步地,虛擬物件可為動態虛擬物件,通過第二處理程序,虛擬物件可以動畫回應使用者的操作指令。Further, the virtual object can be a dynamic virtual object, and through the second processing program, the virtual object can respond to the user's operation instruction with animation.
進一步地，虛擬物件可通過層疊樣式表以動畫切換影格的方式模擬行走效果，產生向前走的動畫、向後走的動畫，或是左右轉的動畫。Furthermore, the virtual object can simulate a walking effect by switching animation frames through the cascading style sheet, producing animations of walking forward, walking backward, or turning left and right.
進一步地，除使用者操作的虛擬物件外，伺服器可再以不同的處理程序於三維空間內另外建立一或多個以人工智慧技術運作的虛擬物件，當使用者操作虛擬物件，即可通過一互動功能與其他一或多個虛擬物件互動。Furthermore, in addition to the virtual object operated by the user, the server may use different processing programs to create one or more additional virtual objects in the 3D space that are operated by artificial intelligence. When the user operates the virtual object, an interactive function allows it to interact with the one or more other virtual objects.
進一步地，通過伺服器同一時間還可服務連線伺服器的多個使用者，並以不同的處理程序處理多個使用者登入由三維空間影像所呈現的三維空間操作各自的虛擬物件產生的操作指令。同樣地，使用者可操作虛擬物件，以通過互動功能與三維空間內其他一或多個操作虛擬物件的使用者互動。Furthermore, the server can serve multiple users connected to it at the same time, using different processing programs to handle the operation commands generated by the multiple users who log into the 3D space presented by the 3D space image and operate their respective virtual objects. Likewise, a user can operate the virtual object to interact, through the interactive function, with one or more other users operating virtual objects in the same 3D space.
為使能更進一步瞭解本發明的特徵及技術內容,請參閱以下有關本發明的詳細說明與圖式,然而所提供的圖式僅用於提供參考與說明,並非用來對本發明加以限制。In order to further understand the features and technical content of the present invention, please refer to the following detailed description and drawings related to the present invention. However, the provided drawings are only for reference and description, and are not intended to limit the present invention.
以下是通過特定的具體實施例來說明本發明的實施方式，本領域技術人員可由本說明書所公開的內容瞭解本發明的優點與效果。本發明可通過其他不同的具體實施例加以施行或應用，本說明書中的各項細節也可基於不同觀點與應用，在不悖離本發明的構思下進行各種修改與變更。另外，本發明的附圖僅為簡單示意說明，並非依實際尺寸的描繪，事先聲明。以下的實施方式將進一步詳細說明本發明的相關技術內容，但所公開的內容並非用以限制本發明的保護範圍。The embodiments below illustrate how the present invention may be implemented, and those skilled in the art can understand its advantages and effects from the content disclosed in this specification. The present invention can be implemented or applied through other specific embodiments, and the details in this specification may be modified and changed in various ways based on different viewpoints and applications without departing from the concept of the invention. In addition, the drawings of the present invention are simplified schematic illustrations and are not drawn to actual scale, as stated in advance. The following embodiments describe the relevant technical content of the present invention in further detail, but the disclosure is not intended to limit its scope of protection.
應當可以理解的是,雖然本文中可能會使用到“第一”、“第二”、“第三”等術語來描述各種元件或者訊號,但這些元件或者訊號不應受這些術語的限制。這些術語主要是用以區分一元件與另一元件,或者一訊號與另一訊號。另外,本文中所使用的術語“或”,應視實際情況可能包括相關聯的列出項目中的任一個或者多個的組合。It should be understood that although terms such as "first", "second", and "third" may be used herein to describe various elements or signals, these elements or signals should not be limited by these terms. These terms are mainly used to distinguish one component from another component, or one signal from another signal. In addition, the term "or" used herein may include any one or a combination of more of the associated listed items depending on the actual situation.
揭露書關於一種利用虛擬物件實現第三人稱視角的方法與系統，其中技術目的之一是提供使用者在進入一個虛擬三維空間時，可以藉由操作一個虛擬物件(如一虛擬之人)瀏覽三維空間，以此虛擬物件實現第三人稱視角的三維空間導引的技術。This disclosure relates to a method and system for implementing a third-person perspective with a virtual object. One of its technical purposes is to let a user who enters a virtual three-dimensional space browse that space by operating a virtual object (such as a virtual person), so that the virtual object provides third-person guided navigation of the 3D space.
然而，為了解決同時處理三維影像與其中虛擬物件耗費運算資源的問題，揭露書所提出的利用虛擬物件實現第三人稱視角的方法利用一個支援三維影像處理的瀏覽程式，讓三維空間數據處理與虛擬物件處理以不同的處理程序運行，並提供虛擬物件與三維空間兩個處理程序的溝通機制，能在優化運算程序的方式下達到利用虛擬物件實現第三人稱視角的目的。However, to solve the problem that simultaneously processing the 3D image and the virtual objects within it consumes computing resources, the proposed method uses a browsing program that supports 3D image processing, runs 3D space data processing and virtual object processing as different processing programs, and provides a communication mechanism between the two processing programs, so that the third-person perspective achieved with the virtual object is attained while optimizing the computing workload.
圖1顯示利用虛擬物件實現第三人稱視角的系統實施例圖，其中伺服端設有伺服器101，伺服器101提供一或多個場景的三維空間影像數據，根據使用者傳送的請求發送對應的數據，並提供使用者虛擬物件相關數據。另提供在使用者端的使用者裝置中執行對應服務的瀏覽器，經瀏覽器連線伺服器101後，可以選擇場景與虛擬物件，瀏覽器將啟始至少兩個處理程序，第一處理程序用於處理三維空間影像數據，包括連線伺服器101載入對應的三維空間影像數據以及重構顯示三維空間，第二處理程序用於處理虛擬物件的顯示與操作。Fig. 1 shows an embodiment of a system for implementing a third-person perspective with a virtual object. The server side includes a server 101 that provides 3D space image data for one or more scenes, sends the corresponding data in response to user requests, and also provides data related to the user's virtual object. A browser executing the corresponding service runs on the user device; after connecting to the server 101 through the browser, the user can select a scene and a virtual object, and the browser starts at least two processing programs: a first processing program handles the 3D space image data, including connecting to the server 101 to load the corresponding data and reconstructing and displaying the 3D space, while a second processing program handles the display and operation of the virtual object.
根據圖示的實施例，在系統端設有提供三維空間瀏覽服務的伺服器101與資料庫110，資料庫110中儲存有針對一或多個場景的三維空間影像數據111以及使用者資料113，伺服器101通過網路10提供服務給各端使用者裝置103、105，讓使用者裝置103、105通過一特定應用程式存取伺服器101中的資源。根據實施例，系統可通過應用程式提供一或多個使用者選擇進入某一場景的虛擬三維空間中，例如虛擬展示間、房屋物件、虛擬教學現場或遊戲空間等，並提供多人之間互動，或是使用者可與其中人工智能技術運作的其他虛擬物件互動。According to the illustrated embodiment, the system side includes a server 101 providing a 3D space browsing service and a database 110, which stores 3D space image data 111 for one or more scenes as well as user data 113. The server 101 provides services over the network 10 to user devices 103 and 105, letting them access resources on the server 101 through a specific application. According to the embodiment, the system can, through the application, let one or more users choose to enter the virtual 3D space of a scene, such as a virtual showroom, a property listing, a virtual classroom, or a game space, and provides interaction among multiple users, or lets a user interact with other virtual objects in the space operated by artificial intelligence.
根據實施例，利用虛擬物件實現第三人稱視角系統提供使用者端的使用者裝置103、105安裝一應用程式，如三維空間瀏覽器，如圖2所示的三維空間瀏覽器200，並提供使用者設定進入一個三維空間210並可執行瀏覽內容的虛擬物件220，例如虛擬的人物、動物、車輛或各種物品等。According to the embodiment, the system for implementing a third-person perspective with a virtual object has the user devices 103 and 105 install an application, such as a 3D space browser — the 3D space browser 200 shown in Fig. 2 — and lets the user set up a virtual object 220, such as a virtual person, animal, vehicle, or other item, that enters a 3D space 210 and browses its content.
根據圖2所示利用虛擬物件實現第三人稱視角的實施例示意圖，其中使用者端的裝置可自伺服器下載一特定場景的三維空間影像數據，並經使用者裝置中執行的三維空間瀏覽器200處理，經第一處理程序重構後形成一個視覺化的三維空間210，再通過另一處理程序(第二處理程序)運作一虛擬物件220，包括處理虛擬物件的顯示與操作，讓使用者在三維空間瀏覽器200上實現第三人稱視角瀏覽三維空間210的目的。According to the schematic embodiment of Fig. 2, the user device can download the 3D space image data of a specific scene from the server. The 3D space browser 200 executed on the user device then processes it: a first processing program reconstructs it into a visualized 3D space 210, and another processing program (the second processing program) operates a virtual object 220, including handling its display and operation, so that the user browses the 3D space 210 from a third-person perspective in the 3D space browser 200.
其中特別的是，在三維空間瀏覽器200中建立的虛擬物件220與通過三維空間瀏覽器200顯示的三維空間210為不同的處理程序，但可通過瀏覽器技術讓使用者操作虛擬物件220產生的指令傳送到三維空間210的處理程序。如此達成的技術效果是，虛擬物件220用於模擬在三維空間210內移動，讓使用者在使用者裝置上操作，代替使用者在此虛擬的三維空間210內以第三人稱視角移動，用以模擬使用者在三維空間210內移動，且因為虛擬物件220不必使用立體模型，減少三維物件的運算，可節省系統效能消耗，讓更低階的裝置可以體驗以第三人稱視角瀏覽三維空間210的沉浸感。Notably, the virtual object 220 created in the 3D space browser 200 and the 3D space 210 displayed by the browser run as different processing programs, but browser technology lets the commands generated when the user operates the virtual object 220 be passed to the processing program of the 3D space 210. The technical effect is that the virtual object 220, operated by the user on the user device, simulates movement within the 3D space 210 on the user's behalf, moving through the virtual space from a third-person perspective. Because the virtual object 220 does not require a solid model, computation on 3D objects is reduced, saving system resources and letting lower-end devices experience the immersion of browsing the 3D space 210 from a third-person perspective.
在另一實施方案中，由伺服器主導提供使用者可以進入瀏覽的三維空間210內的活動，如以三維空間210實現的特定空間導覽、虛擬展示間、遊戲空間或教學現場，可讓多人同時進入同一個三維空間210，更使得每個參與的使用者可以在其中看到代表其他使用者的虛擬物件。In another implementation, the server hosts activities within the 3D space 210 that users can enter and browse, such as a guided tour, a virtual showroom, a game space, or a teaching site realized as a 3D space 210. Multiple users can enter the same 3D space 210 simultaneously, and each participating user can see the virtual objects representing the other users there.
進一步地，根據實施例，揭露書提出的系統應用的三維空間的產生方式可以先取得場景的全景圖，再以空間掃描儀取得相同空間的深度資訊，建立此場景三維模型，經組合全景圖後形成三維空間影像。Further, according to an embodiment, the 3D space used by the disclosed system can be generated by first obtaining a panorama of the scene, then using a spatial scanner to obtain depth information of the same space to build a 3D model of the scene, and combining it with the panorama to form the 3D space image.
其中技術之一是由二維影像轉換為三維空間影像得出，其中運用的深度學習(deep learning)方法以人工神經網路為架構，對資料進行表徵學習的演算法，自動取得影像中足以代表影像特性的特徵(feature)，舉例來說，可如圖3描述的雙投射網路(Dual-Projection Network,DuLa-Net)的深度學習方法的流程實施例，其中同時採用圖4描述的深度殘差網路(Deep Residual Network,ResNet)的深度學習方法。One of the techniques derives a 3D space image from 2D images. The deep learning method used is an algorithm built on artificial neural networks that performs representation learning on the data, automatically extracting features that sufficiently characterize the image. For example, the Dual-Projection Network (DuLa-Net) deep learning method follows the flow illustrated in Fig. 3, which also employs the Deep Residual Network (ResNet) deep learning method illustrated in Fig. 4.
雙投射網路為一種深度學習架構(deep learning framework)，用以根據單一全彩全景圖(RGB panorama)預測一個立體空間的格局(3D room layout)，其中，為了要得到更佳的預測準確性(prediction accuracy)，可先得出兩個預測結果，例如一為等距長方全景視圖(equirectangular panorama-view)，另一為透視天花板視圖(perspective ceiling view)，每個預測得出的全景視圖分別包括空間格局(room layout)的不同線索，使得得到更為準確的預測空間格局。其結果更能在深度學習中用於訓練預測平面圖與格局之用，若要學習更複雜的空間格局，更可引入其他包括有不同角落(corner)的空間格局的立體數據。The Dual-Projection Network is a deep learning framework for predicting a 3D room layout from a single RGB panorama. To obtain better prediction accuracy, two predictions are produced first, for example one an equirectangular panorama view and the other a perspective ceiling view; each predicted view contains different cues about the room layout, yielding a more accurate layout prediction. The results can further be used in deep learning to train floor-plan and layout prediction, and to learn more complex layouts, additional 3D data covering layouts with different corners can be introduced.
如圖所示，在雙投射網路的深度學習方法中，採用了兩個影像處理技術，在等距長方全景視圖的應用中，先輸入一個空間內特定區域的全景圖(301)，通過特徵擷取(303)得到等距長方全景視圖，其中特徵擷取(303)的步驟利用了深度殘差網路的深度學習方法，用以識別與分類出影像中的空間格局，形成全景機率概圖(305)。另一方面，在透視天花板視圖的應用中，先取得所述區域的天花板視圖(302)，同樣在特徵擷取(304)可採用深度殘差網路的深度學習方法，用以識別與分類出影像中的關於天花板的空間特徵，形成平面機率概圖(306)。之後，雙投射網路的深度學習方法進一步結合全景機率概圖(305)與平面機率概圖(306)，根據兩個概圖的影像資訊，經過一個平面圖的擬合過程(floor plan fitting)，形成一個二維平面圖(2D floor plan)(307)，並經立體空間建模後預測區域的立體空間格局(308)。之後的流程即繼續對空間內其他區域演算產生立體格局圖，再通過如圖3的流程得到各區域點、線、多個區域之間的連接關係，建立所述每個站點的全景圖。As shown in the figure, the DuLa-Net deep learning method employs two image processing techniques. For the equirectangular panorama view, a panorama (301) of a specific region of the space is input, and feature extraction (303) yields the equirectangular panorama view; the feature extraction step (303) uses the ResNet deep learning method to recognize and classify the spatial layout in the image, forming a panorama probability map (305). For the perspective ceiling view, the ceiling view (302) of the region is first obtained, and feature extraction (304) likewise applies the ResNet method to recognize and classify the spatial features of the ceiling in the image, forming a floor-plan probability map (306). The DuLa-Net method then combines the panorama probability map (305) and the floor-plan probability map (306); based on the image information of the two maps, a floor plan fitting process produces a 2D floor plan (307), and after 3D modeling the 3D room layout of the region is predicted (308). The subsequent flow continues to compute 3D layout diagrams for the other regions of the space, then derives the connection relationships among the points, lines, and regions through the flow of Fig. 3, establishing the panorama of each station.
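The core fusion step — combining the two probability maps before floor plan fitting — can be sketched as a weighted average plus a threshold. This is only an illustration of the idea; the actual DuLa-Net fitting is far more involved, and the weights, threshold, and toy grids below are assumptions.

```javascript
// Sketch: fuse a panorama-branch probability map and a ceiling-branch
// probability map into a binary floor-plan mask (1 = inside the floor plan).
// Weights and threshold are illustrative, not DuLa-Net's actual values.
function fuseProbabilityMaps(panoMap, ceilMap, wPano = 0.5, wCeil = 0.5, threshold = 0.5) {
  return panoMap.map((row, y) =>
    row.map((p, x) => (wPano * p + wCeil * ceilMap[y][x] >= threshold ? 1 : 0))
  );
}

const pano = [[0.9, 0.2], [0.8, 0.1]]; // toy 2x2 panorama-branch probabilities
const ceil = [[0.7, 0.4], [0.2, 0.1]]; // toy ceiling-branch probabilities
const mask = fuseProbabilityMaps(pano, ceil);
```

A floor plan fitting step would then run on `mask` to extract a clean polygonal 2D floor plan before 3D modeling.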
再參考圖4所示深度殘差網路的深度學習方法流程。深度殘差網路的深度學習方法為一種用於影像識別與分類用的深度學習方法，特色在於可快速收斂深度學習的誤差，也使得可以實現更深層的學習、提高準確度，使得有效而快速地識別(recognition)與分類(classification)空間格局。Refer again to Fig. 4 for the flow of the ResNet deep learning method. ResNet is a deep learning method for image recognition and classification; its hallmark is that it makes the training error converge quickly, enabling deeper learning and higher accuracy, so that spatial layouts can be recognized and classified effectively and quickly.
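The "residual" idea behind ResNet's fast convergence can be shown in a few lines: a block learns a residual function F(x) and outputs F(x) + x through an identity shortcut, so when F is near zero the block simply passes its input through. This toy version on plain vectors is a conceptual sketch, not a usable network.

```javascript
// Sketch of a ResNet residual block with an identity shortcut:
// output = x + F(x), where F is the learned residual function.
function residualBlock(x, F) {
  const fx = F(x);
  return x.map((v, i) => v + fx[i]); // element-wise shortcut addition
}

const halve = v => v.map(n => -0.5 * n);     // toy residual function F
const out = residualBlock([2, 4, 6], halve); // [2,4,6] + [-1,-2,-3]
```

Stacking many such blocks is what lets very deep networks train without the error signal vanishing, which is why ResNet suits the layout recognition and classification described above.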
如示意圖所示，先取得空間內各區域的全景圖401，圖中示意表示有客廳、浴室與臥室的全景圖，之後經過深度殘差網路403的演算，包括影像處理431與識別與分類432等深度學習過程，利用深度學習從大數據建立描述各種空間型態的資料集(data set)，例如，資料集分別記載了描述一個室內空間的浴室、臥室、餐廳、廚房與客廳等區域的數據，此例中，最後依照深度學習得到的資料集的數據判斷出各區域為客廳405a、浴室405b與臥室405c等格局。As shown in the diagram, panoramas 401 of each region in the space are first obtained — the figure shows panoramas of a living room, a bathroom, and a bedroom. They then pass through the computation of the deep residual network 403, including deep learning processes such as image processing 431 and recognition and classification 432. Deep learning builds, from big data, data sets describing various spatial types; for example, the data sets record data describing the bathroom, bedroom, dining room, kitchen, and living room regions of an indoor space. In this example, based on the data sets obtained through deep learning, the regions are finally judged to be layouts such as living room 405a, bathroom 405b, and bedroom 405c.
基於上述技術建立的三維空間影像數據上傳至伺服器後，讓伺服器可以通過實作第三人稱視角虛擬物件以提供三維空間瀏覽的服務，其中技術概念可參考圖5所示的系統運作方法的實施例圖，並可同時參考圖6所示利用虛擬物件實現第三人稱視角的方法的概念流程圖。After the 3D space image data built with the above techniques is uploaded to the server, the server can provide a 3D space browsing service by implementing a third-person-perspective virtual object. The technical concept can be seen in the embodiment of the system operation method shown in Fig. 5, together with the conceptual flow chart of the method shown in Fig. 6.
使用者可以通過三維空間瀏覽器自伺服器507載入三維空間影像數據511以及虛擬物件影像513，通過執行於三維空間瀏覽器515的瀏覽器程式語言503(如Javascript)處理各種要顯示的物件(三維空間、虛擬物件等)，包括連線伺服器、傳送指令以及載入影像等程序，其中還根據三維空間影像數據511重構三維空間影像，以及根據虛擬物件影像513形成靜態或動態虛擬物件，其中以層疊樣式表(CSS)509設定顯示畫面的樣式及佈局，產生顯示畫面，虛擬物件可設定在畫面的特定位置。特別的是，在執行三維空間瀏覽器515的使用者裝置中，通過處理器建立分別處理三維空間影像與虛擬物件影像的處理程序，其中以第一處理程序處理三維空間影像，以第二處理程序運作虛擬物件。Through the 3D space browser, the user loads the 3D space image data 511 and the virtual object image 513 from the server 507. A browser programming language 503 (such as Javascript) executed in the 3D space browser 515 handles the various objects to be displayed (the 3D space, the virtual object, and so on), including connecting to the server, sending commands, and loading images. It also reconstructs the 3D space image from the 3D space image data 511 and forms a static or dynamic virtual object from the virtual object image 513, while a cascading style sheet (CSS) 509 sets the style and layout of the display, producing the displayed screen with the virtual object placed at a specific position. Notably, in the user device executing the 3D space browser 515, the processor creates separate processing programs for the 3D space image and the virtual object image: a first processing program handles the 3D space image, and a second processing program operates the virtual object.
接著，可參考圖6所示實施例流程圖，使用者通過使用者裝置操作虛擬物件(501)，操作方式例如通過滑鼠、鍵盤或特定輸入手段(如頭戴式顯示器、行動裝置等)操作顯示在三維空間瀏覽器的虛擬物件，經瀏覽器程式語言503偵測使用者的操作指令，並解釋產生操作指令505，由第二處理程序處理操作指令505(步驟S601)。根據實施例，第二處理程序將根據使用者操作虛擬物件產生的操作指令505調整虛擬物件在三維空間的視角與站位。以動態的虛擬物件為例，通過第二處理程序，虛擬物件將以動畫回應使用者的操作指令505，如前進、後退、向左或向右移動等(步驟S603)。Next, referring to the flow chart of Fig. 6, the user operates the virtual object through the user device (501), for example with a mouse, keyboard, or specific input means (such as a head-mounted display or mobile device) acting on the virtual object displayed in the 3D space browser. The browser programming language 503 detects the user's action and interprets it into an operation command 505, which is handled by the second processing program (step S601). According to the embodiment, the second processing program adjusts the viewing angle and station of the virtual object in the 3D space according to the operation command 505. Taking a dynamic virtual object as an example, through the second processing program the virtual object responds to the operation command 505 with animation, such as moving forward, backward, left, or right (step S603).
以虛擬人物為例，通過層疊樣式表509以動畫切換影格的方式改變虛擬人物的姿態與在三維空間內的視角(步驟S605)，例如利用層疊樣式表處理動畫切換影格，模擬使用者行走效果，產生向前走的動畫、向後走的動畫，或是左右轉的動畫，最後根據指令到達某一站位。Taking a virtual character as an example, the cascading style sheet 509 changes the character's posture and viewing angle in the 3D space by switching animation frames (step S605); for example, CSS-driven frame switching simulates the user's walking, producing animations of walking forward, walking backward, or turning left and right, until the character finally reaches a station according to the command.
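The frame-switching walk effect typically uses a sprite sheet stepped through with CSS (e.g. `animation: walk 0.5s steps(4) infinite;` on a `background-position` keyframe). The sketch below computes the same per-frame offsets in JavaScript; the 64px frame width and 4-frame cycle are illustrative assumptions, not values from the patent.

```javascript
// Sketch of sprite-sheet frame switching for a walk cycle. Equivalent CSS:
//   @keyframes walk { to { background-position: -256px 0; } }
//   .walking { animation: walk 0.5s steps(4) infinite; }
function walkFrameOffset(frameIndex, frameWidth = 64, frameCount = 4) {
  // Wrap around the sprite sheet so the walk cycle loops forever.
  return -(frameIndex % frameCount) * frameWidth; // px offset into the sheet
}

const offsets = [0, 1, 2, 3, 4].map(i => walkFrameOffset(i));
```

Each tick, the computed offset would be applied as the character element's `background-position`, showing the next posture image without any 3D model computation.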
在此一提的是，以不同處理程序處理三維空間影像與虛擬物件時，分別以不同顯示層顯示兩種影像，並經組合後顯示在三維空間瀏覽器上。而虛擬物件可以是一種GIF或PNG動畫實作，但不侷限於此虛擬物件動畫之實作。利用不同的姿態影像模擬在三維空間內移動的狀態，如人在空間內行走或跑步的樣子。然而，虛擬物件的動畫並不限於此，可包含任何可以節省使用立體模型需要的運算資源的動畫。It is worth noting that when the 3D space image and the virtual object are handled by different processing programs, the two images are rendered on different display layers and composited for display in the 3D space browser. The virtual object may be implemented as a GIF or PNG animation, though the implementation is not limited to these; different posture images simulate the state of moving through the 3D space, such as a person walking or running. The animation of the virtual object is likewise not limited to this and may include any animation that saves the computing resources a solid model would require.
經第二處理程序的處理，此時表示使用者操作虛擬物件到達一站位，並且具有朝向一個方位的視角，此時，第一處理程序能即時回應第二處理程序產生的指令。當處理三維空間影像的第一處理程序接收到第二處理程序的操作指令505後，依據操作指令505更新三維空間影像，更新的畫面主要為再自伺服器507載入對應所述視角與站位的三維空間影像數據511。After processing by the second processing program, the user has operated the virtual object to a station with a viewing angle facing a certain direction, and the first processing program can respond immediately to the commands generated by the second processing program. When the first processing program, which handles the 3D space image, receives the operation command 505 from the second processing program, it updates the 3D space image accordingly; the updated screen mainly results from loading again, from the server 507, the 3D space image data 511 corresponding to that viewing angle and station.
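Since the first processing program re-requests panorama data every time the viewpoint changes, a small cache keyed by station and heading can avoid re-downloading tiles for stations the user has already visited. This is an optimization sketch of my own, not something the patent specifies; the loader is injected so the example stays network-free, and the key format is an assumption.

```javascript
// Sketch: memoize panorama loads by (station, heading) so revisiting a
// station reuses the already-downloaded tile instead of hitting the server.
function makePanoramaCache(load) {
  const cache = new Map();
  let misses = 0;
  return {
    get(station, heading) {
      const key = `${station}:${heading}`;
      if (!cache.has(key)) {
        misses += 1;                         // only count real server loads
        cache.set(key, load(station, heading));
      }
      return cache.get(key);
    },
    misses: () => misses,
  };
}

const panoCache = makePanoramaCache((s, h) => `pano-bytes(${s},${h})`); // stand-in loader
panoCache.get(1, 90);               // first visit: loads from "server"
panoCache.get(2, 0);                // new station: loads
const again = panoCache.get(1, 90); // revisit: served from cache
```

In a real deployment the injected loader would be the fetch call to the server 507, and an eviction policy would bound memory use.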
同時，在三維空間瀏覽器515顯示三維空間影像，經成像處理後(步驟S607)，通過瀏覽器程式語言503結合虛擬物件，並顯示在三維空間瀏覽器上(步驟S609)。Meanwhile, the 3D space image is displayed in the 3D space browser 515; after imaging processing (step S607), the browser programming language 503 composites it with the virtual object and displays the result in the 3D space browser (step S609).
在一實施例中，通過伺服器507可於同一時間服務連線伺服器507的多個使用者，並處理多個使用者登入由三維空間影像所呈現的三維空間產生的操作指令505，伺服器507更能處理多個使用者在同一三維空間內的互動，包括讓各使用者從三維空間影像中得知或看到其他登入相同三維空間的使用者，相關實施例可參考圖7所示利用虛擬物件實現第三人稱視角的實施例示意圖。In one embodiment, the server 507 can serve multiple users connected to it at the same time and process the operation commands 505 generated by multiple users logged into the 3D space presented by the 3D space image. The server 507 can further handle interaction among multiple users within the same 3D space, including letting each user learn of or see, in the 3D space image, the other users logged into the same space. For a related embodiment, refer to the schematic of Fig. 7.
根據一實施例，此實施例示意圖顯示的三維空間710中的多個虛擬物件可以為伺服器以人工智能技術運作的虛擬物件，以此圖例而言，近端顯示一個具有特定視角與一站位的虛擬物件一721，下方有一排功能按鍵730，提示操作此虛擬物件一721的使用者可以操作的功能，此例顯示為可以與三維空間710內的其他虛擬人物(如虛擬物件二722)對話的對話框功能、開啟使用者裝置語音對話功能的麥克風功能，以及控制音量的控制按鈕。According to one embodiment, the multiple virtual objects in the 3D space 710 shown in this schematic can be virtual objects operated by the server with artificial intelligence. In this illustration, the near end shows a first virtual object 721 with a particular viewing angle and station; below it is a row of function buttons 730 indicating the functions available to the user operating the first virtual object 721. In this example they include a dialog-box function for conversing with other virtual characters in the 3D space 710 (such as a second virtual object 722), a microphone function that enables the user device's voice chat, and buttons controlling the volume.
特別的是，此例中，除使用者操作的虛擬物件一721外，伺服器再以不同的處理程序於此三維空間內另外建立一或多個以人工智慧技術運作的虛擬物件，如虛擬物件二722，此可稱為非玩家腳色(non-player character,NPC)。其應用例如，三維空間瀏覽器700顯示一展示間的虛擬三維空間，其中有各種展品，並由伺服器通過智能機器人實作的虛擬客服人員，可由圖例顯示的虛擬物件二722表示，三維空間內多個虛擬物件由不同的處理程序個別運作，其中可採用智能機器人的技術實作每一個虛擬物件，虛擬物件二722可自動回應使用者所操作的虛擬物件一721產生的對話信息，使用者可以通過畫面提示的功能按鍵730開啟各種互動功能，以與其他虛擬物件互動，例如可與所述虛擬客服人員進行文字或語音對話。Notably, in this example, besides the user-operated first virtual object 721, the server additionally creates, with different processing programs, one or more AI-operated virtual objects in the 3D space, such as the second virtual object 722, which may be called a non-player character (NPC). As an application, the 3D space browser 700 displays the virtual 3D space of a showroom containing various exhibits, with a virtual customer service agent implemented by the server as an intelligent bot, represented in the figure by the second virtual object 722. The multiple virtual objects in the 3D space run in separate processing programs, and each can be implemented with intelligent-bot technology; the second virtual object 722 can automatically respond to the dialog messages generated by the user-operated first virtual object 721. Through the on-screen function buttons 730, the user can start various interactive functions to interact with other virtual objects, for example holding a text or voice conversation with the virtual customer service agent.
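The NPC's automatic reply to dialog messages can be sketched as a tiny rule table. The patent only says the agent is implemented with intelligent-bot technology; the keyword rules, reply strings, and fallback below are purely illustrative stand-ins for such a bot.

```javascript
// Sketch: a server-side NPC (the "second virtual object") auto-replying to
// dialog messages from the user's virtual object. Rules are illustrative.
function npcReply(message) {
  const rules = [
    { match: /price|cost/i, reply: 'This exhibit is available on request.' },
    { match: /hello|hi\b/i, reply: 'Welcome to the showroom!' },
  ];
  const hit = rules.find(r => r.match.test(message));
  // Fall back to a clarifying question when no rule matches.
  return hit ? hit.reply : 'Could you tell me more about what you are looking for?';
}

const greeting = npcReply('Hello there');
const fallback = npcReply('Where is the exit?');
```

A production virtual customer service agent would replace the rule table with a dialogue model while keeping the same request/reply interface between the NPC's processing program and the user's.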
更者，於再一實施例中，圖7顯示為三維空間瀏覽器700顯示的一個三維空間，畫面近端為使用者所操作的虛擬物件一721，同樣可通過互動功能，如文字或語音對話功能，以第三人稱視角與其他進入相同三維空間的使用者操作各自的虛擬物件(如虛擬物件二722)執行互動。此例中，虛擬物件二722可為另一通過伺服器登入此三維空間的使用者，代表該使用者於此三維空間內活動，伺服器將以不同的處理程序處理多個使用者登入由三維空間影像所呈現的三維空間產生的操作指令。Moreover, in yet another embodiment, Fig. 7 shows a 3D space displayed by the 3D space browser 700, with the user-operated first virtual object 721 at the near end of the screen. Through interactive functions such as text or voice conversation, the user can likewise interact from a third-person perspective with other users who have entered the same 3D space and operate their own virtual objects (such as the second virtual object 722). In this example, the second virtual object 722 may represent another user logged into this 3D space through the server, acting on that user's behalf within the space; the server handles, with different processing programs, the operation commands generated by the multiple users logged into the 3D space presented by the 3D space image.
綜合以上系統與各種實施例的描述,可參考圖8所示利用虛擬物件實現第三人稱視角的方法的實施例之一流程圖。Based on the above descriptions of the system and various embodiments, reference may be made to the flow chart of an embodiment of the method for realizing the third-person perspective by using virtual objects shown in FIG. 8 .
使用者操作使用者裝置開啟三維空間瀏覽器(步驟S801)，通過三維空間瀏覽器連線伺服器，根據選擇或是預設載入一場景的三維空間影像數據(步驟S803)，並也自伺服器載入經過選擇、自訂或是預設的虛擬物件(步驟S805)，在使用者裝置中，通過處理器以不同處理程序處理並顯示畫面，其中以第一處理程序處理三維空間影像，以第二處理程序處理虛擬物件的運作(步驟S807)。The user operates the user device to open the 3D space browser (step S801), connects to the server through it, loads the 3D space image data of a scene by selection or by default (step S803), and also loads a selected, customized, or default virtual object from the server (step S805). In the user device, the processor processes and displays the screen with different processing programs: a first processing program for the 3D space image and a second processing program for the operation of the virtual object (step S807).
接著可以利用輸入手段操作顯示在三維空間瀏覽器中特定位置的虛擬物件，通過第二處理程序接收並判斷操作指令，並交由不同處理程序處理，如處理三維空間影像的第一處理程序(步驟S809)，此時，由第一處理程序根據操作指令控制三維空間的角度(步驟S811)，其中方法如上述實施例，可以根據操作指令自伺服器載入對應某一視角在某一站位的三維空間影像數據(步驟S813)，重構出更新後的三維空間影像。若此三維空間有其他使用者登入，根據伺服器提供的信息，可以在此三維空間內得到其他虛擬物件的資訊，包括顯示在第三人稱視角內可看到的其他虛擬物件(步驟S815)，並通過三維空間瀏覽器顯示出來(步驟S817)。The user can then use an input means to operate the virtual object displayed at a specific position in the 3D space browser. The second processing program receives and interprets the operation command and hands it to another processing program, such as the first processing program that handles the 3D space image (step S809). The first processing program then controls the angle of the 3D space according to the operation command (step S811); as in the embodiments above, the 3D space image data corresponding to a certain viewing angle at a certain station can be loaded from the server according to the command (step S813), and the updated 3D space image is reconstructed. If other users are logged into this 3D space, information on the other virtual objects in it can be obtained from the server, including displaying the other virtual objects visible within the third-person view (step S815), and the result is shown through the 3D space browser (step S817).
圖9繼續顯示利用虛擬物件實現第三人稱視角的方法的實施例之二流程圖。FIG. 9 continues to show the flow chart of the second embodiment of the method for realizing the third-person perspective by using virtual objects.
在此實施例流程中，當同時有多個虛擬物件處於相同的三維空間時(步驟S901)，由伺服器的處理器通過不同處理程序顯示多個虛擬物件與三維空間影像(步驟S903)。當使用者操作代表自己的虛擬物件，產生操作指令，通過第二處理程序判斷操作指令，可以控制虛擬物件的動畫或是姿態回應此操作指令(步驟S905)，第一處理程序根據第二處理程序產生的信息控制三維空間對應特定視角與站位的角度，並取得對應的三維空間影像數據(步驟S907)。In the flow of this embodiment, when multiple virtual objects are in the same 3D space at the same time (step S901), the server's processor displays the multiple virtual objects and the 3D space image through different processing programs (step S903). When a user operates the virtual object representing them, an operation command is generated; the second processing program interprets it and can control the animation or posture of the virtual object in response (step S905), while the first processing program, based on the information produced by the second processing program, controls the angle of the 3D space for the specific viewing angle and station and obtains the corresponding 3D space image data (step S907).
同時，在此三維空間內有其他虛擬物件，每次的更新將再次取得三維空間內其他的虛擬物件的資訊(步驟S909)，並判斷虛擬物件之間是否觸發互動(步驟S911)，例如對話，若沒有，重新回到步驟S905；若觸發互動，將啟始一互動介面，如聊天介面、語音對話介面(步驟S913)，讓使用者可以操作進行互動，例如產生對話框，以文字或語音產生對話內容，伺服器即據此處理互動信息(步驟S915)。其中，若對應的虛擬物件為另一使用者所操作，該使用者應以其使用者裝置操作回應；若對應的虛擬物件為伺服器直接以人工智慧技術運行，將會自動產生回應。Meanwhile, other virtual objects exist in this 3D space, and each update again obtains their information (step S909) and determines whether interaction, such as a conversation, is triggered between virtual objects (step S911). If not, the flow returns to step S905; if interaction is triggered, an interactive interface such as a chat or voice-conversation interface is started (step S913) so the user can interact, for example producing a dialog box with text or voice content, and the server processes the interaction messages accordingly (step S915). If the corresponding virtual object is operated by another user, that user responds by operating their own user device; if the corresponding virtual object is run directly by the server with artificial intelligence, a response is generated automatically.
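One simple way to realize the interaction-trigger check of step S911 is a distance test: interaction starts when two virtual objects stand close enough. The patent does not specify the trigger condition, so the distance threshold and flat 2D positions below are illustrative assumptions.

```javascript
// Sketch of a step-S911-style trigger: interaction between two virtual
// objects begins when they are within "conversation range" of each other.
function interactionTriggered(a, b, maxDistance = 1.5) {
  const dx = a.x - b.x, dy = a.y - b.y;
  return Math.hypot(dx, dy) <= maxDistance; // Euclidean distance check
}

const player  = { x: 0, y: 0 };
const npcNear = { x: 1, y: 1 }; // distance ≈ 1.41, within range
const npcFar  = { x: 4, y: 0 }; // distance 4, out of range
const near = interactionTriggered(player, npcNear);
const far  = interactionTriggered(player, npcFar);
```

When the check returns true, the flow would open the chat or voice interface of step S913; otherwise it loops back to handling movement commands in step S905.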
綜上所述，根據上述利用虛擬物件實現第三人稱視角的方法與系統的實施方式，提出的伺服器提供使用者以瀏覽器瀏覽特定三維空間，並設定提供給使用者以第三人稱視角操作的虛擬物件，操作虛擬物件時，再根據其第三人稱視角與站位提供對應的三維空間影像數據，虛擬物件與三維空間影像是由不同處理程序處理，影像經重構後顯示給使用者觀看，提供優化的處理效能以及較佳的沉浸式瀏覽三維空間的體驗。In summary, according to the above embodiments of the method and system for implementing a third-person perspective with a virtual object, the proposed server lets the user browse a specific 3D space with a browser and sets up a virtual object for the user to operate from a third-person perspective. As the virtual object is operated, the corresponding 3D space image data is provided according to its third-person viewing angle and station. The virtual object and the 3D space image are handled by different processing programs, and the reconstructed images are displayed to the user, providing optimized processing performance and a better immersive experience of browsing the 3D space.
以上所公開的內容僅為本發明的優選可行實施例，並非因此侷限本發明的申請專利範圍，所以凡是運用本發明說明書及圖式內容所做的等效技術變化，均包含於本發明的申請專利範圍內。The content disclosed above describes only preferred feasible embodiments of the present invention and does not thereby limit the scope of its patent application; all equivalent technical changes made using the description and drawings of the present invention are therefore included within the scope of the patent application of the present invention.
10:網路
101:伺服器
110:資料庫
111:三維空間影像數據
113:使用者資料
103, 105:使用者裝置
200:三維空間瀏覽器
210:三維空間
220:虛擬物件
301:全景圖
303:特徵擷取
305:全景機率概圖
302:天花板視圖
304:特徵擷取
306:平面機率概圖
307:二維平面圖
308:立體格局圖
401:全景圖
403:深度殘差網路
431:影像處理
432:識別與分類
405a:客廳
405b:浴室
405c:臥室
501:使用者操作
503:瀏覽器程式語言
505:指令
507:伺服器
509:層疊樣式表
511:三維空間影像數據
513:虛擬物件影像
515:三維空間瀏覽器
700:三維空間瀏覽器
710:三維空間
721:虛擬物件一
722:虛擬物件二
730:功能按鍵
步驟S601~S609:利用虛擬物件實現第三人稱視角的流程
步驟S801~S817:利用虛擬物件實現第三人稱視角的流程
步驟S901~S915:利用虛擬物件實現第三人稱視角的流程10: Internet
101:Server
110: Database
111: Three-dimensional space image data
113:
圖1顯示利用虛擬物件實現第三人稱視角的系統實施例圖;FIG. 1 shows a diagram of a system embodiment using virtual objects to realize a third-person perspective;
圖2顯示利用虛擬物件實現第三人稱視角的實施例示意圖之一;FIG. 2 shows one of the schematic diagrams of an embodiment of using virtual objects to realize a third-person perspective;
圖3所示為雙投射網路的深度學習方法的流程實施例;Fig. 3 shows the process embodiment of the deep learning method of double-projection network;
圖4所示為深度殘差網路的深度學習方法的流程實施例;Fig. 4 shows the process embodiment of the deep learning method of depth residual network;
圖5顯示利用虛擬物件實現第三人稱視角的系統的運作方法實施例圖;FIG. 5 shows an embodiment diagram of the operation method of the system using virtual objects to realize the third-person perspective;
圖6顯示利用虛擬物件實現第三人稱視角的方法的概念流程圖;FIG. 6 shows a conceptual flowchart of a method for realizing a third-person perspective by using virtual objects;
圖7顯示利用虛擬物件實現第三人稱視角的實施例示意圖之二;FIG. 7 shows the second schematic diagram of an embodiment of realizing a third-person perspective by using virtual objects;
圖8顯示利用虛擬物件實現第三人稱視角的方法的實施例之一流程圖;以及FIG. 8 shows a flow chart of an embodiment of a method for realizing a third-person perspective using virtual objects; and
圖9顯示利用虛擬物件實現第三人稱視角的方法的實施例之二流程圖。FIG. 9 shows a flow chart of Embodiment 2 of a method for realizing a third-person perspective by using virtual objects.
200:三維空間瀏覽器 200: 3D space browser
210:三維空間 210: Three-dimensional space
220:虛擬物件 220: Virtual objects
Claims (16)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163288031P | 2021-12-10 | 2021-12-10 | |
US63/288031 | 2021-12-10 |
Publications (2)
Publication Number | Publication Date |
---|---|
TWI799195B true TWI799195B (en) | 2023-04-11 |
TW202324311A TW202324311A (en) | 2023-06-16 |
Family
ID=86948870
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW111110010A TWI799195B (en) | 2021-12-10 | 2022-03-18 | Method and system for implementing third-person perspective with a virtual object |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI799195B (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TW201637698A * | 2015-04-27 | 2016-11-01 | 樂線韓國股份有限公司 | Method for controlled object and device thereof |
| CN108536374A * | 2018-04-13 | 2018-09-14 | 网易(杭州)网络有限公司 | Virtual objects direction-controlling method and device, electronic equipment, storage medium |
| US20200289934A1 * | 2019-03-15 | 2020-09-17 | Sony Interactive Entertainment Inc. | Methods and systems for spectating characters in virtual reality views |
| CN112862935A * | 2021-03-16 | 2021-05-28 | 天津亚克互动科技有限公司 | Game character motion processing method and device, storage medium and computer equipment |
| CN113687761A * | 2021-08-24 | 2021-11-23 | 网易(杭州)网络有限公司 | Game control method and device, electronic equipment and storage medium |
| CN113694526A * | 2021-09-18 | 2021-11-26 | 腾讯科技(深圳)有限公司 | Method, system, device, apparatus, medium, and program for controlling virtual object |
- 2022-03-18: TW application TW111110010A filed; patent TWI799195B is active
Also Published As
| Publication number | Publication date |
|---|---|
| TW202324311A (en) | 2023-06-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11656736B2 (en) | | Computer simulation method with user-defined transportation and layout |
| US11948239B2 (en) | | Time-dependent client inactivity indicia in a multi-user animation environment |
| EP3304252B1 (en) | | Shared tactile interaction and user safety in shared space multi-person immersive virtual reality |
| US9064023B2 (en) | | Providing web content in the context of a virtual environment |
| Craig et al. | | Developing virtual reality applications: Foundations of effective design |
| CN105981076B (en) | | Synthesize the construction of augmented reality environment |
| TWI567659B (en) | | Theme-based augmentation of photorepresentative view |
| CN109885367B (en) | | Interactive chat implementation method, device, terminal and storage medium |
| JP2000512039A (en) | | Programmable computer graphic objects |
| JP2021103526A (en) | | Information providing device, information providing system, information providing method, and information providing program |
| WO2022259253A1 (en) | | System and method for providing interactive multi-user parallel real and virtual 3d environments |
| US20230419618A1 (en) | | Virtual Personal Interface for Control and Travel Between Virtual Worlds |
| TWI799195B (en) | | Method and system for implementing third-person perspective with a virtual object |
| CN114026524A (en) | | Animated human face using texture manipulation |
| Jin et al. | | Volumivive: An authoring system for adding interactivity to volumetric video |
| JPH11353080A (en) | | Three-dimensional image display method, recording medium recorded with program therefor, storage medium recorded with image arrangement data, and three-dimensional image display device |
| JP7409468B1 (en) | | Virtual space generation device, virtual space generation program, and virtual space generation method |
| US12093995B1 (en) | | Card ecosystem guest interface in virtual reality retail environments |
| Sherstyuk et al. | | Virtual roommates: sampling and reconstructing presence in multiple shared spaces |
| JP7409467B1 (en) | | Virtual space generation device, virtual space generation program, and virtual space generation method |
| JP2002245294A (en) | | Model dwelling house system utilizing virtual space |
| US20240193894A1 (en) | | Data processing method and apparatus, electronic device and storage medium |
| Checo | | "Non-verbal communication in Social VR": Exploring new ways to handshake |
| WO2023249918A1 (en) | | Virtual personal interface for control and travel between virtual worlds |
| Hoch et al. | | Individual and Group Interaction |