TWM594767U - Virtual character live streaming system - Google Patents

Virtual character live streaming system

Info

Publication number
TWM594767U
TWM594767U
Authority
TW
Taiwan
Prior art keywords
live broadcast
virtual character
live
terminal
interactive
Prior art date
Application number
TW108216467U
Other languages
Chinese (zh)
Inventor
賴錦德
Original Assignee
狂點軟體開發股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 狂點軟體開發股份有限公司 filed Critical 狂點軟體開發股份有限公司
Priority to TW108216467U priority Critical patent/TWM594767U/en
Publication of TWM594767U publication Critical patent/TWM594767U/en

Abstract

A virtual character live streaming system includes a live streaming management platform and a first terminal device. The live streaming management platform transmits an interaction signal and a live streaming animation, the live streaming animation being generated according to a control signal and a virtual character. The first terminal device detects a motion of a user to generate a detection signal corresponding to the motion, generates interaction information according to the interaction signal, and displays a setting interface and a live streaming interface, wherein the setting interface includes a virtual character setting area, the live streaming interface includes the virtual character and the interaction information, and the virtual character is configured through the virtual character setting area.

Description

Virtual character live streaming system

The present creation relates to a virtual character live streaming system, and more particularly to a virtual character live streaming system based on real images.

Virtual reality (VR) places a real-world user in a virtual space by using a computer to construct a three-dimensional virtual environment that approaches reality, for example by supplementing vision with audio and simulating real dynamic binocular parallax, so that the user feels immersed in the scene. Augmented reality (AR) combines precise calculation of camera position and angle with image analysis so that virtual characters on the screen can interact with the real world; in other words, the user can see imaginary objects appear in a view of the real scene.

The above virtual reality and augmented reality image service providers or game vendors usually provide a virtual image from an existing database for users to view. For example, in the medical field an augmented reality system may display an augmented reality image, such as a three-dimensional virtual organ image generated from ultrasound images, which medical personnel can operate and show to a patient for explanation. However, such image technologies only present virtual object images stored in a database; they provide no function for users to modify or customize the virtual images, and they can usually be displayed only in specific venues and with specific equipment, so their applications remain limited.

In view of this, some embodiments of the present creation provide a virtual character live streaming system.

A virtual character live streaming system according to an embodiment of the present creation includes a live streaming management platform and a first terminal device. The live streaming management platform includes a management communication module and a management computing unit. The management communication module transmits an interaction signal and a live streaming animation corresponding to a control signal, the live streaming animation being generated according to the control signal and a virtual character. The management computing unit is electrically connected to the management communication module and recognizes a detection signal to generate the control signal corresponding to the detection signal. The first terminal device includes a terminal communication module, a terminal display unit, a motion detection unit, and a terminal computing unit. The terminal computing unit is electrically connected to the terminal communication module, the terminal display unit, and the motion detection unit. The terminal communication module is communicatively connected to the management communication module, receives the interaction signal, and transmits the detection signal. The terminal display unit displays a setting interface and a live streaming interface, wherein the setting interface includes a virtual character setting area, the live streaming interface includes the virtual character and interaction information, and the virtual character is configured through the virtual character setting area. The motion detection unit detects a motion of a user. The terminal computing unit generates the detection signal corresponding to the motion and generates the interaction information according to the interaction signal.

Thereby, a user can connect to the live streaming management platform through the first terminal device, first customize a preferred virtual character on the setting interface, and then control the virtual character's expressions and motions in real time on the live streaming platform as a stand-in image for himself or herself. Live streams and real-time interaction can thus be shared with viewers through the live streaming platform, breaking through the traditional limitations of pre-recorded virtual images and fixed viewing locations, which makes live streaming more engaging and, in certain applications, preserves the user's personal privacy.

The purposes, technical content, features, and effects of the present creation will be more easily understood through the following detailed description of specific embodiments together with the accompanying drawings.

Embodiments of the present creation are described in detail below with reference to the drawings as examples. Many specific details are provided in the description so that the reader can fully understand the present creation; however, the present creation may still be practiced with some or all of these specific details omitted. The same or similar elements in the drawings are denoted by the same or similar reference numerals. Note in particular that the drawings are for illustration only and do not represent the actual size or number of elements; some details may not be fully drawn for the sake of clarity.

FIG. 1 is a schematic flowchart of a virtual character live streaming method according to an embodiment of the present creation. FIG. 2 is a schematic architecture diagram of a virtual character live streaming system according to an embodiment of the present creation. FIG. 3 is a block diagram of a first terminal device according to an embodiment of the present creation. FIG. 4 is a block diagram of a live streaming management platform according to an embodiment of the present creation. Referring to FIG. 1 to FIG. 4, the virtual character live streaming method according to an embodiment of the present creation can be implemented by a live streaming management platform 2 communicatively connected to a first terminal device 1, so that one or more second terminal devices 3, 3' can receive, in real time, live video or interactive tasks from the user of the first terminal device 1. For example, the user may be a live streamer on online media or a live streaming platform, also known as a VTuber; such streamers broadcast content including, but not limited to, chatting, singing, dancing, food, e-sports, animation, beauty, and entertainment, and by sharing live streams and interacting with viewers in real time through the live streaming platform they can earn income from viewers' digital tips and from revenue sharing with the platform.

In some embodiments, the virtual character live streaming method may be implemented by a computer program, so that when a computer (that is, any electronic device having the terminal communication module 10, the terminal display unit 12, the motion detection unit 14, and the terminal computing unit 16, such as the first terminal device 1) connects over a network to a server (that is, any electronic device having the management communication module 20 and the management computing unit 22, such as the live streaming management platform 2), loads the program, and executes it, the virtual character live streaming method of any embodiment can be carried out. In this embodiment, a user such as a live streamer can use the first terminal device 1, for example but not limited to a mobile phone, a tablet computer, or a notebook computer, to establish a communication connection with the live streaming management platform 2 through the Internet and execute the virtual character live streaming method, while viewers operate the second terminal devices 3, 3' to establish communication connections with the live streaming management platform 2 through the Internet in order to watch the live content, interact, or participate in game tasks.

The creator has recognized that existing virtual reality and augmented reality image service providers usually provide pre-built virtual images from an image database for viewers to watch on specific occasions, so their applications remain limited. The virtual character live streaming method of any embodiment of the present creation, however, can break through the limitations on virtual images and viewing locations and bring broader applications and more enjoyment. For example, by live streaming with a virtual character, viewers can watch and interact with the virtual image created by the streamer at any location in real time through a mobile device. Related embodiments and detailed steps are illustrated below.

In this embodiment, the user first uses the first terminal device 1 to connect over the network to the live streaming management platform 2, and reads and operates a setting interface provided by the live streaming management platform 2 through a web browser or application (APP) built into the first terminal device 1. The setting interface can display a virtual character setting area that provides one or more image editing elements such as face shapes, expressions, body shapes, clothing, or character roles, for the user to select and configure the characteristics of the virtual character used for live streaming. In one embodiment, the user may choose to add image editing elements such as virtual clothing, accessories, props, or animated expressions on top of his or her own real image, and, through image retouching, compositing, or artificial intelligence algorithms, generate a virtual character different from the real world to replace the user's real image, thereby making the live stream more engaging or preserving the user's personal privacy in certain applications.
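To make the avatar-configuration step concrete, the following is a minimal sketch of how the editing elements chosen in the virtual character setting area might be collected into a configuration object and serialized for upload to the platform. The field names, default values, and JSON payload shape are illustrative assumptions and are not prescribed by the patent.

```python
# Minimal sketch of an avatar configuration assembled from the editing
# elements offered by the virtual character setting area. Field names and
# the serialization format are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field
from typing import List, Optional

@dataclass
class AvatarConfig:
    user_id: str
    face_shape: str = "default"            # e.g. "oval", "round"
    expression_set: str = "neutral"        # preset of animated expressions
    body_shape: str = "default"
    clothing: List[str] = field(default_factory=list)   # virtual garments / props
    character_role: Optional[str] = None   # or pick a ready-made character instead
    base_on_real_image: bool = False       # overlay elements on the user's own image

def serialize_for_platform(config: AvatarConfig) -> bytes:
    """Pack the configuration as JSON for upload to the live streaming
    management platform (the network transport is outside this sketch)."""
    return json.dumps(asdict(config)).encode("utf-8")

if __name__ == "__main__":
    cfg = AvatarConfig(
        user_id="streamer-001",
        face_shape="oval",
        clothing=["comic_jacket", "cat_ears"],
        base_on_real_image=True,
    )
    print(serialize_for_platform(cfg))
```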

For example, the streamer may retouch and transform his or her own real image into a tall, slim virtual character wearing comic-style clothing, or scan his or her face to build a 3D avatar, and then live stream in real time based on that virtual image; virtual reality (VR), augmented reality (AR), or mixed reality (MR) technologies are all feasible, and the present creation does not limit these implementation details. Alternatively, the user may simply select a virtual character as a stand-in image, where the virtual character may be a two-dimensional or three-dimensional image presented as a drawing, cartoon, or computer animation, and need not be identical or similar to the user's appearance; in other words, the user may choose a virtual character different from his or her real image as a stand-in for live streaming. In short, in step S1 the terminal display unit 12 of the first terminal device 1 displays a setting interface that includes a virtual character setting area, and a virtual character is configured through the virtual character setting area for use in live streaming.

In at least one embodiment, the setting interface includes an identity verification mechanism. For example, a login verification method such as binding the user's real personal data or a social network account may be adopted: the user must fill in personal information to register an account and log in before customizing an exclusive virtual character, or before modifying and editing the desired image elements on a real image to serve as the virtual character for subsequent live streaming. This ensures that the same virtual character is configured and used by the same user, preventing a third party from misappropriating the virtual character and maintaining the management order and transaction security of the live streaming platform.
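As a rough illustration only, the sketch below shows one way such a verification step could bind an avatar to a registered account using a salted password hash; the patent only requires that an identity verification mechanism exist, so the storage layout and function names here are assumptions.

```python
# Minimal sketch: account registration and a login check that bind an avatar
# to a verified account. A real deployment would add social-account binding,
# rate limiting, and persistent storage; all names here are illustrative.
import hashlib
import os
from typing import Dict

_accounts: Dict[str, dict] = {}   # user_id -> {"salt", "pw_hash", "avatar_id"}

def _hash_password(password: str, salt: bytes) -> str:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()

def register(user_id: str, password: str) -> None:
    salt = os.urandom(16)
    _accounts[user_id] = {
        "salt": salt,
        "pw_hash": _hash_password(password, salt),
        "avatar_id": None,
    }

def login(user_id: str, password: str) -> bool:
    acct = _accounts.get(user_id)
    if acct is None:
        return False
    return _hash_password(password, acct["salt"]) == acct["pw_hash"]

def bind_avatar(user_id: str, password: str, avatar_id: str) -> bool:
    """Only the logged-in owner may set or change the avatar bound to the account."""
    if not login(user_id, password):
        return False
    _accounts[user_id]["avatar_id"] = avatar_id
    return True
```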

Next, in step S2, the first terminal device 1 launches and displays a live streaming interface that includes the virtual character. For example, the terminal computing unit 16 launches the live streaming interface and, according to the virtual character configured by the user, displays the virtual character on the live streaming interface through the terminal display unit 12. Overall, the terminal display unit 12 can display the setting interface and the live streaming interface, where the setting interface includes the virtual character setting area, and the live streaming interface includes the virtual character configured by the user and interaction information from the viewers; the user configures image editing elements such as the face shape, expression, body shape, clothing, or character role of the virtual character through the virtual character setting area for use in live streaming.

Further, in step S3, the first terminal device 1 detects a motion of the user to generate a detection signal corresponding to the motion. In one embodiment, the first terminal device 1 detects the user's motion through the motion detection unit 14 to generate the corresponding detection signal. The user's motion may be the streamer's facial features, expressions, gestures, displacement, dancing, or a combination thereof. For example, the motion detection unit 14 captures a visible-light image, an invisible-light (infrared or far-infrared) image, or a thermal image of the user, or captures the user's gestures, displacement, or body movements, to generate a corresponding detection signal such as a digital image signal or an audio signal.
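Below is a minimal sketch, assuming an OpenCV camera capture, of how the first terminal device might package each captured frame with a timestamp into a detection-signal message ready for transmission; the message layout is an illustrative assumption rather than a format defined by the patent.

```python
# Minimal sketch of step S3: capture a frame from the camera and package it
# as a detection signal. Uses OpenCV (cv2); the message layout is an
# illustrative assumption.
import time
from typing import Optional
import cv2

def capture_detection_signal(cap: cv2.VideoCapture) -> Optional[dict]:
    ok, frame = cap.read()
    if not ok:
        return None
    ok, jpeg = cv2.imencode(".jpg", frame)     # compress the visible-light image
    if not ok:
        return None
    return {
        "type": "digital_image",
        "timestamp": time.time(),
        "payload": jpeg.tobytes(),             # bytes sent to the platform over the network
    }

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                  # default camera of the terminal device
    signal = capture_detection_signal(cap)
    if signal is not None:
        print("captured", len(signal["payload"]), "bytes at", signal["timestamp"])
    cap.release()
```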

Next, in step S4, the first terminal device 1 transmits the detection signal over the network to the live streaming management platform 2, and the live streaming management platform 2 recognizes the detection signal to generate a control signal corresponding to the detection signal and broadcasts a live streaming animation corresponding to the control signal, the live streaming animation being generated according to the control signal and the virtual character. In one embodiment, the terminal computing unit 16 transmits the detection signal over the network to the live streaming management platform 2 through the terminal communication module 10, and the management computing unit 22 receives the detection signal through the management communication module 20. The management computing unit 22 then recognizes the detection signal to generate the control signal corresponding to the detection signal. For example, the motion detection unit 14 captures a motion image of the user, so the detection signal is a digital image signal to be recognized by the management computing unit 22; specifically, the coordinate changes of the motion in the motion image are calculated and the motion is recognized so as to output the corresponding control signal. The management computing unit 22 then uses a motion capture algorithm to generate, according to the control signal and the virtual character image, a live streaming animation that is associated with the virtual character and corresponds to the control signal, and broadcasts the live streaming animation to the one or more second terminal devices 3, 3' through the management communication module 20. In another embodiment, in step S1 the first terminal device 1 selects preferred image editing elements such as a face shape, expression, body shape, clothing, or character role in the virtual character setting area as the characteristics of the virtual character for live streaming, so as to build an initial animation model; in step S4, the live streaming management platform 2 modifies the animation model associated with the virtual character according to the control signal through the motion capture algorithm to generate the live streaming animation, and broadcasts the live streaming animation to the one or more second terminal devices 3, 3' through the management communication module 20.
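As a rough illustration of how coordinate changes in successive motion images could be turned into a control signal, the sketch below compares tracked keypoint coordinates between two frames and emits a simple command for the animation model. The keypoint source, thresholds, and command names are assumptions; the patent does not prescribe a specific motion capture algorithm.

```python
# Minimal sketch of the recognition step in S4: derive a control signal from
# coordinate changes of tracked keypoints between two motion images. Keypoint
# extraction (e.g. a pose-estimation model) is assumed to happen elsewhere;
# thresholds and command names are illustrative.
from typing import Dict, Tuple

Point = Tuple[float, float]

def recognize_motion(prev: Dict[str, Point], curr: Dict[str, Point],
                     threshold: float = 0.05) -> dict:
    """Compare keypoint coordinates of two frames and output a control signal
    that drives the corresponding joints of the virtual character."""
    deltas = {}
    for name, (x1, y1) in curr.items():
        x0, y0 = prev.get(name, (x1, y1))
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) > threshold or abs(dy) > threshold:   # ignore small jitter
            deltas[name] = (dx, dy)

    if "right_wrist" in deltas and deltas["right_wrist"][1] < -threshold:
        command = "raise_right_hand"      # image y decreases toward the top of the frame
    elif deltas:
        command = "move_joints"
    else:
        command = "idle"

    return {"command": command, "joint_deltas": deltas}

# Example: the right wrist moved upward between two frames.
prev_frame = {"right_wrist": (0.60, 0.80), "head": (0.50, 0.20)}
curr_frame = {"right_wrist": (0.60, 0.65), "head": (0.50, 0.20)}
print(recognize_motion(prev_frame, curr_frame))
```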

It should be noted that the control signal is used to drive the motions of the virtual character in the live streaming animation according to the user's motions, and that the control signal is generated by having the live streaming management platform 2 (for example, a cloud server) recognize the detection signal captured by the first terminal device 1 (for example but not limited to audio or image signals). The live streaming management platform 2 then generates the corresponding live streaming animation from the control signal and the virtual character through image computation techniques, which effectively saves the computing resources and the hardware and software costs of the first terminal device 1, although the present creation is not limited thereto. In some embodiments, the terminal computing unit 16 of the first terminal device 1 may also recognize the detection signal to generate the control signal corresponding to the user's motion. For example, the motion detection unit 14 captures a motion image of the user as the detection signal, and the terminal computing unit 16 calculates, according to the motion image, the coordinate changes of the motion in the motion image and recognizes the motion to output the corresponding control signal. In addition, the motion detection unit 14 and the terminal computing unit 16 may be integrated as one unit, but the present creation is not limited thereto.

In this embodiment, the live streaming management platform 2 includes the management communication module 20, the management computing unit 22, and a management database 24, as shown in FIG. 4. In one embodiment, the management database 24 may be implemented by one or more memories. The management database 24 stores a plurality of image editing elements such as face shapes, expressions, body shapes, clothing, or character roles, which can be queried and maintained through the setting interface or the live streaming interface. Although not specifically illustrated, in some embodiments the live streaming management platform 2 may be implemented by a web server, a management server, and a database providing the same functions. In at least one embodiment, the live streaming management platform 2 may be a public cloud service platform provided by Microsoft, such as Microsoft Azure, but is not limited thereto. The management computing unit 22 is a service cluster composed of a large number of virtual machines created by virtualizing the server farms within Azure, and its main function is to provide computing resources such as CPU and memory. The management database 24 may be provided by Azure's basic storage (Azure Storage) and relational database (SQL Azure), including but not limited to Blob, Table, and Queue services for managing unstructured data, structured data, and message communication respectively, and can support fast data sharing among virtual machines in the cloud.
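To illustrate how the management database might catalogue the image editing elements that the setting interface lists, here is a minimal sketch that uses Python's built-in sqlite3 as a stand-in for the relational storage mentioned above; the table layout and asset URIs are illustrative assumptions, not the platform's actual schema.

```python
# Minimal sketch of the management database: a catalogue of image editing
# elements (face shapes, expressions, body shapes, clothing, character roles)
# that the setting interface can query. sqlite3 stands in for the relational
# storage; the schema is an illustrative assumption.
import sqlite3

def init_catalogue(conn: sqlite3.Connection) -> None:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS editing_elements (
            id        INTEGER PRIMARY KEY,
            category  TEXT NOT NULL,   -- face_shape / expression / body_shape / clothing / role
            name      TEXT NOT NULL,
            asset_uri TEXT             -- pointer to the blob holding the actual asset
        )
    """)
    conn.executemany(
        "INSERT INTO editing_elements (category, name, asset_uri) VALUES (?, ?, ?)",
        [
            ("face_shape", "oval", "blob://faces/oval"),
            ("clothing", "comic_jacket", "blob://clothes/comic_jacket"),
            ("role", "cat_girl", "blob://roles/cat_girl"),
        ],
    )
    conn.commit()

def list_elements(conn: sqlite3.Connection, category: str) -> list:
    cur = conn.execute(
        "SELECT name, asset_uri FROM editing_elements WHERE category = ?", (category,))
    return cur.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    init_catalogue(conn)
    print(list_elements(conn, "clothing"))   # [('comic_jacket', 'blob://clothes/comic_jacket')]
```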

Finally, in step S5, the first terminal device 1 receives an interaction signal transmitted by the live streaming management platform 2 and displays interaction information on the live streaming interface according to the interaction signal. In one embodiment, a viewer uses the second terminal device 3 to establish a communication connection with the live streaming management platform 2 through the Internet, and reads and operates a live streaming interface provided by the live streaming management platform 2 through a web browser or application built into the second terminal device 3, so as to watch the live streaming animation of the virtual character in real time. During the live stream, the viewer can use the second terminal device 3 to send requests or responses to the streamer through media such as text, voice, video, or a combination thereof for interaction. For example, the second terminal device 3 transmits an interaction signal to the live streaming management platform 2, where the interaction signal may be, but is not limited to, a text signal, a voice signal, or a video signal, realized as either a digital or an analog signal. The live streaming management platform 2 receives the interaction signal and transmits it to the first terminal device 1; the terminal computing unit 16 of the first terminal device 1 then receives the interaction signal through the terminal communication module 10, performs the corresponding processing according to the interaction signal, and displays, through the terminal display unit 12, the live streaming interface including the interaction information, for example but not limited to viewer requests, gift notifications, tips and rewards, text chat, real-time voice communication, real-time viewer video, or the live streaming animation of a viewer-side virtual character generated by the second terminal device 3 executing the virtual character live streaming method described above.
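The relay in step S5, in which viewer terminals send interaction signals to the platform and the platform forwards them to the streamer's terminal for display, could be organized roughly as sketched below. The in-memory queues and message fields are illustrative assumptions standing in for the real network transport.

```python
# Minimal sketch of step S5: the platform relays interaction signals
# (text chat, gift notifications, tips, ...) from viewer terminals to the
# streamer's first terminal device, which renders them as interaction
# information on the live streaming interface. Field names are illustrative.
from collections import deque
from typing import Deque, Dict

class LiveStreamPlatform:
    def __init__(self) -> None:
        # one inbox per streamer (first terminal device)
        self._inboxes: Dict[str, Deque[dict]] = {}

    def register_streamer(self, streamer_id: str) -> None:
        self._inboxes[streamer_id] = deque()

    def receive_interaction(self, streamer_id: str, signal: dict) -> None:
        """Called when a second terminal device sends an interaction signal."""
        self._inboxes[streamer_id].append(signal)

    def forward_to_streamer(self, streamer_id: str) -> list:
        """Drain pending interaction signals for the first terminal device."""
        inbox = self._inboxes[streamer_id]
        pending = list(inbox)
        inbox.clear()
        return pending

platform = LiveStreamPlatform()
platform.register_streamer("streamer-001")
platform.receive_interaction("streamer-001", {"kind": "text", "from": "viewer-42", "body": "hello!"})
platform.receive_interaction("streamer-001", {"kind": "gift", "from": "viewer-7", "item": "rocket"})

# On the first terminal device, each drained signal becomes interaction
# information shown on the live streaming interface.
for info in platform.forward_to_streamer("streamer-001"):
    print(f"[{info['kind']}] from {info['from']}")
```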

The virtual character live streaming method of some embodiments of the present creation can implement interactive game tasks to make the stream more engaging, and can further cooperate with computer technology and wearable devices to generate and provide combined real and virtual environments and human-machine interaction, thereby realizing extended reality (XR) applications. In one embodiment, the first terminal device 1 transmits an interactive task to the live streaming management platform 2, and the live streaming management platform 2 may then deliver the task to viewers logged into the live streaming interface for interaction, for example but not limited to quiz activities, "working for money" activities, treasure hunts, or surrogate marketing. In short, the first terminal device 1 transmits the interactive task to the live streaming management platform 2 so that the plurality of second terminal devices 3, 3' generate the above interaction signals according to the interactive task, whereby the first terminal device 1 can display the resulting interaction information to the user in step S5.

In other embodiments, the virtual character live streaming method may allow the first terminal device 1 to preset activity tasks and corresponding task locations on an electronic map; when a viewer carrying the second terminal device 3 arrives at a task location, the second terminal device 3 can display the activity task for the viewer to perform. In other words, the second terminal device 3 can transmit its location information to the live streaming management platform 2, and the live streaming management platform 2 determines whether the location information of the second terminal device 3 matches a task location on the electronic map; when the location information matches the task location, the live streaming management platform 2 allows the second terminal device 3 to perform the interactive task. In some embodiments, the first terminal device 1 can pre-record at least one segment of virtual character video as a task video and transmit it to the live streaming management platform 2; when the live streaming management platform 2 determines that the location information of the second terminal device 3 matches the task location on the electronic map, the live streaming management platform 2 can transmit the task video to a holographic projection device 4 installed at the task location to project the task video there. In short, the holographic projection device 4 receives the task video and projects it at the task location, enhancing the visual effect and enjoyment of the interactive task.
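The location check described above, in which the platform compares the second terminal device's reported position against a task location on the electronic map, could be approximated with a simple distance test such as the haversine sketch below. The radius, coordinates, and the notion of "unlocking" the task are illustrative assumptions.

```python
# Minimal sketch of the location check: the platform compares the reported
# GPS position of a second terminal device with a preset task location and,
# within a chosen radius, treats the task as unlocked (and could then push
# the task video to the holographic projection device at that spot).
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two WGS-84 points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def location_matches_task(device_pos: tuple, task_pos: tuple, radius_m: float = 50.0) -> bool:
    return haversine_m(*device_pos, *task_pos) <= radius_m

# Example: a viewer stands roughly 30 m from a task location preset on the map.
task_location = (25.03390, 121.56450)
device_location = (25.03410, 121.56470)
if location_matches_task(device_location, task_location):
    print("task unlocked for this second terminal device")
else:
    print("viewer is not at the task location yet")
```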

For example, the virtual character live streaming method may be combined with augmented reality technology, and the above task location may be a physical store or shop. The first terminal device 1 and the second terminal device 3 each generate their own virtual character according to the settings of the user or viewer, and the user or viewer can go to the physical store and transfer the virtual character they own from a mobile device, such as the first terminal device 1 or the second terminal device 3, to a display device of the physical store, such as the holographic projection device 4, for display, thereby integrating the virtual and the real in an offline interactive physical store. When the virtual character is displayed on the display device of the physical store, it is no longer displayed on the mobile device. In this way, a scenario in which the virtual character "goes to work" at the physical store can be simulated, which is one example of an interactive task, and through the augmented reality display device installed at the physical store it is as if an additional virtual clerk were present in the store. For the physical store, this can increase visits by consumers (including users and viewers); for consumers, it can also increase the practicality and fun of the virtual character. The virtual character can thus perform tasks not only on the mobile device but also after being transferred to another device, and can receive corresponding rewards in the course of performing the tasks, for example interactive tasks such as "working for money", treasure hunts, or surrogate marketing. Furthermore, during this process a cloud server such as the live streaming management platform 2 records the consumers' purchasing behavior at the physical store, and can further cooperate with various cloud data providers to eventually obtain complete online-to-offline (O2O) consumer behavior data for various commercial uses.

Referring again to FIG. 3 and FIG. 4, a virtual character live streaming system according to a further embodiment of the present creation includes a first terminal device 1 and a live streaming management platform 2. The live streaming management platform 2 includes a management communication module 20 and a management computing unit 22, and the management computing unit 22 is electrically connected to the management communication module 20. In one embodiment, the management communication module 20 may be a wireless communication interface, for example but not limited to a wireless local area network (Wi-Fi), a cellular network (3G, 4G, 5G), or Zigbee, for transmitting the interaction signal and the live streaming animation corresponding to the control signal; for the signal transmission mechanism, the virtual character, the live streaming animation computation mechanism, and related embodiments, please refer to the foregoing description. The management computing unit 22 may be implemented by one or more processing elements such as microprocessors, microcontrollers, digital signal processors, microcomputers, central processing units, field-programmable gate arrays, programmable logic devices, state machines, logic circuits, analog circuits, digital circuits, and/or any processing element that operates on signals (analog and/or digital) based on operating instructions, for recognizing the detection signal to generate the corresponding control signal and generating the live streaming animation according to the control signal and the virtual character; for the image computation mechanism and related embodiments, please refer to the foregoing description.

The first terminal device 1 includes a terminal communication module 10, a terminal display unit 12, a motion detection unit 14, and a terminal computing unit 16. The terminal computing unit 16 is electrically connected to the terminal communication module 10, the terminal display unit 12, and the motion detection unit 14. The terminal communication module 10 is communicatively connected to the management communication module 20. In one embodiment, the terminal communication module 10 may be a wireless communication interface, for example but not limited to a wireless local area network (Wi-Fi), a cellular network (3G, 4G, 5G), or Zigbee, for receiving the interaction signal and transmitting the detection signal; for the signal transmission mechanism and related embodiments, please refer to the foregoing description.

In one embodiment, the terminal display unit 12 may be a touch screen for displaying the setting interface and the live streaming interface; for the related technical content, functions, and advantages of the setting interface and the live streaming interface, please refer to the foregoing description. The motion detection unit 14 may be an image capturing unit, for example but not limited to a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, or a depth camera, for detecting the user's motion; for the motion detection mechanism and related embodiments, please refer to the foregoing description. The terminal computing unit 16 may be implemented by one or more processing elements such as microprocessors, microcontrollers, digital signal processors, microcomputers, central processing units, field-programmable gate arrays, programmable logic devices, state machines, logic circuits, analog circuits, digital circuits, and/or any processing element that operates on signals (analog and/or digital) based on operating instructions, for generating the detection signal corresponding to the motion and generating the interaction information according to the interaction signal; for the motion recognition mechanism and related embodiments, please refer to the foregoing description.

In some embodiments, a computer program product for virtual character live streaming is composed of a set of instructions; when a computer such as the first terminal device 1 loads and executes the set of instructions, the virtual character live streaming method of any of the above embodiments can be carried out.

In summary, some embodiments of the present creation provide a virtual character live streaming system in which a user connects to the live streaming management platform through the first terminal device, first customizes a preferred virtual character on the setting interface, and then controls the expressions and motions of the virtual character in real time on the live streaming platform as the user's stand-in image, so as to share live streams and real-time interaction with viewers through the live streaming platform. This breaks through the traditional limitations of pre-recorded virtual images and fixed viewing locations, making live streaming more engaging and, in certain applications, preserving the user's personal privacy. In addition, computer technology and wearable devices can cooperate to generate and provide combined real and virtual environments and human-machine interaction; for example, by combining the virtual character with augmented reality technology and setting a physical store as a task location, the user or viewer can go to the physical store and transfer the virtual character he or she owns from a mobile device, such as the first or second terminal device, to a display device of the physical store, so that the virtual character can perform tasks not only on the mobile device but also after being transferred to another device, thereby integrating the virtual and the real in an offline interactive physical store and increasing the practicality and fun of the virtual character.

The embodiments described above are intended only to illustrate the technical ideas and features of the present creation, so that persons skilled in the art can understand the content of the present creation and practice it accordingly; they are not intended to limit the patent scope of the present creation. Any equivalent change or modification made in accordance with the spirit disclosed by the present creation shall still fall within the patent scope of the present creation.

S1~S5: steps
1: first terminal device
10: terminal communication module
12: terminal display unit
14: motion detection unit
16: terminal computing unit
2: live streaming management platform
20: management communication module
22: management computing unit
24: management database
3, 3': second terminal device
4: holographic projection device

FIG. 1 is a schematic flowchart of a virtual character live streaming method according to an embodiment of the present creation.
FIG. 2 is a schematic architecture diagram of a virtual character live streaming system according to an embodiment of the present creation.
FIG. 3 is a block diagram of a first terminal device according to an embodiment of the present creation.
FIG. 4 is a block diagram of a live streaming management platform according to an embodiment of the present creation.

1: first terminal device

2: live streaming management platform

3, 3': second terminal device

4: holographic projection device

Claims (7)

1. A virtual character live streaming system, comprising:
a live streaming management platform, comprising:
a management communication module for transmitting an interaction signal and a live streaming animation corresponding to a control signal, wherein the live streaming animation is generated according to the control signal and a virtual character; and
a management computing unit, electrically connected to the management communication module, for recognizing a detection signal to generate the control signal corresponding to the detection signal; and
a first terminal device, comprising:
a terminal communication module, communicatively connected to the management communication module, for receiving the interaction signal and transmitting the detection signal;
a terminal display unit for displaying a setting interface and a live streaming interface, wherein the setting interface comprises a virtual character setting area, the live streaming interface comprises the virtual character and interaction information, and the virtual character is configured through the virtual character setting area;
a motion detection unit for detecting a motion of a user; and
a terminal computing unit, electrically connected to the terminal communication module, the terminal display unit, and the motion detection unit, for generating the detection signal corresponding to the motion and generating the interaction information according to the interaction signal.

2. The virtual character live streaming system according to claim 1, wherein the motion detection unit of the first terminal device captures a motion image of the user, and the management computing unit of the live streaming management platform calculates, according to the motion image, coordinate changes of the motion and recognizes the motion to output the corresponding control signal.

3. The virtual character live streaming system according to claim 1, wherein the terminal computing unit of the first terminal device recognizes the detection signal to generate the control signal corresponding to the detection signal.

4. The virtual character live streaming system according to claim 1, wherein the management communication module of the live streaming management platform receives the detection signal transmitted by the terminal communication module of the first terminal device, and the management computing unit modifies an animation model associated with the virtual character according to the control signal generated based on the detection signal, so as to generate the live streaming animation.

5. The virtual character live streaming system according to claim 1, wherein the first terminal device transmits an interactive task to the live streaming management platform, so that a plurality of second terminal devices communicatively connected to the live streaming management platform generate the interaction signal according to the interactive task.
6. The virtual character live streaming system according to claim 5, wherein at least one of the second terminal devices transmits location information to the live streaming management platform, and the live streaming management platform determines whether the location information matches a task location on an electronic map; when the location information matches the task location, the live streaming management platform allows the second terminal device to perform the interactive task.

7. The virtual character live streaming system according to claim 6, further comprising:
a holographic projection device, communicatively connected to the management communication module, for receiving a task video and projecting it at the task location.
TW108216467U 2019-12-11 2019-12-11 Virtual character live streaming system TWM594767U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108216467U TWM594767U (en) 2019-12-11 2019-12-11 Virtual character live streaming system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW108216467U TWM594767U (en) 2019-12-11 2019-12-11 Virtual character live streaming system

Publications (1)

Publication Number Publication Date
TWM594767U (en)

Family

ID=71896552

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108216467U TWM594767U (en) 2019-12-11 2019-12-11 Virtual character live streaming system

Country Status (1)

Country Link
TW (1) TWM594767U (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI747333B (en) * 2020-06-17 2021-11-21 光時代科技有限公司 Interaction method based on optical communictation device, electric apparatus, and computer readable storage medium
CN115225915A (en) * 2021-04-15 2022-10-21 奥图码数码科技(上海)有限公司 Live broadcast recording device, live broadcast recording system and live broadcast recording method
US11921971B2 (en) 2021-04-15 2024-03-05 Optoma China Co., Ltd Live broadcasting recording equipment, live broadcasting recording system, and live broadcasting recording method

Similar Documents

Publication Publication Date Title
TWI708152B (en) Image processing method, device, and storage medium
TWI650675B (en) Method and system for group video session, terminal, virtual reality device and network device
CN107852573B (en) Mixed reality social interactions
US20180232929A1 (en) Method for sharing emotions through the creation of three-dimensional avatars and their interaction
US11615592B2 (en) Side-by-side character animation from realtime 3D body motion capture
US11450051B2 (en) Personalized avatar real-time motion capture
US11050464B2 (en) First portable electronic device for facilitating a proximity based interaction with a second portable electronic device based on a plurality of gesture
US10966076B2 (en) First portable electronic device for facilitating a proximity based interaction with a second portable electronic device
US20230116929A1 (en) Mirror-based augmented reality experience
US11782272B2 (en) Virtual reality interaction method, device and system
TW202123128A (en) Virtual character live broadcast method, system thereof and computer program product
WO2016130935A1 (en) System and method to integrate content in real time into a dynamic 3-dimensional scene
US20230130535A1 (en) User Representations in Artificial Reality
KR20230062857A (en) augmented reality messenger system
TWM594767U (en) Virtual character live streaming system
TWI803224B (en) Contact person message display method, device, electronic apparatus, computer readable storage medium, and computer program product
Wen et al. A survey of facial capture for virtual reality
TW202249484A (en) Dynamic mixed reality content in virtual reality
WO2023071630A1 (en) Enhanced display-based information exchange method and apparatus, device, and medium
WO2023211688A1 (en) Shared augmented reality experience in video chat
Zhang et al. Development of a haptic video chat system
TWM609535U (en) Blockchain interface system combined with augmented reality application
US20240039878A1 (en) Mutual affinity widget
US20220141551A1 (en) Moving image distribution system, moving image distribution method, and moving image distribution program
US20230362333A1 (en) Data processing method and apparatus, device, and readable storage medium