TW202123128A - Virtual character live broadcast method, system thereof and computer program product - Google Patents

Virtual character live broadcast method, system thereof and computer program product

Info

Publication number
TW202123128A
Authority
TW
Taiwan
Prior art keywords
live
live broadcast
management platform
interactive
virtual character
Prior art date
Application number
TW108145415A
Other languages
Chinese (zh)
Inventor
賴錦德
Original Assignee
狂點軟體開發股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 狂點軟體開發股份有限公司 filed Critical 狂點軟體開發股份有限公司
Priority to TW108145415A priority Critical patent/TW202123128A/en
Publication of TW202123128A publication Critical patent/TW202123128A/en

Abstract

A virtual character live broadcast system includes a live management platform and a first terminal device. The live management platform is configured to transmit an interactive signal and a live animation, where the live animation is generated based on a control signal and a virtual character. The first terminal device is configured to detect an action of a user to generate a detection signal corresponding to the action, generate interactive information according to the interactive signal, and display a setting interface and a live interface, wherein the setting interface includes a virtual character setting area through which the virtual character is set, and the live interface includes the virtual character and the interactive information.

Description

Virtual character live broadcast method, system and computer program product

The present invention relates to a virtual character live broadcast method, system, and computer program product, and in particular to a virtual character live broadcast method, system, and computer program product based on real images.

Virtual Reality (VR) places real-world users in a virtual space by using a computer to construct a three-dimensional virtual environment that approximates reality, for example by supplementing vision with audio and simulating real dynamic binocular parallax so that users feel immersed in the scene. Augmented Reality (AR) combines precise calculation of the camera's position and angle with image analysis so that virtual characters on the screen can interact with the real world; in other words, it lets users see imaginary objects appear within views of the real scene.

The aforementioned virtual reality and augmented reality image service providers and game vendors usually provide a virtual image from an existing database for users to view. For example, in the medical field an augmented reality system may display an augmented reality image, such as a three-dimensional virtual organ image generated from ultrasound images, which medical staff can operate and show to patients while explaining it. However, such image technologies merely present virtual object images stored in a database; they do not let users modify or customize the virtual image, and display is often limited to specific venues and equipment, so their applications remain restricted.

In view of this, some embodiments of the present invention provide a virtual character live broadcast method, system, and computer program product.

A virtual character live broadcast method according to an embodiment of the present invention, applied to a first terminal device, includes: displaying a setting interface, wherein the setting interface includes a virtual character setting area, and setting a virtual character through the virtual character setting area; starting and displaying a live interface that contains the virtual character; detecting an action of a user to generate a detection signal corresponding to the action; transmitting the detection signal to a live management platform via a network, where the live management platform recognizes the detection signal to generate a control signal corresponding to the detection signal and broadcasts a live animation corresponding to the control signal, the live animation being generated based on the control signal and the virtual character; and receiving an interactive signal transmitted by the live management platform and displaying interactive information on the live interface according to the interactive signal.
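
As a rough illustration (not part of the patent disclosure), the following Python sketch shows how these steps might be orchestrated on the terminal side; the class and method names (the setting/live UI collaborators, the platform client, and their methods) are hypothetical placeholders.

```python
# Minimal sketch of the terminal-side flow (steps S1-S5) described above.
# All class and method names are hypothetical placeholders for illustration only.

class TerminalDevice:
    def __init__(self, setting_ui, live_ui, motion_detector, platform_client):
        self.setting_ui = setting_ui            # shows the avatar setting area (S1)
        self.live_ui = live_ui                  # shows the live interface (S2, S5)
        self.motion_detector = motion_detector  # captures user actions (S3)
        self.platform = platform_client         # network link to the live management platform

    def run(self):
        # S1: display the setting interface and let the user configure a virtual character
        avatar = self.setting_ui.configure_avatar()
        # S2: start the live interface containing that virtual character
        self.live_ui.start(avatar)
        while self.live_ui.is_open():
            # S3: detect a user action and package it as a detection signal
            detection_signal = self.motion_detector.capture()
            # S4: upload it; the platform recognizes it, derives a control signal,
            #     and broadcasts the resulting live animation to viewers
            self.platform.send_detection_signal(detection_signal)
            # S5: receive interaction signals relayed by the platform and
            #     render them as interactive information on the live interface
            for interaction in self.platform.poll_interactions():
                self.live_ui.show_interaction(interaction)
```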

A computer program product according to another embodiment of the present invention includes a set of instructions. When a computer loads and executes the set of instructions, it can carry out the virtual character live broadcast method of any embodiment of the present invention.

A virtual character live broadcast system according to yet another embodiment of the present invention includes a live management platform and a first terminal device. The live management platform includes a management communication module and a management computing unit. The management communication module is used to transmit an interactive signal and a live animation corresponding to a control signal, where the live animation is generated based on the control signal and a virtual character. The management computing unit is electrically connected to the management communication module and is used to recognize a detection signal to generate the control signal corresponding to the detection signal. The first terminal device includes a terminal communication module, a terminal display unit, a motion detection unit, and a terminal computing unit. The terminal computing unit is electrically connected to the terminal communication module, the terminal display unit, and the motion detection unit. The terminal communication module is communicatively connected to the management communication module and is used to receive the interactive signal and transmit the detection signal. The terminal display unit is used to display a setting interface and a live interface, where the setting interface includes a virtual character setting area through which the virtual character is set, and the live interface includes the virtual character and interactive information. The motion detection unit is used to detect an action of a user. The terminal computing unit is used to generate the detection signal corresponding to the action and to generate the interactive information according to the interactive signal.

In this way, a user can connect to the live management platform through the first terminal device, first customize a preferred virtual character on the setting interface, and then control the virtual character's expressions and actions in real time on the live platform as a stand-in image of himself or herself, sharing the live broadcast and interacting with the audience in real time through the live platform. This breaks through the traditional limitations of pre-recorded virtual images and fixed viewing locations, making live broadcasts more engaging while preserving the user's personal privacy in certain applications.

The purpose, technical content, features, and effects of the present invention will be more easily understood from the following detailed description of specific embodiments in conjunction with the accompanying drawings.

The embodiments of the present invention are described in detail below with reference to the drawings. In the description, many specific details are provided so that the reader can fully understand the present invention; however, the present invention may still be practiced with some or all of these details omitted. The same or similar elements in the drawings are denoted by the same or similar reference symbols. Note in particular that the drawings are for illustration only; they do not represent the actual size or number of elements, and some details may not be fully drawn in order to keep the drawings concise.

FIG. 1 is a schematic flowchart of a virtual character live broadcast method according to an embodiment of the present invention. FIG. 2 is a schematic diagram of the architecture of a virtual character live broadcast system according to an embodiment of the present invention. FIG. 3 is a block diagram of a first terminal device according to an embodiment of the present invention. FIG. 4 is a block diagram of a live management platform according to an embodiment of the present invention. Referring to FIGS. 1 to 4 together, the virtual character live broadcast method of an embodiment of the present invention can be implemented by a live management platform 2 communicatively connected to a first terminal device 1, so that one or more second terminal devices 3, 3' receive in real time the user's live video, interactive tasks, and the like from the first terminal device 1. For example, the user may be a streamer on an online media or live platform, also known as a VTuber; their live content includes but is not limited to chatting, singing, dancing, food, e-sports, animation, beauty, and entertainment. These streamers share live broadcasts and interact with the audience in real time through the live platform, and can profit from digital tips and from revenue sharing with the platform.

In some embodiments, the virtual character live broadcast method can be implemented by a computer program, so that when a computer (that is, any electronic device having a terminal communication module 10, a terminal display unit 12, a motion detection unit 14, and a terminal computing unit 16, such as the first terminal device 1) connects over a network to a server (that is, any electronic device having a management communication module 20 and a management computing unit 22, such as the live management platform 2) and loads and executes the program, the virtual character live broadcast method of any embodiment can be carried out. In this embodiment, a user such as a streamer can use the first terminal device 1, for example but not limited to a mobile phone, tablet, or notebook computer, to establish a communication connection with the live management platform 2 over the Internet and execute the virtual character live broadcast method, so that viewers can operate the second terminal devices 3, 3' to establish communication connections with the live management platform 2 over the Internet to watch the live content, interact, or participate in game tasks.

The inventor recognizes that existing virtual reality and augmented reality image service providers usually provide pre-existing virtual images from an image database for viewers to watch on specific occasions, which still limits their applications. The virtual character live broadcast method of any embodiment of the present invention, however, can break through the limitations of such virtual images and viewing locations to enable broader and more engaging applications; for example, by live-streaming with a virtual character, viewers can watch and interact with the virtual image created by the streamer in real time from a mobile device at any location. Related embodiments and detailed steps are illustrated below.

In this embodiment, the user first connects the first terminal device 1 to the live management platform 2 over the network and uses the web browser or application (app) built into the first terminal device 1 to load and operate a setting interface provided by the live management platform 2. The setting interface can display a virtual character setting area, which provides one or more image editing elements such as face shape, expression, body shape, clothing, or persona for the user to select and configure the characteristics of the virtual character to be used for live broadcasting. In one embodiment, the user can choose to add image editing elements such as virtual clothing, accessories, props, and dynamic expressions to his or her own real image, and generate, through image retouching, compositing, or artificial intelligence algorithms, a virtual character different from the real world to replace the real image, thereby making the live broadcast more engaging or preserving the user's personal privacy in certain applications.
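
To make the customization concrete, here is a minimal Python sketch of how an avatar configuration chosen in the setting area might be represented and validated against a catalog of editing elements; all field names and catalog entries are illustrative assumptions, not terms defined by the patent.

```python
# Sketch of a data structure for the virtual character configured in the setting area.
# Field names and catalog contents are illustrative assumptions only.
from dataclasses import dataclass, field

CATALOG = {
    "face_shape": {"round", "oval", "square"},
    "expression_set": {"basic", "animated"},
    "body_shape": {"default", "tall_slim"},
    "clothing": {"casual", "comic_costume", "formal"},
    "persona": {"2d_cartoon", "3d_avatar"},
}

@dataclass
class AvatarConfig:
    face_shape: str = "oval"
    expression_set: str = "basic"
    body_shape: str = "default"
    clothing: str = "casual"
    persona: str = "2d_cartoon"
    extra_props: list = field(default_factory=list)  # e.g. accessories, props

    def validate(self) -> None:
        """Check every selected element against the platform's catalog."""
        for key, options in CATALOG.items():
            value = getattr(self, key)
            if value not in options:
                raise ValueError(f"{key}={value!r} is not an available editing element")

# Example: a tall, slim avatar in a comic costume, as in the embodiment described above.
config = AvatarConfig(body_shape="tall_slim", clothing="comic_costume")
config.validate()
```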

For example, the streamer may modify and transform his or her real image into a tall, slim virtual character wearing a comic-style costume, or scan his or her face to build a 3D avatar, and then stream live based on that virtual image; virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies are all feasible, and the present invention does not limit these implementation details. Alternatively, the user may simply select a virtual character as his or her stand-in image; the virtual character may be a two-dimensional or three-dimensional image presented as a drawing, cartoon, computer animation, or the like, and need not be identical or similar to the user's appearance. In other words, the user may select a virtual character different from his or her real image as a stand-in for the live broadcast. In short, in step S1 the terminal display unit 12 of the first terminal device 1 displays a setting interface, where the setting interface includes a virtual character setting area, and a virtual character to be used for the live broadcast is set through the virtual character setting area.

In at least one embodiment, the setting interface includes an identity verification mechanism. For example, a login verification method such as binding the user's real personal data or a social networking account may be used: the user must fill in personal data to register an account and log in before customizing an exclusive virtual character, or modifying and editing image elements based on his or her real image to create the virtual character, for subsequent live broadcasts. This ensures that the same virtual character is set and used by the same user, preventing third parties from misappropriating the virtual character and maintaining the management order and transaction security of the live platform.

Next, in step S2, the first terminal device 1 starts and displays a live interface that contains the virtual character. For example, the terminal computing unit 16 starts the live interface and, according to the virtual character set by the user, displays the virtual character on the live interface through the terminal display unit 12. Overall, the terminal display unit 12 can display the setting interface and the live interface, where the setting interface includes the virtual character setting area, and the live interface includes the virtual character set by the user and interactive information from the audience; through the virtual character setting area the user sets image editing elements of the virtual character, such as face shape, expression, body shape, clothing, or persona, for the live broadcast.

Then, in step S3, the first terminal device 1 detects an action of the user to generate a detection signal corresponding to the action. In one embodiment, the first terminal device 1 detects an action of the user through the motion detection unit 14 to generate the corresponding detection signal. The user's action may be the streamer's facial features, expressions, gestures, displacement, dance, or a combination thereof. For example, the motion detection unit 14 captures visible-light images, invisible-light (infrared, far-infrared) images, or thermal images of the user, or captures the user's gestures, displacement, or body movements, to generate the corresponding detection signal, such as a digital image signal or an audio signal.
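
A minimal sketch of this capture step is shown below, assuming OpenCV and a default webcam; the DetectionSignal structure and its field names are assumptions made for illustration, not defined by the patent.

```python
# Sketch of step S3: capture one camera frame and package it as a detection
# signal ready for upload. Assumes OpenCV (cv2) and a default webcam.
import time
from dataclasses import dataclass

import cv2  # pip install opencv-python

@dataclass
class DetectionSignal:
    kind: str          # e.g. "digital_image" or "audio"
    timestamp: float
    payload: bytes     # encoded frame (or audio samples)

def capture_detection_signal(camera_index: int = 0) -> DetectionSignal:
    cap = cv2.VideoCapture(camera_index)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("could not read a frame from the camera")
        ok, encoded = cv2.imencode(".jpg", frame)  # compress before upload
        if not ok:
            raise RuntimeError("JPEG encoding failed")
        return DetectionSignal("digital_image", time.time(), encoded.tobytes())
    finally:
        cap.release()
```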

Next, in step S4, the first terminal device 1 transmits the detection signal to the live management platform 2 via the network; the live management platform 2 recognizes the detection signal to generate a control signal corresponding to it, and broadcasts a live animation corresponding to the control signal, where the live animation is generated based on the control signal and the virtual character. In one embodiment, the terminal computing unit 16 transmits the detection signal to the live management platform 2 over the network through the terminal communication module 10; the management computing unit 22 receives the detection signal through the management communication module 20 and then recognizes it to generate the corresponding control signal. For example, the motion detection unit 14 captures motion images of the user, so the detection signal is a digital image signal to be recognized by the management computing unit 22; specifically, based on the motion images, the coordinate changes of the action in the motion images are calculated and the action is recognized, so as to output the corresponding control signal. The management computing unit 22 then uses a motion capture algorithm, based on the control signal and the virtual character image, to generate a live animation associated with the virtual character and corresponding to the control signal, and broadcasts the live animation to one or more second terminal devices 3, 3' through the management communication module 20. In another embodiment, in step S1 the first terminal device 1 selects preferred image editing elements such as face shape, expression, body shape, clothing, or persona in the virtual character setting area as the characteristics of the virtual character for the live broadcast, thereby building an initial animation model, and in step S4 the live management platform 2 uses the motion capture algorithm to modify the animation model associated with the virtual character according to the control signal, generates the live animation, and broadcasts it to one or more second terminal devices 3, 3' through the management communication module 20.
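
One plausible way to compute the coordinate changes mentioned here is to diff tracked landmark positions between consecutive motion images and emit only significant deltas as the control signal; the landmark names, threshold, and output format in the sketch below are assumptions, not the patent's specified algorithm.

```python
# Sketch of the recognition step in S4: given tracked landmark coordinates from
# two consecutive motion images, compute the coordinate changes and emit a
# control signal that the animation model can consume.
import numpy as np

def recognize_action(prev_landmarks: dict, curr_landmarks: dict,
                     threshold: float = 2.0) -> dict:
    """Return a control signal mapping each landmark to its (dx, dy) change.

    Only landmarks whose displacement exceeds `threshold` pixels are included,
    so small jitter does not drive the virtual character.
    """
    control_signal = {}
    for name, curr in curr_landmarks.items():
        prev = prev_landmarks.get(name)
        if prev is None:
            continue
        delta = np.asarray(curr, dtype=float) - np.asarray(prev, dtype=float)
        if np.linalg.norm(delta) >= threshold:
            control_signal[name] = delta.tolist()
    return control_signal

# Example: the head moved slightly to the right and the right wrist was raised.
prev = {"head": (320, 180), "wrist_r": (400, 300)}
curr = {"head": (328, 181), "wrist_r": (402, 260)}
print(recognize_action(prev, curr))  # {'head': [8.0, 1.0], 'wrist_r': [2.0, -40.0]}
```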

It should be noted that the control signal is used to control the virtual character's actions in the live animation according to the user's actions, and that the control signal is generated when the live management platform 2, for example a cloud server, recognizes the detection signal (such as, but not limited to, an audio or image signal) captured by the first terminal device 1. The live management platform 2 then processes the control signal and the virtual character with image computation techniques to generate the corresponding live animation, which can effectively save the computing resources and hardware/software costs of the first terminal device 1, although the invention is not limited to this. In some embodiments, the terminal computing unit 16 of the first terminal device 1 may instead recognize the detection signal to generate the control signal corresponding to the user's action. For example, the motion detection unit 14 captures motion images of the user as the detection signal, and the terminal computing unit 16 calculates the coordinate changes of the action in the motion images, recognizes the action, and outputs the corresponding control signal. Furthermore, the motion detection unit 14 and the terminal computing unit 16 may be integrated into one unit, but the invention is not limited to this.

In this embodiment, the live management platform 2 includes the management communication module 20, the management computing unit 22, and a management database 24, as shown in FIG. 4. In one embodiment, the management database 24 may be implemented by one or more memories. The management database 24 stores a plurality of image editing elements such as face shapes, expressions, body shapes, clothing, or personas, which can be queried and maintained through the setting interface or the live interface. Although not specifically drawn, in some embodiments the live management platform 2 may be implemented by a web server, a management server, and a database providing the same functions. In at least one embodiment, the live management platform 2 may be a public cloud service platform provided by Microsoft, such as Microsoft Azure, but is not limited to this. The management computing unit 22 is a service group composed of a large number of virtual machines formed by virtualizing the server farm within Azure, whose main function is to provide computing resources such as CPU and memory. The management database 24 may be provided by Azure's basic storage (Azure Storage) and relational database (SQL Azure), including but not limited to Blob, Table, and Queue for managing unstructured data, structured data, and message communication respectively, and can support fast data sharing among cloud virtual machines.
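
As a simplified stand-in for the management database 24 (the embodiment names Azure Storage and SQL Azure), the sketch below uses an in-memory SQLite table of editing elements that the setting interface could query; the schema, category names, and asset URIs are illustrative assumptions.

```python
# Sketch of the management database 24 storing image editing elements that the
# setting interface can query. An in-memory SQLite table is used purely for
# illustration in place of the cloud storage described in the embodiment.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE editing_elements ("
    "  category  TEXT NOT NULL,"   # face_shape, expression, body_shape, clothing, persona
    "  name      TEXT NOT NULL,"
    "  asset_uri TEXT NOT NULL)"   # where the platform keeps the actual asset (e.g. a blob)
)
db.executemany(
    "INSERT INTO editing_elements VALUES (?, ?, ?)",
    [
        ("clothing", "comic_costume", "blob://avatars/clothing/comic_costume"),
        ("body_shape", "tall_slim", "blob://avatars/body/tall_slim"),
        ("persona", "3d_avatar", "blob://avatars/persona/3d_avatar"),
    ],
)

def list_elements(category: str) -> list:
    """Query the elements the setting interface offers for one category."""
    rows = db.execute(
        "SELECT name, asset_uri FROM editing_elements WHERE category = ?", (category,)
    )
    return rows.fetchall()

print(list_elements("clothing"))  # [('comic_costume', 'blob://avatars/clothing/comic_costume')]
```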

Finally, in step S5, the first terminal device 1 receives an interactive signal transmitted by the live management platform 2 and displays interactive information on the live interface according to the interactive signal. In one embodiment, a viewer uses the second terminal device 3 to establish a communication connection with the live management platform 2 over the Internet, and then uses the web browser or application built into the second terminal device 3 to load and operate a live interface provided by the live management platform 2, so as to watch the live animation of the virtual character in real time. Meanwhile, during the live broadcast the viewer can use the second terminal device 3 to send requests or responses to the streamer for interaction through media such as text, voice, video, or a combination thereof. For example, the second terminal device 3 transmits an interactive signal to the live management platform 2, where the interactive signal may be, but is not limited to, a text signal, a voice signal, or a video signal, whether digital or analog; the live management platform 2 receives the interactive signal and forwards it to the first terminal device 1. The terminal computing unit 16 of the first terminal device 1 then receives the interactive signal transmitted by the live management platform 2 through the terminal communication module 10, performs the corresponding processing according to the interactive signal, and displays the live interface containing the interactive information through the terminal display unit 12, for example but not limited to: viewer requests, gift notifications, tips and rewards, text chat, real-time voice communication, real-time video of viewers, or a live animation of a viewer-side virtual character generated when the second terminal device 3 executes the above virtual character live broadcast method.
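
A minimal sketch of this relay path follows, modeling the platform and the streamer terminal as asyncio coroutines connected by queues; the message fields ("type", "text") are assumptions made for illustration.

```python
# Sketch of the interaction relay in step S5: the platform receives interaction
# signals from viewer terminals and forwards them to the streamer's terminal,
# which renders them as interactive information on the live interface.
import asyncio

async def platform_relay(viewer_inbox: asyncio.Queue, streamer_outbox: asyncio.Queue):
    """Live management platform: forward each viewer interaction to the streamer."""
    while True:
        interaction = await viewer_inbox.get()      # e.g. {"type": "gift", ...}
        await streamer_outbox.put(interaction)

async def streamer_terminal(streamer_outbox: asyncio.Queue, count: int):
    """First terminal device: render each interaction on the live interface."""
    for _ in range(count):
        interaction = await streamer_outbox.get()
        print(f"[live interface] {interaction['type']}: {interaction['text']}")

async def main():
    viewer_inbox, streamer_outbox = asyncio.Queue(), asyncio.Queue()
    relay = asyncio.create_task(platform_relay(viewer_inbox, streamer_outbox))
    # Two viewers interact: a chat message and a gift notification.
    await viewer_inbox.put({"type": "chat", "text": "Nice dance!"})
    await viewer_inbox.put({"type": "gift", "text": "sent 10 coins"})
    await streamer_terminal(streamer_outbox, count=2)
    relay.cancel()

asyncio.run(main())
```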

The virtual character live broadcast method of some embodiments of the present invention can support interactive game task activities to increase engagement, and can use computer technology together with wearable devices to generate and provide combined real and virtual environments and human-computer interaction, so as to realize Extended Reality (XR) applications. In one embodiment, the first terminal device 1 transmits an interactive task to the live management platform 2, which can then forward it to viewers logged into the live interface for interaction, for example but not limited to quizzes, "going to work" to earn rewards, treasure hunts, and proxy marketing tasks. In short, the first terminal device 1 transmits the interactive task to the live management platform 2 so that a plurality of second terminal devices 3, 3' generate the above-mentioned interactive signals according to the interactive task, whereby the first terminal device 1 can display interactive information as feedback to the user in step S5.
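
The following sketch illustrates one possible fan-out of an interactive task from the platform to logged-in viewers; the task fields and the viewer registry are assumptions for illustration, not part of the claimed method.

```python
# Sketch of interactive-task distribution: the streamer's terminal submits a
# task to the platform, which forwards it to every viewer currently logged in
# to the live interface.
from dataclasses import dataclass, field

@dataclass
class InteractiveTask:
    task_id: str
    kind: str                 # e.g. "quiz", "treasure_hunt", "store_shift"
    description: str
    reward: int = 0

@dataclass
class LivePlatform:
    logged_in_viewers: dict = field(default_factory=dict)  # viewer_id -> delivery callback

    def publish_task(self, task: InteractiveTask) -> int:
        """Forward the task to all logged-in second terminal devices."""
        for deliver in self.logged_in_viewers.values():
            deliver(task)
        return len(self.logged_in_viewers)

platform = LivePlatform()
platform.logged_in_viewers["viewer-1"] = lambda t: print(f"viewer-1 got task: {t.kind}")
platform.logged_in_viewers["viewer-2"] = lambda t: print(f"viewer-2 got task: {t.kind}")
sent = platform.publish_task(InteractiveTask("t-001", "quiz", "Guess the song", reward=5))
print(f"delivered to {sent} viewers")
```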

In other embodiments, the virtual character live broadcast method allows the first terminal device 1 to preset activity tasks and corresponding task locations on an electronic map, and when a viewer carrying the second terminal device 3 is located at a task location, the second terminal device 3 can display the activity task for the viewer to perform. In other words, the second terminal device 3 can transmit its location information to the live management platform 2, and the live management platform 2 determines whether the location information of the second terminal device 3 matches a task location on the electronic map; when the location information matches the task location, the live management platform 2 allows the second terminal device 3 to perform the interactive task. In some embodiments, the first terminal device 1 can pre-record at least one segment of virtual character video as a task video and transmit it to the live management platform 2, so that when the live management platform 2 determines that the location information of the second terminal device 3 matches a task location on the electronic map, it can transmit the task video to a holographic projection device 4 installed at that task location to project the task video there. In short, the holographic projection device 4 receives and projects the task video at the task location, enhancing the visual effect and appeal of the interactive task.
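
One simple way to implement the location check is a radius test around the task location using the haversine distance, as sketched below; the 50 m radius and the coordinate values are illustrative assumptions.

```python
# Sketch of the location check described above: the platform compares a
# viewer's reported position with a task location on the electronic map and,
# if it matches (here: lies within a radius), unlocks the task and could
# forward the pre-recorded task video to the holographic projection device.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def location_matches_task(viewer_pos, task_pos, radius_m=50.0) -> bool:
    return haversine_m(*viewer_pos, *task_pos) <= radius_m

task_location = (25.0478, 121.5319)     # a hypothetical store marked on the map
viewer_location = (25.0480, 121.5321)   # position reported by the second terminal device

if location_matches_task(viewer_location, task_location):
    print("task unlocked; send task video to the holographic projection device")
else:
    print("viewer is not at the task location yet")
```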

For example, the virtual character live broadcast method can be combined with augmented reality technology. The task location may be a physical store, and the first terminal device 1 and second terminal device 3 each generate a virtual character according to the settings of the user or viewer. The user or viewer can go to the physical store and transfer the virtual character he or she owns from a mobile device, such as the first terminal device 1 or the second terminal device 3, to a display device at the store, such as the holographic projection device 4, for display, thereby integrating the virtual and the real in an offline interactive physical store. When the virtual character is displayed on the store's display device, it is no longer displayed on the mobile device. In this way, a scenario in which the virtual character "goes to work" at the physical store can be simulated, which is one example of an interactive task; through the augmented reality display device installed at the store, it is as if an additional virtual clerk were present. For the physical store, this can increase the chance that consumers (including users and viewers) visit the store; for consumers, it also increases the practicality and appeal of the virtual character. The virtual character can thus not only perform tasks on the mobile device but also be transferred to another device to perform tasks and earn corresponding rewards in the process, for example by carrying out interactive tasks such as "going to work to earn money", "treasure hunts", and "proxy marketing". Furthermore, during operation a cloud server such as the live management platform 2 records consumers' purchasing behavior at the physical store, and can further cooperate with various cloud data providers to ultimately obtain complete consumer behavior data for online-to-offline (O2O) marketing models for various commercial uses.

Referring again to FIGS. 3 and 4, a virtual character live broadcast system according to yet another embodiment of the present invention includes a first terminal device 1 and a live management platform 2. The live management platform 2 includes a management communication module 20 and a management computing unit 22. The management computing unit 22 is electrically connected to the management communication module 20. In one embodiment, the management communication module 20 may be a wireless communication interface, such as but not limited to a wireless local area network (Wi-Fi), a cellular network (3G, 4G, 5G), or Zigbee, used to transmit the interactive signal and the live animation corresponding to the control signal; for the signal transmission mechanism and the virtual character and live animation computation mechanisms, refer to the embodiments described above. The management computing unit 22 may be implemented by one or more processing elements such as a microprocessor, microcontroller, digital signal processor, microcomputer, central processing unit, field-programmable gate array, programmable logic device, state machine, logic circuit, analog circuit, digital circuit, and/or any element that operates on signals (analog and/or digital) based on operating instructions, and is used to recognize the detection signal to generate the corresponding control signal and to generate the live animation based on the control signal and the virtual character; for the image computation mechanism and related embodiments, refer to the description above.

The first terminal device 1 includes a terminal communication module 10, a terminal display unit 12, a motion detection unit 14, and a terminal computing unit 16. The terminal computing unit 16 is electrically connected to the terminal communication module 10, the terminal display unit 12, and the motion detection unit 14. The terminal communication module 10 is communicatively connected to the management communication module 20. In one embodiment, the terminal communication module 10 may be a wireless communication interface, such as but not limited to a wireless local area network (Wi-Fi), a cellular network (3G, 4G, 5G), or Zigbee, used to receive the interactive signal and transmit the control signal; for the signal transmission mechanism and related embodiments, refer to the description above.

In one embodiment, the terminal display unit 12 may be a touch screen used to display the setting interface and the live interface; for the technical content, functions, and advantages of the setting interface and the live interface, refer to the description above. The motion detection unit 14 may be an image capture unit, such as but not limited to a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, or a depth camera, used to detect the user's actions; for the motion detection mechanism and related embodiments, refer to the description above. The terminal computing unit 16 may be implemented by one or more processing elements such as a microprocessor, microcontroller, digital signal processor, microcomputer, central processing unit, field-programmable gate array, programmable logic device, state machine, logic circuit, analog circuit, digital circuit, and/or any element that operates on signals (analog and/or digital) based on operating instructions, and is used to generate the detection signal corresponding to the action and to generate the interactive information according to the interactive signal; for the action recognition mechanism and related embodiments, refer to the description above.

In some embodiments, the computer program product for virtual character live broadcasting is composed of a set of instructions; when a computer such as the first terminal device 1 loads and executes the set of instructions, the virtual character live broadcast method of any of the above embodiments can be carried out.

In summary, some embodiments of the present invention provide a virtual character live broadcast method, system, and computer program product, in which the first terminal device connects to the live management platform, the user first customizes a preferred virtual character on the setting interface, and then controls the virtual character's expressions and actions in real time on the live platform as the user's stand-in image, sharing the live broadcast and interacting with the audience in real time through the live platform. This breaks through the traditional limitations of pre-recorded virtual images and fixed viewing locations, making live broadcasts more engaging and preserving the user's personal privacy in certain applications. In addition, computer technology and wearable devices can jointly generate and provide combined real and virtual environments and human-computer interaction; for example, by combining the virtual character with augmented reality technology, a physical store can be set as a task location, and the user or viewer can go to the physical store and transfer his or her virtual character from a mobile device, such as the first or second terminal device, to the store's display device, so that the virtual character can perform tasks not only on the mobile device but also on another device, thereby integrating the virtual and the real in an offline interactive physical store and increasing the practicality and appeal of the virtual character.

The above embodiments are provided only to illustrate the technical ideas and features of the present invention, so that those skilled in the art can understand and implement the present invention; they should not be used to limit the patent scope of the present invention. Any equivalent changes or modifications made in accordance with the spirit disclosed by the present invention shall still fall within the patent scope of the present invention.

S1–S5: steps
1: first terminal device
10: terminal communication module
12: terminal display unit
14: motion detection unit
16: terminal computing unit
2: live management platform
20: management communication module
22: management computing unit
24: management database
3, 3': second terminal device
4: holographic projection device

FIG. 1 is a schematic flowchart of a virtual character live broadcast method according to an embodiment of the present invention. FIG. 2 is a schematic diagram of the architecture of a virtual character live broadcast system according to an embodiment of the present invention. FIG. 3 is a block diagram of a first terminal device according to an embodiment of the present invention. FIG. 4 is a block diagram of a live management platform according to an embodiment of the present invention.

1: first terminal device

2: live management platform

3, 3': second terminal device

4: holographic projection device

Claims (15)

1. A virtual character live broadcast method, applied to a first terminal device, comprising:
displaying a setting interface, wherein the setting interface comprises a virtual character setting area, and setting a virtual character through the virtual character setting area;
starting and displaying a live interface, the live interface containing the virtual character;
detecting an action of a user to generate a detection signal corresponding to the action;
transmitting the detection signal to a live management platform via a network, wherein the live management platform recognizes the detection signal to generate a control signal corresponding to the detection signal and broadcasts a live animation corresponding to the control signal, the live animation being generated based on the control signal and the virtual character; and
receiving an interactive signal transmitted by the live management platform, and displaying interactive information on the live interface according to the interactive signal.

2. The virtual character live broadcast method of claim 1, wherein the step of detecting the action of the user comprises: capturing a motion image of the user; and the step of the live management platform recognizing the detection signal comprises: calculating, according to the motion image, a coordinate change of the action and recognizing the action, so as to output the corresponding control signal.

3. The virtual character live broadcast method of claim 1, wherein the step of detecting the action of the user to generate the detection signal corresponding to the action comprises: detecting and recognizing the action of the user to generate the control signal corresponding to the action.

4. The virtual character live broadcast method of claim 1, wherein the step of the live management platform broadcasting the live animation corresponding to the control signal comprises: the live management platform modifying an animation model associated with the virtual character according to the control signal, so as to generate the live animation.

5. The virtual character live broadcast method of claim 1, further comprising: transmitting an interactive task to the live management platform, so that a plurality of second terminal devices generate the interactive signal according to the interactive task.

6. The virtual character live broadcast method of claim 5, further comprising: at least one of the second terminal devices transmitting location information to the live management platform, and the live management platform determining whether the location information matches a task location in an electronic map, wherein when the location information matches the task location, the live management platform allows the second terminal device to perform an interactive task.

7. The virtual character live broadcast method of claim 6, further comprising: a holographic projection device receiving and projecting a task image at the task location.

8. A computer program product comprising a set of instructions which, when loaded and executed by a computer, carry out the virtual character live broadcast method of any one of claims 1 to 7.

9. A virtual character live broadcast system, comprising:
a live management platform, comprising:
a management communication module for transmitting an interactive signal and a live animation corresponding to a control signal, wherein the live animation is generated based on the control signal and a virtual character; and
a management computing unit, electrically connected to the management communication module, for recognizing a detection signal to generate the control signal corresponding to the detection signal; and
a first terminal device, comprising:
a terminal communication module, communicatively connected to the management communication module, for receiving the interactive signal and transmitting the detection signal;
a terminal display unit for displaying a setting interface and a live interface, wherein the setting interface comprises a virtual character setting area, the live interface contains the virtual character and interactive information, and the virtual character is set through the virtual character setting area;
a motion detection unit for detecting an action of a user; and
a terminal computing unit, electrically connected to the terminal communication module, the terminal display unit, and the motion detection unit, for generating the detection signal corresponding to the action and generating the interactive information according to the interactive signal.

10. The virtual character live broadcast system of claim 9, wherein the motion detection unit of the first terminal device captures a motion image of the user, and the management computing unit of the live management platform calculates, according to the motion image, a coordinate change of the action and recognizes the action, so as to output the corresponding control signal.

11. The virtual character live broadcast system of claim 9, wherein the terminal computing unit of the first terminal device recognizes the detection signal to generate the control signal corresponding to the detection signal.

12. The virtual character live broadcast system of claim 9, wherein the management communication module of the live management platform receives the detection signal transmitted by the terminal communication module of the first terminal device, and the management computing unit modifies an animation model associated with the virtual character according to the control signal generated based on the detection signal, so as to generate the live animation.

13. The virtual character live broadcast system of claim 9, wherein the first terminal device transmits an interactive task to the live management platform, so that a plurality of second terminal devices communicatively connected to the live management platform generate the interactive signal according to the interactive task.

14. The virtual character live broadcast system of claim 13, wherein at least one of the second terminal devices transmits location information to the live management platform, and the live management platform determines whether the location information matches a task location in an electronic map, wherein when the location information matches the task location, the live management platform allows the second terminal device to perform the interactive task.

15. The virtual character live broadcast system of claim 14, further comprising: a holographic projection device, communicatively connected to the management communication module, for receiving and projecting a task image at the task location.
TW108145415A 2019-12-11 2019-12-11 Virtual character live broadcast method, system thereof and computer program product TW202123128A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108145415A TW202123128A (en) 2019-12-11 2019-12-11 Virtual character live broadcast method, system thereof and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW108145415A TW202123128A (en) 2019-12-11 2019-12-11 Virtual character live broadcast method, system thereof and computer program product

Publications (1)

Publication Number Publication Date
TW202123128A true TW202123128A (en) 2021-06-16

Family

ID=77516860

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108145415A TW202123128A (en) 2019-12-11 2019-12-11 Virtual character live broadcast method, system thereof and computer program product

Country Status (1)

Country Link
TW (1) TW202123128A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947959A (en) * 2021-10-23 2022-01-18 首都医科大学附属北京天坛医院 Remote teaching system and live broadcast problem screening system based on MR technology
TWI776643B (en) * 2021-08-19 2022-09-01 崑山科技大學 Image display device
WO2023071917A1 (en) * 2021-10-26 2023-05-04 阿里巴巴达摩院(杭州)科技有限公司 Virtual object interaction method and device, and storage medium and computer program product


Similar Documents

Publication Publication Date Title
TWI708152B (en) Image processing method, device, and storage medium
US10609334B2 (en) Group video communication method and network device
US11615592B2 (en) Side-by-side character animation from realtime 3D body motion capture
US20180232929A1 (en) Method for sharing emotions through the creation of three-dimensional avatars and their interaction
US11450051B2 (en) Personalized avatar real-time motion capture
US11782272B2 (en) Virtual reality interaction method, device and system
US20230377189A1 (en) Mirror-based augmented reality experience
TW202123128A (en) Virtual character live broadcast method, system thereof and computer program product
US11790614B2 (en) Inferring intent from pose and speech input
US11900506B2 (en) Controlling interactive fashion based on facial expressions
US20230130535A1 (en) User Representations in Artificial Reality
WO2022252866A1 (en) Interaction processing method and apparatus, terminal and medium
US20220270302A1 (en) Content distribution system, content distribution method, and content distribution program
WO2022267729A1 (en) Virtual scene-based interaction method and apparatus, device, medium, and program product
US20220398816A1 (en) Systems And Methods For Providing Real-Time Composite Video From Multiple Source Devices Featuring Augmented Reality Elements
TWI803224B (en) Contact person message display method, device, electronic apparatus, computer readable storage medium, and computer program product
TWM594767U (en) Virtual character live streaming system
US11880947B2 (en) Real-time upper-body garment exchange
Wen et al. A survey of facial capture for virtual reality
WO2023211688A1 (en) Shared augmented reality experience in video chat
JP7291106B2 (en) Content delivery system, content delivery method, and content delivery program
US20220141551A1 (en) Moving image distribution system, moving image distribution method, and moving image distribution program
US20230362333A1 (en) Data processing method and apparatus, device, and readable storage medium
US20240007585A1 (en) Background replacement using neural radiance field
WO2023246207A1 (en) Interface display method and apparatus, and device and medium