TW201108151A - Instant communication control system and its control method - Google Patents

Instant communication control system and its control method

Info

Publication number
TW201108151A
TW201108151A TW98127642A
Authority
TW
Taiwan
Prior art keywords
expression
image
instant messaging
original
feature quantity
Prior art date
Application number
TW98127642A
Other languages
Chinese (zh)
Inventor
Chih-Yu Hsu
Original Assignee
Univ Chaoyang Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Chaoyang Technology filed Critical Univ Chaoyang Technology
Priority to TW98127642A priority Critical patent/TW201108151A/en
Publication of TW201108151A publication Critical patent/TW201108151A/en

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an instant communication control system and its control method. An image capture unit captures a user's true facial expression, and a virtual image generated from it changes its facial expression in real time as the user's true expression changes. Users of the instant communication system can therefore convey genuine emotions through virtual images without exposing their real faces, which improves their privacy.

Description

201108151 VI. Description of the Invention: [Technical Field of the Invention] The present invention relates to a control system and a control method for instant messaging, and more particularly to a control system and control method that, while the user is using an instant messaging system, convert the user's original facial expression into a simulated expression animation and display it in real time in the window of the instant messaging system.

[Prior Art] In recent years, advances in technology have made the computer one of the principal tools people use for everyday emotional communication, and the most common way of doing so is through instant messaging services, interacting over the Internet with friends, family, and business partners around the world.

When a user wants to use instant messaging software such as Windows Live Messenger or Yahoo! Messenger to communicate with remote users, the user must first register as a member of that software and, after launching it, log in to an account before instant conversations with other registered members are possible. FIG. 1 is a schematic diagram of the architecture for using instant messaging software: when member A wants to talk to member B over the network, each must first run the instant messaging software and both must be logged in to the instant messaging server before the two online users can converse.

FIG. 2 is a flowchart of a method of network instant messaging. First, the user connects to the instant messaging software's web page, downloads and installs the software (step S210), and then executes it (step S220). The software determines whether the user has already registered (step S230); if not, it asks the user to register online immediately (step S240). After registration, the user enters a member account and password to log in (step S250); a user who registered previously logs in directly by entering the member account (step S250). After logging in, the user checks whether the members he or she wishes to talk to are online (step S260). If the intended member is not online, the session ends (step S280); if the member is online, a dialog window is opened and the conversation begins (step S270). Finally, when the conversation is finished, the dialog is closed (step S280).

Besides the calling function described above, such instant messaging software can transmit text, images, video clips, and voice messages in real time. As noted, a user of instant messaging software must first log in to an account on a particular server; by logging in to that server, the user can exchange messages and files with other users logged in to the same server. This combination of instant contact and resource sharing has provided an extremely convenient communication channel for the large population of network users, and software offering it is correspondingly popular.

However, although the required login to a particular server provides a security checkpoint, existing instant messaging software, whether for one-to-one personal conversations or one-to-many group conversations, suffers from the same shortcomings of security and privacy. A known approach to making instant messaging conversations more entertaining is the network doll system disclosed in Republic of China (Taiwan) published patent No. 200824775, shown in FIG. 3.

This network doll system is used over a network and includes an electronic device 303 and a doll 304. The electronic device 303 is electrically connected to the network to receive remote data and generates a doll control signal according to that data. The doll 304 has an input/output circuit, a control circuit, and at least one user prompting module; the input/output circuit is electrically connected to the electronic device to receive the doll control signal, and the control circuit is electrically connected to the input/output circuit to generate at least one action prompt signal that drives the corresponding prompting module. The doll 304 includes a light-emitting element 331, a sound element 341, a swinging element 351, and an image capturing element 361 for performing the corresponding actions. In this prior art, the electronic device 303 generates the doll control signal from the remote data to control the actions of the doll 304, breaking the constraints of flat, text-only instant messaging and giving the two users connected over the network a further degree of visual interaction.

Although this approach relieves the monotony of plain text and adds interactive fun and emotional variety to instant messaging through the doll's expressions, it is still bound by the mechanical limits of the doll and its control device: only a limited number of expressions can be shown, and the user's facial expression at the moment cannot be reproduced in real time.

Another prior art, the method of embedding patterns in a video frame disclosed in Republic of China (Taiwan) granted patent No. I297863, adds entertainment value to a video call made over a man-machine interface. During a call, the video user can press a hot key to load an expression-pattern animation module, combine the pattern with a selected composition region of the video frame into a composite frame, and transmit it through the communication device so that it appears in the video window at the receiving end, thereby reinforcing the expressed emotion. FIG. 4 is a flowchart of the sending-end operation of this method; because the composite frame is produced at the sending end before transmission, the receiving end can display it even if it does not itself support the method. First, a control signal entered by the user is received (step S400); the control signal may be a preset hot-key signal pressed when the user wants to add an expression pattern. A video frame is captured from the video stream (step S410), and the animation module corresponding to the control signal is loaded from a database (step S420). The method then determines whether the animation module needs to reference the video frame (step S430). If it does (for example, a bulging-vein pattern must be placed on the forehead, so the forehead position must be located), a composition region is selected in the frame (step S440), its coordinates are recorded (step S450), the frame and the module's pattern are combined into a composite frame (step S460), and the composite frame is output to the receiving end (step S480). If it does not (for example, a background effect needs no particular position), the frame and the pattern are combined directly (step S470) and the composite frame is likewise output to the receiving end (step S480).

This prior art adds a pattern to a region of the video frame, combines them into a composite frame, and transmits it to the receiving end, which adds interactive fun and emotional variety to instant messaging. Nevertheless, in view of the problems that remain in the prior art, and based on years of research, development, and practical experience, the inventor proposes an implementation and rationale that address these shortcomings.
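To make the prior-art sending-end flow of FIG. 4 concrete, the following is a brief illustrative sketch, not part of either patent, of how a captured video frame might be composited with an expression pattern before transmission. OpenCV and NumPy are assumed, the pattern is assumed to carry an alpha channel, and the helper find_forehead_region is a hypothetical placeholder for real face localization.

```python
# Hypothetical sketch of the prior-art sending-end flow (steps S400-S480):
# receive a hot-key control signal, capture a frame, load the matching
# pattern, locate a composition region if needed, blend, and transmit.
import cv2
import numpy as np

def find_forehead_region(frame):
    """Placeholder: locate the region the pattern should be anchored to.
    A real implementation would use face detection; here we return a
    fixed box near the top-centre of the frame."""
    h, w = frame.shape[:2]
    return (w // 3, h // 8, w // 3, h // 6)  # x, y, width, height

def compose_frame(frame, pattern, needs_reference=True):
    """Blend `pattern` (BGRA) into `frame` (BGR) and return the composite."""
    if needs_reference:                       # S430/S440: find anchor region
        x, y, rw, rh = find_forehead_region(frame)
    else:                                     # S470: background effect, fixed spot
        x, y = 0, 0
        rw = min(pattern.shape[1], frame.shape[1])
        rh = min(pattern.shape[0], frame.shape[0])
    patch = cv2.resize(pattern, (rw, rh))
    alpha = patch[:, :, 3:4].astype(np.float32) / 255.0
    roi = frame[y:y + rh, x:x + rw].astype(np.float32)
    blended = alpha * patch[:, :, :3] + (1.0 - alpha) * roi
    frame[y:y + rh, x:x + rw] = blended.astype(np.uint8)
    return frame                              # S460/S480: composite frame to send
```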
[Summary of the Invention] In view of the problems of the prior art described above, an object of the present invention is to provide a control system and a control method for instant messaging in which the user's original expressions, head movements (such as smiling and similar expressions), and changes in posture are converted into a virtual image, so that the security of instant messaging and the privacy of the user are both preserved and the other party never sees the user's true face.

To achieve the above object, a control system for instant messaging according to the present invention comprises an image capturing unit, an expression feature calculation unit, an image processing unit, and a communication unit. The image capturing unit captures an original expression image and a first expression image; the expression feature calculation unit calculates an expression feature quantity of the first expression image according to the original expression image; the image processing unit generates a virtual image according to the expression feature quantity; and the communication unit executes an instant messaging system under a communication protocol and displays the virtual image through that instant messaging system.

According to another object of the present invention, the control system further includes a setting unit which, according to the user's preference, sets the image displayed in the instant messaging system to be either the user's original expression or the processed simulated image. An input unit may also be provided, comprising a text input module for receiving expression-feature text and a voice input module for receiving expression-feature speech. In practice, the input unit of the near-end host may be a keyboard, a microphone, or any other device capable of inputting images, text, or speech.

According to a further object of the present invention, the control system may send frame information according to the expression-feature text, or send voice information together with frame information according to the expression-feature speech. The simulated image displayed through the instant messaging system may be a 2D or a 3D animation.

The present invention further provides a control method for instant messaging comprising the following steps. First, an original expression is input to the near-end host. Next, the expression feature quantity of the input original expression is calculated. Frame information based on the expression feature quantity is then generated and is received by a remote host that is connected to the near-end host under a communication protocol and is running the instant messaging service. Finally, the remote host displays in real time a simulated expression animation corresponding to the original expression, according to the received frame information. The instant messaging service may be any common instant messaging system such as QQ, Windows Live Messenger, Yahoo! Messenger, or Skype.
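As a rough illustration of the setting unit described above, and not part of the patent text, the near-end host could branch on the user's privacy preference and send either the raw frame or only the computed feature quantities. All names and types below are assumptions introduced only for illustration.

```python
# Hedged sketch: decide, per the user's preference setting, whether to send
# the original frame or only the expression feature quantities that drive
# the virtual image on the remote side.
from dataclasses import dataclass
from typing import List

@dataclass
class OutgoingPacket:
    kind: str            # "original_frame" or "feature_quantities"
    payload: object      # raw image bytes or a list of feature values

def build_packet(frame_bytes: bytes, features: List[float], show_real_face: bool) -> OutgoingPacket:
    if show_real_face:
        # User chose to expose the true expression image.
        return OutgoingPacket(kind="original_frame", payload=frame_bytes)
    # Default: privacy-preserving mode, only feature quantities leave the host.
    return OutgoingPacket(kind="feature_quantities", payload=features)
```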
In addition, in the method for displaying the simulated expression animation on the video screen, the near-end host determines, according to the user's preference set in the setting unit, whether to send frame information based on the expression feature quantity or the original expression information, and the remote host, according to the received frame information, outputs the simulated expression animation corresponding to the original expression. In this method, the expression feature quantity of the first expression image is calculated from the original expression image, and the virtual image is generated according to that feature quantity. The original expression image may be captured by dividing it into a plurality of pixel blocks and, after comparing the brightness of adjacent pixels, obtaining an average pixel value and a pixel brightness for each block; the expression feature quantity is then calculated against the expression feature quantities stored in a database. The output unit of the remote host may present the result as a 2D or 3D animation. The expression feature quantity may be one or a combination of at least one eyebrow feature quantity, at least one eye feature quantity, at least one nose feature quantity, at least one lip feature quantity, and at least one cheek feature quantity of the original expression image, and the original expression applicable to the present invention includes one of happiness, anger, sadness, fear, joy, disgust, surprise, anxiety, nodding, head shaking, pouting, blinking, and a neutral expression.

In summary, a control system and control method for instant messaging according to the present invention may have one or more of the following advantages:

(1) The simulated expression animation displayed by the invention can change with the user's expression at any time, without the limits on the number of expressions and on usage found in traditional doll systems or embedded expression patterns.

(2) The invention transmits and receives in real time over the communication protocol the user already uses, so the user's current emotion, emotional state, and facial expression can be conveyed completely and immediately.
(3) The invention can express the user's feelings, emotions, and expressions as 2D or 3D animation, which is highly entertaining and adds interest to the daily life of users whose lifestyle depends on the network.

For a better understanding of the technical features and effects of the present invention, preferred embodiments are described in detail below.

[Embodiment] A control system for instant messaging and its control method according to preferred embodiments of the present invention are described below with reference to the related drawings; for ease of understanding, identical elements in the embodiments are labelled with the same reference numerals.

Please refer to FIG. 5, a functional block diagram of a control system for instant messaging according to the present invention. The control system comprises a near-end host 500 and a remote host 550. The near-end host 500 includes a near-end image capturing unit 510, a near-end expression feature calculation unit 520, a near-end image processing unit 530, and a near-end communication unit 540. In the near-end host 500, the image capturing unit 510 captures an original expression image and a first expression image, the expression feature calculation unit 520 calculates an expression feature quantity of the first expression image according to the original expression image, and the image processing unit 530 generates a near-end virtual image according to the expression feature quantity. The near-end communication unit 540 is connected to the near-end image capturing unit 510 and the expression feature calculation unit 520, runs the instant messaging system 900 under the communication protocol the user uses, and displays the near-end virtual image 531 through the instant messaging system 900.

The remote host 550 likewise includes a remote image capturing unit 560, a remote expression feature calculation unit 570, a remote image processing unit 580, and a remote communication unit 590. The remote communication unit 590 connects to the near-end communication unit 540 through the instant messaging system 900 under the user's communication protocol, runs the instant messaging service, and shows within the instant messaging system 900 the near-end virtual image displayed by the near-end host 500. The remote image capturing unit 560 captures the remote user's expression image, which is turned into a remote virtual image by the remote expression feature calculation unit 570 and the remote image processing unit 580; the corresponding near-end and remote virtual images are then output in real time to the user of the near-end host 500 according to what is displayed by the instant messaging system 900 connected through the remote communication unit 590.

The near-end host 500 and the remote host 550 further include a setting unit (not shown) that, according to the user's preference, sets the display image shown in the instant messaging system 900 to be either the original expression image or the virtual image.
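The following is a minimal, hypothetical sketch of how the near-end units 510-540 of FIG. 5 could be wired together in software. The class and method names are illustrative assumptions only and are not taken from the patent; each component is treated as an injected object.

```python
# Hedged sketch of the near-end pipeline of FIG. 5:
# image capture (510) -> expression feature calculation (520)
# -> virtual image generation (530) -> display through the IM system (540).
class NearEndHost:
    def __init__(self, capture_unit, feature_unit, image_unit, comm_unit):
        self.capture_unit = capture_unit    # 510: camera wrapper
        self.feature_unit = feature_unit    # 520: expression feature calculator
        self.image_unit = image_unit        # 530: virtual image / avatar renderer
        self.comm_unit = comm_unit          # 540: instant messaging connection (900)

    def tick(self, original_image):
        """Process one captured frame against the stored original expression image."""
        frame = self.capture_unit.grab()                  # first expression image
        features = self.feature_unit.compute(original_image, frame)
        avatar = self.image_unit.render(features)         # near-end virtual image (531)
        self.comm_unit.show(avatar)                        # displayed in the IM window
        return avatar
```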
Each of the near-end host 500 and the remote host 550 may also be provided with an input unit (not shown). In addition to the image capturing unit, the input unit may include a text input module for receiving expression-feature text and a voice input module for receiving expression-feature speech; in practice the input unit may be a keyboard, a microphone, or any other device capable of inputting images, text, or speech, without limitation. The communication units of the near-end host 500 and the remote host 550 may further send frame information according to the expression-feature text, or send voice information together with frame information according to the expression-feature speech. The near-end and remote virtual images output by the near-end image processing unit 530 and the remote image processing unit 580 may be 2D or 3D simulated expression animations.
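As a hypothetical illustration of the text input module described above, a small lookup table could map expression-feature keywords typed by the user to preset feature quantities that select or drive a simulated animation. The keyword set and numeric values are assumptions introduced only for illustration and are not specified by the patent.

```python
# Hedged sketch: map expression-feature text to preset feature quantities
# (eyebrow, eye, nose, lip, cheek) that drive a canned simulated animation.
PRESET_FEATURES = {
    "happy":    {"eyebrow": 0.2,  "eye": 0.6, "nose": 0.0, "lip": 0.9,  "cheek": 0.7},
    "angry":    {"eyebrow": -0.8, "eye": 0.3, "nose": 0.2, "lip": -0.5, "cheek": 0.1},
    "surprise": {"eyebrow": 0.9,  "eye": 1.0, "nose": 0.0, "lip": 0.6,  "cheek": 0.2},
}

def features_from_text(text: str):
    """Return preset feature quantities for a typed keyword, or None if unknown."""
    return PRESET_FEATURES.get(text.strip().lower())
```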

An expression, as the term is used here, includes facial expressions, action (body) expressions, and verbal expressions. Action expressions refer to changes in a person's posture: emotional states show in movement, mainly of the hands and feet, and gestures such as dancing for joy or clapping in admiration are specific manifestations of emotion. The pitch, rhythm, speed, and intensity of the voice also accompany particular emotions; this is the so-called verbal expression. For example, a low pitch, slow rhythm, and small pitch variation mark one emotional state, a higher pitch, faster speed, and larger variation mark another, and anger sharpens the voice and is accompanied by trembling; all of these are useful cues. Expressive movements of the face are facial expressions. Among facial features, the eyes are the window of the mind, and their changes in form often directly reveal emotion: the eye muscles contract when weeping bitterly, and the brows knit and the eyes widen in anger. The mouth likewise follows emotional change: the corners droop in sorrow and lift in happiness. The changes in the shape of the eyes and mouth express a person's emotional changes most clearly. In the direct expression of emotion, the main roles are played by facial expressions and verbal expressions (facial expressions are intuitive, verbal expressions precise), while action expressions are only an auxiliary means of expressing emotion.

In the present invention, the expression feature quantity may be a facial feature such as one or a combination of at least one eyebrow feature quantity, at least one eye feature quantity, at least one nose feature quantity, at least one lip feature quantity, and at least one cheek feature quantity of the original expression image, but it is not limited to these; it may also be a verbal expression or an action expression. The original expression in the present invention includes one of happiness, anger, sadness, fear, joy, disgust, surprise, anxiety, nodding, head shaking, pouting, blinking, and a neutral expression, but is not limited to these.

Please refer to FIG. 6, a flowchart of a control method for instant messaging. It includes the following steps:
Step S610: capture an original expression image and a first expression image;
Step S620: calculate an expression feature quantity of the first expression image according to the original expression image;
Step S630: generate a virtual image according to the expression feature quantity; and
Step S640: display the virtual image through an instant messaging system.
This control method is used between a remote host and a near-end host, which are connected under a communication protocol to carry out the instant messaging service. Instant messaging systems applicable to this embodiment include QQ, Windows Live Messenger, Yahoo! Messenger, and Skype, but are not limited to these.

In the method for displaying the simulated expression animation on the video screen of the instant messaging system, the near-end host further determines, according to the value the user has set in the setting unit, whether frame information based on the expression feature quantity or the original expression information is sent to the remote host. The remote host, according to the received frame information or original expression information, outputs either the simulated expression animation corresponding to the original expression or the original expression itself. Inputting the original expression to the near-end host may include, besides the expression itself, inputting expression-feature text or expression-feature speech; frame information may be sent according to the expression-feature text, or voice information and frame information according to the expression-feature speech. The simulated expression animation output by the output unit of the remote host may be a 2D or a 3D animation.

FIG. 7 is a flowchart of the method of generating the expression feature quantity of the present invention, which includes the following steps:
Step S710: store a plurality of original expression images;
Step S720: calculate a plurality of original expression feature quantities of the original expression images;
Step S730: capture a first expression image; and
Step S740: calculate an expression feature quantity of the first expression image according to the original expression feature quantities.
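A minimal sketch of the FIG. 7 flow (steps S710-S740) follows. It assumes grayscale NumPy images and uses a plain block-average brightness measure as a stand-in for the feature calculation, which the patent leaves open; the function names are illustrative assumptions.

```python
# Hedged sketch of steps S710-S740: store original expression images, compute
# their feature quantities, capture a first expression image, and compute its
# feature quantity relative to the stored originals.
import numpy as np

def block_features(image: np.ndarray, blocks: int = 8) -> np.ndarray:
    """Average brightness of each block of a grayscale image (one possible feature)."""
    h, w = image.shape
    bh, bw = h // blocks, w // blocks
    return np.array([image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].mean()
                     for r in range(blocks) for c in range(blocks)])

def expression_feature_quantity(originals, first_image):
    """S710-S740: compare the first expression image against stored originals."""
    original_features = [block_features(img) for img in originals]   # S710-S720
    first_features = block_features(first_image)                     # S730
    # S740: feature quantity expressed here as differences from each original.
    return np.stack([first_features - f for f in original_features])
```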
In this embodiment, each user's expression images are processed by dividing the original expression image into a plurality of pixel blocks and comparing the brightness of pairs of adjacent pixels within each block to obtain an average pixel value and a pixel brightness for each block; an optical flow algorithm may be used for this purpose, although the invention is not limited to it. The expression feature quantity used in step S740 may be computed from any two sets of expression feature values by direct mapping or by singular value decomposition (SVD). When SVD is used, the extracted feature values are placed in a common matrix and decomposed, and the user may set the dimension of the resulting expression feature vectors according to preference: a larger dimension increases the accuracy with which the original expression can be transformed, but also lengthens the computation. The expression feature quantities stored in the database are then used to compute the corresponding expression animation, or a preset expression animation is selected according to expression-feature speech or text; in the text case, the host outputs a simulated expression animation matching the user's expression features, and this animation may be 2D or 3D. This embodiment uses eye, nose, and lip feature quantities, and may further use eyebrow and cheek feature quantities, or any combination of the above.

Common expression recognition techniques based on Principal Components Analysis (PCA) and Linear Discriminant Analysis (LDA) are simple, but they perform poorly on facial expressions with large variation. Methods that combine neural networks, hidden Markov models, wavelet transforms, and support vector machines can reach good recognition rates, but their complex computations consume too much system performance and execution time to operate in real time. Moreover, most research on conventional expression recognition assumes a fixed face size or a plain background, or assumes that the face has already been located or that facial features are extracted manually; complete pipelines that start from the incoming image, detect the face, and then extract features to recognize the expression are rare.

To make the expression recognition system usable with instant messaging software, the expression recognition technique preferred for the present invention performs face detection, feature extraction, and expression recognition automatically. Face detection may, for example, use a common evolutionary algorithm that exploits skin colour and an approximately elliptical face shape to estimate the current position of the face, although the invention is not limited to this.
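To illustrate the singular value decomposition option mentioned above, the sketch below stacks raw feature values into one matrix, decomposes it, and keeps a user-chosen number of dimensions; the trade-off is the one described in the text (more dimensions, higher accuracy, longer computation). The function names and sample sizes are assumptions for illustration only.

```python
# Hedged sketch: build compact expression feature vectors with SVD, keeping
# only `dims` singular directions chosen by the user.
import numpy as np

def svd_expression_vectors(feature_matrix: np.ndarray, dims: int):
    """feature_matrix: one row of raw feature values per expression sample.
    Returns per-sample expression feature vectors in a `dims`-dimensional space
    together with the retained basis."""
    u, s, vt = np.linalg.svd(feature_matrix, full_matrices=False)
    dims = min(dims, s.size)          # cannot keep more dimensions than exist
    # Project every sample onto the leading right-singular directions.
    return feature_matrix @ vt[:dims].T, vt[:dims]

# Example: 13 expression samples, 64 raw block features each, reduced to 5 dims.
samples = np.random.rand(13, 64)
vectors, basis = svd_expression_vectors(samples, dims=5)
```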
Next, according to the result of the expression feature quantity calculation, at least one eyebrow feature quantity, at least one eye feature quantity, at least one nose feature quantity, at least one lip feature quantity, and at least one cheek feature quantity of the original expression, or a combination of them, can be obtained; the invention is not limited to these, and, for example, at least one forehead-wrinkle position may also be captured as a feature. The original expression is then recognized from these expression features and the corresponding frame information is sent. In the present invention, the recognition may use a back-propagation neural network or a TSK-type model, and the original expression to be recognized is one of happiness, anger, sadness, fear, joy, disgust, surprise, anxiety, nodding, head shaking, pouting, blinking, and a neutral expression, although the invention is not limited to these. Once the original expression has been recognized, a simulated expression can be output as a 2D or 3D animation that reflects the user's current expression in real time.

The above description is illustrative only and not restrictive. Any equivalent modification or change that does not depart from the spirit and scope of the invention shall be included in the scope of the appended claims.

[Brief Description of the Drawings] FIG. 1 is a schematic diagram of the architecture of instant messaging software; FIG. 2 is a flowchart of a method of network instant calling; FIG. 3 is a schematic diagram of a prior-art doll system; FIG. 4 is a flowchart of a prior-art method; FIG. 5 is a flow diagram of the method of the present invention for displaying a simulated expression animation on a video screen; FIG. 6 is a flow diagram of the method of the present invention for generating the expression feature quantity; and FIG. 7 is a block diagram of the system of the present invention for displaying a simulated expression animation on a video screen.

[Description of Main Reference Numerals] 110: instant messaging server; 120: registered member A; 130: registered member B; S210-S280: steps of the prior-art method; 303: electronic device; 304: doll; 331: light-emitting element; 341: sound element; 351: swinging element; 361: image capturing element; S400-S480: steps of the prior-art method; 500: near-end host; 510: near-end image capturing unit; 520: near-end expression feature calculation unit; 530: near-end image processing unit; 540: near-end communication unit; 550: remote host; 560: remote image capturing unit; 570: remote expression feature calculation unit; 580: remote image processing unit; 590: remote communication unit; S610-S640: method steps; S710-S740: method steps; and 900: instant messaging system.

Claims (16)

201108151 VII. Claims:
1. A control system for instant messaging, comprising: an image capturing unit that captures an original expression image and a first expression image; an expression feature calculation unit that calculates an expression feature quantity of the first expression image according to the original expression image; an image processing unit that generates a virtual image according to the expression feature quantity; and a communication unit that executes an instant messaging system under a communication protocol and displays the virtual image through the instant messaging system.
2. The control system for instant messaging of claim 1, further comprising a setting unit for setting the communication unit to display one of the first expression image and the virtual image.
3. The control system for instant messaging of claim 1, further comprising an input unit.
4. The control system for instant messaging of claim 3, wherein the input unit further comprises a text input module that receives an input of expression-feature text.
5. The control system for instant messaging of claim 3, wherein the input unit further comprises a voice input module that receives an input of expression-feature speech.
6. The control system for instant messaging of claim 3, wherein the input unit is one of a keyboard and a microphone, or a combination thereof.
7. The control system for instant messaging of claim 1, wherein the image capturing unit is one of a video camera and a network camera.
8. The control system for instant messaging of claim 1, wherein the virtual image is a two-dimensional (2D) animation or a three-dimensional (3D) animation.
9. The control system for instant messaging of claim 1, wherein the expression feature quantity is one of, or a combination of, at least one eyebrow feature quantity, at least one eye feature quantity, at least one nose feature quantity, at least one lip feature quantity, and at least one cheek feature quantity.
10. The control system for instant messaging of claim 1, wherein the original expression is one of happiness, anger, sadness, fear, joy, disgust, surprise, anxiety, nodding, head shaking, pouting, blinking, and a neutral expression.
11. A control method for instant messaging, comprising the following steps: capturing an original expression image and a first expression image; calculating an expression feature quantity of the first expression image according to the original expression image; generating a virtual image according to the expression feature quantity; and displaying the virtual image through an instant messaging system.
12. The control method for instant messaging of claim 11, wherein in the step of displaying the virtual image through the instant messaging system, the image displayed by the instant messaging system is, according to an input of a setting unit, the virtual image or the first expression image.
13. The control method for instant messaging of claim 11, wherein the expression feature quantity is generated by steps comprising: storing a plurality of original expression images; calculating a plurality of original expression feature quantities of the original expression images; capturing a first expression image; and calculating the expression feature quantity of the first expression image according to the original expression feature quantities.
14. The control method for instant messaging of claim 11, wherein capturing the original expression image comprises dividing the original expression image into a plurality of pixel blocks and, after comparing the brightness of pairs of adjacent pixels within each pixel block, obtaining an average pixel value and a pixel brightness for each pixel block.
15. The control method for instant messaging of claim 11, wherein the virtual image is a two-dimensional (2D) animation or a three-dimensional (3D) animation.
16. The control method for instant messaging of claim 11, wherein the expression feature quantity is one of, or a combination of, at least one eyebrow feature quantity, at least one eye feature quantity, at least one nose feature quantity, at least one lip feature quantity, and at least one cheek feature quantity.
TW98127642A 2009-08-17 2009-08-17 Instant communication control system and its control method TW201108151A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98127642A TW201108151A (en) 2009-08-17 2009-08-17 Instant communication control system and its control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98127642A TW201108151A (en) 2009-08-17 2009-08-17 Instant communication control system and its control method

Publications (1)

Publication Number Publication Date
TW201108151A true TW201108151A (en) 2011-03-01

Family

ID=44835547

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98127642A TW201108151A (en) 2009-08-17 2009-08-17 Instant communication control system and its control method

Country Status (1)

Country Link
TW (1) TW201108151A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631370A (en) * 2012-08-28 2014-03-12 腾讯科技(深圳)有限公司 Method and device for controlling virtual image
CN103631370B (en) * 2012-08-28 2019-01-25 腾讯科技(深圳)有限公司 A kind of method and device controlling virtual image
CN110597384A (en) * 2019-08-23 2019-12-20 苏州佳世达光电有限公司 Information communication method and system

Similar Documents

Publication Publication Date Title
CN111833418B (en) Animation interaction method, device, equipment and storage medium
US11736756B2 (en) Producing realistic body movement using body images
WO2020204000A1 (en) Communication assistance system, communication assistance method, communication assistance program, and image control program
CN107894833B (en) Multi-modal interaction processing method and system based on virtual human
US20190320144A1 (en) Communication using interactive avatars
CN112379812B (en) Simulation 3D digital human interaction method and device, electronic equipment and storage medium
CN1326400C (en) Virtual television telephone device
Le et al. Live speech driven head-and-eye motion generators
TWI486904B (en) Method for rhythm visualization, system, and computer-readable memory
US20140351720A1 (en) Method, user terminal and server for information exchange in communications
CN108874114B (en) Method and device for realizing emotion expression of virtual object, computer equipment and storage medium
CN110418095B (en) Virtual scene processing method and device, electronic equipment and storage medium
KR20130022434A (en) Apparatus and method for servicing emotional contents on telecommunication devices, apparatus and method for recognizing emotion thereof, apparatus and method for generating and matching the emotional contents using the same
CN111583355B (en) Face image generation method and device, electronic equipment and readable storage medium
KR102148151B1 (en) Intelligent chat based on digital communication network
WO2022252866A1 (en) Interaction processing method and apparatus, terminal and medium
WO2020215590A1 (en) Intelligent shooting device and biometric recognition-based scene generation method thereof
JP7278307B2 (en) Computer program, server device, terminal device and display method
CN117036583A (en) Video generation method, device, storage medium and computer equipment
CN114615455A (en) Teleconference processing method, teleconference processing device, teleconference system, and storage medium
CN112669422A (en) Simulated 3D digital human generation method and device, electronic equipment and storage medium
CN112669846A (en) Interactive system, method, device, electronic equipment and storage medium
TW201108151A (en) Instant communication control system and its control method
CN111461005A (en) Gesture recognition method and device, computer equipment and storage medium
WO2024108431A1 (en) Live stream interaction methods and apparatuses, device, storage medium, and program product