TW201220172A - forming a digital screen to allow interactive communicators themselves or counterparts to feel immersed deeply into the object's scenario - Google Patents

forming a digital screen to allow interactive communicators themselves or counterparts to feel immersed deeply into the object's scenario

Info

Publication number
TW201220172A
Authority
TW
Taiwan
Prior art keywords
image
interactive
interactive communication
communication interface
interaction
Prior art date
Application number
TW99137649A
Other languages
Chinese (zh)
Inventor
Chun-Lin Lu
Churng-Jou Tsai
Chao-Hsiung Tseng
Chung-Jen Guo
Chung-Fan Liu
Ling-Erl Cheng
Jinn-Kwei Guo
Original Assignee
Univ Kun Shan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Kun Shan filed Critical Univ Kun Shan
Priority to TW99137649A priority Critical patent/TW201220172A/en
Publication of TW201220172A publication Critical patent/TW201220172A/en


Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A method for integrating online interactive communication and a dynamic digital screen, mainly for use in real-time network communication (instant messaging). An image capture unit captures the image of an interactive communicator (oneself or the counterpart), and the image is combined with the object of the interactive communication interface to form a digital screen, so that the communicators themselves or their counterparts feel deeply immersed in the object's scenario, an image perception different from that of conventional video interaction devices. In addition, the communicators can execute interaction commands on the objects, and the objects respond interactively, achieving person-to-person and person-to-object interaction as well as a sense of presence. Compared with the prior art, the present invention therefore offers better interactivity.

Description

TECHNICAL FIELD

[0001] The present invention relates to a method for integrating online interactive communication with a dynamic digital screen. An image capture unit captures an image of an interactive communicator, the captured image is combined with an object to form a digital screen, and the digital screen is presented on an interactive communication interface, so that interaction is achieved between people or between a person and an object.

PRIOR ART

Video calls over the Internet have become a bridge for modern communication. The best-known examples, such as MSN and Yahoo instant messaging, all provide a video call function: because the parties can see each other, both can feel the other's presence. Merely seeing the counterpart on video, however, no longer satisfies users, and prior cases have tried to remedy the limited interactivity of network video calls. For example, Republic of China (Taiwan) Invention Patent Publication No. 201017507, "Contextual Instant Messaging Interactive System", discloses an image capture module and a communication server, the communication server comprising a database, a recognition module, an interaction module and a user interface. The database stores personalized messages; the recognition module receives image signals through the communication server and generates an identification signal; and the interaction module, in response to the identification signal and the personalized preset messages, produces a contextual interactive message that is presented on the user interface. This goes beyond ordinary video communication by adding entertaining content and games that further strengthen interpersonal relationships and feelings.

The prior case captures the user's image with the image capture module, compares the image by means of the recognition module to generate a contextual interactive message, and presents that message through a built-in avatar so as to convey emotion to other users. Although the prior case expresses the user's emotions through the avatar and thereby interacts with other users, the avatar is not the user, so interaction between the avatar and other users or the built-in background feels somewhat unreal. Moreover, although the image capture module can capture the user's image and present it on the user interface, the presented image is the same as in a conventional video interaction device: it is not merged into the scene provided by the device, the captured image has no interaction with the scene whatsoever, the user cannot feel immersed in it, and the interactivity is therefore insufficient.

SUMMARY OF THE INVENTION

[0003] To remedy the insufficient person-to-person interactivity of conventional video interaction devices, and the fact that a user cannot be merged into the scene provided by such a device and therefore cannot interact with other users through that scene, the present invention provides a method in which an image capture unit captures the user's image and the captured image is combined into the scene provided by the video interaction device, so that every user (oneself or the counterpart) can feel immersed in it.

Every user can also interact with the provided scene: after receiving each user's interaction command, the scene returns an interaction result according to that command, so that the two users (or three or more parties) interact with the scene together. This increases the interactivity of the communication session, provides a sense of presence, and gives the video interaction device an emotional quality.

The above functions are achieved by a method for integrating online interactive communication with a dynamic digital screen, comprising the following steps. Step A: provide a host end and at least one guest end signal-connected to the host end, the host end providing a first interactive communication interface and the guest end providing a second interactive communication interface capable of interacting with the first interactive communication interface. Step B: provide, from a database, an object that is transmitted respectively to the first interactive communication interface of the host end and the second interactive communication interface of the guest end. Step C: provide a first image capture unit signal-connected to the host end to capture a host image, and a second image capture unit signal-connected to the guest end to capture a guest image. Step D: transmit the guest image to the first interactive communication interface of the host end and combine it with the object so that the first interactive communication interface presents a first digital screen; transmit the host image to the second interactive communication interface of the guest end and combine it with the object so that the second interactive communication interface presents a second digital screen.

In step D the first digital screen may further be combined with the host image, and the second digital screen with the guest image. The combining superimposes the host image and the guest image behind the object, and the object provides a local region through which the host image and/or the guest image remain visible. The object comprises a dynamic picture and/or a static picture, presented in 2D mode and/or 3D mode.

The method may further comprise a step E: provide a first interaction unit signal-connected to the first interactive communication interface of the host end, the first interaction unit executing a first interaction command according to the first digital screen so that the object produces a first interaction result; and provide a second interaction unit signal-connected to the second interactive communication interface of the guest end, the second interaction unit executing a second interaction command according to the second digital screen so that the object produces a second interaction result. The first and second interaction commands comprise one of a voice command, a text command and an action command, or a combination thereof; the first and second interaction results comprise one of a temporal change, a spatial change, a sound change and a light change, or a combination thereof.

Effects of the invention:

1. By combining the users' images with the object, the invention lets interactive communicators feel immersed in the scene.

2. Because the communicators can interact with the object, all participants in the video interaction share a common interaction target and a common topic, which gives better interactivity.

3. Different objects serve different interaction purposes, such as shopping, teaching, games or presentations.

4. The presentation of the object, together with the communicators' ability to interact with it, supports verbal explanation and lets the counterpart grasp the intended meaning more easily.

5. By raising interactivity, the invention shortens the distance between interactive communicators, which conventional video communication methods cannot match.
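
Steps A to E above amount to a small session model: two endpoints, one shared object delivered from a database, one captured image exchanged in each direction, and commands that change the state of the shared object. The following Python sketch is offered for illustration only and is not part of the patent disclosure; the names SceneObject, Endpoint and deliver_object, and the string stand-ins for captured frames, are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    """The shared object (41): its state stands in for the dynamic/static pictures."""
    name: str
    state: dict = field(default_factory=dict)

@dataclass
class Endpoint:
    """One communicator: the host end (1) or a guest end (2)."""
    label: str

    def capture(self) -> str:
        # Stand-in for the image capture unit (B1/B2); a real system would return a camera frame.
        return f"<frame from {self.label}>"

    def display(self, screen: str) -> None:
        # Stand-in for the interactive communication interface (11/21).
        print(f"{self.label} sees: {screen}")

def deliver_object(name: str) -> SceneObject:
    # Step B: the database (4) provides the object to both interfaces.
    return SceneObject(name, {"fish": 3, "brightness": 0.8})

def run_session() -> None:
    host, guest = Endpoint("host (1)"), Endpoint("guest (2)")   # step A
    obj = deliver_object("virtual fish tank")                   # step B
    host_img, guest_img = host.capture(), guest.capture()       # step C
    host.display(f"{guest_img} combined with {obj.name}")       # step D: first digital screen
    guest.display(f"{host_img} combined with {obj.name}")       # step D: second digital screen
    obj.state["brightness"] = 0.5                               # step E: a command changes the object
    print("object state after interaction:", obj.state)         # the result appears on both screens

run_session()
```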

DETAILED DESCRIPTION

[0004] The technical features and effects of the present invention are clearly shown by the preferred embodiment described with the following drawings. Referring first to Figure 1, a method for integrating online interactive communication with a dynamic digital screen comprises the following steps.

Step A: a host end (1) and at least one guest end (2) signal-connected to the host end (1) are provided. The host is the first-person user of an instant messaging device (A), and the guests are the second- and third-person users of instant messaging devices (A); an instant messaging device (A) may be a computer, a smart phone or the like containing instant messaging software. The host end (1) provides a first interactive communication interface (11) shown on the computer display, and the guest end (2) provides a second interactive communication interface (21) shown on the computer display, which interacts with the first interactive communication interface (11) through a network (3). In this preferred embodiment the first interactive communication interface (11) and the second interactive communication interface (21) may be MSN or Yahoo Messenger; communication over the network (3) lets the host end (1) and the guest end (2) interact with each other by text or by voice, and the host end (1) may of course interact with several guest ends (2) at the same time.

Step B: a database (4) provides an object (41) that is transmitted respectively to the first interactive communication interface (11) of the host end (1) and the second interactive communication interface (21) of the guest end (2). The database (4) may be a third-party database, a host-side database or a guest-side database, and the object (41) comprises a dynamic picture (411) and/or a static picture (412) presented in 2D mode and/or 3D mode. Referring to Figure 2, the preferred embodiment takes a virtual fish tank as the object (41): the virtual fish tank comprises a dynamic picture (411) (the swimming fish) and a static picture (412) (the background), the dynamic picture (411) being rendered in 3D and the static picture (412) in 2D. The dynamic picture (411) and the static picture (412) may instead both be rendered in 2D, or both in 3D; how the object (41) is presented is up to the designer.

Step C: a first image capture unit (B1) is signal-connected to the host end (1) to capture a host image (12), and a second image capture unit (B2) is signal-connected to the guest end (2) to capture a guest image (22). The host image (12) may be the image of the first-person user, and the guest image (22) the image of the second- or third-person users.

Step D: the guest image (22) is transmitted to the first interactive communication interface (11) of the host end (1) and combined with the object (41) so that the first interactive communication interface (11) presents a first digital screen (13); the host image (12) is transmitted to the second interactive communication interface (21) of the guest end (2) and combined with the object (41) so that the second interactive communication interface (21) presents a second digital screen (23). The combining superimposes the host image (12) and the guest image (22) behind the object (41), and the object (41) provides a local region through which the host image (12) and/or the guest image (22) remain visible, so that the first digital screen (13) and the second digital screen (23) show the users placed inside the object's scene. The users at the host end (1) and the guest end (2) thus see an effect visually different from conventional video interaction (as shown in Figure 3). In addition, the first digital screen (13) may also be combined with the host image (12), and the second digital screen (23) with the guest image (22) (as shown in Figure 4).
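
Step D makes the combining concrete: the captured image is superimposed behind the object, and the object leaves a local region open through which the person remains visible. A minimal sketch of that masked compositing, assuming nothing beyond NumPy and toy array contents (none of which comes from the patent), could look like this:

```python
import numpy as np

def combine(person: np.ndarray, scene: np.ndarray, window: np.ndarray) -> np.ndarray:
    """Superimpose the captured image behind the scene object: the object is drawn on top,
    and 'window' marks the local region through which the person stays visible."""
    w = window[..., None]                                   # broadcast the 0/1 mask over the RGB channels
    return (person * w + scene * (1 - w)).astype(np.uint8)

# Toy stand-ins for a captured frame (12/22) and the object (41) with its local region.
person = np.full((240, 320, 3), 180, dtype=np.uint8)        # flat grey "webcam frame"
scene = np.zeros((240, 320, 3), dtype=np.uint8)
scene[..., 2] = 140                                         # bluish "fish tank" background
window = np.zeros((240, 320), dtype=np.uint8)
window[60:180, 80:240] = 1                                  # the region the object leaves open

digital_screen = combine(person, scene, window)             # the first or second digital screen (13/23)
print(digital_screen.shape, digital_screen[0, 0], digital_screen[120, 160])
```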

Step E (further included in the preferred embodiment): a first interaction unit (C1) (a keyboard, mouse, microphone, motion sensor, writing tablet or the like) is signal-connected to the host end (1) and can issue commands to the first interactive communication interface (11). Based on the first digital screen (13), the user at the host end (1) uses the first interaction unit (C1) to execute a first interaction command (14), so that the object (41) produces a first interaction result (15), which is shown simultaneously on the first digital screen (13) and the second digital screen (23). A second interaction unit (C2) (a keyboard, mouse, microphone, motion sensor, writing tablet or the like) is signal-connected to the second interactive communication interface (21) of the guest end (2); based on the second digital screen (23), the user at the guest end (2) uses the second interaction unit (C2) to execute a second interaction command (24), so that the object (41) produces a second interaction result (25), which is likewise shown simultaneously on the first digital screen (13) and the second digital screen (23). During the video interaction, both the host end (1) and the guest end (2) are thereby merged into the object (41) and, by interacting with the object (41), obtain better interactivity and a stronger sense of presence.

The first interaction command (14) and the second interaction command (24) comprise one of a voice command, a text command and an action command, or a combination thereof, and the resulting first interaction result (15) and second interaction result (25) comprise one of a temporal change, a spatial change, a sound change and a light change, or a combination thereof. In detail, referring to Figure 5 and taking the virtual fish tank as the example, the user at the host end (1) or the guest end (2) issues a voice command to the virtual fish tank through a microphone, and the instant messaging device (A) produces an interaction result according to the loudness (decibel level) of the sound, for example the fish looking startled by the noise; a text command can be executed by typing on the keyboard; and an action command can be executed according to movements in the host image (12) or the guest image (22). The temporal change may be the growth of the fish, the spatial change the position of the fish, the sound change the sound of the fish swimming, and the light change the brightness of the fish tank.

The above is merely a preferred embodiment of the present invention and does not limit the scope of the invention; simple equivalent changes and modifications made according to the claims and the description all fall within the scope of this patent.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] Figure 1 is a block diagram illustrating the integration architecture and execution relationship of the interactive communication and the digital screens of the preferred embodiment.

Figure 2 is a schematic view illustrating the presentation modes of the object of the preferred embodiment.

Figure 3 is a schematic view illustrating the captured images merged into the object to form the digital screens of the preferred embodiment (1).

Figure 4 is a schematic view illustrating the captured images merged into the object to form the digital screens of the preferred embodiment (2).

Figure 5 is a schematic view illustrating a user of the preferred embodiment executing an interaction command on the digital screen to produce an interaction effect.
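
In the fish-tank example, each type of interaction command is mapped to one of the four kinds of interaction result (temporal, spatial, sound or light change). Purely as an illustration, with the thresholds, state fields and the apply_command helper invented for the example rather than taken from the patent, such a mapping might be sketched as follows:

```python
def apply_command(tank: dict, kind: str, payload) -> dict:
    """Update the shared fish-tank state for one interaction command and return it."""
    if kind == "voice" and payload > 70:          # a loud sound (dB) startles the fish: spatial change
        tank["fish_speed"] = 3.0
    elif kind == "text" and payload == "feed":    # a text command: temporal change (the fish grow)
        tank["fish_size"] += 0.1
    elif kind == "action" and payload == "wave":  # a gesture seen in the captured image: light change
        tank["brightness"] = min(1.0, tank["brightness"] + 0.2)
    return tank

tank = {"fish_speed": 1.0, "fish_size": 1.0, "brightness": 0.6}
for command in [("voice", 82), ("text", "feed"), ("action", "wave")]:
    tank = apply_command(tank, *command)
print(tank)   # the same result would be rendered on both the first and the second digital screen
```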

DESCRIPTION OF REFERENCE NUMERALS

[0006] (1) host end; (11) first interactive communication interface; (12) host image; (13) first digital screen; (14) first interaction command; (15) first interaction result; (2) guest end; (21) second interactive communication interface; (22) guest image; (23) second digital screen; (24) second interaction command; (25) second interaction result; (3) network; (4) database; (41) object; (411) dynamic picture; (412) static picture; (A) instant messaging device; (B1) first image capture unit; (B2) second image capture unit; (C1) first interaction unit; (C2) second interaction unit

Claims (1)

1. A method for integrating online interactive communication with a dynamic digital screen, comprising the following steps:

A. providing a host end and at least one guest end signal-connected to the host end, the host end providing a first interactive communication interface, and the guest end providing a second interactive communication interface capable of interacting with the first interactive communication interface;

B. providing, from a database, an object that is transmitted respectively to the first interactive communication interface of the host end and the second interactive communication interface of the guest end;

C. providing a first image capture unit signal-connected to the host end for capturing a host image, and a second image capture unit signal-connected to the guest end for capturing a guest image;

D. transmitting the guest image to the first interactive communication interface of the host end and combining the guest image with the object so that the first interactive communication interface presents a first digital screen; and transmitting the host image to the second interactive communication interface of the guest end and combining the host image with the object so that the second interactive communication interface presents a second digital screen.

2. The method for integrating online interactive communication with a dynamic digital screen according to claim 1, wherein in step D the first digital screen is further combined with the host image and the second digital screen is further combined with the guest image.

3. The method for integrating online interactive communication with a dynamic digital screen according to claim 2, wherein the combining in step D superimposes the host image and the guest image behind the object, and the object provides a local region through which the host image and/or the guest image are presented.

4. The method for integrating online interactive communication with a dynamic digital screen according to claim 3, wherein the object comprises a dynamic picture and/or a static picture presented in 2D mode and/or 3D mode.

5. The method for integrating online interactive communication with a dynamic digital screen according to any one of claims 1 to 4, further comprising a step E of providing a first interaction unit signal-connected to the first interactive communication interface of the host end, the first interaction unit executing a first interaction command according to the first digital screen so that the object produces a first interaction result, and providing a second interaction unit signal-connected to the second interactive communication interface of the guest end, the second interaction unit executing a second interaction command according to the second digital screen so that the object produces a second interaction result.

6. The method for integrating online interactive communication with a dynamic digital screen according to claim 5, wherein the first interaction command and the second interaction command comprise one of a voice command, a text command and an action command, or a combination thereof.

7. The method for integrating online interactive communication with a dynamic digital screen according to claim 5, wherein the first interaction result and the second interaction result comprise one of a temporal change, a spatial change, a sound change and a light change, or a combination thereof.
TW99137649A 2010-11-02 2010-11-02 forming a digital screen to allow interactive communicators themselves or counterparts to feel immersed deeply into the object 's scenario TW201220172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW99137649A TW201220172A (en) 2010-11-02 2010-11-02 forming a digital screen to allow interactive communicators themselves or counterparts to feel immersed deeply into the object 's scenario

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW99137649A TW201220172A (en) 2010-11-02 2010-11-02 forming a digital screen to allow interactive communicators themselves or counterparts to feel immersed deeply into the object 's scenario

Publications (1)

Publication Number Publication Date
TW201220172A true TW201220172A (en) 2012-05-16

Family

ID=46553082

Family Applications (1)

Application Number Title Priority Date Filing Date
TW99137649A TW201220172A (en) 2010-11-02 2010-11-02 forming a digital screen to allow interactive communicators themselves or counterparts to feel immersed deeply into the object 's scenario

Country Status (1)

Country Link
TW (1) TW201220172A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104735388A (en) * 2013-12-18 2015-06-24 扬智科技股份有限公司 Interaction method of video image interaction system
CN105704420A (en) * 2014-11-28 2016-06-22 富泰华工业(深圳)有限公司 Somatosensory visual communication system and method


Similar Documents

Publication Publication Date Title
US11403595B2 (en) Devices and methods for creating a collaborative virtual session
US10609332B1 (en) Video conferencing supporting a composite video stream
JP6616288B2 (en) Method, user terminal, and server for information exchange in communication
CN113168231A (en) Enhanced techniques for tracking movement of real world objects to improve virtual object positioning
CN103368816A (en) Instant communication method based on virtual character and system
JP2022516806A (en) Confirmation of consent
CN110677610A (en) Video stream control method, video stream control device and electronic equipment
US20200233489A1 (en) Gazed virtual object identification module, a system for implementing gaze translucency, and a related method
CN110427227B (en) Virtual scene generation method and device, electronic equipment and storage medium
US20240205036A1 (en) Shared augmented reality experience in video chat
TW201220172A (en) forming a digital screen to allow interactive communicators themselves or counterparts to feel immersed deeply into the object 's scenario
Roberts et al. Bringing the client and therapist together in virtual reality telepresence exposure therapy
CN109685911B (en) AR glasses capable of realizing virtual fitting and realization method thereof
Dijkstra-Soudarissanane et al. Virtual visits: life-size immersive communication
Dijkstra-Soudarissanane et al. Towards XR communication for visiting elderly at nursing homes
Oliva et al. The Making of a Newspaper Interview in Virtual Reality: Realistic Avatars, Philosophy, and Sushi
CN116530078A (en) 3D video conferencing system and method for displaying stereo-rendered image data acquired from multiple perspectives
KR20230060985A (en) Extended reality-based live commerce system incorporating virtual influencers
WO2023082737A1 (en) Data processing method and apparatus, and device and readable storage medium
US20240202944A1 (en) Aligning scanned environments for multi-user communication sessions
US20230289993A1 (en) 3D Representation of Physical Environment Objects
US20240037886A1 (en) Environment sharing
US20230419625A1 (en) Showing context in a communication session
KR20240057880A (en) XR live commerce system using digital human creation technology and 3D avatar fitting technology
CN117555415A (en) Naked eye 3D interaction system based on action recognition and display device thereof