TW201917556A - Multi-screen interaction method and apparatus, and electronic device


Info

Publication number
TW201917556A
Authority
TW
Taiwan
Prior art keywords
specified object
terminal
image
interactive
video
Prior art date
Application number
TW107119580A
Other languages
Chinese (zh)
Inventor
王英楠
蔡建平
余瀟
段虞峰
張智淇
Original Assignee
香港商阿里巴巴集團服務有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 香港商阿里巴巴集團服務有限公司
Publication of TW201917556A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 - Structure of client; Structure of client peripherals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/436 - Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 - Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed in embodiments of the present application are a multi-screen interaction method and apparatus, and an electronic device. The method comprises: loading, by a first terminal, interactive material, the interactive material comprising specified-object material created according to a specified object; capturing a real-scene image; and, when a video in a second terminal plays to a target event related to the specified object, adding the specified-object material to the real-scene image. By means of the embodiments of the present application, user participation in interaction can be improved.

Description

Multi-screen interaction method and apparatus, and electronic device

The present application relates to the field of multi-screen interaction technology, and in particular to a multi-screen interaction method and apparatus, and an electronic device.

Multi-screen interaction refers to a series of operations, such as the transmission, parsing, display, and control of multimedia content (audio, video, and pictures), performed across different multimedia terminal devices connected over a wireless network (for example, between a mobile phone and a television). The same content can be displayed on different terminal devices, and content can be exchanged among the terminals. In the prior art, interaction from the television end to the mobile end is usually implemented by means of a graphic code. For example, a two-dimensional code related to the program currently being played can be displayed on the television screen; the user scans the code with the "scan" function of an application installed on the mobile phone, the code is parsed on the phone, and a specific interactive page is displayed, in which the user can answer questions, enter lottery draws, and so on. Although this prior-art approach enables interaction between the mobile phone and the television, its form is rather rigid, and actual user participation is not high. Therefore, how to provide richer forms of multi-screen interaction and increase user participation has become a technical problem to be solved by those skilled in the art.
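For concreteness, the prior-art flow above (a program-specific code shown on the TV screen, then scanned and parsed on the phone) can be sketched in a few lines using the `qrcode` Python package. This is a minimal illustration only; the endpoint URL below is a hypothetical placeholder, not part of the application.

```python
# Minimal sketch of the prior-art flow: the broadcaster encodes a URL for the
# interactive page of the current program into a QR code shown on the TV screen.
import qrcode

def make_program_qr(program_id: str):
    # Hypothetical endpoint; a real broadcaster would use its own service URL.
    url = f"https://example.com/interact?program={program_id}"
    return qrcode.make(url)  # returns a PIL image that can be rendered on screen

make_program_qr("gala-2017").save("program_qr.png")
```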

The present application provides a multi-screen interaction method and apparatus, and an electronic device, which can increase user participation in interaction. The present application provides the following solutions:

A multi-screen interaction method, comprising: loading, by a first terminal, interactive material, the interactive material comprising specified-object material created according to a specified object; capturing a real-scene image; and, when a video in a second terminal plays to a target event related to the specified object, adding the specified-object material to the real-scene image.

A multi-screen interaction method, comprising: saving, by a first server, interactive material, the interactive material comprising specified-object material created according to a specified object; and providing the interactive material to a first terminal, so that the first terminal captures a real-scene image and, when the video in a second terminal plays to a target event corresponding to the specified object, adds the specified-object material to the real-scene image.

A multi-screen interaction method, comprising: playing, by a second terminal, a video; and, when the video plays to a target event related to a specified object, playing a sound-wave signal of a preset frequency, so that a first terminal learns of the occurrence of the target event by detecting the sound-wave signal and adds specified-object material to a captured real-scene image.

A multi-screen interaction method, comprising: receiving, by a second server, information on a sound-wave signal of a preset frequency provided by a first server; and inserting the sound-wave signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that, while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the sound-wave signal and adds specified-object material to a captured real-scene image.

A video interaction method, comprising: loading, by a first terminal, interactive material, the interactive material comprising specified-object material created according to a specified object; jumping to an interactive interface when the video in the first terminal plays to a target event related to the specified object; and displaying the real-scene image capture result in the interactive interface and adding the specified-object material to the real-scene image.

A video interaction method, comprising: saving, by a first server, interactive material, the interactive material comprising specified-object material created according to a specified object; and providing the interactive material to a first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays the real-scene image capture result in the interactive interface, and adds the specified-object material to the real-scene image.

An interaction method, comprising: loading interactive material, the interactive material comprising specified-object material created according to a specified object; capturing a real-scene image; and, when a target event related to the specified object is detected, adding the specified-object material to the real-scene image.

A multi-screen interaction apparatus, applied to a first terminal, comprising: a first material loading unit, configured to load interactive material, the interactive material comprising specified-object material created according to a specified object; a first real-scene image capture unit, configured to capture a real-scene image; and a first material adding unit, configured to add the specified-object material to the real-scene image when a video in a second terminal plays to a target event related to the specified object.

A multi-screen interaction apparatus, applied to a first server, comprising: a first interactive-material saving unit, configured to save interactive material, the interactive material comprising specified-object material created according to a specified object; and a first interactive-material providing unit, configured to provide the interactive material to a first terminal, so that the first terminal captures a real-scene image and, when the video in a second terminal plays to a target event corresponding to the specified object, adds the specified-object material to the real-scene image.

A multi-screen interaction apparatus, applied to a second terminal, comprising: a video playing unit, configured to play a video; and a sound-wave signal playing unit, configured to play a sound-wave signal of a preset frequency when the video plays to a target event related to a specified object, so that a first terminal learns of the occurrence of the target event by detecting the sound-wave signal and adds specified-object material to a captured real-scene image.

A multi-screen interaction apparatus, applied to a second server, comprising: a sound-wave signal information receiving unit, configured to receive information on a sound-wave signal of a preset frequency provided by a first server; and a sound-wave signal information inserting unit, configured to insert the sound-wave signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that, while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the sound-wave signal and adds specified-object material to a captured real-scene image.

A video interaction apparatus, applied to a first terminal, comprising: a loading unit, configured to load interactive material, the interactive material comprising specified-object material created according to a specified object; an interface jumping unit, configured to jump to an interactive interface when the video in the first terminal plays to a target event related to the specified object; and a material adding unit, configured to display the real-scene image capture result in the interactive interface and add the specified-object material to the real-scene image.

A video interaction apparatus, applied to a first server, comprising: a second material saving unit, configured to save interactive material, the interactive material comprising specified-object material created according to a specified object; and a second material providing unit, configured to provide the interactive material to a first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays the real-scene image capture result in the interactive interface, and adds the specified-object material to the real-scene image.

An interaction apparatus, comprising: a second material loading unit, configured to load interactive material, the interactive material comprising specified-object material created according to a specified object; a second real-scene image capture unit, configured to capture a real-scene image; and a second material adding unit, configured to add the specified-object material to the real-scene image when a target event related to the specified object is detected.

An electronic device, comprising: one or more processors; and a memory associated with the one or more processors, the memory being configured to store program instructions that, when read and executed by the one or more processors, perform the following operations: loading interactive material, the interactive material comprising specified-object material created according to a specified object; capturing a real-scene image; and, when a video in a second terminal plays to a target event related to the specified object, adding the specified-object material to the real-scene image.

According to the specific embodiments provided by the present application, the present application discloses the following technical effects: through the embodiments of the present application, interactive material can be created from video/animation related to a specified object; during the interaction, a real-scene image of the user's actual environment is captured, and when the target event corresponding to the specified object is broadcast in the second terminal, the specified-object material is added to the real-scene image for display. In this way, the user experiences the specified object arriving in his or her own physical environment (for example, his or her home), and user participation in the interaction can therefore be increased. Of course, any product implementing the present application does not necessarily need to achieve all of the advantages described above at the same time.

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein fall within the scope of protection of the present application.

The embodiments of the present application provide a new multi-screen interaction solution, which mainly concerns interaction between a mobile terminal device such as a user's mobile phone (referred to herein as the first terminal) and a terminal device with a large screen such as a television (referred to herein as the second terminal). Specifically, the interaction can take place while the second terminal is playing a program such as a large live gala (or, of course, another type of program). For example, the organizer of a live gala will invite entertainment stars to perform; in the prior art, the user can only watch the star's performance on stage through the second terminal. In the embodiments of the present application, by contrast, technical means are used to give the user a "star in my home" experience. In a specific implementation, material related to the performance of a particular person such as an entertainment star can be provided in advance, and during that person's segment on the second terminal, pre-recorded performance videos, animations, and other material of that person are projected, by way of augmented reality on the first terminal, into the real environment where the user is located. For example, the user usually watches the program on a second terminal such as a television at home, so the performance video/animation can be projected into the user's home. Although the user still views the projection result through the screen of the first terminal, the background of the performance is a real-scene image captured in the user's environment; compared with watching the on-stage performance, this gives the user the experience that the "star" is really in his or her home. Of course, in a specific implementation, besides a designated person, the object may also be an animal, or even a commodity, and so on; in the embodiments of the present application these are collectively referred to as the "specified object".

In terms of system architecture, referring to Figure 1, the hardware devices involved in the embodiments of the present application may include the aforementioned first terminal and second terminal, and the software involved may be an associated application client installed in the first terminal (or a program built into the first terminal, and so on), together with a first server in the cloud. For example, if the interaction is provided during a "Double 11" gala, since the organizer of such a gala is usually the company behind an online sales platform (for example, "Mobile Taobao" or "Tmall"), the application client and server provided by that platform can supply the technical support for the multi-screen interaction. That is, the user can carry out the interaction using the client of an application such as "Mobile Taobao" or "Tmall", while the material needed during the interaction is provided by the server. It should be noted that the second terminal mainly exists as a playback terminal, and the video and other content played on it may be controlled by a second server at the back end (a television station's server, for example); that is, operations such as transmitting the live video signal can be performed centrally by the second server, after which the video signal is transmitted to each second terminal for playback. In other words, in the multi-screen interaction scenario provided by the embodiments of the present application, the first terminal and the second terminal correspond to different servers.

The specific implementation is described in detail below.

Embodiment 1

First, from the perspective of the client, this embodiment provides a multi-screen interaction method. Referring to Figure 2, the method may specifically include:

S201: A first terminal loads interactive material, the interactive material comprising specified-object material created according to a specified object.

The interactive material is the material needed, during the augmented-reality interaction, to generate information content such as virtual images. In a specific implementation, the specified object may be information related to a designated person, information related to a designated commodity, or information related to a prop used in an offline game, and so on. Different specified objects can correspond to different interaction scenarios. For example, when the specified object is a designated person, the scenario may be a "star to your home" event: while the user watches a television program, the "star" performing in the program can "cross over" into the user's home. When the specified object is a designated commodity, the commodity is usually one related to physical goods sold in an online sales system; normally the user would have to pay considerable resources to purchase it, but during the event it can be given to users as a gift, through giveaways or ultra-low-price sales. In the gift-giving process, a "cross-screen gift" can be implemented in the manner of the embodiments of the present application: content related to the designated commodity is played on the second terminal such as a television and "crosses over" to the first terminal such as the user's mobile phone; in addition, an operation option for snapping up the data object associated with the designated commodity can be provided, and when a snap-up operation is received through this option it is submitted to the server, which determines the snap-up result. Users thus obtain opportunities to snap up items or enter draws, and thereby obtain the corresponding commodity, or the chance to buy it at an ultra-low price, and so on.

In addition, when the specified object is a prop related to an offline game, this can correspond to another form of "cross-screen gift": during the event, if the system wants to give users non-physical gifts such as coupons or "cash red envelopes", the gift-giving process can be tied to an offline game such as a magic show. For example, while a magic program is being played on the second terminal, a certain prop may be used; with the solution of the embodiments of the present application, the prop can "cross over" to the first terminal device such as the user's mobile phone for display, and the user can then claim the non-physical gift by tapping the prop or through similar operations. That is, when operation information on the target prop is received, the operation information is submitted to the server, which determines the reward information obtained by the operation and returns it; the first terminal can then present the reward information obtained, and so on.

The specified-object material may specifically include video material obtained by filming the specified object. For example, if the specified object is a designated person, programs such as singing and dancing performed by that person can be video-recorded in advance to obtain the video material. Alternatively, the specified-object material may include a cartoon image modeled on the specified object, together with animation material produced from that cartoon image, and so on. For example, where the specified object is a designated person, a cartoon character can be created from that person's likeness, and animation material, including dancing animations and singing animations of the cartoon character, can be produced from it; if "singing" is required, the designated person can dub the cartoon image, or a song pre-recorded by the designated person can be played, and so on.

The same specified object may correspond to multiple different sets of specified-object material; for example, different programs performed by the same designated person can be made into different sets of material, and so on. That is, there may be multiple sets of material for one specified object, and when the object enters a particular user's "home", the user can choose a specific set, which is then used to provide the augmented-reality picture.

In addition, the interactive material provided by the first server may further include material representing a transfer channel, which can be generated from, for example, a door, a tunnel, a wormhole, a mascot such as the "Tmall" cat, or a teleport light array. This transfer-channel material can be used as follows: before the specified-object material is added to the real-scene image, since the specified object was just performing on the gala stage and is about to arrive in the user's home, a preset animation can first be played using the transfer-channel material, both to add interest and to make the change in the object's location appear more reasonable, creating the atmosphere that a specified object is about to "cross over" into the home through the channel and giving the user a better experience. Likewise, when the interaction ends and the specified object needs to leave, the transfer-channel material can provide an animation of the reverse process, so that the user sees the specified object depart and the transfer channel gradually close.

Furthermore, the interactive material provided by the first server may also include voice sample material recorded by the specified object, which can be used to greet the user when the specified object "enters" the user's home. Information such as the user's name (including a screen name, real name, and so on) can be obtained before the greeting, enabling a personalized, "a thousand faces for a thousand people" greeting, for example "XXX, I have come to your home", where the content of "XXX" differs for different users. The greeting is spoken by the specified object, and because of the personalization it cannot be achieved simply by pre-recording one greeting utterance. Therefore, in the embodiments of the present application, the specified object (in the case of a designated person) can read aloud a particular passage of text in advance, and the speech of each character read is recorded; the passage covers the pronunciation of most initials, finals, and tones. In a specific implementation, the passage is usually around one thousand characters, which can cover roughly 90% of Chinese character pronunciations. In this way, when the specified object first "enters" the user's home, after a specific greeting has been generated from the user's personal name, the corresponding speech can be produced from the pronunciation information of each character saved in the voice sample material, achieving the effect of the specified object calling out the user's name in greeting.
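The per-character greeting synthesis described above can be illustrated with a minimal sketch, assuming each character spoken by the designated person has been recorded as its own clip. The clip layout, sample rate, and greeting template are assumptions for illustration, not details fixed by the application.

```python
# Minimal sketch of concatenative greeting synthesis: each Chinese character
# spoken by the designated person is assumed to be stored as an individual
# WAV clip (clips/<char>.wav); names and the greeting template are hypothetical.
import numpy as np
import soundfile as sf

def synthesize_greeting(user_name: str, clip_dir: str = "clips",
                        rate: int = 16000) -> np.ndarray:
    greeting = f"{user_name}，我到你家來了"  # "XXX, I have come to your home"
    pieces = []
    for ch in greeting:
        if ch in "，。！":
            # Skip punctuation; insert a short pause instead.
            pieces.append(np.zeros(rate // 5, dtype=np.float32))
            continue
        audio, clip_rate = sf.read(f"{clip_dir}/{ch}.wav", dtype="float32")
        assert clip_rate == rate, "all sample clips are assumed to share one rate"
        pieces.append(audio)
    return np.concatenate(pieces)

sf.write("greeting.wav", synthesize_greeting("小明"), 16000)
```

A production system would smooth the clip boundaries and adjust prosody; the sketch only shows the lookup-and-concatenate idea implied by recording a thousand-character sample passage.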
Of course, in practical applications other material may also be included, which is not enumerated here one by one. In a specific implementation, the amount of data in the interactive material may be fairly large, and loading it on the first terminal may take a long time, so the material can be downloaded to the first terminal in advance. For example, once the gala played on the second terminal has started, the user can open the gala main-venue interface provided by the first terminal and, while watching the program on the second terminal, stand ready to interact through that interface. The specific "star to your home" segment may occur at some moment during the gala, synchronized with the state of the second terminal; therefore, as long as the user enters the first terminal's gala main-venue interface after the gala begins, the related interactive material can be downloaded in advance even if the "star to your home" event has not yet officially started. In this way, once the event starts, the interaction can proceed quickly, avoiding situations where the user cannot join in time because the material has not finished downloading. Of course, a user who has not entered the main-venue interface in advance and wants to join the "star to your home" event can also download the related interactive material on the spot. For such on-the-spot downloads, to keep the download time acceptable, a degraded scheme can be provided: for example, only the aforementioned specified-object material is downloaded, while the transfer-channel material and the voice sample material are skipped; in that case the user does not experience the "crossing over" effect and does not receive the specified object's greeting.

S202: Capture a real-scene image.

In a specific implementation, the first terminal can provide a corresponding activity page for an event such as "star to your home", on which an operation option for issuing an interaction request can be provided. For example, Figure 3-1 is a schematic diagram of an activity page in one example; it can present prompt information such as the relevant specified object and provide a button such as "Start now", which serves as the operation option through which the user issues an interaction request. The user can issue a specific interaction request by tapping the "Start now" button. Of course, in practical applications the user's interaction request can also be received in other ways; for example, a two-dimensional code can be displayed on the second terminal's screen, and the user issues the request by scanning it with the first terminal, and so on.

In a specific implementation, operation options such as the "Start now" button can be kept inoperable before the formal interaction begins, to prevent premature taps. The text displayed on the option can also differ by state; for example, in the inoperable state it may read "The excitement is about to begin", and so on. Just before the interaction starts, the text on the button is changed to "Start now" or the like. Moreover, to create an atmosphere of tense, eager anticipation and to better attract the user to tap, the button can be given a "breathing" animation: for example, it shrinks to 70% of its size, returns to the original size after 3 s, shrinks again after 3 s, and keeps repeating this rhythm, and so on.

The point at which the user's interaction request is received can be earlier than the point at which the specified object formally disappears from the second terminal and "enters the user's home", because after the user issues the request the client can carry out some preparation in advance. Specifically, after receiving the user's interaction request, real-scene image capture on the first terminal can be started first; that is, the camera component on the first terminal is activated and the terminal enters the live-shooting state, ready for the subsequent augmented-reality interaction.

In a specific implementation, before starting real-scene image capture, it is also possible to first determine whether the interactive material has already been loaded locally on the first terminal and, if not, to load it first.

It should be noted that, in the embodiments of the present application, the virtual image presented to the user through augmented reality comes from the specified-object material and the like. To make the interaction more realistic, the specified-object material can be displayed on a plane in the real-scene image, for example the ground or a table top; if the specified object is a designated person, the person's performance then takes place on a plane. Without special handling, after the specified-object material is added to the real-scene image it might appear to "float" in mid-air; if the material is a designated person dancing or singing, the person would appear to perform while "floating", which degrades the user experience and prevents a truly immersive feeling.

Therefore, in a preferred embodiment of the present application, the specified-object material can be added onto a plane included in the real-scene image for display. In a specific implementation, the first terminal can perform plane recognition on the real-scene image and then add the specified-object material onto that plane, avoiding the "floating in the air" phenomenon. In that case, the exact point at which the specified-object material appears can be decided arbitrarily by the first terminal, as long as it lies on a plane. Alternatively, in another implementation, a further step can be taken: the user chooses where the specified-object material appears. Specifically, after real-scene image detection is started, the client can first perform plane detection; once a plane is detected, as shown in Figure 3-2, a region can be drawn and a movable cursor provided, and the interface can prompt the user to move the cursor into the drawn placeable region. After the user moves the cursor into the placeable region, the cursor's color can change to indicate that the placement position is available. The client can then record where the cursor is placed. In a specific implementation, the position information of the cursor can be recorded in several ways. For example, in one way, the position of the first terminal at a certain moment can be taken as the initial position (for instance, the position of the first terminal when the cursor is placed), and a coordinate system is created with that initial position (which may be the geometric center point of the first terminal, and so on) as the origin; after the cursor is placed in the placeable region, the cursor's position relative to this coordinate system is recorded, so that when the specified-object material is later added to the real-scene image, it can be added at that position.

In addition, as mentioned above, in an optional implementation, before the specified-object material formally "enters" the real-scene image, the material representing the transfer channel can also be added to the real-scene image. In the approach above, after the user finishes placing the cursor, the specific transfer-channel material can be presented at the cursor's position. For example, assuming a "portal" material is used as the transfer channel, then, as shown in Figure 3-3, after the user finishes placing the cursor, the user can be prompted with "Plane confirmed, tap to place the portal", and so on; after the user taps the cursor, the "portal" material is presented at the corresponding position.

Later, when the specified-object material actually starts to be added to the real-scene image, the cursor can disappear, and an animation effect can be provided based on the transfer-channel material to show the specified object entering the captured real-scene image through the transfer channel. For example, Figures 3-4 and 3-5 show two states of this animation; as can be seen, they present the effect that someone is about to "enter" the user's home through the portal. After the specified-object material has entered the real-scene image, the material representing the transfer channel disappears. When the interaction ends, the transfer-channel material can be displayed again, together with an animation showing the specified object leaving through the channel; once the object has completely left, the transfer-channel material disappears.

S203: When the video in the second terminal plays to a target event related to the specified object, add the specified-object material to the real-scene image.

In a specific implementation, the point at which the interaction starts can be tied to the target event, broadcast in the second terminal, that corresponds to the specified object. The so-called target event may be an event such as the start of the interactive activity related to the specified object. For example, in the program played by the second terminal, when the "star to your home" segment arrives, a "portal" (which may be physical, or virtual by projection) can be placed on the stage, and the event of the specified object passing out through the on-stage "portal" can serve as the target event; that moment then becomes the start of the interaction, and the first terminal accordingly carries out the processing of adding the specified-object material to the real-scene image for display.

Since the program in the second terminal is usually broadcast live, synchronization with the moment the target event occurs on the second terminal cannot be maintained by presetting a time in the first terminal. Moreover, what the second terminal plays is usually a television signal; although the signal is transmitted at the same moment for everyone, the moment it reaches users in different geographic locations may differ. That is, for the same event of the specified object stepping out of the on-stage "portal", a user in Beijing might see it on the second terminal at 21:00:00, while a user in Guangzhou might only see it at 21:00:02, and so on. Therefore, even if staff of the first server, upon seeing the event happen at the gala venue, sent a notification about the target event to all first terminals simultaneously, users in different regions might still experience different results: some users would find the specified object's crossing seamlessly connected with the event on the second terminal, while others might not, and it could even happen that the specified object has not yet stepped out of the portal in the television program but has already appeared in the real-scene image on the mobile phone.
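Before turning to how the first terminal learns of the target event, the placement bookkeeping described in step S202 can be sketched as follows. This is a minimal illustration, assuming device and hit poses are supplied by the client's AR engine; all names and numbers are hypothetical.

```python
# Sketch of placement bookkeeping: the device position at session start is the
# origin of a session coordinate system, the cursor drop point is stored in
# that system, and if the material later falls outside the camera view the
# user is guided back toward it (the on-screen arrow of Figure 3-6).
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class PlacementSession:
    origin: np.ndarray                       # initial device position (world frame)
    anchor: Optional[np.ndarray] = None      # cursor placement, session coordinates

    def place_cursor(self, hit_world: np.ndarray) -> None:
        # Store the drop point relative to the fixed initial position.
        self.anchor = hit_world - self.origin

    def guidance(self, device_world: np.ndarray) -> np.ndarray:
        # Unit vector from the current device position back toward the anchored
        # material, i.e. the direction the prompt arrow should point.
        delta = (self.anchor + self.origin) - device_world
        return delta / np.linalg.norm(delta)

s = PlacementSession(origin=np.array([0.0, 1.4, 0.0]))
s.place_cursor(np.array([0.5, 0.0, -1.0]))
print(s.guidance(np.array([1.5, 1.4, 0.5])))
```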
To address this, in the embodiments of the present application, since the user usually interacts with a mobile terminal such as a phone while watching television, the first terminal and the second terminal are generally located in the same physical space and not far apart. In this case, the first terminal's awareness of the target event in the second terminal can also be achieved as follows: at the moment the target event occurs, the television program producer adds a sound-wave signal of a preset frequency to the video signal to be transmitted. As the video signal reaches the user's second terminal, the sound-wave signal is delivered with it; its frequency can lie outside the range of human hearing, so the user does not perceive it, but the first terminal can detect it and treat it as the mark that the target event has occurred, then carry out the subsequent interaction. In this way, the occurrence mark of the target event is carried in the video signal itself and conveyed to the first terminal through the second terminal, ensuring that the event the user sees on the second terminal connects seamlessly with the images seen on the first terminal, for a better experience.

The specific frequency information of the sound-wave signal can be determined by the first server and provided by it to the second server; while transmitting the video signal, when the second server finds that the target event related to the specified object is occurring, it can insert the sound-wave signal at the corresponding position in the video signal. The first server can also convey the frequency information to the first terminal by some means, so that the first and second terminals can establish contact through the signal. It should be noted that, in a specific implementation, the same gala may contain multiple "star to your home" segments corresponding to different specified objects, so sound-wave signals of different frequencies can be provided for different specified objects. The first server can provide the correspondence between specified objects and sound-wave frequencies to the second server, which adds the signals according to that correspondence; the correspondence is also provided to the first terminal, which can determine, from the frequency of the detected signal, which specified object the current event corresponds to.

After the interaction starts, as described above, the animation based on the transfer channel can serve as its opening mark, after which the specified-object material is added to the real-scene image; if the user has designated a position, it is added at the corresponding position in the image. If, by the time the material is added, the user has moved the first terminal device, that is, its position has changed relative to the initial position, the material may not appear on the first terminal's display after being added. In this case, since a coordinate system was previously created based on the initial position of the mobile terminal device (which does not change once determined), technologies such as SLAM (Simultaneous Localization and Mapping) can be used to determine the coordinates of the first terminal's new position in that coordinate system, that is, where the first terminal has moved to and in which direction it has moved relative to the initial position; the user can then be guided to move the first terminal in the opposite direction so that the added material appears in the first terminal's picture. As shown in Figure 3-6, the user can be guided to move the first terminal by means of an "arrow".

As mentioned above, the same specified object may have multiple sets of material, for example dancing material, singing material, and so on. Before the specified-object material is added to the real-scene image, an option for choosing a specific set can be provided to the user. While the user is choosing, a short fixed video can be played; for example, its content may be a box bouncing continuously, expressing that the specified object is getting ready, changing clothes, and so on. After the user selects a specific set, the selected material is added to the real-scene image for display. For example, Figure 3-7 shows one frame of the display in a specific example, in which the image of the person is partly a virtual image, while the background behind the person is the real-scene image captured by the first terminal.

Since the interactive material may also include voice sample material recorded by the specified object, after the specified-object material has been added it is also possible to obtain the user-name information of the user associated with the first terminal and generate, for that user, a greeting corpus specific to the user that includes the user name; then, based on the voice sample material, the greeting text is converted into speech and played. Correspondingly, the specified-object material may also contain the motions, expressions, and so on of the specified object greeting the user, so that the user feels it really is the specified object in person saying hello. As for the user name, the corresponding screen name can be determined from the account the user is currently logged into, or the user's real name can be obtained from real-name authentication information provided in advance, and so on; in this way the "a thousand faces for a thousand people" effect is achieved for different users. Of course, if a user's screen name or real name cannot be obtained, a relatively generic form of address can be generated based on the user's gender, age, and so on.

In addition, after the specified-object material has been added to the real-scene image, a shooting operation option can be provided; when an operation request is received through this option, a corresponding image (a photo or video, etc.) can be generated by taking a screenshot or screen recording across the image layers. In this way, a group photo with the specified object can be taken, and so on. That is, when a photo or video is shot, the real-scene image may also include the real-scene image of a person who wants to be photographed together with the specified object. For example, since the user usually interacts at home, other people may be nearby; if they want a photo with the specified object, they can step into the first terminal's image-capture area so that the terminal captures their real-scene image, after which the user performs the shooting by operating the option. In a specific implementation, depth-of-field information can also be used to distinguish the front-to-back positional relationship between the person in the real-scene image and the specified object in the virtual image, further enhancing the sense of realism.

Since the interface also contains operation options such as buttons, when taking the screenshot or screen recording, the image layer used for displaying the operation options can be removed, and the screenshot or recording is taken only of the real-scene image layer and the layer containing the video/animation, to improve the realism of the resulting photo or video.

In a specific implementation, the photo and video functions can be provided through the same operation option, with different gestures distinguishing the user's intention; for example, tapping the option corresponds to taking a photo, and long-pressing it corresponds to shooting a video, and so on. That is, if the user simply taps the option, a screenshot is triggered and a photo is generated; if the user presses and holds, screen recording is triggered until the user lets go. In addition, the length of each recording can be limited; for example, if each video may not exceed 10 s, then once the user has held the option for more than 10 s the recording ends even if the option is still held, producing a video of at most 10 s, and so on.

During the interaction, an operation option for sharing the photos or videos taken can also be provided. For example, it can sit beside the shooting option and display prompt text such as "Tap to share the highlights", and so on. After the user taps it, sharing entries for multiple social network platforms can be offered, from which the user can choose a platform to share on.

Moreover, during the interaction, besides playing the video/animation corresponding to the specified object or taking and sharing commemorative photos, other interactive options can be provided to the user; for example, an option to join a public-welfare activity, which the user can tap directly if willing to participate. The number of people joining the activity through this channel can also be linked to the activity's progress: the server tallies this number and provides it in real time to the director and other staff at the program venue, so that the stage set can change as the activity progresses, for example the set gradually turning from desert into oasis, and so on.

After the interaction ends, the specified-object material is no longer displayed; of course, the specified object may appear again in the second terminal's picture. Therefore, to better present the "crossing back", the transfer-channel material can be shown again at this point. As shown in Figure 3-8, an animation of the person leaving through the transfer channel can be provided, and the channel material itself can gradually shrink; once the departure is complete, the transfer-channel material also disappears from the picture.

After the interaction ends, the real-scene image capture interface can be exited. At this point, in an optional implementation, a follow-up page for browsing and sharing the photos or videos taken can also be provided. That is, after the interaction ends, a follow-up page can guide the user to share the captured photos or videos. The photos or videos can be ordered on this page by shooting time; for example, as shown in Figure 3-9, they can be displayed from left to right from most recent to earliest, and so on. If the user taps any photo or video during browsing, the sharing-component interface can be invoked, as shown in Figure 3-10, through which the user completes the sharing operation.
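The sound-wave detection described in this step can be illustrated with a minimal sketch: a single-bin DFT (Goertzel-style) is evaluated on short microphone frames at each preset frequency, and a frequency-to-object table stands in for the correspondence the first server would provide. The frequencies and threshold below are illustrative assumptions, not values from the application.

```python
# Sketch of detecting a preset near-ultrasonic tone in microphone frames.
import numpy as np
from typing import Optional

OBJECT_FREQS = {18500.0: "star_A", 19000.0: "star_B"}  # Hz -> specified object (hypothetical)

def bin_power(frame: np.ndarray, freq: float, rate: int) -> float:
    # Correlate the (mono, float) frame with a complex exponential at the
    # target frequency: the magnitude of one DFT bin.
    n = np.arange(frame.size)
    return np.abs(np.sum(frame * np.exp(-2j * np.pi * freq * n / rate))) / frame.size

def detect_object(frame: np.ndarray, rate: int = 44100,
                  threshold: float = 0.01) -> Optional[str]:
    for freq, obj in OBJECT_FREQS.items():
        if bin_power(frame, freq, rate) > threshold:
            return obj  # the target event for this object has occurred
    return None
```

Because each specified object maps to its own frequency, the same loop also identifies which object's segment has started, matching the correspondence table described above.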
In summary, through the embodiments of the present application, specified-object material can be loaded; during the interaction, a real-scene image of the user's actual environment is captured, and when the target event corresponding to the specified object is broadcast in the second terminal, the specified-object material is added to the real-scene image for display. The user thus experiences the specified object arriving in his or her own physical environment (for example, his or her home), and user participation in the interaction can therefore be increased.

Embodiment 2

Corresponding to Embodiment 1, this embodiment provides a multi-screen interaction method from the perspective of the server. Referring to Figure 4, the method may specifically include:

S401: A first server saves interactive material, the interactive material comprising specified-object material created according to a specified object.

S402: Provide the interactive material to a first terminal, so that the first terminal captures a real-scene image and, when the video in a second terminal plays to a target event corresponding to the specified object, adds the specified-object material to the real-scene image.

In a specific implementation, the specified object includes a designated person; of course, it may also include an animal, a commodity, a prop, and so on.

When providing the interactive material, video material obtained by filming the specified object can be provided; alternatively, a cartoon image modeled on the specified object, together with animation material produced from that cartoon image, can be provided. Specifically, when the specified object is a designated person, voice sample material recorded by that person can also be provided.

In a specific implementation, to make it easier for the first terminal to perceive the occurrence of the target event in the second terminal, the first server can also provide a sound-wave signal of a preset frequency to the second server corresponding to the second terminal, to be added to the video when the video in the second terminal plays to the target event corresponding to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the preset-frequency signal.

In a specific implementation, the server can also compile statistics on the interaction of each client. The statistics can be provided to the second server corresponding to the second terminal, which adds them to the video played by the second terminal, so that the results are announced through the second terminal; alternatively, the results can influence the stage set at the gala venue, and so on.

Since this embodiment corresponds to Embodiment 1, details of the implementation can be found in the description of Embodiment 1 and are not repeated here.

Embodiment 3

This embodiment provides a multi-screen interaction method from the perspective of the second terminal. Referring to Figure 5, the method may specifically include:

S501: A second terminal plays a video.

S502: When the video plays to a target event related to a specified object, play a sound-wave signal of a preset frequency, so that a first terminal learns of the occurrence of the target event by detecting the signal and adds specified-object material to a captured real-scene image.

In a specific implementation, different specified objects can correspond to sound-wave signals of different frequencies.

Embodiment 4

This embodiment provides a multi-screen interaction method from the perspective of the second server corresponding to the second terminal. Referring to Figure 6, the method may specifically include:

S601: A second server receives information on a sound-wave signal of a preset frequency provided by a first server.

S602: Insert the sound-wave signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that, while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the signal and adds specified-object material to a captured real-scene image.

In a specific implementation, the second server can also receive statistics, provided by the first server, on the interaction of first terminals, and add the statistics to the video for transmission and playback through the second terminal.

Embodiment 5

In Embodiments 1 to 4, the multi-screen interaction is realized between a first terminal and a second terminal. In practical applications, the user may also watch videos such as a live gala through the first terminal itself; in that case, the user can likewise obtain the "star in my home" experience while watching the video through the first terminal. That is, watching the video and interacting can take place on the same terminal.

Specifically, referring to Figure 7, this embodiment provides a video interaction method, which may include:

S701: A first terminal loads interactive material, the interactive material comprising specified-object material created according to a specified object.

S702: When the video in the first terminal plays to a target event related to the specified object, jump to an interactive interface.

S703: Display the real-scene image capture result in the interactive interface, and add the specified-object material to the real-scene image.

That is, while the user is watching a video through the first terminal and the video plays to the target event related to the specified object, the terminal can jump to an interactive interface, in which real-scene image capture is performed first and the specified-object material is then added to the real-scene image. The user likewise experiences the specified object "crossing over" from the "gala venue" or elsewhere into his or her own physical environment.

For other implementation details of this embodiment, refer to the descriptions in the preceding embodiments, which are not repeated here.

Embodiment 6

Corresponding to Embodiment 5, this embodiment provides a video interaction method from the perspective of the first server. Referring to Figure 8, the method may specifically include:

S801: A first server saves interactive material, the interactive material comprising specified-object material created according to a specified object.

S802: Provide the interactive material to a first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays the real-scene image capture result in the interactive interface, and adds the specified-object material to the real-scene image.

Embodiment 7

In the preceding embodiments, the specific interaction results are provided with a first terminal such as a mobile phone as the executing entity. In practical applications, the solution can be extended to other scenarios: for example, besides mobile phones, wearable devices such as smart glasses can serve as the first terminal, and the interaction can take place not only with video played on a second terminal or on the first terminal, but also with video played on a cinema screen, or in the course of live performances, shows, merchant promotions, sports events, and other such occasions. To this end, this embodiment provides another interaction method. Referring to Figure 9, the method may specifically include:

S901: A first terminal loads interactive material, the interactive material comprising specified-object material created according to a specified object.

In a specific implementation, since the application scenario in this embodiment need not be limited, before the specific interactive interface is presented, an interface for loading interactive material can also be provided, offering a variety of optional material: for example, material related to films currently showing in cinemas, material related to offline performances, promotions, competitions, and other events, and so on, from which the user can choose what to download. In addition, since some applications provide online ticket booking, through which the user can book not only film tickets but also tickets for performances, competitions, and the like, the interactive material can also be provided according to the user's booking information; for example, when the user books a film ticket through an online booking system and interactive material related to that film happens to exist, the user can be prompted to download it, and so on. It should be noted that, in the embodiments of the present application, the downloaded interactive material can be saved locally on a terminal such as a mobile phone, or downloaded to a terminal such as a wearable device, to make interaction more convenient while watching the film or performance.

In this embodiment, the specified object can likewise be a designated person, commodity, prop, and so on.

S902: Capture a real-scene image.

Wearable devices usually carry cameras and similar components, so real-scene images can be captured through the wearable device; the image the user actually sees through glasses or another wearable while watching the film or performance can be the real-scene image captured by the glasses.

S903: When a target event related to the specified object is detected, add the specified-object material to the real-scene image.

The target event may be the appearance of the specified object in the film, performance, competition, promotion, and so on. The target event can be detected in several ways. For example, in one way, the exhibitor or organizer of the film, performance, competition, or promotion can insert sound-wave information or the like at specific event points, and the wearable device learns of the event by detecting this signal. Alternatively, in other implementations, the occurrence of the target event can be learned directly by analyzing the captured real-scene images. For example, during interaction with a wearable device, the real-scene image captured by the device's camera is usually the same as, or overlaps with, what the user actually sees, so if the user sees a target event occur, the camera can also capture information about it; the wearable device may also carry a sound collector and the like, so the occurrence of the target event can be learned through image analysis, speech analysis, and so on.
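As a complement to the detection sketch above, the second server's side of the scheme in Embodiment 4 can be sketched as mixing a short, low-amplitude tone of the preset frequency into the program audio at the offset where the target event occurs. This is a hedged illustration; the event offset, duration, and amplitude are assumptions, and mono float audio in [-1, 1] is assumed.

```python
# Sketch of inserting a preset-frequency tone at the target-event offset.
import numpy as np

def insert_tone(audio: np.ndarray, rate: int, event_s: float, freq: float,
                duration_s: float = 0.5, amplitude: float = 0.02) -> np.ndarray:
    start = int(event_s * rate)
    n = min(int(duration_s * rate), max(0, audio.size - start))  # stay in bounds
    t = np.arange(n) / rate
    out = audio.copy()
    out[start:start + n] += amplitude * np.sin(2 * np.pi * freq * t)
    return np.clip(out, -1.0, 1.0)
```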
Corresponding to Embodiment 1, an embodiment of the present application further provides a multi-screen interaction apparatus. Specifically, referring to Figure 10, the apparatus is applied to a first terminal and includes:

a first material loading unit 1001, configured to load interactive material, the interactive material comprising specified-object material created according to a specified object;

a first real-scene image capture unit 1002, configured to capture a real-scene image; and

a first material adding unit 1003, configured to add the specified-object material to the real-scene image when a video in a second terminal plays to a target event related to the specified object.

The video played in the second terminal is a live video stream. The first terminal receives a sound-wave signal of a preset frequency emitted during playback of the video and thereby determines that the target event has occurred; the sound-wave signal may be emitted when the video in the second terminal plays to the target event corresponding to the specified object.

The first material adding unit may specifically be configured to add the specified-object material onto a plane included in the real-scene image for display.

In a specific implementation, the apparatus may further include: a placement position determining unit, configured to determine a placement position in the captured real-scene image before the specified-object material is added; the first material adding unit may then be configured to add the specified-object material at the placement position. The placement position determining unit may specifically be configured to determine a plane position in the captured real-scene image, the placement position lying within that plane position. Specifically, the placement position determining unit may include: a plane detection subunit, configured to perform plane detection in the captured real-scene image; a cursor providing subunit, configured to provide a cursor and determine, from the detected plane, the range in which the cursor can be placed; and a placement position determining subunit, configured to take the position where the cursor is placed as the placement position. The placement position determining subunit may in turn include: a coordinate system establishing subunit, configured to establish a coordinate system with the initial position of the first terminal as the origin; a coordinate determining subunit, configured to determine the cursor coordinates, in that coordinate system, of the position where the cursor is placed; and a position determining subunit, configured to take the cursor coordinates as the placement position.

Specifically, the apparatus may further include: a change direction determining unit, configured to determine, after the specified-object material is added at the placement position, the direction of change of the first terminal relative to the initial position when the material does not appear in the first terminal's interface; and a prompting unit, configured to provide, according to that direction of change, a prompt indicator of the opposite direction in the first terminal's interface.

In a specific implementation, the interactive material may further include material representing a transfer channel, and the apparatus may further include: a channel material adding unit, configured to add the material representing the transfer channel to the real-scene image after the real-scene image capture step. The first material adding unit may specifically be configured to display, based on the transfer-channel material, the process of the specified object entering the captured real-scene image through the transfer channel.

In addition, the interactive material may further include voice sample material recorded by the specified object, and the apparatus may further include: a user name obtaining unit, configured to obtain the user-name information of the user associated with the first terminal; a greeting corpus generating unit, configured to generate, for the associated user, a greeting corpus including that user name; and a playing unit, configured to convert the greeting into speech based on the voice sample material and play it.

Where the specified-object material comprises multiple sets of material corresponding to the same specified object, the apparatus further includes: a material selection option providing unit, configured to provide an operation option for selecting among the specified-object material; the first material adding unit may then be configured to add the selected specified-object material to the real-scene image.

The apparatus may further include: a shooting option providing unit, configured to provide a shooting operation option while the specified-object material is displayed in the real-scene image; and an image generating unit, configured to receive an operation request through the shooting operation option and generate a corresponding image from the image layers, the image layers including the real-scene image and the image of the specified-object material. The image generating unit may specifically be configured to take a screenshot or screen recording of the image layers, removing the layer used for displaying operation options, to generate the captured image. In a specific implementation, the real-scene image may also include the image of a person being photographed together with the specified object. The apparatus may further include: a sharing option providing unit, configured to provide an operation option for sharing the captured image; and a follow-up page providing unit, configured to provide a follow-up page for browsing and sharing the captured images.

In a specific implementation, the specified object includes information on a designated person, or information on a designated commodity. Specifically, the apparatus may further include: a snap-up option providing unit, configured to provide, after the specified-object material is added to the real-scene image, an operation option for snapping up the data object associated with the designated commodity; and a submitting unit, configured to submit a snap-up operation received through that option to the server, which determines the snap-up result. The specified object may also include prop information related to an offline game, in which case the apparatus may further include: an operation information submitting unit, configured to submit, when operation information on the target prop is received after the specified-object material is added to the real-scene image, the operation information to the server, which determines the reward information obtained by the operation and returns it; and a reward information providing unit, configured to present the reward information obtained.

The specified-object material includes video material obtained by filming the specified object, or a cartoon image modeled on the specified object together with animation material produced from that cartoon image.

Corresponding to Embodiment 2, an embodiment of the present application further provides a multi-screen interaction apparatus. Referring to Figure 11, the apparatus is applied to a first server and includes:

a first interactive-material saving unit 1101, configured to save interactive material, the interactive material comprising specified-object material created according to a specified object; and

a first interactive-material providing unit 1102, configured to provide the interactive material to a first terminal, so that the first terminal captures a real-scene image and, when the video in a second terminal plays to a target event corresponding to the specified object, adds the specified-object material to the real-scene image.

The specified object includes a designated person. Specifically, the first interactive-material saving unit may be configured to save video material obtained by filming the specified object, or to save a cartoon image modeled on the specified object together with animation material produced from that cartoon image. Where the specified object includes a designated person, the first interactive-material saving unit may further be configured to save voice sample material recorded by the designated person.

The apparatus may further include: a sound-wave signal information providing unit, configured to provide a sound-wave signal of a preset frequency to the second server corresponding to the second terminal, to be added to the video when the video in the second terminal plays to the target event corresponding to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the preset-frequency signal. It may also include: a statistics unit, configured to compile statistics on the interaction of each first terminal; and a statistics providing unit, configured to provide the statistics to the second server corresponding to the second terminal, which adds them to the video played by the second terminal.

Corresponding to Embodiment 3, an embodiment of the present application further provides a multi-screen interaction apparatus. Referring to Figure 12, the apparatus is applied to a second terminal and includes:

a video playing unit 1201, configured to play a video; and

a sound-wave signal playing unit 1202, configured to play a sound-wave signal of a preset frequency when the video plays to a target event related to a specified object, so that a first terminal learns of the occurrence of the target event by detecting the signal and adds specified-object material to a captured real-scene image.

Different specified objects correspond to sound-wave signals of different frequencies.

Corresponding to Embodiment 4, an embodiment of the present application further provides a multi-screen interaction apparatus. Referring to Figure 13, the apparatus is applied to a second server and includes:

a sound-wave signal information receiving unit 1301, configured to receive information on a sound-wave signal of a preset frequency provided by a first server; and

a sound-wave signal information inserting unit 1302, configured to insert the sound-wave signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that, while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the signal and adds specified-object material to a captured real-scene image.

The apparatus may further include: a statistics receiving unit, configured to receive statistics, provided by the first server, on the interaction of first terminals; and a statistics playing unit, configured to add the statistics to the video for transmission and playback through the second terminal.

Corresponding to Embodiment 5, an embodiment of the present application further provides a video interaction apparatus. Referring to Figure 14, the apparatus is applied to a first terminal and includes:

a loading unit 1401, configured to load interactive material, the interactive material comprising specified-object material created according to a specified object;

an interface jumping unit 1402, configured to jump to an interactive interface when the video in the first terminal plays to a target event related to the specified object; and

a material adding unit 1403, configured to display the real-scene image capture result in the interactive interface and add the specified-object material to the real-scene image.

Corresponding to Embodiment 6, an embodiment of the present application further provides a video interaction apparatus. Referring to Figure 15, the apparatus is applied to a first server and includes:

a second material saving unit 1501, configured to save interactive material, the interactive material comprising specified-object material created according to a specified object; and

a second material providing unit 1502, configured to provide the interactive material to a first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays the real-scene image capture result in the interactive interface, and adds the specified-object material to the real-scene image.

Corresponding to Embodiment 7, an embodiment of the present application further provides an interaction apparatus. Referring to Figure 16, the apparatus may include:

a second material loading unit 1601, configured to load interactive material, the interactive material comprising specified-object material created according to a specified object;

a second real-scene image capture unit 1602, configured to capture a real-scene image; and

a second material adding unit 1603, configured to add the specified-object material to the real-scene image when a target event related to the specified object is detected.
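For illustration, the layered capture performed by the image generating unit (compositing the real-scene layer with the specified-object layer while omitting the operation-option layer) might look like the following sketch, assuming the layers are available as same-sized RGBA images; the file names are hypothetical.

```python
# Sketch of layered capture: the saved photo composites the real-scene layer
# and the specified-object layer; the UI layer is simply never drawn into it.
from PIL import Image

def capture(real_scene: Image.Image, object_layer: Image.Image) -> Image.Image:
    # Both layers are assumed to share one size; convert ensures RGBA mode.
    return Image.alpha_composite(real_scene.convert("RGBA"),
                                 object_layer.convert("RGBA"))

capture(Image.open("scene.png"), Image.open("star.png")).save("photo.png")
```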
In addition, an embodiment of the present application further provides an electronic device, including:

one or more processors; and

a memory associated with the one or more processors, the memory being configured to store program instructions that, when read and executed by the one or more processors, perform the following operations: loading interactive material, the interactive material comprising specified-object material created according to a specified object; capturing a real-scene image; and, when a video in a second terminal plays to a target event related to the specified object, adding the specified-object material to the real-scene image.

Figure 17 shows an exemplary architecture of the electronic device. For example, the device 1700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, an aircraft, and so on.

Referring to Figure 17, the device 1700 may include one or more of the following components: a processing component 1702, a memory 1704, a power component 1706, a multimedia component 1708, an audio component 1710, an input/output (I/O) interface 1712, a sensor component 1714, and a communication component 1716.

The processing component 1702 generally controls the overall operation of the device 1700, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 1702 may include one or more processors 1720 to execute instructions so as to complete all or part of the steps of the video playing method provided by the technical solution of this disclosure: when a preset condition is met, generating a traffic compression request and sending it to the server, the traffic compression request recording information for triggering the server to obtain a target region of interest, and the traffic compression request being used to ask the server to preferentially guarantee the bitrate of video content within the target region of interest; and playing, according to the bitstream file returned by the server, the video content corresponding to that file, where the bitstream file is a video file obtained by the server performing bitrate compression on video content outside the target region of interest according to the traffic compression request. In addition, the processing component 1702 may include one or more modules to facilitate interaction between the processing component 1702 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 1708 and the processing component 1702.

The memory 1704 is configured to store various types of data to support operation on the device 1700. Examples of such data include instructions for any application or method operating on the device 1700, contact data, phone book data, messages, pictures, videos, and so on. The memory 1704 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.

The power component 1706 provides power to the various components of the device 1700. The power component 1706 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 1700.

The multimedia component 1708 includes a screen that provides an output interface between the device 1700 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel. The touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it. In some embodiments, the multimedia component 1708 includes a front camera and/or a rear camera. When the device 1700 is in an operating mode such as shooting mode or video mode, the front and/or rear camera can receive external multimedia data. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.

The audio component 1710 is configured to output and/or input sound-wave signals. For example, the audio component 1710 includes a microphone (MIC), which is configured to receive external sound-wave signals when the device 1700 is in an operating mode such as call mode, recording mode, or speech recognition mode. The received sound-wave signals may be further stored in the memory 1704 or sent via the communication component 1716. In some embodiments, the audio component 1710 also includes a speaker for outputting sound-wave signals.

The I/O interface 1712 provides an interface between the processing component 1702 and peripheral interface modules, which may be keyboards, click wheels, buttons, and so on. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.

The sensor component 1714 includes one or more sensors for providing status assessments of various aspects of the device 1700. For example, the sensor component 1714 can detect the open/closed state of the device 1700 and the relative positioning of components (for example, the display and keypad of the device 1700); it can also detect a change in position of the device 1700 or of one of its components, the presence or absence of user contact with the device 1700, the orientation or acceleration/deceleration of the device 1700, and temperature changes of the device 1700. The sensor component 1714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. It may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component 1716 is configured to facilitate wired or wireless communication between the device 1700 and other devices. The device 1700 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1716 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1716 also includes a near-field communication (NFC) module to facilitate short-range communication; for example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In exemplary embodiments, the device 1700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.

In exemplary embodiments, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 1704 including instructions, which can be executed by the processor 1720 of the device 1700 to complete the video playing method provided by the technical solution of this disclosure: when a preset condition is met, generating a traffic compression request and sending it to the server, the traffic compression request recording information for triggering the server to obtain a target region of interest, and being used to ask the server to preferentially guarantee the bitrate of video content within the target region of interest; and playing, according to the bitstream file returned by the server, the video content corresponding to that file, where the bitstream file is a video file obtained by the server performing bitrate compression on video content outside the target region of interest according to the traffic compression request. The non-transitory computer-readable storage medium may be, for example, a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

From the description of the above implementations, those skilled in the art can clearly understand that the present application can be implemented by software plus the necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium such as ROM/RAM, magnetic disk, or optical disc, and includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present application or in certain parts of the embodiments.

The embodiments in this specification are described in a progressive manner; for identical or similar parts between embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, since the system or system embodiments are basically similar to the method embodiments, they are described relatively simply, and reference may be made to the description of the method embodiments for relevant details. The systems and system embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiments. Those of ordinary skill in the art can understand and implement this without creative effort.

The multi-screen interaction method and apparatus and the electronic device provided by the present application have been introduced in detail above. Specific examples have been used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. At the same time, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application based on the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
The technical solutions in the embodiments of the present application are clear, Completely described, Obviously, The described embodiments are only a part of the embodiments of the present application. Rather than all embodiments. Based on the embodiments in the present application, All other embodiments obtained by those of ordinary skill in the art, All are within the scope of this application.  In the embodiment of the present application, Provides a new multi-screen interactive solution. In this scenario, It is mainly an interaction between a mobile terminal device (referred to as a first terminal in the embodiment of the present application) such as a user's mobile phone and a terminal device (referred to as a second terminal in the present application embodiment) having a large screen such as a television. specific, Some programs such as large-scale live-party evenings can be used through the second terminal (of course, Can also be other types of programs) during playback, Conduct the above interaction process. E.g, In the live show, The program organizers will invite some entertainment stars and other performances. but, In the prior art, The user can only watch the performance of the star on the stage from the second terminal. In the embodiment of the present application, You can use some technical means, Enable users to get the "star to my home" experience. When it is implemented, Materials related to performances such as specific entertainment stars can be provided in advance. In the performance link of the character in the second terminal, Passing augmented reality in the first terminal, a pre-recorded performance video of the character, Materials such as animations are projected into the real environment where the user is located. E.g, The user usually watches the program in the second terminal such as TV at home. therefore, It is possible to project specific character performance videos/animations and the like into the user's home. such, Although the user still needs to see the specific projection result through the screen of the second terminal, but, Since the background of the performance is a real-life image captured in the user's environment, therefore, Relative to the performance on the stage viewed from the first terminal, It is possible to make the user get the experience that the "star" is really in his home. of course, In the specific implementation, In addition to the designated person, It can also be an animal, It can even be a commodity, and many more, In the embodiment of the present application, Unified is called "specified object."  When it is implemented, From a system architecture perspective, See Figure 1, The hardware device involved in the embodiment of the present application may include the foregoing first terminal and the second terminal. The software involved may be an associated application client installed in the first terminal (or, It can also be a program that is solidified in the first terminal, etc.) And the first server in the cloud. E.g, Assume that the above interaction is provided during the "Double 11" party. Because the organizer of the "Double 11" party is usually an online sales platform (for example, "Mobile Taobao", "Tmall", etc., the company, therefore, The application client and server provided by the online sales platform can be accessed. Provide technical support for the above multi-screen interaction. 
That is, Users can use "Mobile Taobao", Clients of applications such as "Tmall" come to the specific interaction process, And the materials used in the interaction process, etc. It can be provided by the server. It should be noted, The second terminal mainly exists as a playback terminal. The content such as the video played therein may be controlled by the second server of the back end (the server of the television station, etc.). That is, Regarding signals such as live video streams, A unified video signal transmission operation, etc., can be performed by the second server. after that, The video signal is transmitted to each of the second terminals for playback. That is, In the multi-screen interactive scenario provided by the embodiment of the present application, The first terminal corresponds to the second terminal and is a different server.  The specific implementation scheme is described in detail below.  Embodiment 1 First, In the first embodiment, from the perspective of the client, Provides a multi-screen interaction method, See Figure 2, The method may specifically include:  S201: The first terminal loads the interactive material, The interactive material includes a specified object material created according to the specified object;   among them, About interactive materials, that is, in the process of interacting with augmented reality, The material needed to generate information content such as virtual images. When it is implemented, The specified object may specifically be information related to the specified person. Or specify information about the product, Or, It can also be information related to items related to offline games. and many more. among them, Different specified objects can correspond to different interactive scenarios. E.g, When the specified object is the specified person, The specific scene can be a "star to your home" event. That is, E.g, During the process of watching TV programs on TV, etc. Through the program, The "stars" that participate in the performance in the program can be "traversed" to the user's home. And when the specified object is the specified item, The product may generally be a product related to a physical product sold in an online sales system, or the like. usually, Users need to pay more resources to purchase the item. but, During the event, Can be given by gift or ultra-low-cost sales, etc. Give it as a gift to the user. In the process of giving gifts, The "cross-screen gift" can be implemented in the manner of the embodiment of the present application. specific, By playing content related to a specified item in a second terminal such as a television, And "traversing" to the first terminal such as the user's mobile phone, In addition, an operation option for snapping up the data object associated with the specified item may also be provided. When receiving a snap-in operation through this operation option, Submit to the server, The panning result is determined by the server. therefore, Allow users to get the opportunity to snap up or draw, And then get the corresponding goods, or, Get the opportunity to buy the corresponding item at an ultra-low price, and many more.  In addition, When the specified object is an item related to the offline game, Another form of "cross-screen gift" can be used. That is, During the event, If the system wants to provide some coupons for users, Non-physical gifts such as "cash red envelopes", You can then send the gift, Associated with games such as offline magic. 
E.g, In the process of playing a program such as magic on a second terminal such as a TV, Some props may be used during the period. The solution of the embodiment of the present application is The item can be "traversed" to be displayed in the first terminal device such as the user's mobile phone. and then, The user can click on the item, etc. Carry out the collection of non-physical gifts. That is, Receiving information on the operation of the target item, Submit the operation information to the server, Determining, by the server, the reward information obtained by the operation and returning the information. then, The first terminal can provide the reward information obtained. and many more.  The specified object material may specifically include a video material obtained by shooting the specified object, E.g, If the specified object specifically refers to the specified person, You can sing in advance to the designated person’s performance, Video recording, such as dancing, Get the video clip. or, In another way, The specified object material may also include: a cartoon image made with the image of the specified object as a prototype, And animation material based on the cartoon image and so on. E.g, In the case where the specified object is the specified person, You can create a cartoon character based on the image of the designated person. And based on the cartoon character image to create animation material, Including dancing animations with cartoon characters, Singing animation, etc. among them, If you need to "sing", etc. The character can be dubbed by the designated character. or, Play the pre-recorded song of the specified character, and many more.   among them, For the same specified object, Can correspond to multiple sets of different specified object materials, E.g, For the same designated person, Different shows of the show, You can generate different character materials separately. and many more. That is, The material corresponding to the same specified object can be multiple sets. Specifically, when the specified object enters a user's "home", The specific material can be selected by the user. then, Provide specific augmented reality images with selected footage.  In addition, The interactive material provided by the first server may also include: Material used to represent the transfer channel. E.g, Specifically through the door, tunnel, Wormhole, Mascots such as "Tmall", The material is generated by transmitting a light array or the like. This material used to represent the transfer channel can be used to: Before specifically adding the specified object material to the live image, Since the designated object was originally performed on the stage of the party scene, But then it will come to the user’s home, therefore, To enhance the fun, It also makes the position change of the specified object more reasonable. You can also first pass the material used to represent the transfer channel. Play preset animations, Create an atmosphere where a designated object will “traverse” into its home through this transmission channel. Make users get a better experience. In addition, After the interaction is over, When the specified object needs to leave home, It is also possible to provide an animation of the opposite process to the home when the material used to represent the transmission channel is provided. Causing the user to get the specified object to leave the house, The experience of gradually closing the transfer channel.  
Furthermore, The interactive material provided by the first server may further include a voice sample material recorded by the specified object. This voice sample material can be used to greet the user to indicate a greeting when the specified object "enters" into the user's home. and, You can also get the user's user name (including the name, before the specific greeting). Real name, etc.), etc. Achieve the greetings of “Thousands of People” E.g, "XXX, I am coming to your house," among them, For different users, The specific content of "XXX" is different. The above greeting will be greeted by the specified object by voice. In order to achieve the above-mentioned purpose of "thousands of thousands of people", It cannot be achieved directly by pre-recording a greeting voice. to this end, In the embodiment of the present application, A specific text can be read in advance by a specified object (specifically, corresponding to a designated person). And recording the voice of each text read aloud, Most of the initials will be included in this text. The pronunciation of the finals and tones. When it is implemented, The above specific text can usually be about a thousand or so. It can basically cover 90% of Chinese pronunciation. such, When the specified object just "enters" the user's home, After generating a specific greeting based on the user’s distinguished name, It is possible to transmit the pronunciation information of each Chinese character saved in the voice sample material. Send the corresponding voice, In order to achieve the effect of shouting the user name and greetings by the specified object.   of course, In practical applications, Can also include other materials, It is not listed here one by one. When it is implemented, The amount of information on the above interactive material may be large. The process of loading the interactive material by the first terminal may take a long time. therefore, It can be downloaded to the first terminal locally in advance. E.g, After the party played in the second terminal starts, The user can access the main venue interface of the party through the first terminal. While watching the program in the second terminal, Be prepared to interact through the main venue interface. The specific "star to your home" link may be at some point during the evening. Synchronized with the state of the second terminal, therefore, As long as after the party starts, The user enters the main conference site interface of the first terminal. Even if the specific "star to your home" event has not been officially launched, It is also possible to perform related interactive material downloading operations in advance. such, After the specific event begins, You can quickly move to the interactive process. Avoid situations where the active material has not been downloaded successfully and cannot participate in the event in time. of course, For users who do not enter the main site interface provided by the first terminal in advance, If you need to participate in the above "Star to your home" event, You can also temporarily download related interactive material. among them, For the case of temporary downloads, To avoid taking too long to download, A downgrade scheme can be provided. E.g, You can download only the specific object material mentioned above. The material used to express the transmission channel and the voice sample material can be downloaded no longer. at this time, Users don’t realize the feeling of “crossing”. 
The greeting of the specified object is also not received.  S202: Collecting real-life images;  When it is implemented, The first terminal can provide a corresponding activity page for activities such as "star to your home". An action option for making an interactive request can be provided on this page. E.g, As shown in Figure 3-1, It is a schematic diagram showing the activity page in an example. It can provide information such as related specified objects. You can also provide buttons such as "Start Now". The button can be an operation option for the user to make an interactive request. Users can make specific interaction requests by clicking on the "Start Now" button. of course, In practical applications, There are other ways to receive user interaction requests. E.g, A QR code can be displayed on the second terminal screen. The user sends a request by scanning the two-dimensional code through the first terminal, and many more.  When it is implemented, Operation options such as the "start now" button can be inoperable until the formal interaction process begins. Avoid user premature clicks. In addition, The copying aspect displayed on the operation options can also be different. E.g, In an inoperable state, Can be displayed as "exciting to open immediately", and many more. Before the interaction is about to begin, Then change the copy displayed on the button to the state of "start now". and, In order to create a nervous, Anxiously waiting for the atmosphere, At the same time, it is more attractive to users to perform click operations. You can also display the "breathing" effect on the display. E.g, The button can shrink at a rate of 70%. After 3S, it will return to its original size. After 3S, it shrinks again. And repeat this rhythm, and many more.   among them, The point in time at which the user interaction request is received may be earlier than the point in time when the specified object officially disappears from the second terminal and "enters the user's home". Really because After the user makes an interaction request, The client can also perform some preparatory work in advance. specific, After receiving the user’s interaction request, The live image acquisition in the first terminal can be first turned on, That is, The camera component on the first terminal can be activated. Then enter the state of live shooting, Prepare for subsequent augmented reality-based interactions.  When it is implemented, Before specifically launching the collection of real-life images, It is also possible to first determine whether the interactive material is already loaded locally in the first terminal. If not already loaded, Then you can first load the interactive material.   It should be noted, In the embodiment of the present application, The virtual image presented to the user through the augmented reality is related to the specified object material, In order to make the process of interaction more authentic, It is possible to make the specified object material appear on a plane in the real image. E.g, Can be the ground, The plane of the table, and many more, such, If the specified object is a specified person, Then, the performance process of the designated character can be performed on a plane. And if no special treatment is done, After the specified object material is added to the live image, It may happen that the specified object material is "floating" in midair. 
If the corresponding specified object material is the dance of the specified character, Singing and other performances, Will make the designated character "floating" in midair, This will reduce the user experience, Can't give users a more realistic immersive immersion.   to this end, In a preferred embodiment of the present application, It is also possible to display the specified object material to a plane included in the live image. When it is implemented, The plane recognition can be performed from the real image by the first terminal. then, Adding the specified object material to the plane in the real image, Avoid the phenomenon of “floating” in the air. at this time, Regarding where the specified object material appears, Can be arbitrarily determined by the first terminal, Just sit on a flat surface. or, In another implementation, Can go one step further, The user selects the location of the specific specified object material. specific, After launching the live image detection, The client can first perform plane detection from it. After detecting a plane, As shown in Figure 3-2, Can draw a range, And provide a movable cursor. You can also prompt the user to place the cursor in the drawn dropable range in the interface. After the user moves the cursor to the droppable range, The color of the cursor can change. To indicate that the user's placement location is available. at this time, The client can record where the cursor is actually placed. When it is implemented, In order to record the location information where the cursor is placed, There are many ways, E.g, In one way, The position where the first terminal is located at a certain moment can be used as the initial position (for example, It is possible to set the position of the first terminal as the initial position when the cursor is placed. and many more), And creating a coordinate system with the initial position (which may be the geometric center point of the first terminal, etc.) as a coordinate origin. then, After the cursor is placed in a specific dropable range, You can record the position of the cursor relative to the coordinate system. such, Subsequent when the specified object material is added to the live image, It can be added based on this position.  In addition, as described above, In an alternative embodiment, Before the specified object material is officially "entered" into the live image, You can also add the material used to represent the transfer channel to the live image. In the above manner, After the user completes the cursor placement, It is also possible to present a specific material for expressing the transmission channel at the position where the cursor is located. E.g, Assume that the "portal" material is used as the transmission channel. When it is implemented, As shown in Figure 3-3, After the user completes the placement of the cursor, You can prompt the user to "confirm the plane, Click to place the portal" and many more, After the user clicks on the cursor, It is possible to present a "portal" material at the corresponding location.  Subsequent specifics at the beginning to add the specified object material to the live image, The cursor can disappear, and, It can also animate based on the channel material. An animation process for displaying that a specified object enters the captured live image through the transmission channel. 
Subsequently, when the specified object material formally begins to be added to the live image, the cursor can disappear, and an animation based on the channel material can be played, showing the specified object entering the captured live image through the transfer channel. For example, Figures 3-4 and 3-5 show two states in this animation process; as can be seen, they present the effect of someone "entering" the user's home through the portal. After the specified object material has entered the live image, the material representing the transfer channel disappears. At the end of the interaction, the material representing the transfer channel can be displayed again, together with an animation showing the specified object leaving through the transfer channel; after the object has left completely, the material representing the transfer channel disappears.

S203: When the video in the second terminal plays to a target event related to the specified object, adding the specified object material to the live image.

In specific implementation, the point in time at which the interaction begins may be related to the target event corresponding to the specified object broadcast in the second terminal. The so-called target event may specifically refer to an event such as the start of an interactive activity related to the specified object. For example, in the program played by the second terminal, when the "star to your home" segment begins, a "portal" can be placed on the stage (it can be physical, or virtual by projection, etc.), and the event in which the specified object passes out through the "portal" on the stage can be used as the target event. This point in time then becomes the starting point of the interaction, and correspondingly, the first terminal can perform the specific processing of adding the specified object material to the live image for display.

Since the program in the second terminal is usually a live broadcast, it is not possible to stay synchronized with the point in time at which the target event occurs on the second terminal simply by presetting a time in the first terminal. Moreover, what the second terminal plays is usually a TV signal: although the signal is transmitted at the same moment for everyone, the point in time at which it arrives may vary for users in different geographic locations. That is, for the same event of the specified object passing through the "portal" on the stage, users in Beijing may see it on the second terminal at 21:00:00, while users in Guangzhou may see it at 21:00:02, and so on. Therefore, even if the staff at the first server saw the event occur at the gala venue and uniformly sent a notification message about the target event to each first terminal, users in different regions might still experience different results: some users might feel that the traversal of the specified object connects seamlessly with the event on the second terminal, while others might not; it could even happen that the specified object in the TV program has not yet passed through the portal but has already "entered" the mobile phone.

To this end, the embodiments of the present application make use of the fact that the user usually watches TV while interacting with a mobile terminal such as a mobile phone, so that the first terminal and the second terminal are usually located in the same space environment, and the distance between the two is not too far.
In this case, the first terminal's sensing of the target event in the second terminal may be implemented in the following manner: the television program producer may, at the time of occurrence of the target event, add a sound wave signal of a preset frequency to the video signal to be transmitted. In this way, as the specific video signal is delivered to the user's second terminal, the sound wave signal is delivered along with it. Moreover, the frequency of the acoustic signal can lie outside the human hearing range; that is, the user does not perceive the presence of the acoustic signal, but the first terminal can sense it. The first terminal can then use the sound wave signal as the sign that the target event has occurred and perform the subsequent interaction process. In this way, the occurrence flag of the target event is carried in the specific video signal and communicated to the first terminal through the second terminal; therefore, the event the user sees on the second terminal can be guaranteed to interface seamlessly with the images seen in the first terminal, giving a better experience.

Regarding the acoustic signal, the specific frequency information may be determined by the first server and provided by the first server to the second server. In the process of transmitting the video signal, if the second server finds that the target event related to the specified object is occurring, it can insert the sound wave signal at the corresponding position in the video signal. On the other hand, the first server can also inform the first terminal of the frequency information of the sound wave signal in some manner; in this way, a sound-signal channel is established between the first terminal and the second terminal. It should be noted that, in specific implementation, the same gala may have multiple "star to your home" segments corresponding to different specified objects; therefore, acoustic signals of different frequencies can be provided for the different specified objects. The first server may provide the correspondence between specified objects and sound wave frequencies to the second server, which adds acoustic signals according to this correspondence; the correspondence is also provided to the first terminal, which can determine, according to the frequency of the detected sound wave signal, which specified object the current event corresponds to.
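The following is a minimal sketch of the client-side detection just described, assuming Kotlin and a near-ultrasonic marker tone; the Goertzel algorithm is a common way to test for a single frequency more cheaply than a full FFT, and the 44.1 kHz sample rate, the example frequencies, and the detection threshold are illustrative assumptions rather than values fixed by the present application.

```kotlin
import kotlin.math.PI
import kotlin.math.cos

// Goertzel algorithm: measures the energy of one target frequency in a
// block of microphone samples, e.g. a near-ultrasonic marker tone.
fun goertzelPower(samples: ShortArray, sampleRate: Int, targetHz: Double): Double {
    val coeff = 2.0 * cos(2.0 * PI * targetHz / sampleRate)
    var s1 = 0.0 // filter state, one sample back
    var s2 = 0.0 // filter state, two samples back
    for (sample in samples) {
        val s0 = sample / 32768.0 + coeff * s1 - s2
        s2 = s1
        s1 = s0
    }
    // Squared magnitude of the target-frequency component.
    return s1 * s1 + s2 * s2 - coeff * s1 * s2
}

// Correspondence between marker frequencies and specified objects, as pushed
// by the first server; the entries here are illustrative.
val frequencyToObject = mapOf(
    18500.0 to "star_A",
    19000.0 to "star_B",
    19500.0 to "star_C"
)

// Scan the registered frequencies and report which specified object's
// target event, if any, is currently being signalled.
fun detectTargetEvent(samples: ShortArray, threshold: Double = 1e-3): String? =
    frequencyToObject.entries
        .maxByOrNull { goertzelPower(samples, 44100, it.key) }
        ?.takeIf { goertzelPower(samples, 44100, it.key) > threshold }
        ?.value
```

Because each registered frequency is checked independently, the same routine also covers the case of multiple "star to your home" segments in one gala, each signalled on its own frequency.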
After the interaction formally begins, as described above, the animation based on the transfer channel can serve as the sign that the interaction has started; after that, the specified object material can be added to the live image, and if the user has specified a location, it is added at the corresponding position. It may happen that, by the time the specified object material is added, the user has moved the first terminal device, i.e., its position has changed relative to the initial position, so that the added material does not appear in the display of the first terminal. For this situation, since the coordinate system was previously created based on the initial position of the mobile terminal device (which does not change once determined), technology such as SLAM (Simultaneous Localization and Mapping) can be used to determine the coordinates, in that coordinate system, of the first terminal's position after moving; that is, to determine where the first terminal has moved to and in what direction it has moved relative to the initial position. In turn, the user can be guided to move the first terminal in the opposite direction, so that the added material appears in the picture of the first terminal. As shown in Figure 3-6, the user can be guided to move the first terminal by means of an "arrow".
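A minimal sketch of this guidance logic follows, reusing the `Vec3` stand-in defined earlier and assuming the SLAM layer already reports the terminal's displacement from the initial position; reducing the decision to a horizontal left/right arrow is an illustrative simplification of the arrow prompt in Figure 3-6.

```kotlin
enum class Arrow { LEFT, RIGHT, NONE }

// Guide the user back toward the recorded placement: if the terminal has
// drifted right of the anchored material, show an arrow pointing left.
fun guidanceArrow(terminalOffset: Vec3, placement: Vec3, tolerance: Float = 0.2f): Arrow {
    val dx = terminalOffset.x - placement.x
    return when {
        dx > tolerance -> Arrow.LEFT   // drifted right; prompt a move back left
        dx < -tolerance -> Arrow.RIGHT // drifted left; prompt a move back right
        else -> Arrow.NONE             // the material should be in view
    }
}
```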
As described above, there may be multiple sets of material corresponding to the same specified object, for example dancing material, singing material, and so on. Before the specified object material is added to the live image, the user can be given an option to select a specific set of material. While the user is making the selection, a fixed video or the like can be played; for example, the content of this fixed video can be a box that keeps bouncing, used to express that the specified object is preparing (for example, changing costume), and so on. After the user selects a specific set of material, the selected material can be added to the live image for display. For example, Figure 3-7 shows one frame in the presentation of the specified material: the part of the image depicting the character is a virtual image, while the background behind the character is the real-life image captured by the user through the first terminal.

Since the interactive material may also include a voice sample material recorded by the specified object, after the specified object material is added, the user name information of the associated user of the first terminal may also be obtained, and a greeting corpus containing the user name is generated for that associated user; the greeting text is then converted into speech according to the voice sample material and played. Correspondingly, the specified object material may also include the actions, expressions, and so on of the specified object greeting the user, making the user feel that the designated person is actually greeting him or her. Regarding the user name, the corresponding name can be determined based on the account the current user is logged into, or the user's real name can be obtained from real-name authentication information provided by the user in advance, and so on. Through the above methods, personalized "a thousand faces for a thousand people" greetings can be realized for different users. Of course, if a user's nickname or real name cannot be obtained, a relatively generic form of address can be generated for the user based on the user's gender, age, and so on.

In addition, after the specified object material is added to the live image, shooting options can also be provided. When an operation request is received through the shooting option, a corresponding image (photo or video, etc.) can be generated by performing a screen capture or screen recording of each image layer; in this way, the user can take a photo with the specified object, and so on. That is, when shooting a photo or video, the live image may further include the real-life image of a person who wants to be photographed together with the specified object. For example, since the interaction usually takes place at home, other people may be present around the user; if someone else wants a photo with the specified object, he or she can step into the real image capture area of the first terminal so that the first terminal captures his or her real-life image, after which the user triggers the operation option to perform the specific photographing operation. In specific implementation, depth-of-field information can also be used to distinguish the positional relationship between the person in the real image and the specified object in the virtual image, further enhancing realism.

Since the interface also includes some buttons and other operation options, when taking a screen capture or screen recording, the image layer used to display the operation options can be removed, and the capture or recording can be performed only on the real image layer and the image layer where the video/animation is located, so as to improve the realism of the generated photo or video.

In specific implementation, the photo and video functions can be provided through the same operation option, with different operation manners distinguishing the user's specific intent; for example, clicking the operation option corresponds to taking a photo, while long-pressing it corresponds to shooting a video, and so on. That is, if the user simply clicks the operation option, a screen capture operation is triggered and a photo is generated; if the user presses and holds it, a screen recording operation is triggered until the user lets go. In addition, the length of each recorded video can be limited, for example to no more than 10 s: if the user holds the operation option for more than 10 s, the screen recording ends even while the option is still held down, generating a video of at most 10 s, and so on. A sketch of this gesture handling is given below.
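The sketch assumes Android touch callbacks in Kotlin; the `capturePhoto`, `startRecording`, and `stopRecording` helpers are hypothetical hooks into the layer-capture logic above, and the 500 ms long-press threshold is an illustrative assumption (the 10-second cap follows the example in the text).

```kotlin
import android.os.Handler
import android.os.Looper
import android.view.MotionEvent
import android.view.View

class ShutterController(
    private val capturePhoto: () -> Unit,    // screen capture of the AR layers
    private val startRecording: () -> Unit,  // screen recording of the AR layers
    private val stopRecording: () -> Unit
) : View.OnTouchListener {
    private val handler = Handler(Looper.getMainLooper())
    private var recording = false
    private var recordedThisPress = false
    private val startRecord = Runnable {
        recording = true
        recordedThisPress = true
        startRecording()
    }
    // Force-stop at the 10 s cap even if the option is still held down.
    private val capRecording = Runnable {
        if (recording) { recording = false; stopRecording() }
    }

    override fun onTouch(v: View, event: MotionEvent): Boolean = when (event.action) {
        MotionEvent.ACTION_DOWN -> {
            recordedThisPress = false
            handler.postDelayed(startRecord, 500L)            // long press starts video
            handler.postDelayed(capRecording, 500L + 10_000L) // 10 s recording limit
            true
        }
        MotionEvent.ACTION_UP -> {
            handler.removeCallbacks(startRecord)
            handler.removeCallbacks(capRecording)
            when {
                recording -> { recording = false; stopRecording() } // end of long press
                !recordedThisPress -> capturePhoto()                // plain click: photo
                // else: recording already ended at the 10 s cap; nothing to do
            }
            true
        }
        else -> false
    }
}
```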
Furthermore, during the interaction, an option for sharing the photos or videos taken can also be provided. For example, this operation option can be located beside the above option for taking photos or videos and can display a tip such as "Click to share a great moment". After the user clicks it, sharing entries for multiple social networking platforms can be provided, and the user can choose which platform to share to.

In addition, during the interaction, besides playing the video/animation corresponding to the specified object, taking photos with the specified object, sharing, and other operations, other interactive options can also be provided for users. For example, an option for participating in a charity event can be provided; if the user is willing to participate, he or she can click directly through this operation option, or participate in the corresponding public welfare activity through such a channel. In connection with the completion of the charity event, the server collects statistics on the number of participants and provides them in real time to the director and other staff at the show site, so that the scenery of the program site and the like change correspondingly with the progress of the public welfare activity; for example, the scenery of the show scene can gradually change from desert to oasis, and so on.

After the interaction ends, the specified object material is no longer displayed; of course, the specified object may appear again in the screen of the second terminal. Therefore, in order to better show the user the "crossing back" process, the transfer channel material can still be displayed at this point. As shown in Figure 3-8, an animation showing the person leaving through the transfer channel can be provided, and the transfer channel material itself can gradually shrink; after the object has left completely, the material representing the transfer channel also disappears from the screen.

After the interaction ends, the interface for real-time image acquisition can be exited. At this time, in an alternative implementation, a landing page for browsing and sharing the captured photos or videos can also be provided; that is, a landing page is provided to guide the user in sharing the captured photos or videos. The individual photos or videos can be sorted on this page in the order in which they were taken; for example, as shown in Figure 3-9, they can be displayed from left to right in order of time from most recent to earliest, and so on. During presentation, when the user clicks on any photo or video, the sharing component interface can be evoked, as shown in Figure 3-10, and the user completes the specific sharing operation through this component.

In short, through the embodiments of the present application, the specified object material can be loaded; during the interaction, real-life images of the user's actual environment are collected, and when the target event corresponding to the specified object is broadcast in the second terminal, the specified object material is added to the live image for display. In this way, users can obtain the experience of bringing a specific object into their own space environment (for example, their own home); therefore, user engagement with the interaction can be increased.

Embodiment 2: This embodiment corresponds to Embodiment 1 and provides a multi-screen interaction method from the server's perspective. Referring to Figure 4, the method may specifically include:

S401: The first server saves the interactive material, the interactive material including a specified object material created according to the specified object;

S402: Providing the interactive material to the first terminal, which collects a real-scene image and, when the video in the second terminal plays to a target event corresponding to the specified object, adds the specified object material to the live image.

In specific implementation, the specified object includes a designated person; of course, it can also include animals, commodities, props, and so on. When providing the interactive material, a video material obtained by photographing the specified object may be provided; alternatively, a cartoon image based on the image of the specified object, together with animation material based on that cartoon image, may be provided. Specifically, when the specified object is a designated person, a voice sample material recorded by the designated person may also be provided.
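Connecting the voice sample material with the personalized greeting described in Embodiment 1, the following is a minimal sketch of building the greeting text; the fallback forms of address and the greeting wording are illustrative assumptions, and the synthesis step is left abstract because the application does not fix a particular speech engine.

```kotlin
// Build a personalized greeting line; fall back to a generic form of address
// based on gender when no nickname or real name is available, as described
// in Embodiment 1.
fun buildGreeting(userName: String?, gender: String?): String {
    val salutation = userName ?: when (gender) {
        "female" -> "Miss"
        "male" -> "Sir"
        else -> "my friend"
    }
    return "Hello, $salutation! It is great to visit your home."
}

// The resulting text would then be synthesized in the specified object's own
// voice using the recorded voice sample material and played to the user.
```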
In specific implementation, in order to make it more convenient for the first terminal to perceive the occurrence of the target event in the second terminal, the first server may further provide an acoustic signal of a preset frequency to the second server corresponding to the second terminal, to be added to the video when the video in the second terminal plays to a target event corresponding to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the acoustic wave signal of the preset frequency.

In specific implementation, the server can also compile statistics on the interaction of each client. The statistical information may be provided to the second server corresponding to the second terminal and added by the second server to the video played by the second terminal, so that the statistical results are published through the second terminal; alternatively, the statistics can also be used to influence the stage setting at the gala venue, and so on.

Since Embodiment 2 corresponds to Embodiment 1, for the specific implementation, refer to the description in the foregoing Embodiment 1; it will not be repeated here.

Embodiment 3: The third embodiment provides a multi-screen interaction method from the perspective of the second terminal. Referring to Figure 5, the method may specifically include:

S501: The second terminal plays the video;

S502: When the video plays to a target event related to the specified object, playing the sound wave signal of the preset frequency, so that the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the captured real-life image.

In specific implementation, different specified objects can correspond to sound waves of different frequencies.

Embodiment 4: The fourth embodiment provides a multi-screen interaction method from the perspective of the second server corresponding to the second terminal. Referring to Figure 6, the method may specifically include:

S601: The second server receives the sound wave signal information of the preset frequency provided by the first server;

S602: Inserting a sound wave signal of the preset frequency at the position in the video where a target event related to a specified object occurs, so that when the video is played through the second terminal, the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the captured real-life image.

In specific implementation, the second server may further receive statistics about the interaction of the first terminals provided by the first server and add the statistical information to the video for transmission, to be played through the second terminal.

Embodiment 5: The foregoing Embodiments 1 to 4 realize multi-screen interaction between the first terminal and the second terminal. In practical applications, users may also watch videos such as the live broadcast of a gala through the first terminal itself; in these circumstances, the user can also obtain the "star to my home" experience while watching the video through the first terminal, that is, watch the video and interact on one and the same terminal. Specifically, referring to Figure 7, the fifth embodiment provides a video interaction method.
The method may specifically include:

S701: The first terminal loads the interactive material, the interactive material including a specified object material created according to the specified object;

S702: When the video in the first terminal plays to a target event related to the specified object, jumping to the interactive interface;

S703: Displaying the live image acquisition result in the interactive interface, and adding the specified object material to the live image.

That is, during the process of watching the video through the first terminal, when the video plays to the target event associated with the specified object, the terminal can jump to the interactive interface. In the interactive interface, real-life images are first collected, and then the specified object material is added to the live image. In this way, the user likewise obtains the experience of the specified object "traversing" from the gala scene into the space environment in which the user is located.

For the other specific implementations of the fifth embodiment, see the description in the foregoing embodiments; they will not be repeated here.

Embodiment 6: This embodiment corresponds to the fifth embodiment and provides a video interaction method from the perspective of the first server. Referring to Figure 8, the method may specifically include:

S801: The first server saves the interactive material, the interactive material including a specified object material created according to the specified object;

S802: Providing the interactive material to the first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the terminal jumps to the interactive interface, displays the real image acquisition result in the interactive interface, and adds the specified object material to the live image.

Embodiment 7: In the foregoing embodiments, a first terminal such as a mobile phone serves as the execution subject that provides the specific interaction results. In practical applications, the scheme can also be extended to other scenarios. For example, in addition to mobile phones, a wearable device such as smart glasses can also serve as the first terminal; and in addition to interacting with the video played in the second terminal or the first terminal, the interaction can also take place with a video played on a movie screen, or during related activities such as live performances, shows, merchant promotions, and sports events. To this end, the seventh embodiment provides another interaction method. Referring to Figure 9, the method may specifically include:

S901: The first terminal loads the interactive material, the interactive material including a specified object material created according to the specified object;

In specific implementation, the application scenario of this embodiment need not be limited. Therefore, before a specific interactive interface is provided, an interface for loading interactive material can also be provided, in which a variety of optional interactive materials are available, for example, materials related to the movies currently being released by major theaters, or related to offline performances, promotions, competitions, and other events, and so on; users can choose the material they need and download it. In addition, since some applications provide users with the ability to book tickets online, users can book not only movie tickets but also tickets for a variety of shows, competitions, and so on.
Therefore, when providing interactive material, it can also be provided based on the user's specific booking information. For example, when a user books a movie ticket through an online booking system, if there happens to be interactive material related to that movie, the user can be prompted to download it, and so on. It should be noted that, in the embodiments of the present application, the downloaded interactive material can be saved locally on a terminal such as a mobile phone; alternatively, it can also be downloaded to a terminal such as a wearable device, making it more convenient to interact while watching a movie or performance.

In this embodiment, the specific specified object can likewise refer to a designated person, a commodity, a prop, and so on.

S902: Collecting real-life images;

Usually, wearable devices are equipped with components such as cameras; therefore, real-time image collection can be performed through the wearable device. While the user is watching a movie, performance, etc., the images actually seen through wearable devices such as glasses can be the real-life images captured by those glasses.

S903: When a target event related to the specified object is detected, adding the specified object material to the live image.

The specific target event can be the appearance of the specified object during a specific movie, show, game, promotion, and so on. There are many ways to detect the target event. For example, in one way, the exhibitor or organizer of the movie, show, game, or promotion can insert sonic information or the like at specific event nodes, and the wearable device or the like learns of the occurrence of the specific event by detecting such a signal. Alternatively, in other implementations, the occurrence of the target event can be learned directly by analyzing the collected real-life image. For example, when interacting through a wearable device, the real-life image captured by the wearable camera is usually the same as, or overlaps with, the real-life image actually viewed by the user; thus, if the user sees a target event, the camera can actually collect information about the corresponding event. In addition, wearable devices can also be equipped with sound collectors and the like; therefore, the occurrence of specific target events can be learned through methods such as image analysis and voice analysis, and so on.

Corresponding to Embodiment 1, the embodiments of the present application further provide a multi-screen interaction apparatus. Specifically, referring to Figure 10, the apparatus is applied to the first terminal and includes:

a first material loading unit 1001, used to load interactive material, the interactive material including a specified object material created according to the specified object;

a first live image acquisition unit 1002, used for collecting real-life images;

a first material adding unit 1003, used for adding the specified object material to the live image when a video in the second terminal plays to a target event related to the specified object.

The video played in the second terminal is a live video stream. The first terminal receives a sound wave signal of a preset frequency emitted during playback of the video and thereby determines the occurrence of the target event; the sound wave signal may be emitted when the video in the second terminal plays to the target event corresponding to the specified object.
The first material adding unit may specifically be used to add the specified object material onto a plane included in the live image for display.

In specific implementation, the apparatus can also include:

a placement location determining unit, used, before the specified object material is added, to determine the placement position in the captured real-life image;

the first material adding unit may then specifically be used to add the specified object material at the placement location.

The placement location determining unit may be specifically configured to determine a plane position in the collected real-life image, the placement position lying within that plane position.

Specifically, the placement location determining unit may include:

a plane detection subunit, for performing plane detection in the collected real-life image;

a cursor providing subunit, used to provide a cursor and, according to the detected plane, determine the placeable range of the cursor;

a placement location determining subunit, which takes the position at which the cursor is placed as the placement position.

The placement location determining subunit may specifically include:

a coordinate system establishing subunit, used to establish a coordinate system with the initial position of the first terminal as the origin;

a coordinate determining subunit, for determining the cursor coordinates of the position at which the cursor is placed in that coordinate system;

a position determining subunit, used to take the cursor coordinates as the placement position.

Specifically, the apparatus may further include:

a change direction determining unit, used, after the specified object material is added at the placement location, to determine the direction of change of the first terminal relative to the initial position when the material does not appear on the interface of the first terminal;

a prompt unit, used to provide, according to the direction of change, a prompt identifier pointing in the opposite direction in the interface of the first terminal.

In specific implementation, the interactive material may also include material for representing a transmission channel, and the apparatus can also include:

a channel material adding unit, used, after the step of collecting the live image, to add the material representing the transmission channel to the live image.

The first material adding unit may specifically be used to present, based on the transmission channel material, the process of the specified object entering the captured live image through the transmission channel.

In addition, the interactive material may further include a voice sample material recorded by the specified object, and the apparatus may further include:

a user name acquisition unit, for obtaining user name information of the associated user of the first terminal;

a greeting corpus generation unit, for generating a greeting corpus including the user name for the associated user;

a playing unit, for converting the greeting text into speech according to the voice sample material and playing it.

Where the specified object material comprises multiple sets of material corresponding to the same specified object, the apparatus also includes:

a material selection option providing unit, for providing an operation option for selecting the specified object material;

the first material adding unit may then be used to add the selected specified object material to the live image.
In addition, the apparatus can also include:

a shooting option providing unit, for providing shooting operation options while the specified object material is displayed in the live image;

an image generation unit, for receiving an operation request through the shooting operation option and generating the corresponding image according to each image layer, the image layers including the real image and the image of the specified object material.

The image generating unit may be specifically configured to perform a screen capture or screen recording of each image layer, remove the image layer used to display the operation options, and generate the captured image.

In specific implementation, the live image may further include the image of a person being photographed together with the specified object.

In addition, the apparatus can also include:

a sharing option providing unit, used to provide an operation option for sharing captured images;

and furthermore, a landing page providing unit, used to provide a landing page for browsing and sharing captured images.

In specific implementation, the specified object includes information specifying a person; alternatively, the specified object includes information specifying a commodity.

Specifically, the apparatus may also include:

a snap-up option providing unit, for providing, after the specified object material is added to the live image, an operation option for snapping up a data object associated with the specified commodity;

a submitting unit, used, when a snap-up operation is received through this operation option, to submit it to the server, which determines the snap-up result.

In addition, the specified object may include prop information related to an offline game. In this case, the apparatus can also include:

an operation information submitting unit, used, after the specified object material is added to the live image and operation information on the target prop is received, to submit the operation information to the server, which determines the reward information obtained by the operation and returns it;

a reward information providing unit, used to provide the obtained reward information.

The specified object material may include video material obtained by photographing the specified object; alternatively, the specified object material may include a cartoon image made with the image of the specified object as a prototype, together with animation material based on that cartoon image.

Corresponding to Embodiment 2, the embodiments of the present application further provide a multi-screen interaction apparatus. Referring to Figure 11, the apparatus is applied to the first server and includes:

a first interactive material saving unit 1101, used to save interactive material, the interactive material including a specified object material created according to the specified object;

a first interactive material providing unit 1102, for providing the interactive material to the first terminal, which collects a real-scene image and, when the video in the second terminal plays to a target event corresponding to the specified object, adds the specified object material to the live image.

The specified object may include a designated person. Specifically, the first interactive material saving unit may be configured to save the video material obtained by shooting the specified object, or to save a cartoon image based on the image of the specified object together with animation material based on that cartoon image.
Where the specified object includes a designated person, the first interactive material saving unit can also be used to save the voice sample material recorded by the designated person.

In addition, the apparatus can also include:

an acoustic signal information providing unit, for providing to the second server corresponding to the second terminal an acoustic signal of a preset frequency, to be added to the video when the video in the second terminal plays to a target event corresponding to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the acoustic wave signal of the preset frequency.

It can also include:

a statistics unit, used to compile statistics on the interaction of each first terminal;

a statistical information providing unit, for providing the statistical information to the second server corresponding to the second terminal, the statistical information being added by the second server to the video played by the second terminal.

Corresponding to Embodiment 3, the embodiments of the present application further provide a multi-screen interaction apparatus. Referring to Figure 12, the apparatus is applied to the second terminal and includes:

a video playback unit 1201, used to play video;

an acoustic signal playing unit 1202, for playing the sound wave signal of the preset frequency when the video plays to a target event related to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the captured real-life image.

Different specified objects correspond to acoustic signals of different frequencies.

Corresponding to Embodiment 4, the embodiments of the present application further provide a multi-screen interaction apparatus. Referring to Figure 13, the apparatus is applied to the second server and includes:

an acoustic signal information receiving unit 1301, for receiving the acoustic signal information of a preset frequency provided by the first server;

an acoustic signal information inserting unit 1302, for inserting a sound wave signal of the preset frequency at the position in the video where a target event related to a specified object occurs, so that when the video is played through the second terminal, the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the captured real-life image.

The apparatus can also include:

a statistical information receiving unit, for receiving statistical information about the interaction of the first terminals provided by the first server;

a statistical information playing unit, used to add the statistical information to the video for transmission, to be played through the second terminal.

Corresponding to Embodiment 5, the embodiments of the present application further provide a video interaction apparatus. Referring to Figure 14, the apparatus is applied to the first terminal and includes:

a loading unit 1401, used to load interactive material, the interactive material including a specified object material created according to the specified object;

an interface jump unit 1402, for jumping to the interactive interface when the video in the first terminal plays to a target event related to the specified object;

a material adding unit 1403, for displaying the real-time image acquisition result in the interactive interface and adding the specified object material to the live image.
Corresponding to Embodiment 6, the embodiments of the present application further provide a video interaction apparatus. Referring to Figure 15, the apparatus is applied to the first server and includes:

a second material saving unit 1501, used to save interactive material, the interactive material including a specified object material created according to the specified object;

a second material providing unit 1502, for providing the interactive material to the first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the terminal jumps to the interactive interface, displays the real image acquisition result in the interactive interface, and adds the specified object material to the live image.

Corresponding to Embodiment 7, the embodiments of the present application further provide an interaction apparatus. Referring to Figure 16, the apparatus can include:

a second material loading unit 1601, used to load interactive material, the interactive material including a specified object material created according to the specified object;

a second real image acquisition unit 1602, for collecting real-life images;

a second material adding unit 1603, for adding the specified object material to the live image when a target event related to the specified object is detected.

In addition, the embodiments of the present application further provide an electronic device, including:

one or more processors; and a storage associated with the one or more processors, the storage being for storing program instructions which, when read and executed by the one or more processors, perform the following operations:

loading interactive material, the interactive material including a specified object material created according to the specified object;

collecting real-life images;

when the video in the second terminal plays to a target event related to the specified object, adding the specified object material to the live image.

Figure 17 exemplarily shows the architecture of such an electronic device. For example, device 1700 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, medical equipment, fitness equipment, a personal digital assistant, an aircraft, and so on.

Referring to Figure 17, device 1700 can include one or more of the following components: a processing component 1702, a storage 1704, a power component 1706, a multimedia component 1708, an audio component 1710, an input/output (I/O) interface 1712, a sensor assembly 1714, and a communication component 1716.

The processing component 1702 typically controls the overall operation of device 1700, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations.
The processing component 1702 can include one or more processors 1720 to execute instructions so as to complete all or part of the steps of the methods described above, for example: loading the interactive material, collecting real-life images, and, when the video in the second terminal plays to a target event related to the specified object, adding the specified object material to the live image. In addition, the processing component 1702 can include one or more modules to facilitate interaction between the processing component 1702 and other components; for example, the processing component 1702 can include a multimedia module to facilitate interaction between the multimedia component 1708 and the processing component 1702.

The storage 1704 is configured to store various types of data to support operation at the device 1700. Examples of such data include instructions for any application or method operating on device 1700, contact information, phone book information, messages, images, videos, and so on. The storage 1704 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic storage, flash memory, a magnetic disk, or an optical disc.

The power component 1706 provides power to the various components of device 1700. The power component 1706 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 1700.

The multimedia component 1708 includes a screen providing an output interface between the device 1700 and the user. In some embodiments, the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor can sense not only the boundary of a touch or slide action but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1708 includes a front camera and/or a rear camera. When the device 1700 is in an operating mode such as shooting mode or video mode, the front camera and/or rear camera can receive external multimedia data. Each front or rear camera can be a fixed optical lens system or have focal length and optical zoom capability.

The audio component 1710 is configured to output and/or input audio signals. For example, the audio component 1710 includes a microphone (MIC); when the device 1700 is in an operating mode, such as call mode, recording mode, or voice recognition mode, the microphone is configured to receive external audio signals.
The received audio signals may be further stored in the storage 1704 or transmitted via the communication component 1716. In some embodiments, the audio component 1710 also includes a speaker used to output audio signals.

The I/O interface 1712 provides an interface between the processing component 1702 and peripheral interface modules. The peripheral interface modules may be a keyboard, a click wheel, buttons, and so on. These buttons can include, but are not limited to: a home button, volume buttons, a start button, and a lock button.

The sensor assembly 1714 includes one or more sensors used to provide the device 1700 with status assessments of various aspects. For example, the sensor assembly 1714 can detect the open/closed state of the device 1700 and the relative positioning of components, for example when the components are the display and keypad of device 1700. The sensor assembly 1714 can also detect a change in position of the device 1700 or one of its components, the presence or absence of user contact with the device 1700, the orientation or acceleration/deceleration of device 1700, and changes in its temperature. The sensor assembly 1714 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 1714 can also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1714 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component 1716 is configured to facilitate wired or wireless communication between device 1700 and other devices. The device 1700 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination of them. In an exemplary embodiment, the communication component 1716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1716 also includes a near field communication (NFC) module to promote short-range communication; for example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In an exemplary embodiment, the device 1700 can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, used to perform the above methods.

In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium comprising instructions, for example a storage 1704 including instructions, where the above instructions may be executed by the processor 1720 of the device 1700 to complete the steps of the methods described above.
For example, the instructions can complete steps such as loading the interactive material, collecting real-life images, and adding the specified object material to the live image when the video in the second terminal plays to the target event. The non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and so on.

As can be seen from the description of the above embodiments, it will be apparent to those skilled in the art that the present application can be implemented by means of software plus a necessary universal hardware platform. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, a magnetic disk, an optical disc, and so on, and includes a number of instructions to make a computer device (which may be a personal computer, a server, a network device, or the like) perform the methods described in the various embodiments of the present application or in portions of the embodiments.

Each embodiment in this specification is described in a progressive manner; for the same or similar parts among the various embodiments, the embodiments can be referred to one another, and each embodiment focuses on its differences from the others. In particular, for the system or apparatus embodiments, since they are basically similar to the method embodiments, the description is relatively brief; for the relevant points, refer to the corresponding parts of the method embodiments. The system and apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit; that is, it can be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which those of ordinary skill in the art can understand and implement without creative effort.

The multi-screen interaction method, apparatus, and electronic device provided by the present application have been introduced in detail above. The principle and implementation of the present application are described herein by using specific examples; the description of the above embodiments is only intended to help understand the method of the present application and its core ideas. At the same time, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the ideas of the present application. In summary, the contents of this specification are not to be construed as limiting the application.

1001‧‧‧First material loading unit
1002‧‧‧First live image acquisition unit
1003‧‧‧First material adding unit
1101‧‧‧First interactive material saving unit
1102‧‧‧First interactive material providing unit
1201‧‧‧Video playback unit
1202‧‧‧Acoustic signal playing unit
1301‧‧‧Acoustic signal information receiving unit
1302‧‧‧Acoustic signal information inserting unit
1401‧‧‧Loading unit
1402‧‧‧Interface jump unit
1403‧‧‧Material adding unit
1501‧‧‧Second material saving unit
1502‧‧‧Second material providing unit
1601‧‧‧Second material loading unit
1602‧‧‧Second real image acquisition unit
1603‧‧‧Second material adding unit
1700‧‧‧Device
1702‧‧‧Processing component
1704‧‧‧Storage
1706‧‧‧Power component
1708‧‧‧Multimedia component
1710‧‧‧Audio component
1712‧‧‧Input/output (I/O) interface
1714‧‧‧Sensor assembly
1716‧‧‧Communication component
1720‧‧‧Processor

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings to be used in the embodiments will be briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings may be obtained from them by those of ordinary skill in the art without inventive work.

FIG. 1 is a schematic diagram of a system provided by an embodiment of the present application;
FIG. 2 is a flowchart of a first method provided by an embodiment of the present application;
FIGS. 3-1 to 3-10 are schematic diagrams of user interfaces provided by an embodiment of the present application;
FIG. 4 is a flowchart of a second method provided by an embodiment of the present application;
FIG. 5 is a flowchart of a third method provided by an embodiment of the present application;
FIG. 6 is a flowchart of a fourth method provided by an embodiment of the present application;
FIG. 7 is a flowchart of a fifth method provided by an embodiment of the present application;
FIG. 8 is a flowchart of a sixth method provided by an embodiment of the present application;
FIG. 9 is a flowchart of a seventh method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a third apparatus provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of a fourth apparatus provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of a fifth apparatus provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of a sixth apparatus provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of a seventh apparatus provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of an electronic device provided by an embodiment of the present application.

Claims (48)

1. A multi-screen interaction method, comprising: loading, by a first terminal, interactive material, the interactive material comprising a specified object material created according to a specified object; collecting a real-scene image; and, when a video in a second terminal plays to a target event related to the specified object, adding the specified object material to the real-scene image.

2. The method according to claim 1, wherein the video played in the second terminal is a live video stream.

3. The method according to claim 1, wherein the first terminal receives a sound wave signal of a preset frequency emitted during playback of the video and thereby determines the occurrence of the target event.

4. The method according to claim 1, wherein the sound wave signal is emitted when the video in the second terminal plays to the target event corresponding to the specified object.

5. The method according to claim 1, wherein adding the specified object material to the real-scene image comprises: adding the specified object material onto a plane included in the real-scene image for display.

6. The method according to claim 5, wherein, before the specified object material is added, the method further comprises determining a placement position in the collected real-scene image; and adding the specified object material to the real-scene image comprises adding the specified object material at the placement position.

7. The method according to claim 5, wherein determining the placement position in the collected real-scene image comprises: determining a plane position in the collected real-scene image, the placement position lying within the plane position.

8. The method according to claim 7, wherein determining the plane position in the collected real-scene image within which the placement position lies comprises: performing plane detection in the collected real-scene image; providing a cursor and determining, according to the detected plane, the placeable range of the cursor; and taking the position at which the cursor is placed as the placement position.

9. The method according to claim 8, wherein taking the position at which the cursor is placed as the placement position comprises: establishing a coordinate system with the initial position of the first terminal as the origin; determining the cursor coordinates of the position at which the cursor is placed in the coordinate system; and taking the cursor coordinates as the placement position.
10. The method of claim 6, further comprising: after the specified-object material is added at the placement position, when the material does not appear in the interface of the first terminal, determining a direction of change of the first terminal relative to the initial position; and providing, according to the direction of change, a prompt indicator pointing in the opposite direction in the interface of the first terminal.

11. The method of claim 1, wherein the interactive material further comprises material representing a transfer channel, and, after the step of capturing the real-scene image, the method further comprises: adding the material representing the transfer channel to the real-scene image.

12. The method of claim 11, wherein adding the specified-object material to the real-scene image comprises: displaying, based on the transfer-channel material, a process in which the specified object enters the captured real-scene image through the transfer channel.

13. The method of claim 1, wherein the interactive material further comprises voice sample material recorded by the specified object, and the method further comprises: obtaining user name information of a user associated with the first terminal; generating, for the associated user, a greeting corpus including the user name; and converting, according to the voice sample material, the greeting corpus into speech and playing it.

14. The method of claim 1, wherein the specified-object material is multiple sets of material corresponding to the same specified object, the method further comprising: providing an operation option for selecting among the specified-object material; wherein adding the specified-object material to the real-scene image comprises: adding the selected specified-object material to the real-scene image.

15. The method of claim 1, further comprising: providing a shooting operation option in the process of displaying the specified-object material added to the real-scene image; and receiving an operation request through the shooting operation option, and generating a corresponding image according to image layers, the image layers including the real-scene image and an image of the specified-object material.

16. The method of claim 15, wherein generating the captured image according to the image layers comprises: taking a screenshot or screen recording of the image layers and removing the layer used for displaying operation options, to generate the captured image.

17. The method of claim 15, wherein the real-scene image further includes an image of a person taking a group photo with the specified object.

18. The method of claim 15, further comprising: providing an operation option for sharing the captured image.
19. The method of claim 15, further comprising: providing a landing page for browsing and sharing the captured image.

20. The method of any one of claims 1 to 19, wherein the specified object includes information of a specified person.

21. The method of any one of claims 1 to 19, wherein the specified object includes information of a specified commodity.

22. The method of claim 21, wherein, after adding the specified-object material to the real-scene image, the method further comprises: providing an operation option for flash-purchasing a data object associated with the specified commodity; and, when a flash-purchase operation is received through the operation option, submitting it to a server, the server determining the flash-purchase result.

23. The method of any one of claims 1 to 19, wherein the specified object includes prop information related to an offline game.

24. The method of claim 23, wherein, after adding the specified-object material to the real-scene image, the method further comprises: when operation information on a target prop is received, submitting the operation information to a server, the server determining reward information obtained by the operation and returning it; and presenting the obtained reward information.

25. The method of any one of claims 1 to 19, wherein the specified-object material comprises video material obtained by filming the specified object.

26. The method of any one of claims 1 to 19, wherein the specified-object material comprises: a cartoon figure created with the image of the specified object as its prototype, and animation material created based on the cartoon figure.

27. A multi-screen interaction method, comprising: storing, by a first server, interactive material, the interactive material comprising specified-object material created according to a specified object; and providing the interactive material to a first terminal, the first terminal capturing a real-scene image and, when a video in a second terminal plays to a target event corresponding to the specified object, adding the specified-object material to the real-scene image.

28. The method of claim 27, wherein the specified object includes a specified person.

29. The method of claim 27, wherein storing the interactive material comprises: storing video material obtained by filming the specified object.

30. The method of claim 27, wherein storing the interactive material comprises: storing a cartoon figure created with the image of the specified object as its prototype, and animation material created based on the cartoon figure.
31. The method of claim 27, wherein the specified object includes a specified person, and storing the interactive material further comprises: storing voice sample material recorded by the specified person.

32. The method of claim 27, further comprising, in advance: providing an acoustic signal of a preset frequency to a second server corresponding to the second terminal, to be added to the video at the point where the video in the second terminal plays to the target event corresponding to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the acoustic signal of the preset frequency.

33. The method of claim 27, further comprising: collecting statistics on the interaction of each first terminal; and providing the statistical information to the second server corresponding to the second terminal, the second server adding the statistical information to the video played by the second terminal.

34. A multi-screen interaction method, comprising: playing, by a second terminal, a video; and, when the video plays to a target event related to a specified object, playing an acoustic signal of a preset frequency, so that a first terminal learns of the occurrence of the target event by detecting the acoustic signal and adds specified-object material to a captured real-scene image.

35. The method of claim 34, wherein different specified objects correspond to acoustic signals of different frequencies.

36. A multi-screen interaction method, comprising: receiving, by a second server, information on an acoustic signal of a preset frequency provided by a first server; and inserting the acoustic signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that, while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the acoustic signal and adds specified-object material to a captured real-scene image.

37. The method of claim 36, further comprising: receiving statistical information, provided by the first server, on the interaction of first terminals; and adding the statistical information to the video for transmission, for playback through the second terminal.

38. A video interaction method, comprising: loading, by a first terminal, interactive material, the interactive material comprising specified-object material created according to a specified object; when a video in the first terminal plays to a target event related to the specified object, jumping to an interactive interface; and displaying a real-scene image capture result in the interactive interface, and adding the specified-object material to the real-scene image.
39. A video interaction method, comprising: storing, by a first server, interactive material, the interactive material comprising specified-object material created according to a specified object; and providing the interactive material to a first terminal, so that, when a video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays a real-scene image capture result in the interactive interface, and adds the specified-object material to the real-scene image.

40. An interaction method, comprising: loading interactive material, the interactive material comprising specified-object material created according to a specified object; capturing a real-scene image; and, when a target event related to the specified object is detected, adding the specified-object material to the real-scene image.

41. A multi-screen interaction apparatus, applied to a first terminal, comprising: a first material loading unit, configured to load interactive material, the interactive material comprising specified-object material created according to a specified object; a first real-scene image capture unit, configured to capture a real-scene image; and a first material adding unit, configured to add the specified-object material to the real-scene image when a video in a second terminal plays to a target event related to the specified object.

42. A multi-screen interaction apparatus, applied to a first server, comprising: a first interactive material storage unit, configured to store interactive material, the interactive material comprising specified-object material created according to a specified object; and a first interactive material providing unit, configured to provide the interactive material to a first terminal, the first terminal capturing a real-scene image and, when a video in a second terminal plays to a target event corresponding to the specified object, adding the specified-object material to the real-scene image.

43. A multi-screen interaction apparatus, applied to a second terminal, comprising: a video playing unit, configured to play a video; and an acoustic signal playing unit, configured to play an acoustic signal of a preset frequency when the video plays to a target event related to a specified object, so that a first terminal learns of the occurrence of the target event by detecting the acoustic signal and adds specified-object material to a captured real-scene image.
44. A multi-screen interaction apparatus, applied to a second server, comprising: an acoustic signal information receiving unit, configured to receive information on an acoustic signal of a preset frequency provided by a first server; and an acoustic signal information inserting unit, configured to insert the acoustic signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that, while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the acoustic signal and adds specified-object material to a captured real-scene image.

45. A video interaction apparatus, applied to a first terminal, comprising: a loading unit, configured to load interactive material, the interactive material comprising specified-object material created according to a specified object; an interface jumping unit, configured to jump to an interactive interface when a video in the first terminal plays to a target event related to the specified object; and a material adding unit, configured to display a real-scene image capture result in the interactive interface and add the specified-object material to the real-scene image.

46. A video interaction apparatus, applied to a first server, comprising: a second material storage unit, configured to store interactive material, the interactive material comprising specified-object material created according to a specified object; and a second material providing unit, configured to provide the interactive material to a first terminal, so that, when a video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays a real-scene image capture result in the interactive interface, and adds the specified-object material to the real-scene image.

47. An interaction apparatus, comprising: a second material loading unit, configured to load interactive material, the interactive material comprising specified-object material created according to a specified object; a second real-scene image capture unit, configured to capture a real-scene image; and a second material adding unit, configured to add the specified-object material to the real-scene image when a target event related to the specified object is detected.
48. An electronic device, comprising: one or more processors; and storage associated with the one or more processors, the storage being configured to store program instructions that, when read and executed by the one or more processors, perform the following operations: loading interactive material, the interactive material comprising specified-object material created according to a specified object; capturing a real-scene image; and, when a video in a second terminal plays to a target event related to the specified object, adding the specified-object material to the real-scene image.
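The acoustic trigger in claims 3, 32, and 34 is the link between the two screens: the second terminal emits a preset-frequency tone at the target event, and the first terminal recognizes it in microphone input. Purely as an illustration of single-frequency detection (the Goertzel algorithm is a common choice for this; the sample rate, trigger frequency, threshold, and names below are assumptions, not taken from the specification):

```python
import math

SAMPLE_RATE = 44100      # assumed microphone sample rate (Hz)
TRIGGER_FREQ = 19000.0   # hypothetical near-ultrasonic preset frequency (Hz)

def goertzel_power(samples, freq, sample_rate=SAMPLE_RATE):
    """Return the signal power at one target frequency (Goertzel algorithm)."""
    n = len(samples)
    k = int(0.5 + n * freq / sample_rate)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def target_event_occurred(frame, threshold=1e7):
    """Treat a strong component at the preset frequency as the target event."""
    return goertzel_power(frame, TRIGGER_FREQ) > threshold
```

On a positive detection, the client would proceed as in claim 1 and render the preloaded specified-object material into the camera image.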
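Claims 35 and 36 assign different frequencies to different specified objects and have the second server insert the signal at the target-event position in the video's audio. A minimal sketch of what that insertion could look like, assuming a NumPy audio buffer and made-up object-to-frequency assignments:

```python
import numpy as np

SAMPLE_RATE = 44100
# Hypothetical mapping: each specified object gets its own trigger frequency (claim 35).
OBJECT_FREQS = {"object_a": 18500.0, "object_b": 19000.0, "object_c": 19500.0}

def make_tone(freq, duration_s=0.5, amplitude=0.05):
    """Synthesize a low-amplitude sine burst at the object's preset frequency."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * freq * t)

def insert_trigger(audio_track, object_id, event_time_s):
    """Mix the object's tone into the video's audio at the target-event position."""
    tone = make_tone(OBJECT_FREQS[object_id])
    start = int(event_time_s * SAMPLE_RATE)
    end = min(start + len(tone), len(audio_track))
    out = audio_track.copy()
    out[start:end] += tone[:end - start]
    return out
```

Keeping the tone near-ultrasonic and low in amplitude is one way such a signal could ride along without being obtrusive to viewers, though the patent itself does not prescribe specific values.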
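Claims 9 and 10 describe anchoring the material in a coordinate system whose origin is the first terminal's initial position, and showing an opposite-direction prompt when the material leaves the view. One way such a prompt could be computed from the anchor position and camera pose (axis conventions and all names are assumed for illustration, in the style of ARKit/ARCore, where the camera at yaw 0 looks down the negative z-axis):

```python
import math

def placement_hint(anchor_xyz, camera_xyz, camera_yaw_deg, fov_deg=60.0):
    """Return the opposite-direction hint of claim 10, or None if the anchored
    material is inside the camera's horizontal field of view. Coordinates are
    in the session frame whose origin is the terminal's initial position."""
    dx = anchor_xyz[0] - camera_xyz[0]
    dz = anchor_xyz[2] - camera_xyz[2]
    bearing = math.degrees(math.atan2(dx, -dz))          # angle from forward to anchor
    offset = (bearing - camera_yaw_deg + 180) % 360 - 180  # normalize to [-180, 180)
    if abs(offset) <= fov_deg / 2:
        return None                                      # anchor is on screen
    return "turn_left" if offset < 0 else "turn_right"
```

For example, if the user has panned the device to the right past the anchor, the offset becomes negative and the interface would show a "turn left" indicator, matching the claim's idea of prompting in the direction opposite the terminal's movement.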
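Claims 15 and 16 generate the captured photo by flattening the image layers while discarding the layer that draws the operation options. A rough sketch with Pillow, assuming equally sized RGBA layers and hypothetical file names:

```python
from PIL import Image

def compose_capture(layer_paths, ui_layer_index):
    """Flatten the image layers into one photo, dropping the UI-controls layer
    as in claim 16. Layers are ordered bottom (camera image) to top, and are
    assumed to share the same dimensions."""
    layers = [Image.open(p).convert("RGBA") for p in layer_paths]
    del layers[ui_layer_index]          # remove the operation-option layer
    base = layers[0]
    for overlay in layers[1:]:
        base = Image.alpha_composite(base, overlay)
    return base.convert("RGB")          # final shareable capture

# Hypothetical usage: camera frame at the bottom, object material above,
# UI buttons (index 2) removed before saving.
photo = compose_capture(["camera.png", "material.png", "ui.png"], ui_layer_index=2)
photo.save("capture.jpg")
```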
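Claim 22 leaves the flash-purchase decision to the server, which must arbitrate many near-simultaneous requests. A toy sketch of that server-side arbitration under a lock (class, field, and result names invented for illustration):

```python
import threading

class FlashSaleService:
    """Server-side sketch for claim 22: first-come decisions over a shared
    stock table, serialized by a lock so the count never goes negative."""

    def __init__(self, stock):
        self._stock = stock             # e.g. {"item_1": 100}
        self._lock = threading.Lock()

    def try_purchase(self, user_id, item_id):
        with self._lock:
            if self._stock.get(item_id, 0) > 0:
                self._stock[item_id] -= 1
                return {"user": user_id, "item": item_id, "result": "success"}
        return {"user": user_id, "item": item_id, "result": "sold_out"}
```

A production system would more likely use a distributed counter or queue than an in-process lock, but the contract is the same as in the claim: the client only submits the operation, and the server returns the determined result.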
TW107119580A 2017-10-19 2018-06-07 Multi-screen interaction method and apparatus, and electronic device TW201917556A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710979621.2A CN109688347A (en) 2017-10-19 2017-10-19 Multi-screen interaction method, device and electronic equipment
CN201710979621.2 2017-10-19

Publications (1)

Publication Number Publication Date
TW201917556A true TW201917556A (en) 2019-05-01

Family

ID=66173994

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107119580A TW201917556A (en) 2017-10-19 2018-06-07 Multi-screen interaction method and apparatus, and electronic device

Country Status (3)

Country Link
CN (1) CN109688347A (en)
TW (1) TW201917556A (en)
WO (1) WO2019076202A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110062290A (en) * 2019-04-30 2019-07-26 北京儒博科技有限公司 Video interactive content generating method, device, equipment and medium
CN113157178B (en) * 2021-02-26 2022-03-15 北京五八信息技术有限公司 Information processing method and device
CN113556531B (en) * 2021-07-13 2024-06-18 Oppo广东移动通信有限公司 Image content sharing method and device and head-mounted display equipment

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103776458B (en) * 2012-10-23 2017-04-12 华为终端有限公司 Navigation information processing method and on-board equipment
US9129430B2 (en) * 2013-06-25 2015-09-08 Microsoft Technology Licensing, Llc Indicating out-of-view augmented reality images
CN105810131A (en) * 2014-12-31 2016-07-27 吴建伟 Virtual receptionist device
US20160260319A1 (en) * 2015-03-04 2016-09-08 Aquimo, Llc Method and system for a control device to connect to and control a display device
CN104794834A (en) * 2015-04-04 2015-07-22 金琥 Intelligent voice doorbell system and implementation method thereof
CN105392022B (en) * 2015-11-04 2019-01-18 北京符景数据服务有限公司 Information interacting method and device based on audio frequency watermark
CN106028169B (en) * 2016-07-04 2019-04-12 无锡天脉聚源传媒科技有限公司 A kind of method and device of prize drawing interaction
CN106792246B (en) * 2016-12-09 2021-03-09 福建星网视易信息系统有限公司 Method and system for interaction of fusion type virtual scene
CN106730815B (en) * 2016-12-09 2020-04-21 福建星网视易信息系统有限公司 Somatosensory interaction method and system easy to realize
CN106899870A (en) * 2017-02-23 2017-06-27 任刚 A kind of VR contents interactive system and method based on intelligent television and mobile terminal
CN107016704A (en) * 2017-03-09 2017-08-04 杭州电子科技大学 A kind of virtual reality implementation method based on augmented reality
CN107172411B (en) * 2017-04-18 2019-07-23 浙江传媒学院 A kind of virtual reality business scenario rendering method under the service environment based on home videos

Also Published As

Publication number Publication date
CN109688347A (en) 2019-04-26
WO2019076202A1 (en) 2019-04-25

Similar Documents

Publication Publication Date Title
WO2019128787A1 (en) Network video live broadcast method and apparatus, and electronic device
CN108769814B (en) Video interaction method, device, terminal and readable storage medium
CN109167950B (en) Video recording method, video playing method, device, equipment and storage medium
US20210306700A1 (en) Method for displaying interaction information, and terminal
WO2022121557A1 (en) Live streaming interaction method, apparatus and device, and medium
CN113965807B (en) Message pushing method, device, terminal, server and storage medium
KR20230159578A (en) Presentation of participant responses within a virtual conference system
CN109729372B (en) Live broadcast room switching method, device, terminal, server and storage medium
CN112717423B (en) Live broadcast method, device, equipment and storage medium for game match
TW202007142A (en) Video file generation method, device, and storage medium
CN112261481B (en) Interactive video creating method, device and equipment and readable storage medium
CN109754298A (en) Interface information providing method, device and electronic equipment
CN112181573A (en) Media resource display method, device, terminal, server and storage medium
CN111327916B (en) Live broadcast management method, device and equipment based on geographic object and storage medium
US20220078221A1 (en) Interactive method and apparatus for multimedia service
TW201917556A (en) Multi-screen interaction method and apparatus, and electronic device
WO2023134419A1 (en) Information interaction method and apparatus, and device and storage medium
CN109729367B (en) Method and device for providing live media content information and electronic equipment
CN109788364B (en) Video call interaction method and device and electronic equipment
CN109754275B (en) Data object information providing method and device and electronic equipment
CN109788327B (en) Multi-screen interaction method and device and electronic equipment
CN111382355A (en) Live broadcast management method, device and equipment based on geographic object and storage medium
CN114845129A (en) Interaction method, device, terminal and storage medium in virtual space
CN114302160A (en) Information display method, information display device, computer equipment and medium
CN114268823A (en) Video playing method and device, electronic equipment and storage medium