WO2019076202A1 - Multi-screen interaction method, apparatus and electronic device - Google Patents

Multi-screen interaction method, apparatus and electronic device

Info

Publication number
WO2019076202A1
WO2019076202A1 (application PCT/CN2018/109281)
Authority
WO
WIPO (PCT)
Prior art keywords
specified object
terminal
image
interactive
video
Prior art date
Application number
PCT/CN2018/109281
Other languages
English (en)
French (fr)
Inventor
王英楠
蔡建平
余潇
段虞峰
张智淇
Original Assignee
Alibaba Group Holding Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited
Publication of WO2019076202A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home

Definitions

  • the present application relates to the field of multi-screen interaction technology, and in particular, to a multi-screen interaction method, device, and electronic device.
  • Multi-screen interaction refers to a series of operations (transmission, parsing, display, and control of multimedia content such as audio, video, and pictures) performed across different terminal devices, for example between a mobile phone and a TV, over a wireless network connection. These operations can display the same content on different terminal devices and enable content intercommunication between the terminals.
  • In the prior art, interaction from the television end to the mobile terminal is usually implemented by means of a graphic code. For example, a two-dimensional code related to the program currently being played can be displayed on the television screen; the user scans the QR code with the "Scan" function of an application installed on the mobile phone, the phone parses it and displays a specific interactive page, and the user can then answer questions, participate in lotteries, and perform other interactions on that page.
  • The present application provides a multi-screen interaction method, device, and electronic device, which can improve the user's degree of participation in the interaction.
  • a multi-screen interaction method that includes:
  • the first terminal loads the interactive material, and the interactive material includes specified object material created according to a specified object;
  • the first terminal collects a real-scene image;
  • when the video in the second terminal plays to a target event related to the specified object, the specified object material is added to the real-scene image.
  • a multi-screen interaction method that includes:
  • the first server saves the interactive material, and the interactive material includes the specified object material created according to the specified object;
  • a multi-screen interaction method that includes:
  • the second terminal plays the video;
  • when the video plays to a target event related to the specified object, a sound wave signal of a preset frequency is played, so that the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the collected real-scene image.
  • a multi-screen interaction method that includes:
  • the second server receives the sound wave signal information of the preset frequency provided by the first server
  • a video interaction method including:
  • the first terminal loads the interactive material, and the interactive material includes a specified object material created according to the specified object;
  • a live image acquisition result is displayed in the interactive interface, and the specified object material is added to the live image.
  • a video interaction method including:
  • the first server saves the interactive material, and the interactive material includes the specified object material created according to the specified object;
  • An interactive method that includes:
  • the specified object material is added to the live image when a target event related to the specified object is detected.
  • a multi-screen interactive device is applied to a first terminal, including:
  • a first material loading unit configured to load an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • a first real image capturing unit for collecting a real image
  • a first material adding unit configured to add the specified object material to the live image when a video in the second terminal is played to a target event related to the specified object.
  • a multi-screen interactive device is applied to the first server, including:
  • a first interactive material saving unit configured to save an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • a first interactive material providing unit configured to provide the interactive material to the first terminal, so that the first terminal collects a real-life image and, when the video in the second terminal plays to a target event corresponding to the specified object, adds the specified object material to the live image.
  • a multi-screen interactive device is applied to a second terminal, including:
  • a video playback unit for playing a video
  • a sound wave signal playing unit configured to play a sound wave signal of a preset frequency when the video plays to a target event related to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the captured live image.
  • a multi-screen interactive device is applied to a second server, including:
  • the sound wave signal information receiving unit is configured to receive the sound wave signal information of the preset frequency provided by the first server;
  • a sound wave signal information insertion unit configured to insert a sound wave signal of the preset frequency at the position in the video where a target event related to the specified object occurs, so that during playback of the video by the second terminal, the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the collected real-life image.
  • a video interaction device is applied to the first terminal, including:
  • a loading unit configured to load an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • An interface jump unit configured to jump to the interactive interface when the video in the first terminal is played to a target event related to the specified object
  • a material adding unit configured to display the real image capturing result in the interactive interface, and add the specified object material to the real image.
  • a video interaction device is applied to the first server, including:
  • a second material saving unit configured to save an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • a second material providing unit configured to provide the interactive material to the first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to the interactive interface, displays the live image acquisition result in the interactive interface, and adds the specified object material to the real image.
  • An interactive device comprising:
  • a second material loading unit configured to load an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • a second real image capturing unit for collecting a real image
  • a second material adding unit configured to add the specified object material to the live image when a target event related to the specified object is detected.
  • An electronic device comprising:
  • one or more processors;
  • a memory associated with the one or more processors, the memory storing program instructions that, when read and executed by the one or more processors, perform the following operations:
  • loading the interactive material, where the interactive material includes specified object material created according to a specified object; collecting a real-scene image; and adding the specified object material to the real-scene image when the video in the second terminal plays to a target event related to the specified object.
  • the present application discloses the following technical effects:
  • the interactive material can be created from video/animation related to the specified object; during the interaction, a real-scene image of the actual environment in which the user is located is collected, and when the second terminal broadcasts the target event corresponding to the specified object, the specified object material is added to the live image for display. In this way, the user gets the experience that the specified object has come into his or her own space environment (for example, his or her home), and therefore the user's participation in the interaction can be improved.
  • FIG. 1 is a schematic diagram of a system provided by an embodiment of the present application.
  • FIGS. 3-1 to 3-10 are schematic diagrams of user interfaces provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a third method provided by an embodiment of the present application.
  • FIG. 6 is a flowchart of a fourth method provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of a fifth method provided by an embodiment of the present application.
  • FIG. 9 is a flowchart of a seventh method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a first device provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a second device provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a third device provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a fourth device provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a fifth device provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a sixth device provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a seventh device provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of an electronic device according to an embodiment of the present application.
  • The embodiments of the present application provide a new multi-screen interaction solution for interaction between a mobile terminal device such as a user's mobile phone (referred to as the first terminal in the embodiments of the present application) and a large-screen terminal device such as a television set (referred to as the second terminal in the embodiments of the present application).
  • The interaction process may take place while a program such as a large-scale live broadcast party (or, of course, other types of programs) is played through the second terminal.
  • The program organizer typically invites entertainment stars and other performers, but in the prior art the user can only watch the star's performance on the stage through the second terminal. In the embodiments of the present application, the user can obtain the experience of "a star coming to my home" through certain technical means.
  • Specifically, material related to the performance of a character such as a particular entertainment star may be prepared in advance, and during that character's performance segment on the second terminal, the pre-recorded performance video, animation, and other materials are projected by means of augmented reality into the real environment in which the user is located. For example, since a user usually watches the program on a second terminal such as a television at home, the character's performance video/animation can be projected into the user's home. Because the background of the performance is the real-scene image captured in the user's environment, compared with watching the stage performance on the second terminal, the user gets the experience that the "star" is really performing in his or her home.
  • In addition to a designated person, the specified object may be an animal, or even a commodity, and the like; these are collectively referred to as the "specified object" in the embodiments of the present application.
  • In terms of implementation, the hardware involved in the embodiments of the present application may include the aforementioned first terminal and second terminal, and the software involved may be an associated application client installed in the first terminal, or a program built into the first terminal, and so on.
  • In addition, a first server in the cloud may be involved. For example, suppose the above interaction is provided during a "Double 11" party: because the organizer of the "Double 11" party is usually an online sales platform (for example, "Mobile Taobao" or "Tmall"), technical support for the above multi-screen interaction can be provided through the application client and server of that platform.
  • the user can use the client of the application such as “Mobile Taobao” and “Tmall” to carry out the specific interaction process, and the data such as the materials needed in the interaction process can be provided by the server.
  • The second terminal mainly serves as a playback terminal; the content such as the video played by the second terminal may be controlled by a back-end second server (the server of the television station, etc.). That is, signals such as the live video stream may be transmitted uniformly by the second server to each second terminal for playback. In other words, in the multi-screen interaction scenario provided by the embodiments of the present application, the first terminal and the second terminal correspond to different servers.
  • the first embodiment provides a multi-screen interaction method from the perspective of the client.
  • the method may specifically include:
  • the first terminal loads the interaction material, where the interaction material includes the specified object material created according to the specified object;
  • the interactive material is the material required for generating information content such as virtual images in the interactive process of performing augmented reality.
  • The specified object may specifically be a designated person, a specified product, an item used in an offline game, or the like.
  • Different specified objects may correspond to different interactive scenarios. For example, when the specified object is a designated person, the specific scene may be a "star to your home" activity; that is, while the user watches the program through a television or the like, a "star" performing in the program can "traverse" to the user's home.
  • When the specified object is a product, it may generally be a physical product sold in the online sales system; usually the user would need to pay considerable resources to purchase it, but during the event it can be given to the user as a gift or sold at an ultra-low price.
  • the “cross-screen gift” can be implemented in the manner of the embodiment of the present application.
  • Specifically, content related to the specified product is played in the second terminal such as a TV and "traversed" to the first terminal such as the user's mobile phone, which may additionally provide an operation option for snapping up a data object associated with the specified product. When a snap-up operation is received through the operation option, it is submitted to the server, and the server determines the snap-up result. The user is thereby given an opportunity to win a snap-up or lottery and obtain the corresponding item, or to purchase the corresponding item at an ultra-low price, and so on.
  • When the specified object is an item used in an offline game, it may correspond to another form of "cross-screen gift": during the event, if the system wants to give users non-physical gifts such as coupons or "cash red envelopes", the gift-giving process can be associated with a game such as an offline magic show. For example, while a second terminal such as a television plays a magic program in which a certain item is used, the scheme of the embodiment of the present application can "traverse" that item into a first terminal device such as the user's mobile phone and display it there, and the user can collect the non-physical gift by tapping the item. That is, when operation information for the target item is received, it is submitted to the server, the server determines the reward obtained by the operation and returns it, and the first terminal can then present the obtained reward information, and so on.
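  • For illustration only, the following minimal Python sketch shows the kind of client-to-server exchange described above: the first terminal submits the user's operation on the "traversed" item and displays whatever reward the server returns. The endpoint, field names, and reward format are hypothetical assumptions; the embodiment does not specify a wire protocol.

```python
import requests  # generic HTTP client; any equivalent transport works

# Hypothetical endpoint; the embodiment only requires that the operation
# information reaches the server and a reward decision comes back.
CLAIM_URL = "https://example.com/api/cross-screen-gift/claim"

def claim_item(user_id: str, item_id: str) -> dict:
    """Submit the user's tap on the 'traversed' item; the server decides
    the reward (e.g. a coupon or a 'cash red envelope') and returns it."""
    resp = requests.post(CLAIM_URL,
                         json={"user_id": user_id, "item_id": item_id},
                         timeout=5)
    resp.raise_for_status()
    return resp.json()  # e.g. {"reward_type": "coupon", "value": 10}
```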
  • the specified object material may specifically include a video material obtained by photographing the specified object.
  • For example, when the specified object is a designated person, a program of singing, dancing, or the like performed by the designated person may be recorded in advance to obtain video material.
  • the specified object material may further include: a cartoon image made with the image of the specified object as a prototype, an animation material created based on the cartoon image, and the like.
  • For example, a cartoon character image can be created based on the appearance of the designated person, and animation material such as a dance animation or a singing animation can be created from that cartoon image; where "singing" or the like is needed, the cartoon image can be dubbed by the designated person, or a pre-recorded song of the designated person can be played, and so on.
  • In addition, the same specified object may correspond to multiple sets of material; for example, materials for different performances by the same designated person may be generated separately. When the specified object "enters" a user's home, the user may select a specific set of material, which is then used to provide the specific augmented reality picture.
  • the interactive material provided by the first server may further include: a material for indicating a transmission channel.
  • Specifically, such material may depict a door, a tunnel, a wormhole, a mascot such as the "Tmall" cat, a transmission light array, or the like. The transfer channel material can be used as follows: before the specified object material is actually added to the live image, a preset animation effect is first played using the transfer channel material. Since the specified object originally performs on the stage of the party and then "comes" to the user's home, this both adds fun and makes the change in the specified object's position appear more reasonable, creating the atmosphere of the specified object "traversing" through the transfer channel into the home and giving the user a better experience. Correspondingly, when the interaction ends, an animation of the reverse process can be provided, so that the user sees the specified object leave the home through the transfer channel.
  • In addition, the interactive material provided by the first server may further include voice sample material recorded by the specified object, which may be used to greet the user when the specified object "enters" the user's home. The greeting is spoken in the specified object's own voice, and to achieve a greeting "personalized for each of a thousand users" it is not feasible to simply record one greeting in advance. Instead, a specific text can be read aloud in advance by the specified object (specifically, the designated person), and the voice of each character read aloud is recorded; the text is chosen to include most initials, finals, and tones. Such a text usually contains about one thousand characters, which can basically cover 90% of the pronunciations of Chinese characters.
  • the data amount of the above-mentioned interactive material may be relatively large, and the process of loading the interactive material by the first terminal may take a long time, and therefore, it may be downloaded to the first terminal locally in advance.
  • For example, while watching the program on the second terminal, the user can keep the main venue interface provided by the first terminal open and be ready to interact through it at any moment. Since the specific "star to your home" segment starts at some point during the party in synchronization with the second terminal, the interactive material can be downloaded in advance, for example as soon as the party starts and the user enters the main venue interface on the first terminal. Of course, the related interactive material can also be downloaded at the moment it is needed. In addition, a downgrade scheme may be provided: for example, only the aforementioned specified object material is downloaded, and the material representing the transfer channel and the voice sample material are no longer downloaded. In this case the user does not experience the "traversing" effect and does not receive the specified object's greeting, while the core display of the specified object material still works.
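  • A minimal sketch of this loading-with-downgrade logic is shown below, assuming a simple on-disk cache and a download callback; the file names and the split into required versus optional materials are illustrative assumptions, not part of the embodiment.

```python
import os

# Illustrative material names; only the specified object material is
# treated as required, matching the downgrade scheme described above.
ALL_MATERIALS = ["specified_object.mp4", "transfer_channel.anim", "voice_samples.pkg"]
REQUIRED = {"specified_object.mp4"}

def load_materials(cache_dir: str, download) -> list:
    """Return the list of materials available for the interaction."""
    loaded = []
    for name in ALL_MATERIALS:
        path = os.path.join(cache_dir, name)
        if os.path.exists(path):      # already preloaded in advance
            loaded.append(name)
        elif name in REQUIRED:
            download(name, path)      # must be fetched even if slow
            loaded.append(name)
        # Optional materials are simply skipped under the downgrade
        # scheme: no 'traversing' animation and no voice greeting.
    return loaded
```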
  • the first terminal may provide a corresponding activity page for activities such as “star to your home”, and an operation option for issuing an interaction request may be provided in the page.
  • FIG. 3-1 is a schematic diagram of an activity page in one example. The page may provide prompt information about the related specified object, and a button such as "Start Now" may serve as the operation option through which the user issues an interaction request: the user makes a specific interaction request by clicking the "Start Now" button.
  • the user's interaction request may also be received by other means.
  • For example, a two-dimensional code may be displayed on the second terminal screen, and the user sends the request by scanning it with the first terminal, and so on.
  • In specific implementation, the above "Start Now" button and other operation options can be in an inoperable state before the formal interaction process begins, to avoid premature clicks by the user. The copy displayed on the operation option can also differ between the two states; for example, in the inoperable state it can read "the excitement is about to begin", and so on. In addition, the button may show a "breathing" effect: for example, it shrinks to 70% of its size, returns to the original size after 3 s, shrinks again after another 3 s, and repeats this rhythm continuously, and so on.
  • the time point of receiving the user interaction request may be earlier than the time when the specified object officially disappears from the second terminal and “enters the user's home”.
  • the reason is that the client may perform some preparatory work in advance after the user sends the interaction request.
  • For example, real-scene image acquisition in the first terminal may be started first; that is, the camera component on the first terminal is activated and the terminal enters the live-shooting state, ready for the subsequent augmented reality interaction. Before the specific real-scene image acquisition is started, it may first be determined whether the first terminal has already loaded the interactive material locally; if not, the interactive material loading process may be performed first.
  • In addition, because the virtual image presented to the user by means of augmented reality comes from the specific specified object material, in order to make the interaction more realistic, the specified object material should be displayed on a plane in the real-scene image, for example the ground or a table top, so that if the specified object is a designated person, that person's performance takes place on a plane. Without special processing, after the specified object material is added to the real-scene image it may appear to be "floating" in mid-air; if the material shows the designated person dancing or singing, the person would seem to perform while suspended in the air, which reduces the user experience and prevents a truly realistic, immersive feeling.
  • the specified object material may also be added to a plane included in the real-life image for display.
  • In a specific implementation, before the specified object material is added, a placement position in the collected real-scene image may be determined, and adding the specified object material to the real-scene image may include adding the specified object material to that placement position. Determining the placement position in the collected real-scene image may include determining a plane position in the image, the placement position lying on that plane.
  • Specifically, plane detection may be performed in the collected real-scene image, a cursor may then be provided, and the placeable range of the cursor may be determined according to the detected plane. The position at which the cursor is placed can then be taken as the placement position. Taking the cursor position as the placement position may specifically include: establishing a coordinate system with the initial position of the first terminal as the origin, determining the cursor coordinates of the placed cursor in that coordinate system, and taking the cursor coordinates as the placement position.
  • the first terminal may perform plane recognition from the real-life image, and then add the specified object material to the plane in the real-life image to avoid the phenomenon of “floating” in the air. At this time, regarding the specific point at which the specified object material appears, it may be arbitrarily determined by the first terminal, as long as it is located on one plane. Alternatively, in another implementation manner, it is further possible that the user selects the appearance position of the specific specified object material.
  • In specific implementation, the client may first perform plane detection in the collected real-scene image. After a plane is detected, as shown in FIG. 3-2, a placeable range may be drawn and a movable cursor provided, and the user may be prompted in the interface to place the cursor within that range. After the user moves the cursor into the placeable range, the color of the cursor can change to indicate that the position is available, and the client records where the cursor is actually placed. The cursor position can be recorded in several ways. For example, in one mode, the position of the first terminal at a certain moment (for example, the moment the cursor placement is completed) may be taken as the initial position, which may correspond to the geometric center point of the first terminal, etc., and a coordinate system is established with this initial position as the origin. After the cursor is placed within the placeable range, the position of the cursor relative to this coordinate system is recorded, so that when the specified object material is later added to the live image, it can be added based on that position.
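  • The placement bookkeeping above can be sketched as follows, assuming the underlying AR framework (plane detection, world tracking) reports positions in a common world frame; the class and method names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vec3:
    x: float
    y: float
    z: float

class PlacementTracker:
    """Record the cursor placement relative to a coordinate system whose
    origin is the first terminal's position at placement time."""

    def __init__(self, device_initial_pos: Vec3):
        self.origin = device_initial_pos      # fixed once determined
        self.placement: Optional[Vec3] = None

    def confirm_cursor(self, cursor_world_pos: Vec3) -> None:
        # Store the cursor position as an offset from the origin; the
        # specified object material is later anchored at this offset.
        self.placement = Vec3(cursor_world_pos.x - self.origin.x,
                              cursor_world_pos.y - self.origin.y,
                              cursor_world_pos.z - self.origin.z)
```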
  • Specifically, the material for representing the transfer channel may be added to the real-scene image in the above manner; that is, the transfer channel material may be presented at the position where the cursor is located. For example, if a "portal" material is used as the transfer channel, as shown in FIG. 3-3, after the user completes the cursor placement, the user may be prompted to "confirm the plane, tap to place the portal", etc.; after the user taps the cursor, the "portal" material is presented at the corresponding position.
  • When the specified object material is then added to the real-scene image, the cursor can disappear, and an animation effect based on the transfer channel material can be provided to show the specified object entering the captured real scene through the transfer channel. For example, FIGS. 3-4 and 3-5 show two states of this animation process; it presents the effect of the specified object "entering" the user's home through the portal.
  • After the specified object has completely entered the scene, the material for representing the transfer channel disappears. Correspondingly, when the interaction ends, the transfer channel material can be re-displayed, an animation of the specified object leaving through the transfer channel can be provided, and after the object has completely left, the transfer channel material disappears again.
  • the time point at which the interaction starts may be related to the target event corresponding to the specified object broadcasted in the second terminal.
  • the so-called target event may specifically refer to an event such as the start of an interactive activity related to the specified object.
  • a "transportation gate” (which may be physical or may be virtualized by projection) may be placed on the stage, etc.
  • the event that the object walks out from the "transportation gate” on the stage can be used as the target event.
  • the time point becomes the starting time point of the interaction. Accordingly, the first terminal can execute the specific specified object material. Add to the real-life image for related processing.
  • the video played in the second terminal may be a live video stream.
  • Because the program on the second terminal is usually a live broadcast, synchronization with the time point at which the target event occurs on the second terminal cannot be maintained by presetting a time in the first terminal. Moreover, what the second terminal plays is usually a television signal; although the signal is transmitted at the same time, the time at which it reaches users in different geographical locations may differ. That is, for the same event of the specified object passing through the "transfer gate" on the stage, a user in Beijing may see it on the second terminal at 21:00:00, while a user in Guangzhou may see it at 21:00:02, and so on.
  • Therefore, if a notification message about the target event were pushed uniformly to each first terminal, users in different regions would actually experience different results: some users would feel that the specified object's "traversal" seamlessly connects with the event on the second terminal, while others would not, and it could even happen that the specified object in the TV program has not yet entered the portal but has already appeared in the real-scene image on the phone. On the other hand, the first terminal and the second terminal are usually located in the same space, and the distance between the two is not far.
  • In view of this, in the embodiment of the present application, the first terminal may learn of the target event in the second terminal in the following way: the television program producer adds a sound wave signal of a preset frequency to the video signal at the moment the target event occurs, and when the first terminal detects the sound wave signal of the preset frequency emitted during video playback, it can determine that the target event has occurred.
  • the first terminal can use the sound wave signal as a sign of occurrence of the target event, and then perform a subsequent interaction process.
  • That is, the marker of the target event's occurrence is carried in the video signal itself and conveyed to the first terminal through the second terminal's playback, which better ensures that the event the user sees on the second terminal and the picture seen in the first terminal are seamlessly connected, for a better experience.
  • In specific implementation, the sound wave signal may be emitted when the video in the second terminal plays to a target event corresponding to the specified object. The specific frequency information may be determined by the first server and provided by the first server to the second server; while transmitting the video signal, if the second server finds that a target event related to the specified object is occurring, it can insert the sound wave signal at the corresponding position in the video signal. Meanwhile, the first server can also inform the first terminal of the frequency of the sound wave signal in some manner, so that the first terminal and the second terminal can establish a connection through the sound wave signal. It should be noted that in the same party there may be multiple "star to your home" segments corresponding to different specified objects. In this case, different specified objects may correspond to sound waves of different frequencies: the first server may provide the correspondence between specified objects and sound wave frequencies to the second server, which inserts the signals according to that correspondence; the correspondence is also provided to the first terminal, so that the first terminal can determine, from the frequency of the detected sound wave signal, which specified object the current event corresponds to.
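  • As a sketch of how the first terminal could detect such a signal, the Goertzel algorithm measures the energy of a microphone buffer at each known frequency. The frequency-to-object table below stands in for the correspondence provided by the first server; the concrete frequencies, sample rate, and threshold are assumptions.

```python
import numpy as np

# Stand-in for the correspondence provided by the first server.
FREQ_TO_OBJECT = {18000.0: "star_A", 18500.0: "star_B"}  # Hz (assumed)

def goertzel_power(samples: np.ndarray, freq: float, rate: int = 44100) -> float:
    """Energy of `samples` at `freq`, via the Goertzel recurrence."""
    coeff = 2.0 * np.cos(2.0 * np.pi * freq / rate)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def detect_target_event(samples: np.ndarray, threshold: float = 1e6):
    """Return the specified object whose tone is present, else None."""
    for freq, obj in FREQ_TO_OBJECT.items():
        if goertzel_power(samples, freq) > threshold:
            return obj
    return None
```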
  • After the occurrence of the target event is detected, the animation based on the transfer channel described above can be used as the marker of the start of the interaction. The specified object material can then be added to the real-scene image; if the user has specified a position, it is added at the corresponding location.
  • In addition, if the added material does not appear in the first terminal's screen, a prompt identifier pointing in the opposite direction may be provided in the interface of the first terminal. For example, if by the time the specified object material is added the user has moved the first terminal device, that is, its position has changed relative to the initial position, the material may not appear in the first terminal's display. Since the coordinate system was created in advance based on the initial position of the mobile terminal device (which does not change once determined), technologies such as SLAM (Simultaneous Localization and Mapping) can be used to determine the coordinates of the first terminal's position after it moves in that coordinate system, that is, where the first terminal has moved and in which direction relative to the initial position, and the user can then be guided to move the first terminal in the opposite direction so that the added material appears in the picture. As shown in FIG. 3-6, the user can be guided to move the first terminal by means of an "arrow".
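  • A minimal sketch of the arrow logic, assuming SLAM supplies the device position and heading in the same coordinate system as the recorded placement; the 60° field-of-view value is an assumption.

```python
import math

def guide_arrow(device_pos, device_yaw, anchor_pos, fov=math.radians(60)):
    """Return 'left', 'right', or None when the anchored material is
    already within the camera's horizontal field of view.
    Positions are (x, z) pairs in the coordinate system established at
    the initial device position; `device_yaw` is the camera heading
    measured from the +z axis."""
    dx = anchor_pos[0] - device_pos[0]
    dz = anchor_pos[1] - device_pos[1]
    bearing = math.atan2(dx, dz)  # direction from device to the material
    delta = (bearing - device_yaw + math.pi) % (2 * math.pi) - math.pi
    if abs(delta) <= fov / 2:
        return None               # material already appears on screen
    return "right" if delta > 0 else "left"
```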
  • As described above, the same specified object may correspond to multiple sets of material, among which the user can choose. While the user makes the selection, a fixed video or the like can also be played; for example, its content can be a box that keeps bouncing, expressing that the specified object is preparing backstage (changing clothes), and so on.
  • Afterwards, the selected material can be added to the real-scene image for display. For example, FIG. 3-7 shows one frame of a specific example in which the specified object material is displayed: the partial image of the person is a virtual image, while the background behind the person is the real-scene image captured by the user through the first terminal.
  • As mentioned above, the interactive material may also include voice sample material recorded by the specified object. Therefore, after the specified object material is added, the user name information of the associated user of the first terminal may be obtained, a greeting text containing the user name may be generated for that user, and the greeting may then be converted into speech according to the voice sample material and played. Meanwhile, the specified object material may include the actions, expressions, and the like of the specified object greeting the user, so that the user feels that the designated person himself or herself is saying hello. In specific implementation, the user name may be determined from the account the current user is logged in with, or the real name may be obtained from real-name authentication information provided by the user in advance, and so on; in this way, a greeting "personalized for each of a thousand users" can be achieved. Of course, if a user's nickname or real name cannot be obtained, a relatively generic form of address can be generated according to the user's gender, age, and the like.
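  • A toy sketch of the concatenative approach described above: generate a greeting text containing the user's name and string together the per-character recordings. The sample-bank structure and greeting template are assumptions; a real system would also smooth the joins between clips.

```python
from typing import Optional

def synthesize_greeting(user_name: str, sample_bank: dict) -> list:
    """sample_bank maps a Chinese character to the audio clip of the
    designated person reading that character aloud (~1000 recorded
    characters cover roughly 90% of pronunciations)."""
    text = f"你好，{user_name}，欢迎来到晚会！"  # assumed greeting template
    clips = []
    for ch in text:
        clip: Optional[bytes] = sample_bank.get(ch)
        if clip is not None:   # punctuation / uncovered characters skipped
            clips.append(clip)
    return clips               # played back in order as the greeting
```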
  • In addition, during the process of displaying the specified object material in the real-scene image, a shooting operation option may be provided; when an operation request is received through it, a corresponding image (a photo, a video, etc.) may be generated by performing screen capture or screen recording on the image layers.
  • The real-scene image may further include the image of a person who wants to be photographed together with the specified object. For example, since the user usually interacts at home, other people may be around; if they want a photo with the specified object, they can step into the first terminal's image capture area so that the first terminal collects their real-scene image, and the user then takes the photo through the operation option.
  • the depth of field information may also be used to distinguish the front-back positional relationship between the person in the real-life image and the specified object in the virtual image to further enhance the sense of reality.
  • In addition, since the interface also includes operation options such as buttons, the image layer that displays the operation options can be removed when performing screen capture or screen recording, so that the capture covers only the real-scene image layer and the layer containing the video/animation, improving the realism of the generated photo or video.
  • In specific implementation, the photo and video functions can be provided through the same operation option, with the user's intention distinguished by the operation mode: for example, a tap on the option takes a photo, while a long press records a video. That is, a tap triggers a screenshot to generate a photo, while pressing and holding the option triggers screen recording until the user releases it.
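  • The layer handling and the tap/long-press distinction can be sketched as follows; the layer dictionary, the recorder object, and the 0.5 s threshold are illustrative assumptions.

```python
LONG_PRESS_SECONDS = 0.5  # assumed tap / long-press threshold

def compose_frame(layers: dict) -> list:
    # Real scene at the back, specified object material on top; the "ui"
    # layer is deliberately omitted so buttons never appear in the output.
    return [layers["real_scene"], layers["material"]]

def handle_shutter_release(held_seconds: float, layers: dict, recorder):
    """Tap -> one composed screenshot (photo); long press -> the video
    the recorder accumulated while the button was held."""
    if held_seconds < LONG_PRESS_SECONDS:
        return ("photo", compose_frame(layers))
    return ("video", recorder.stop())
```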
  • In addition, an operation option for sharing the captured image may be provided, located to one side of the photo/video operation option and presenting a prompt such as "tap to share a wonderful moment", and the like. Through it, sharing entries for multiple social network platforms can be provided, and the user can select a platform to share to.
  • Other interactive operation options may also be provided to the user, for example an operation option for a public welfare activity, which the user can simply click to participate. Furthermore, the number of people participating in the public welfare activity through this channel may be associated with the activity's completion progress: the server collects the participation statistics and provides them in real time to the directors and other staff at the live scene, so that the stage scenery and the like can change along with the completion progress, for example the scene gradually changing from desert to oasis, and so on.
  • After the interaction ends, the specified object material is no longer displayed, and the specified object may appear again on the second terminal's screen. Therefore, to better present the "traversing back" process to the user, the transfer channel material can be displayed again, as shown in FIG. 3-8; an animation of the character leaving through the transfer channel can be provided, and the transfer channel material itself can gradually shrink. After the character has completely left, the transfer channel material also disappears from the screen.
  • In addition, a follow-up page for browsing and sharing the captured photos or videos can be provided; that is, after the interaction is over, such a page can guide the user to share the captured photos or videos. On this page the photos or videos can be sorted in shooting order; for example, as shown in FIG. 3-9, they can be displayed from left to right from the most recent to the oldest, and so on. The user can tap any photo or video to bring up the sharing component interface, as shown in FIG. 3-10, and complete the specific sharing operation through that component.
  • In summary, through the embodiment of the present application, the specified object material is loaded; during the interaction, a real-scene image is collected in the user's actual environment, and when the second terminal broadcasts the target event corresponding to the specified object, the specified object material is added to the live image for display. In this way, the user gets the experience that the specified object has come into his or her space environment (for example, his or her own home), so that the user's participation in the interaction can be improved.
  • the second embodiment is corresponding to the first embodiment. From the perspective of the server, a multi-screen interaction method is provided. Referring to FIG. 4, the method may specifically include:
  • S401: The first server saves the interactive material, where the interactive material includes specified object material created according to a specified object;
  • S402: The interactive material is provided to the first terminal, so that the first terminal collects a real-scene image and, when the video in the second terminal plays to a target event corresponding to the specified object, adds the specified object material to the real-scene image.
  • the specified object includes a designated person.
  • it can also include animals, commodities, props, and so on.
  • a video material obtained by shooting the specified object may be provided.
  • a cartoon image prepared based on the image of the specified object and an animation material created based on the cartoon image are provided.
  • the voice sample material recorded by the designated person may also be provided.
  • In addition, the first server may provide sound wave signal information of the preset frequency to the second server corresponding to the second terminal, to be added to the video at the position where the target event corresponding to the specified object occurs, so that the first terminal learns of the occurrence of the target event by detecting the sound wave signal of the preset frequency.
  • the server can also perform statistics on the interaction of each client.
  • The statistical information may further be provided to the second server corresponding to the second terminal, and the second server adds the statistical information to the video played by the second terminal, so that the second terminal publishes the statistical results; the statistics can also influence the stage scenery at the party, and so on.
  • the third embodiment provides a multi-screen interaction method from the perspective of the second terminal.
  • the method may specifically include:
  • S501: The second terminal plays the video.
  • S502: When the video plays to a target event related to the specified object, a sound wave signal of a preset frequency is played, so that the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the captured real-scene image.
  • different specified objects may correspond to acoustic signals of different frequencies.
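  • A sketch of generating the tone that the second terminal plays at the target event is shown below; it mirrors the detector sketched earlier. The near-ultrasonic frequency, duration, and amplitude are assumptions.

```python
import numpy as np

def make_event_tone(freq_hz: float = 18000.0, seconds: float = 1.0,
                    rate: int = 44100) -> np.ndarray:
    """Generate a low-amplitude sine tone at the preset frequency, with
    short fade-in/out ramps to avoid audible clicks, for mixing into the
    program's audio track at the target event."""
    t = np.arange(int(seconds * rate)) / rate
    tone = 0.2 * np.sin(2.0 * np.pi * freq_hz * t)
    fade = np.minimum(1.0, np.minimum(t, t[::-1]) * 50.0)  # ~20 ms ramps
    return (tone * fade).astype(np.float32)
```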
  • the fourth embodiment provides a multi-screen interaction method from the perspective of the second server corresponding to the second terminal.
  • the method may specifically include:
  • S601: The second server receives the sound wave signal information of the preset frequency provided by the first server.
  • S602: A sound wave signal of the preset frequency is inserted into the video at the position where a target event related to the specified object occurs, so that during playback of the video by the second terminal, the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the captured real-scene image.
  • In addition, the second server may further receive statistical information about the interactions of the first terminals provided by the first server, add the statistical information to the video, and send it to be played by the second terminal.
  • In the foregoing embodiments, multi-screen interaction is implemented between the first terminal and the second terminal. In practical applications, the user may also watch a video such as the live broadcast of the party on the first terminal itself; in this case the user can likewise get the "star to my home" experience. That is, the video and the interaction can take place on the same terminal.
  • the fifth embodiment provides a video interaction method, where the method specifically includes:
  • S701: The first terminal loads interactive material, where the interactive material includes specified object material created according to a specified object;
  • S702: When the video played in the first terminal reaches a target event related to the specified object, jump to an interactive interface;
  • S703: Display the live image acquisition result in the interactive interface, and add the specified object material to the live image.
  • That is, while the user watches the video through the first terminal, when the video plays to the target event related to the specified object, the first terminal may jump to the interactive interface, in which live image acquisition is first performed and the specified object material is then added to the live image. In this way, the user likewise obtains the experience of the specified object "traversing" from the party scene into the space environment in which the user is located.
  • the sixth embodiment is corresponding to the fifth embodiment, and provides a video interaction method from the perspective of the first server.
  • the method may specifically include:
  • S801: The first server saves the interactive material, where the interactive material includes specified object material created according to a specified object;
  • S802: The interactive material is provided to the first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to the interactive interface, displays the live image acquisition result in the interactive interface, and adds the specified object material to the live image.
  • The foregoing embodiments take a first terminal such as a mobile phone as the execution subject that provides the specific interaction. In specific implementation, the solution can be extended to other scenarios: besides a mobile phone, a wearable device such as smart glasses may serve as the first terminal, and besides interacting with a video played on the second terminal or the first terminal, the specific interactive process may also be carried out with a video played on a movie screen, or with a live show, performance, merchant promotion, or the like.
  • the seventh embodiment provides another interaction method. Referring to FIG. 9, the method may specifically include:
  • S901: The first terminal loads interactive material, where the interactive material includes specified object material created according to a specified object;
  • Specifically, an interface for loading interactive material may be provided, offering multiple optional interactive materials, for example materials related to movies currently showing in major theaters, or to offline performances, promotions, competitions, and so on; users can select and download the materials they need.
  • In addition, users can book tickets online, including not only movie tickets but also tickets for various performances, competitions, and so on; therefore, the interactive material can also be provided according to the user's specific booking information. The downloaded interactive material may be stored locally in a terminal such as a mobile phone, or may be downloaded into a terminal such as a wearable device, to facilitate interaction while watching the movie, performance, and so on.
  • the specific designated object may also refer to a designated person, a commodity, a prop, and the like.
  • A wearable device is typically equipped with components such as a camera, so the real-scene image can be collected by the wearable device; what the user actually sees through a glasses-type wearable device while watching a movie or performance is the real-scene image captured by those glasses.
  • S903 Add the specified object material to the live image when a target event related to the specified object is detected.
  • the specific target event may be that the specified object appears in a specific movie, performance, competition, promotion, etc., and the like.
  • Specifically, the producer or organizer of the movie, performance, competition, or promotion may insert sound wave information at specific event nodes, and the wearable device or the like learns of the occurrence of the specific event by detecting such a signal.
  • the occurrence of the target event can be known directly by analyzing the collected real-life image.
  • Because the real-scene image captured by the wearable device's camera is usually the same as, or overlaps with, what the user actually sees, when the user sees a certain target event occur, the camera also captures information about that event. The wearable device can also be equipped with a sound collector and the like, so image analysis, audio analysis, and so on can be used to learn of the occurrence of a specific target event, and so forth.
  • the embodiment of the present application further provides a multi-screen interaction device.
  • the device is applied to the first terminal, including:
  • a first material loading unit 1001 configured to load an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • the first material adding unit 1003 is configured to add the specified object material to the live image when the video in the second terminal is played to a target event related to the specified object.
  • the video played in the second terminal is a live video stream.
  • the first terminal receives a sound wave signal of a preset frequency sent when the video is played, and determines occurrence of the target event.
  • the sound wave signal may be emitted when a video in the second terminal is played to a target event corresponding to the specified object.
  • the first material adding unit may be specifically configured to:
  • the specified object material is added to a plane included in the live image for display.
  • the device may further include:
  • a placement position determining unit configured to determine a placement position in the collected real-life image before the specified object material is added;
  • the first material adding unit may be specifically configured to: add the specified object material to the placement location.
  • the placement location determining unit may be specifically configured to:
  • the placement location determining unit may specifically include:
  • a plane detecting subunit for performing plane detection in the collected real scene image
  • a cursor providing subunit for providing a cursor and determining the placeable range of the cursor according to the detected plane;
  • a placement position determining subunit for taking the position where the cursor is placed as the placement position.
  • the placement location determining sub-unit may specifically include:
  • a coordinate system establishing subunit for establishing a coordinate system with the initial position of the first terminal as the origin;
  • a coordinate determining subunit configured to determine cursor coordinates of the position where the cursor is placed in the coordinate system
  • a position determining subunit for using the cursor coordinates as the placement position.
  • the device may further include:
  • a change direction determining unit configured to determine, when the specified object material has been added to the placement position but does not appear in the interface of the first terminal, the direction of change of the first terminal's position relative to the initial position;
  • a prompting unit configured to provide a prompt identifier in an opposite direction in an interface of the first terminal according to the changing direction.
  • the interactive material may further include a material for indicating a transmission channel
  • the device may further include:
  • a channel material adding unit configured to add the material for representing the transfer channel to the live image after the step of acquiring a live image.
  • the first material adding unit may be specifically configured to: display, based on the transfer channel material, the process of the specified object entering the captured live image through the transfer channel.
  • the interactive material may further include voice sample material recorded by the specified object;
  • the device may further include:
  • a user name obtaining unit configured to obtain user name information of the associated user of the first terminal
  • a greeting corpus generating unit configured to generate a greeting corpus including the user name for the associated user
  • a playing unit, configured to convert the greeting corpus into speech according to the voice sample material and play it.
  • the specified object material may be multiple sets of material corresponding to the same specified object, and the device further includes:
  • a material selection option providing unit, configured to provide an operation option for selecting among the specified object material;
  • the first material adding unit may be configured to:
  • add the selected specified object material to the live image.
  • the device may further include:
  • a shooting option providing unit, configured to provide a shooting operation option during the display of the specified object material in the live image;
  • an image generating unit, configured to receive an operation request through the shooting operation option and generate a corresponding image according to the image layers, where the image layers include the live image and the image of the specified object material.
  • the image generating unit may be specifically configured to:
  • take a screenshot or screen recording of the image layers, remove the image layer used to display operation options, and generate the captured image.
  • the live image may further include an image of a person being photographed together with the specified object.
  • it can also include:
  • a sharing option providing unit for providing an operation option for sharing a captured image.
  • the page providing unit is provided for providing a page for browsing and sharing the captured image.
  • the specified object includes information of a specified person.
  • the specified object includes information specifying the item.
  • a rush-purchase option providing unit, configured to provide, after the specified object material is added to the live image, an operation option for rush-purchasing the data object associated with the designated commodity;
  • a submitting unit, configured to submit the rush-purchase operation to the server when it is received through the operation option, the server determining the rush-purchase result.
  • the specified object includes item information related to the offline game.
  • the device may further include:
  • an operation information submitting unit, configured to submit, after the specified object material is added to the live image, operation information on the target prop to the server when such information is received, the server determining the reward information obtained by the operation and returning it;
  • a bonus information providing unit for providing the obtained bonus information.
  • the specified object material includes a video material obtained by capturing the specified object.
  • the specified object material includes a cartoon image made with the image of the specified object as a prototype, and an animation material created based on the cartoon image.
  • corresponding to Embodiment 2, the embodiment of the present application further provides a multi-screen interaction device;
  • referring to FIG. 11, the device is applied to the first server and includes:
  • a first interactive material saving unit 1101 configured to save an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • a first interactive material providing unit 1102, configured to provide the interactive material to the first terminal, where the first terminal captures a live image and, when the video in the second terminal plays to a target event corresponding to the specified object, adds the specified object material to the live image.
  • the specified object includes a designated person.
  • the first interactive material saving unit may be specifically configured to: save video material obtained by filming the specified object;
  • or save a cartoon image created with the appearance of the specified object as a prototype, together with animation material produced based on the cartoon image.
  • the specified object includes a designated person, and the first interactive material saving unit is further configured to:
  • save the voice sample material recorded by the designated person.
  • the device may further include:
  • a sound wave signal information providing unit, configured to provide a sound wave signal of a preset frequency to the second server corresponding to the second terminal, to be added to the video when the video in the second terminal plays to the target event corresponding to the specified object,
  • so that the first terminal learns of the occurrence of the target event by detecting the sound wave signal of the preset frequency.
  • the device may further include:
  • a statistics unit, configured to collect statistics on the interactions of the first terminals;
  • a statistics providing unit, configured to provide the statistics to the second server corresponding to the second terminal, where the second server adds the statistics to the video played by the second terminal.
  • the embodiment of the present application further provides a multi-screen interaction device;
  • referring to FIG. 12, the device is applied to the second terminal and includes:
  • a video playing unit 1201, configured to play a video
  • a sound wave signal playing unit 1202, configured to play the sound wave signal of the preset frequency when the video plays to a target event related to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the captured live image.
  • the embodiment of the present application further provides a multi-screen interaction device;
  • referring to FIG. 13, the device is applied to the second server and includes:
  • the sound wave signal information receiving unit 1301 is configured to receive sound wave signal information of a preset frequency provided by the first server;
  • a sound wave signal information insertion unit 1302, configured to insert the sound wave signal of the preset frequency at the position in the video where a target event related to the specified object occurs,
  • so that while the video is played through the second terminal, the first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds the specified object material to the captured live image.
  • the device may further comprise:
  • a statistical information receiving unit configured to receive statistical information about the interaction of the first terminal provided by the first server
  • a statistical information playing unit configured to add the statistical information to the video for transmission, for playing by using the second terminal.
  • the embodiment of the present application further provides a video interaction device;
  • referring to FIG. 14, the device is applied to the first terminal and includes:
  • a loading unit 1401 configured to load an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • the interface jump unit 1402 is configured to jump to the interactive interface when the video in the first terminal is played to a target event related to the specified object;
  • a material adding unit 1403, configured to display a live image capture result in the interactive interface and add the specified object material to the live image.
  • the embodiment of the present application further provides a video interaction device;
  • referring to FIG. 15, the device is applied to the first server and includes:
  • a second material saving unit 1501 configured to save an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • a second material providing unit 1502, configured to provide the interactive material to the first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to the interactive interface, displays a live image capture result in the interactive interface, and adds the specified object material to the live image.
  • the embodiment of the present application further provides an interaction device;
  • referring to FIG. 16, the device may include:
  • a second material loading unit 1601 configured to load an interactive material, where the interactive material includes a specified object material created according to the specified object;
  • a second real image capturing unit 1602 configured to collect a real image
  • the second material adding unit 1603 is configured to add the specified object material to the live image when a target event related to the specified object is detected.
  • an electronic device including:
  • one or more processors; and
  • a memory associated with the one or more processors, the memory being configured to store program instructions which, when read and executed by the one or more processors, perform the following operations:
  • load interactive material, where the interactive material includes specified object material created according to a specified object;
  • capture a live image;
  • when the video in the second terminal plays to a target event related to the specified object, add the specified object material to the live image.
  • FIG. 17 exemplarily shows the architecture of the electronic device.
  • the device 1700 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, an aircraft, and so on.
  • device 1700 can include one or more of the following components: processing component 1702, memory 1704, power component 1706, multimedia component 1708, audio component 1710, input/output (I/O) interface 1712, sensor component 1714, And a communication component 1716.
  • Processing component 1702 typically controls the overall operation of device 1700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 1702 may include one or more processors 1720 to execute instructions so as to complete all or part of the steps of the video playing method provided by the technical solution of the present disclosure: when a preset condition is met, generating a traffic compression request and sending it to the server, where the traffic compression request records information for triggering the server to obtain a target attention region, and the traffic compression request is used to request the server to preferentially guarantee the bitrate of video content within the target attention region; and playing, according to the code stream file returned by the server, the video content corresponding to the code stream file, where the code stream file is a video file obtained by the server performing bitrate compression on video content outside the target attention region according to the traffic compression request.
  • processing component 1702 can include one or more modules to facilitate interaction between component 1702 and other components.
  • processing component 1702 can include a multimedia module to facilitate interaction between multimedia component 1708 and processing component 1702.
  • Memory 1704 is configured to store various types of data to support operation at device 1700. Examples of such data include instructions for any application or method operating on device 1700, contact data, phone book data, messages, pictures, videos, and the like. Memory 1704 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 1706 provides power to various components of device 1700.
  • Power component 1706 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 1700.
  • the multimedia component 1708 includes a screen providing an output interface between the device 1700 and the user.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor can sense not only the boundaries of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 1708 includes a front camera and/or a rear camera. When the device 1700 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 1710 is configured to output and/or input acoustic signals.
  • the audio component 1710 includes a microphone (MIC) that is configured to receive an external acoustic signal when the device 1700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received acoustic signals may be further stored in memory 1704 or transmitted via communication component 1716.
  • audio component 1710 also includes a speaker for outputting acoustic signals.
  • the I/O interface 1712 provides an interface between the processing component 1702 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 1714 includes one or more sensors for providing device 1700 with a status assessment of various aspects.
  • sensor component 1714 can detect the open/closed state of device 1700 and the relative positioning of components (for example, the display and keypad of device 1700); sensor component 1714 can also detect a change in position of device 1700 or of one of its components, the presence or absence of user contact with device 1700, the orientation or acceleration/deceleration of device 1700, and temperature changes of device 1700.
  • Sensor assembly 1714 can include a proximity sensor configured to detect the presence of nearby merchandise without any physical contact.
  • Sensor assembly 1714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 1714 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 1716 is configured to facilitate wired or wireless communication between device 1700 and other devices.
  • Device 1700 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 1716 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 1716 also includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • device 1700 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • in an exemplary embodiment, there is also provided a non-transitory computer readable storage medium comprising instructions, such as the memory 1704 comprising instructions executable by the processor 1720 of the device 1700 to perform the video playing method provided by the disclosed technical solutions: when a preset condition is met, a traffic compression request is generated and sent to the server, where the traffic compression request records information for triggering the server to obtain a target attention region and is used to request the server to preferentially guarantee the bitrate of the video content within the target attention region; and the video content corresponding to the code stream file is played according to the code stream file returned by the server, where the code stream file is a video file obtained by the server performing bitrate compression on video content outside the target attention region according to the traffic compression request.
  • the non-transitory computer readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • it can be seen from the above description of the embodiments that the present application can be implemented by means of software plus a necessary general hardware platform. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which can be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application or in portions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose a multi-screen interaction method, apparatus, and electronic device. The method includes: a first terminal loads interactive material, the interactive material including specified object material created according to a specified object; a live image is captured; when the video in a second terminal plays to a target event related to the specified object, the specified object material is added to the live image. Through the embodiments of the present application, user participation in the interaction can be increased.

Description

Multi-screen interaction method, apparatus, and electronic device
This application claims priority to Chinese Patent Application No. 201710979621.2, entitled "Multi-screen interaction method, apparatus, and electronic device" and filed on October 19, 2017, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of multi-screen interaction technology, and in particular to multi-screen interaction methods, apparatuses, and electronic devices.
Background
Multi-screen interaction refers to a series of operations, such as the transmission, parsing, display, and control of multimedia (audio, video, picture) content, carried out between different multimedia terminal devices (for example, between a mobile phone and a television) over a wireless network connection, so that the same content can be displayed on different terminal devices and content can intercommunicate between the terminals.
In the prior art, interaction from the television side to the mobile phone side is usually implemented by means of a graphic code. For example, a two-dimensional code related to the program currently being broadcast can be displayed on the television screen; the user scans the code with the "Scan" function of an application installed on the phone, the code is parsed on the phone, a specific interactive page is displayed, and the user can then answer questions, join lucky draws, and perform other interactions on that page.
Although this prior-art approach can realize interaction between the phone and the television, the concrete form is rather rigid, and actual user participation is not high. Therefore, how to provide richer forms of multi-screen interaction and increase user participation has become a technical problem to be solved by those skilled in the art.
Summary of the invention
The present application provides a multi-screen interaction method, apparatus, and electronic device that can increase user participation in the interaction.
The present application provides the following solutions:
A multi-screen interaction method, including:
a first terminal loading interactive material, the interactive material including specified object material created according to a specified object;
capturing a live image;
when the video in a second terminal plays to a target event related to the specified object, adding the specified object material to the live image.
A multi-screen interaction method, including:
a first server saving interactive material, the interactive material including specified object material created according to a specified object;
providing the interactive material to a first terminal, the first terminal capturing a live image and, when the video in a second terminal plays to a target event corresponding to the specified object, adding the specified object material to the live image.
A multi-screen interaction method, including:
a second terminal playing a video;
when the video plays to a target event related to a specified object, playing a sound wave signal of a preset frequency, so that a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
A multi-screen interaction method, including:
a second server receiving information on a sound wave signal of a preset frequency provided by a first server;
inserting the sound wave signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
A video interaction method, including:
a first terminal loading interactive material, the interactive material including specified object material created according to a specified object;
when the video in the first terminal plays to a target event related to the specified object, jumping to an interactive interface;
displaying a live image capture result in the interactive interface, and adding the specified object material to the live image.
A video interaction method, including:
a first server saving interactive material, the interactive material including specified object material created according to a specified object;
providing the interactive material to a first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays a live image capture result in the interactive interface, and adds the specified object material to the live image.
An interaction method, including:
loading interactive material, the interactive material including specified object material created according to a specified object;
capturing a live image;
when a target event related to the specified object is detected, adding the specified object material to the live image.
A multi-screen interaction apparatus, applied to a first terminal, including:
a first material loading unit, configured to load interactive material, the interactive material including specified object material created according to a specified object;
a first live image acquisition unit, configured to capture a live image;
a first material adding unit, configured to add the specified object material to the live image when the video in a second terminal plays to a target event related to the specified object.
A multi-screen interaction apparatus, applied to a first server, including:
a first interactive material saving unit, configured to save interactive material, the interactive material including specified object material created according to a specified object;
a first interactive material providing unit, configured to provide the interactive material to a first terminal, the first terminal capturing a live image and, when the video in a second terminal plays to a target event corresponding to the specified object, adding the specified object material to the live image.
A multi-screen interaction apparatus, applied to a second terminal, including:
a video playing unit, configured to play a video;
a sound wave signal playing unit, configured to play a sound wave signal of a preset frequency when the video plays to a target event related to a specified object, so that a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
A multi-screen interaction apparatus, applied to a second server, including:
a sound wave signal information receiving unit, configured to receive information on a sound wave signal of a preset frequency provided by a first server;
a sound wave signal information insertion unit, configured to insert the sound wave signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
A video interaction apparatus, applied to a first terminal, including:
a loading unit, configured to load interactive material, the interactive material including specified object material created according to a specified object;
an interface jumping unit, configured to jump to an interactive interface when the video in the first terminal plays to a target event related to the specified object;
a material adding unit, configured to display a live image capture result in the interactive interface and add the specified object material to the live image.
A video interaction apparatus, applied to a first server, including:
a second material saving unit, configured to save interactive material, the interactive material including specified object material created according to a specified object;
a second material providing unit, configured to provide the interactive material to a first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays a live image capture result in the interactive interface, and adds the specified object material to the live image.
An interaction apparatus, including:
a second material loading unit, configured to load interactive material, the interactive material including specified object material created according to a specified object;
a second live image acquisition unit, configured to capture a live image;
a second material adding unit, configured to add the specified object material to the live image when a target event related to the specified object is detected.
An electronic device, including:
one or more processors; and
a memory associated with the one or more processors, the memory being configured to store program instructions which, when read and executed by the one or more processors, perform the following operations:
loading interactive material, the interactive material including specified object material created according to a specified object;
capturing a live image;
when the video in a second terminal plays to a target event related to the specified object, adding the specified object material to the live image.
According to the specific embodiments provided by the present application, the present application discloses the following technical effects:
Through the embodiments of the present application, interactive material can be created from video/animation related to a specified object. During the interaction, a live image of the user's actual environment can be captured, and when the second terminal broadcasts the target event corresponding to the specified object, the specified object material is added to the live image for display. In this way, the user experiences the specified object arriving in his or her own space (for example, the user's home), which increases user participation in the interaction.
Of course, any product implementing the present application does not necessarily need to achieve all of the advantages described above at the same time.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system provided by an embodiment of the present application;
FIG. 2 is a flowchart of a first method provided by an embodiment of the present application;
FIG. 3-1 to FIG. 3-10 are schematic diagrams of user interfaces provided by embodiments of the present application;
FIG. 4 is a flowchart of a second method provided by an embodiment of the present application;
FIG. 5 is a flowchart of a third method provided by an embodiment of the present application;
FIG. 6 is a flowchart of a fourth method provided by an embodiment of the present application;
FIG. 7 is a flowchart of a fifth method provided by an embodiment of the present application;
FIG. 8 is a flowchart of a sixth method provided by an embodiment of the present application;
FIG. 9 is a flowchart of a seventh method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a third apparatus provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of a fourth apparatus provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of a fifth apparatus provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of a sixth apparatus provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of a seventh apparatus provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of an electronic device provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments. Evidently, the described embodiments are only some of the embodiments of the present application rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The embodiments of the present application provide a new multi-screen interaction scheme, mainly for interaction between a mobile terminal device such as the user's phone (referred to herein as the first terminal) and a large-screen terminal device such as a television (referred to herein as the second terminal). Specifically, the interaction can take place while programs such as large live-broadcast galas (or, of course, other types of program) are played through the second terminal. For example, the organizer of a live gala may invite entertainment stars and other performers; in the prior art, users can only watch the star's performance on stage through the second terminal. In the embodiments of the present application, technical means can give the user a "star comes to my home" experience. In a concrete implementation, material related to the performance of a particular person such as an entertainment star can be provided in advance; during that person's segment on the second terminal, the pre-recorded performance video, animation, or other material of that person is projected, by way of augmented reality on the first terminal, into the real environment where the user is located. For example, since users usually watch programs on a second terminal such as a television at home, the person's performance video/animation can be projected into the user's home. Although the user still views the projection result through the first terminal's screen, the background of the performance is a live image captured in the user's environment; therefore, compared with watching the on-stage performance through the second terminal, the user experiences the "star" as actually being in his or her home. Of course, in a concrete implementation, besides a designated person, the specified object may also be an animal or even a commodity, and so on; in the embodiments of the present application these are collectively called the "specified object".
In a concrete implementation, from the perspective of system architecture, referring to FIG. 1, the hardware involved in the embodiments of the present application includes the aforementioned first terminal and second terminal, while the software involved may be an associated application client installed in the first terminal (or a program built into the first terminal, etc.) and a first server in the cloud. For example, if the interaction is provided during a "Double 11" gala, then since the organizer of such a gala is usually the company of a network sales platform (for example, "Mobile Taobao" or "Tmall"), the application client and server provided by that network sales platform can supply the technical support for the multi-screen interaction. That is, the user can use the client of an application such as "Mobile Taobao" or "Tmall" to carry out the interaction, while the material and other data needed during the interaction are provided by the server. It should be noted that the second terminal mainly exists as a playback terminal, and the video content played in it can be controlled by a back-end second server (such as a television station's server); that is, for signals such as the live video stream, the second server can perform unified operations such as video signal transmission, after which the video signal is transmitted to each second terminal for playback. In other words, in the multi-screen interaction scenario provided by the embodiments of the present application, the first terminal and the second terminal correspond to different servers.
The specific implementation solutions are described in detail below.
Embodiment 1
First, Embodiment 1 provides a multi-screen interaction method from the perspective of the client. Referring to FIG. 2, the method may specifically include:
S201: a first terminal loads interactive material, the interactive material including specified object material created according to a specified object;
The interactive material is the material needed to generate virtual images and other information content during the augmented-reality interaction. In a concrete implementation, the specified object may be information related to a designated person, information related to a designated commodity, or information related to props used in an offline game, and so on. Different specified objects can correspond to different interaction scenarios. For example, when the specified object is a designated person, the scenario may be a "star comes to your home" activity: while the user watches a TV program through the television, the "star" performing in the program can "travel" into the user's home through this solution. When the specified object is a designated commodity, the commodity may be related to physical goods sold in a network sales system; normally the user would need to pay considerable resources to buy it, but during the activity it can be given to users as a gift or sold at an ultra-low price. In the gift-giving process, "cross-screen gift giving" can be realized in the manner of the embodiments of the present application: content related to the designated commodity is played in the second terminal such as a television and "travels" to the first terminal such as the user's phone. In addition, an operation option for rush-purchasing the data object associated with the designated commodity can be provided; when a rush-purchase operation is received through this option, it is submitted to the server, and the server determines the rush-purchase result. The user thereby obtains a chance such as a rush purchase or a lucky draw, and in turn obtains the corresponding commodity or the opportunity to buy it at an ultra-low price, and so on.
In addition, when the specified object is a prop related to an offline game, this can correspond to another form of "cross-screen gift giving": during the activity, if the system wants to provide users with non-physical gifts such as coupons or "cash red envelopes", the gift-giving process can be associated with an offline game such as a magic show. For example, while a program such as a magic show is played on the second terminal, a certain prop may be used; through the solution of the embodiments of the present application, the prop can "travel" into the user's first terminal device such as a phone for display, and the user can then claim the non-physical gift by clicking the prop or performing other operations. That is, when operation information on the target prop is received, the operation information is submitted to the server, the server determines the reward obtained by the operation and returns it, and the first terminal can then present the obtained reward information, and so on.
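In a concrete implementation, the claim flow above is a simple client-to-server round trip. The following is a minimal sketch of how the first terminal might submit a prop operation and receive the server-decided reward; the endpoint URL and the payload field names are hypothetical, since the embodiments do not specify a wire protocol.

```python
import json
import urllib.request

# Hypothetical endpoint; the embodiments only state that the operation is
# submitted to the server and the server decides the reward.
CLAIM_URL = "https://example.com/api/interaction/claim"

def submit_prop_operation(user_id: str, prop_id: str) -> dict:
    """Submit a tap on a displayed prop; the server determines the reward."""
    payload = json.dumps({"user_id": user_id, "prop_id": prop_id}).encode("utf-8")
    req = urllib.request.Request(
        CLAIM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)  # e.g. {"reward": "coupon", "amount": 5}
```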
The specified object material may specifically include video material obtained by filming the specified object. For example, if the specified object is a designated person, programs such as singing and dancing performed by that person can be video-recorded in advance to obtain the video material. Alternatively, the specified object material may include a cartoon image created with the specified object's appearance as a prototype, and animation material produced on the basis of that cartoon image. For example, when the specified object is a designated person, a cartoon character can be created from the person's appearance and dancing and singing animations can be produced from it; where "singing" and the like are needed, the designated person can dub the cartoon image, or a song pre-recorded by that person can be played, and so on.
For the same specified object, there can be multiple different sets of specified object material; for example, for the same designated person, different programs performed can each generate different material, and so on. That is, the material corresponding to one specified object can come in multiple sets; when the object "enters" a user's "home", the user can choose the specific set of material, and the selected material is then used to provide the specific augmented-reality picture.
In addition, the interactive material provided by the first server may further include material used to represent a transfer channel, which can be generated, for example, from a door, a tunnel, a wormhole, a mascot such as the "Tmall" cat, a teleport light array, and the like. Before the specified object material is added to the live image, because the specified object was just performing on the gala stage yet is about to arrive in the user's home, a preset animation can first be played through this transfer channel material; this both adds interest and makes the change of the object's location seem more reasonable, creating the atmosphere that a specified object is about to "travel" through the channel into the user's home and giving the user a better experience. In addition, when the interaction ends and the specified object needs to leave the home, the transfer channel material can also provide an animation of the reverse process, letting the user experience the specified object leaving the home and the transfer channel gradually closing.
Furthermore, the interactive material provided by the first server may also include voice sample material recorded by the specified object. This voice sample material can be used to greet the user when the specified object "enters" the user's home. The user's user name (including nickname, real name, and the like) can be obtained before the greeting, realizing a personalized greeting for every user, for example "XXX, I have come to your home", where the content of "XXX" differs for different users. The greeting is spoken by the specified object by voice; to achieve this personalization, a single pre-recorded greeting cannot be used. Therefore, in the embodiments of the present application, the specified object (typically a designated person) can read aloud a particular passage in advance, and the voice of each character read is recorded; this passage covers most of the pronunciations of initials, finals, and tones. In practice, the passage can contain roughly one thousand characters, which can basically cover about 90% of Chinese character pronunciations. In this way, when the specified object has just "entered" the user's home and the specific greeting has been generated from the user's personal name, the corresponding speech can be produced from the pronunciation information of each character stored in the voice sample material, achieving the effect of the specified object calling out the user's name in greeting.
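As an illustration of this playback step, the following sketch concatenates pre-recorded per-character voice clips into one greeting waveform. The clip naming scheme (one mono WAV clip per syllable under voice_samples/) is an assumption for illustration only; a production system would also smooth the joins and handle missing syllables.

```python
import wave

# A minimal sketch, assuming each recorded syllable is stored as a WAV clip
# keyed by its pinyin (e.g. voice_samples/zhang.wav) and that all clips share
# one audio format.
def synthesize_greeting(syllables, clip_dir="voice_samples"):
    frames, params = [], None
    for s in syllables:                       # e.g. ["zhang", "san", "ni", "hao"]
        with wave.open(f"{clip_dir}/{s}.wav", "rb") as clip:
            if params is None:
                params = clip.getparams()     # reuse the first clip's format
            frames.append(clip.readframes(clip.getnframes()))
    with wave.open("greeting.wav", "wb") as out:
        out.setparams(params)
        for chunk in frames:                  # append clips back to back
            out.writeframes(chunk)
    return "greeting.wav"
```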
Of course, in practical applications other material may be included as well, which is not enumerated here. In a concrete implementation, the amount of data in the interactive material may be rather large, and loading it on the first terminal may take a long time, so it can be downloaded to the first terminal in advance. For example, after the gala played in the second terminal begins, the user can watch the program on the second terminal while standing ready for interaction through the gala main-venue interface provided by the first terminal. The specific "star comes to your home" segment may occur at some moment during the gala, synchronized with the state of the second terminal; therefore, as long as the user enters the first terminal's gala main-venue interface after the gala begins, the relevant interactive material can be downloaded in advance even if the specific "star comes to your home" activity has not yet formally started. In this way, once the activity starts, the interaction can proceed quickly, avoiding cases where the user cannot join in time because the material has not finished downloading. Of course, a user who has not entered the main-venue interface of the first terminal in advance and who wants to join the "star comes to your home" activity can also download the relevant interactive material on the spot. For on-the-spot downloads, to avoid spending too long, a degraded scheme can be provided: for example, only the aforementioned specified object material is downloaded, while the material used to represent the transfer channel, the voice sample material, and so on are skipped; in this case the user will not experience the feeling of "travel" and will not receive the specified object's greeting.
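This degraded-download strategy can be sketched as follows: the essential material is always fetched, while the optional material (transfer channel, voice samples) is fetched only when there is time before the segment starts. The URLs and file names below are placeholders.

```python
from urllib.request import urlretrieve

# Hypothetical asset manifests; only the split into essential and optional
# material comes from the text above.
ESSENTIAL = {"star_dance.mp4": "https://cdn.example.com/star_dance.mp4"}
OPTIONAL = {
    "portal.anim": "https://cdn.example.com/portal.anim",
    "voice_samples.zip": "https://cdn.example.com/voice_samples.zip",
}

def preload_materials(entered_early: bool) -> dict:
    loaded = {}
    for name, url in ESSENTIAL.items():      # always required for the segment
        loaded[name] = urlretrieve(url, name)[0]
    if entered_early:                        # skip extras when joining late
        for name, url in OPTIONAL.items():
            loaded[name] = urlretrieve(url, name)[0]
    return loaded
```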
S202: capturing a live image;
In a concrete implementation, the first terminal can provide an activity page for activities such as "star comes to your home", in which an operation option for issuing an interaction request can be provided. For example, FIG. 3-1 is a schematic diagram of an activity page in one example; it can present prompt information about the relevant specified object and a button such as "Start now", which serves as the operation option through which the user issues the interaction request, and the user can click the "Start now" button to issue it. Of course, in practical applications, the user's interaction request can also be received in other ways; for example, a two-dimensional code can be displayed on the second terminal's screen, and the user issues the request by scanning the code with the first terminal, and so on.
In a concrete implementation, the "Start now" button or other operation option can be kept in an inoperable state before the formal interaction begins, to prevent premature clicks. The text shown on the operation option can also differ; for example, in the inoperable state it can read "The excitement is about to begin", and so on. Just before the interaction is about to start, the text on the button is changed back to "Start now" and the like; moreover, to create a tense, eagerly waiting atmosphere for the user and better attract clicks, the button can be displayed with a "breathing" animation: for example, the button shrinks to 70% of its size, recovers to its original size after 3 s, shrinks again after another 3 s, and keeps repeating this rhythm, and so on.
The time point at which the user's interaction request is received can be earlier than the time point at which the specified object formally disappears from the second terminal and "enters the user's home", because after the user issues the interaction request, the client can perform some preparation in advance. Specifically, after receiving the user's interaction request, live image capture in the first terminal can be started first; that is, the camera assembly on the first terminal is activated and the terminal enters the live-shooting state, preparing for the subsequent augmented-reality interaction.
In a concrete implementation, before live image capture is started, it can first be determined whether the interactive material has already been loaded locally on the first terminal; if it has not, the interactive material is loaded first.
It should be noted that, in the embodiments of the present application, the virtual image presented to the user through augmented reality corresponds to the specific specified object material and the like. To make the interaction process more realistic, the specified object material can be displayed on a plane in the live image, for example the ground or a table top; in this way, if the specified object is a designated person, the person's performance takes place on a plane. Without special processing, after the specified object material is added to the live image, the material might "float" in mid-air; if the corresponding material is a designated person dancing or singing, the person would appear to perform while "floating" in the air, which degrades the user experience and prevents the user from obtaining a more realistic, immersive feeling.
Therefore, in a preferred embodiment of the present application, the specified object material can be displayed on a plane included in the live image. Before the specified object material is added, a placement position in the captured live image can first be determined; adding the specified object material to the live image can then specifically include adding it at that placement position. Determining the placement position in the captured live image can specifically include determining a plane position in the captured live image, the placement position lying within that plane position. Specifically, plane detection can be performed in the captured live image; a cursor can then be provided, and the placeable range of the cursor can be determined according to the detected plane; finally, the position where the cursor is placed is taken as the placement position. Taking the cursor's position as the placement position can specifically include: establishing a coordinate system with the initial position of the first terminal as the origin, determining the cursor coordinates, in that coordinate system, of the position where the cursor is placed, and using the cursor coordinates as the placement position. In a concrete implementation, the first terminal can perform plane recognition on the live image and then add the specified object material onto that plane in the image, avoiding the phenomenon of "floating" in the air. In this case, the exact point at which the specified object material appears can be decided arbitrarily by the first terminal, as long as it lies on a plane. Alternatively, in another implementation, one can go a step further and let the user choose the exact position where the specified object material appears. Specifically, after live image detection is started, the client can first perform plane detection in the captured live image; once a plane is detected, as shown in FIG. 3-2, a range can be drawn and a movable cursor provided, and the interface can prompt the user to move the cursor into the drawn placeable range. After the user moves the cursor into that range, the cursor's color can change to indicate that the placement position is available, and the client can record the position where the cursor is placed. In a concrete implementation, there are several ways to record this position. For example, in one approach, the position of the first terminal at some moment can be taken as the initial position (for example, the terminal's position at the moment the cursor is placed, and so on), and a coordinate system is created with this initial position (which may be the geometric center point of the first terminal, etc.) as the origin; then, after the cursor is put into the specific placeable range, the cursor's position relative to this coordinate system is recorded, so that when the specified object material is later added to the live image, it can be added at this position.
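The geometric core of this placement step is a ray-plane intersection: a ray is cast from the camera through the screen point under the cursor and intersected with the detected plane, and the hit point is stored in the session coordinate system whose origin is the terminal's initial position. The following is a minimal sketch with illustrative values; a real client would obtain the ray and the plane from its AR framework's tracking data.

```python
import numpy as np

def place_cursor(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect a camera ray with a detected plane; return the hit point."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-6:                 # ray parallel to plane: no placement
        return None
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    if t < 0:                             # plane is behind the camera
        return None
    return ray_origin + t * ray_dir       # cursor coords relative to the origin

# Illustrative values: camera 1.5 m above the origin, looking down at the floor.
cursor = place_cursor(
    ray_origin=np.array([0.0, 1.5, 0.0]),
    ray_dir=np.array([0.0, -0.7, -0.7]),       # ray through the touched pixel
    plane_point=np.array([0.0, 0.0, 0.0]),     # detected floor plane
    plane_normal=np.array([0.0, 1.0, 0.0]),
)
```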
In addition, as described above, in an optional embodiment, before the specified object material formally "enters" the live image, the material used to represent the transfer channel can be added to the live image; in the above approach, after the user finishes placing the cursor, the specific transfer channel material can be presented at the cursor's position. For example, assuming a "portal" material serves as the transfer channel, then in a concrete implementation, as shown in FIG. 3-3, after the user places the cursor, the user can be prompted with "Plane confirmed, tap to place the portal" and the like; once the user taps the cursor, the "portal" material appears at the corresponding position.
Later, when adding the specified object material to the live image formally begins, the cursor can disappear, and an animation effect can be provided based on the transfer channel material to show the animated process of the specified object entering the captured live image through the transfer channel. For example, FIG. 3-4 and FIG. 3-5 show two states of this animation; it can be seen that they convey the effect that someone is about to "enter" the user's home through the portal. After the specified object material has entered the live image, the material used to represent the transfer channel disappears. When the interaction ends, the transfer channel material can be displayed again, together with an animation showing the specified object leaving through the transfer channel; after it has fully left, the transfer channel material disappears.
S203: when the video in the second terminal plays to a target event related to the specified object, adding the specified object material to the live image.
In a concrete implementation, the time point at which the interaction starts can be tied to the target event corresponding to the specified object broadcast in the second terminal. The so-called target event can specifically be an event such as the start of the interactive activity related to the specified object. For example, when the program played on the second terminal reaches the "star comes to your home" segment, a "portal" can be placed on the stage (it can be physical, or virtual by means of projection), and so on; the event of the specified object passing out through the on-stage "portal" can then serve as the target event. This time point thus becomes the starting time point of the interaction, and correspondingly the first terminal can perform the specific processing of adding the specified object material to the live image for display.
The video played in the second terminal may be a live video stream. Because the program in the second terminal is usually broadcast live, synchronization with the time point at which the target event occurs in the second terminal cannot be maintained by presetting a time in the first terminal in advance. Moreover, what the second terminal plays is usually a television signal; although the time point at which the television signal is sent is the same, the time at which the signal reaches users in different geographic locations may differ. That is, for the same event of the specified object passing out through the on-stage "portal", a user in Beijing may see it occur on the second terminal at 21:00:00, while a user in Guangzhou may only see it at 21:00:02, and so on. Therefore, even if staff of the first server uniformly sent a notification message about the target event to every first terminal upon seeing the event occur at the gala site, users in different regions could still actually experience different results: some users would feel the specified object's "travel" seamlessly connects with the event on the second terminal, while others would not, and cases could arise where the specified object has not yet passed out of the portal in the TV program but has already entered the live image on the phone.
For this reason, in the embodiments of the present application, since the user usually interacts with a mobile terminal such as a phone while watching television, the first terminal and the second terminal are usually located in the same space, not far apart. In this case, the first terminal can also perceive the target event in the second terminal in the following way: at the moment the target event occurs, the television program producer can add a sound wave signal of a preset frequency into the video signal to be sent. In this way, as the specific video signal reaches the user's second terminal, the sound wave signal arrives with it. Moreover, the frequency of this signal can be outside the range of human hearing; that is, the user does not perceive the existence of the signal, but the first terminal can perceive it. The first terminal then receives the preset-frequency sound wave signal emitted during video playback and can determine that the target event has occurred; it can treat the signal as the marker of the target event's occurrence and carry out the subsequent interaction. In this way, the marker of the target event is carried in the specific video signal and conveyed to the first terminal through the second terminal, which ensures that the event the user sees on the second terminal can connect more seamlessly with the images seen in the first terminal, for a better experience.
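A common way to detect a single preset frequency of this kind in the microphone stream is the Goertzel algorithm, which computes the power of one frequency bin far more cheaply than a full FFT. The sketch below assumes a 19 kHz marker and an empirically chosen power threshold; both values are placeholders, since the embodiments only require a frequency outside normal hearing.

```python
import numpy as np

def tone_power(samples: np.ndarray, sample_rate: int, freq: float) -> float:
    """Goertzel algorithm: power of one frequency bin in a block of samples."""
    k = round(len(samples) * freq / sample_rate)
    w = 2.0 * np.pi * k / len(samples)
    coeff = 2.0 * np.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:                     # second-order recursive filter
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def target_event_detected(block, rate=44100, freq=19000.0, threshold=1e6):
    # Threshold would be calibrated against microphone noise in practice.
    return tone_power(block, rate, freq) > threshold
```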
Regarding the sound wave signal: it can be emitted when the video in the second terminal plays to the target event corresponding to the specified object. Its specific frequency information can be determined by the first server and provided by the first server to the second server; while sending the video signal, if the second server finds that the target event related to the specified object is occurring, it can insert the sound wave signal at the corresponding position in the video signal. On the other hand, the first server can also inform the first terminal of the signal's frequency information in some way, so that the first terminal and the second terminal can establish a connection through this signal. It should be noted that, in a concrete implementation, a single gala may contain multiple "star comes to your home" segments corresponding to different specified objects; therefore, sound wave signals of different frequencies can be provided for different specified objects. The first server can provide the correspondence between specified objects and sound wave frequencies to the second server, which adds the sound wave signals according to that correspondence; the correspondence is also provided to the first terminal, which can determine, from the frequency of the detected sound wave signal, which specified object the current event corresponds to.
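The correspondence between frequencies and specified objects can then be a simple lookup table distributed by the first server, resolved with a tolerance to absorb microphone and playback drift; the frequency plan below is hypothetical.

```python
# Hypothetical frequency plan: the first server distributes it, the second
# server inserts the tones, and the client resolves a detected tone back to
# the specified object.
FREQ_TO_OBJECT = {18500.0: "star_A", 19000.0: "star_B", 19500.0: "gift_box"}

def resolve_object(detected_freq: float, tolerance: float = 100.0):
    best = min(FREQ_TO_OBJECT, key=lambda f: abs(f - detected_freq))
    return FREQ_TO_OBJECT[best] if abs(best - detected_freq) <= tolerance else None
```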
After the interaction formally begins, as described above, the animation based on the transfer channel can serve as the marker of the start; afterwards, the specified object material can be added to the live image, at the position designated by the user if one was specified. When the material does not appear in the first terminal's interface, the direction of change of the first terminal relative to the initial position is determined, and a prompt indicator pointing in the opposite direction is provided in the interface according to that direction. For example, the user may have moved the first terminal device by the time the specified object material is added; that is, the device's position has changed relative to the initial position, so that the added material does not appear on the first terminal's display. In this case, because a coordinate system was previously created based on the mobile terminal device's initial position (which, once determined, no longer changes), technologies such as SLAM (Simultaneous Localization and Mapping) can also be used to determine the coordinates, in that coordinate system, of the first terminal's position after the movement, that is, to determine where the first terminal has moved and in which direction it has moved relative to the initial position; the user can then be guided to move the first terminal in the opposite direction, so that the already added material can appear in the first terminal's picture. As shown in FIG. 3-6, the user can be guided to move the first terminal by means of an "arrow".
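The guidance arrow reduces to a little vector arithmetic once SLAM tracking provides the terminal's pose in the session coordinate system: take the horizontal vector from the terminal to the stored anchor, and compare it against the camera's right direction to decide which way the arrow should point. A minimal sketch:

```python
import numpy as np

def guidance_arrow(terminal_pos, terminal_forward, anchor_pos):
    """Return which on-screen arrow points the user back toward the anchor."""
    to_anchor = anchor_pos - terminal_pos
    to_anchor[1] = 0.0                              # guide in the horizontal plane
    # Camera's right direction, assuming y is up in the session coordinates.
    right = np.cross(terminal_forward, np.array([0.0, 1.0, 0.0]))
    side = np.dot(to_anchor, right)
    return "arrow_right" if side > 0 else "arrow_left"
```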
As described above, there may be multiple sets of material for the same specified object, for example dancing material, singing material, and so on. Before the specified object material is added to the live image, an option for choosing the specific material can be provided to the user for selection. While the user is choosing, a fixed video clip can also be played; for example, its content can be a box that keeps jumping, expressing that the specified object is making preparations such as changing clothes, and so on. After the user selects a specific set of material, the selected material is added to the live image for display. For example, FIG. 3-7 shows, in a concrete example, one frame from the display of the specified material, in which part of the image showing the person is a virtual image, while the background behind the person is the live image captured by the user through the first terminal.
Since the interactive material can also include voice sample material recorded by the specified object, after the specified object material is added, the user name information of the user associated with the first terminal can also be obtained, and a greeting corpus dedicated to that user, containing the user name, can be generated for the associated user; the greeting corpus is then converted into speech according to the voice sample material and played. Correspondingly, the specified object material can also include the specified object's motions, expressions, and so on while greeting the user, so that the user feels it is really the specified object itself that is greeting him or her. Regarding the user name, the corresponding user nickname can be determined from the account the user is currently logged in to, or the user's real name can be obtained from real-name authentication information provided in advance, and so on; in this way, personalized greetings for different users can be achieved. Of course, if a user's nickname or real name cannot be obtained, a relatively generic form of address can also be generated for the user according to gender, age, and so on.
In addition, after the specified object material is added to the live image, a shooting operation option can also be provided; when an operation request is received through this option, a corresponding image (photo or video, etc.) can be generated by taking a screenshot or screen recording of the image layers. In this way, a group photo with the specified object can be taken, and so on. That is, when shooting images such as photos or videos, the live image can also include the live image of a person who needs to be photographed together with the specified object. For example, during a user's interaction, since the user usually interacts at home, other people may be present; if someone else wants a photo with the specified object, he or she can enter the first terminal's live image capture area so that the first terminal can capture his or her live image, after which the user operates the shooting operation option to complete the photo. In a concrete implementation, depth-of-field information can also be used to distinguish the front-back positional relationship between the person in the live image and the specified object in the virtual image, further enhancing the sense of realism.
Since the interface also contains operation options such as buttons, when taking the screenshot or screen recording, the image layer used to display the operation options can be removed, and only the live image layer and the layer containing the video/animation are captured, improving the realism of the generated photo or video.
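The layer-based capture can be sketched as straightforward alpha compositing: the camera frame is the base layer, the specified object material is composited over it, and the UI layer is simply never drawn into the output. Using Pillow for illustration, with the assumption that the layers share one resolution:

```python
from PIL import Image

def capture_photo(camera_layer: Image.Image, material_layer: Image.Image) -> Image.Image:
    """Compose the camera and material layers; the UI layer is left out."""
    base = camera_layer.convert("RGBA")
    photo = Image.alpha_composite(base, material_layer.convert("RGBA"))
    return photo.convert("RGB")          # drop alpha so it can be saved as JPEG

# Usage sketch:
# photo = capture_photo(Image.open("camera.png"), Image.open("star.png"))
# photo.save("group_photo.jpg")
```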
In a concrete implementation, both photo and video shooting can be provided through the same operation option, with different operation styles distinguishing the user's intent. For example, clicking the operation option corresponds to the photo function, while long-pressing it corresponds to the video function, and so on. That is, if the user merely taps the operation option, a screenshot is triggered and a photo is generated; if the user keeps holding the option down, screen recording is triggered until the user releases it. In addition, in a concrete implementation, the length of each recorded video can also be limited, for example to at most 10 s per clip: after the user has held the operation option for more than 10 s, the recording ends even if the option is still held, generating a video of at most 10 s, and so on.
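The tap-versus-hold logic is a small timing state machine around the single operation option; the 10 s cap comes from the text above, while the 0.3 s tap threshold is an assumed value:

```python
import time

MAX_CLIP_SECONDS = 10.0      # recording cap stated in the text
TAP_THRESHOLD = 0.3          # presses shorter than this count as a tap (assumed)

class CaptureButton:
    """Sketch: a tap takes a photo, press-and-hold records up to 10 s of video."""
    def __init__(self):
        self.pressed_at = None

    def on_press(self):
        self.pressed_at = time.monotonic()

    def on_release(self):
        held = time.monotonic() - self.pressed_at
        if held < TAP_THRESHOLD:
            return ("photo", None)
        return ("video", min(held, MAX_CLIP_SECONDS))
```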
Furthermore, during the interaction, an operation option for sharing the captured photos or videos can also be provided. For example, this option can sit beside the aforementioned shooting option, together with a prompt such as "Tap to share the highlight moment", and so on. After the user taps it, sharing entries for multiple social network platforms can be provided, from which the user can choose a platform to share on.
In addition, during the interaction, besides playing the video/animation corresponding to the specified object or taking and sharing commemorative photos with the specified object, other interactive operation options can also be provided to the user. For example, an operation option for joining a public-welfare activity can be provided, which the user can simply tap to join. Alternatively, the number of people joining the public-welfare activity through this channel can be tied to the activity's completion progress: the server collects these headcount statistics and provides them in real time to staff such as the director at the program site, so that the stage scenery at the site can change along with the activity's progress, for example gradually turning from desert into oasis, and so on.
After the interaction ends, the specified object material is no longer displayed; of course, the specified object may appear again in the second terminal's picture. Therefore, to better present the "travel" process, the transfer channel material can be displayed again at this point. As shown in FIG. 3-8, an animation showing the person leaving through the transfer channel can be provided, and the transfer channel material itself can gradually shrink; after the object has fully left, the material used to represent the transfer channel also disappears from the picture.
After the interaction ends, the live image capture interface can be exited. At this point, in an optional implementation, a landing page for browsing and sharing the captured photos or videos can also be provided; that is, after the interaction ends, a landing page can be provided to guide the user to share the captured photos or videos. The photos or videos can be ordered on this page by shooting time; for example, as shown in FIG. 3-9, they can be displayed from left to right from most recent to oldest, and so on. While browsing, when the user taps any photo or video, a sharing component interface can be invoked, as shown in FIG. 3-10, through which the user completes the specific sharing operation.
In short, through the embodiments of the present application, specified object material can be loaded; during the interaction, a live image of the user's actual environment can be captured, and when the second terminal broadcasts the target event corresponding to the specified object, the specified object material is added to the live image for display. In this way, the user experiences the specific object arriving in his or her own space (for example, the user's home), which increases user participation in the interaction.
Embodiment 2
Embodiment 2 corresponds to Embodiment 1 and provides a multi-screen interaction method from the perspective of the server. Referring to FIG. 4, the method may specifically include:
S401: a first server saves interactive material, the interactive material including specified object material created according to a specified object;
S402: the interactive material is provided to a first terminal, the first terminal captures a live image and, when the video in a second terminal plays to a target event corresponding to the specified object, adds the specified object material to the live image.
In a concrete implementation, the specified object includes a designated person; of course, it can also include animals, commodities, props, and so on.
When providing the interactive material, video material obtained by filming the specified object can be provided; alternatively, a cartoon image created with the specified object's appearance as a prototype, together with animation material produced based on that cartoon image, can be provided.
Specifically, when the specified object is a designated person, voice sample material recorded by that person can also be provided.
In a concrete implementation, to let the first terminal perceive the occurrence of the target event in the second terminal more conveniently, the first server can also provide a sound wave signal of a preset frequency to the second server corresponding to the second terminal, to be added to the video when the video in the second terminal plays to the target event corresponding to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the preset-frequency sound wave signal.
In a concrete implementation, the server can also collect statistics on the interactions of the clients. The statistics can be provided to the second server corresponding to the second terminal, which adds them to the video played by the second terminal, so that the results are announced through the second terminal; alternatively, the statistical results can influence the scenery at the gala site, and so on.
Since Embodiment 2 corresponds to Embodiment 1, the related concrete implementations can be found in the description of Embodiment 1 above and are not repeated here.
Embodiment 3
Embodiment 3 provides a multi-screen interaction method from the perspective of the second terminal. Referring to FIG. 5, the method may specifically include:
S501: a second terminal plays a video;
S502: when the video plays to a target event related to a specified object, a sound wave signal of a preset frequency is played, so that a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
In a concrete implementation, different specified objects can correspond to sound wave signals of different frequencies.
Embodiment 4
Embodiment 4 provides a multi-screen interaction method from the perspective of the second server corresponding to the second terminal. Referring to FIG. 6, the method may specifically include:
S601: a second server receives information on a sound wave signal of a preset frequency provided by a first server;
S602: the sound wave signal of the preset frequency is inserted at the position in a video where a target event related to a specified object occurs, so that while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
In a concrete implementation, the second server can also receive, from the first server, statistics on the interactions of the first terminals, and add the statistics to the video for transmission, for playback through the second terminal.
Embodiment 5
In Embodiments 1 to 4, multi-screen interaction is realized between a first terminal and a second terminal. In practical applications, however, the user can also watch videos such as a live gala program through the first terminal itself; in that case, the user can likewise obtain the "star comes to my home" experience while watching the video through the first terminal. That is, video viewing and interaction can take place on the same terminal.
Specifically, referring to FIG. 7, Embodiment 5 provides a video interaction method, which may specifically include:
S701: a first terminal loads interactive material, the interactive material including specified object material created according to a specified object;
S702: when the video in the first terminal plays to a target event related to the specified object, the terminal jumps to an interactive interface;
S703: a live image capture result is displayed in the interactive interface, and the specified object material is added to the live image.
That is, while the user watches a video through the first terminal and the video plays to the target event related to the specified object, the terminal can jump to an interactive interface, in which the live image can first be captured and the specified object material then added to it. In this way, the user likewise experiences the specified object "traveling" from places such as the "gala site" into his or her own space.
For other concrete implementations of Embodiment 5, reference can be made to the descriptions in the preceding embodiments, which are not repeated here.
Embodiment 6
Embodiment 6 corresponds to Embodiment 5 and provides a video interaction method from the perspective of the first server. Referring to FIG. 8, the method may specifically include:
S801: a first server saves interactive material, the interactive material including specified object material created according to a specified object;
S802: the interactive material is provided to a first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the terminal jumps to an interactive interface, displays a live image capture result in the interactive interface, and adds the specified object material to the live image.
Embodiment 7
In the preceding embodiments, a first terminal such as a phone serves as the executing body providing the concrete interaction results. In practical applications, the solution can be extended to other scenarios: besides phones, wearable devices such as smart glasses can serve as the first terminal, and besides interacting with the video played in the second terminal or the first terminal, the interaction can also take place with videos played on a cinema screen, or in the course of events watched live, such as performances, shows, merchant promotions, sports matches, and the like. To this end, Embodiment 7 provides another interaction method. Referring to FIG. 9, the method may specifically include:
S901: a first terminal loads interactive material, the interactive material including specified object material created according to a specified object;
In a concrete implementation, since the application scenario in this embodiment need not be limited, an interface for loading interactive material can be provided before the concrete interactive interface. In that interface, multiple optional interactive materials can be provided, for example material related to films currently showing in major cinema chains, material related to offline shows, promotions, matches, and other events, and so on, from which the user can select and download the material needed. In addition, since some applications provide users with online ticket booking, covering not only film tickets but also tickets for various shows, matches, and the like, the interactive material can also be provided according to the user's specific booking information; for example, when the user has booked a film ticket through an online booking system and interactive material related to that film happens to exist, the user can be prompted to download it, and so on. It should be noted that, in the embodiments of the present application, the downloaded interactive material can be stored locally in a terminal such as a phone, or downloaded into a terminal such as a wearable device, for more convenient interaction while watching the film, show, and so on.
In this embodiment, the specific specified object can likewise be a designated person, commodity, prop, and so on.
S902: capturing a live image;
Usually, wearable devices carry devices such as cameras, so live images can be captured through the wearable device and the like; the image the user actually sees through a wearable device such as glasses while watching the film, show, and so on can be the live image captured by those glasses.
S903: when a target event related to the specified object is detected, the specified object material is added to the live image.
The specific target event can be the specified object appearing in the course of a specific film, show, match, promotion, and so on. The target event can be detected in several ways. For example, in one approach, the projectionist or organizer of the film, show, match, or promotion can insert sound wave information and the like at the specific event nodes, and the wearable device learns of the occurrence of the specific event by detecting such signals. Alternatively, in other implementations, the occurrence of the target event can also be learned directly by analyzing the captured live image and the like. For example, during interaction with a wearable device, the live image captured by the wearable device's camera is usually the same as, or overlaps with, the live image the user actually sees; thus, if the user sees a certain target event occur, the camera can actually also capture information about the corresponding event. In addition, the wearable device can also carry a sound collector and the like, so image analysis, voice analysis, and similar means can be used to learn of the occurrence of the specific target event, and so on.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments can refer to one another, and each embodiment focuses on its differences from the others. In particular, the system or system embodiments, being basically similar to the method embodiments, are described relatively simply, and the relevant parts can be found in the descriptions of the method embodiments. The systems and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
The multi-screen interaction method, apparatus, and electronic device provided by the present application have been introduced in detail above. Specific examples have been used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the scope of application according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (48)

  1. A multi-screen interaction method, comprising:
    a first terminal loading interactive material, the interactive material comprising specified object material created according to a specified object;
    capturing a live image;
    when the video in a second terminal plays to a target event related to the specified object, adding the specified object material to the live image.
  2. The method according to claim 1, wherein the video played in the second terminal is a live video stream.
  3. The method according to claim 1, wherein the first terminal receives a sound wave signal of a preset frequency emitted during playback of the video and determines the occurrence of the target event.
  4. The method according to claim 3, wherein the sound wave signal is emitted when the video in the second terminal plays to the target event corresponding to the specified object.
  5. The method according to claim 1, wherein adding the specified object material to the live image comprises:
    adding the specified object material onto a plane included in the live image for display.
  6. The method according to claim 5, wherein before the adding of the specified object material, the method further comprises:
    determining a placement position in the captured live image;
    and adding the specified object material to the live image comprises:
    adding the specified object material at the placement position.
  7. The method according to claim 6, wherein determining the placement position in the captured live image comprises:
    determining a plane position in the captured live image, the placement position lying within the plane position.
  8. The method according to claim 7, wherein determining the plane position in the captured live image, the placement position lying within the plane position, comprises:
    performing plane detection in the captured live image;
    providing a cursor, and determining a placeable range of the cursor according to the detected plane;
    taking the position where the cursor is placed as the placement position.
  9. The method according to claim 8, wherein taking the position where the cursor is placed as the placement position comprises:
    establishing a coordinate system with the initial position of the first terminal as the origin;
    determining the cursor coordinates, in the coordinate system, of the position where the cursor is placed;
    using the cursor coordinates as the placement position.
  10. The method according to claim 9, further comprising:
    after the specified object material is added at the placement position, when the material does not appear in the interface of the first terminal, determining the direction of change of the first terminal relative to the initial position;
    providing, according to the direction of change, a prompt indicator pointing in the opposite direction in the interface of the first terminal.
  11. The method according to claim 1, wherein the interactive material further comprises material used to represent a transfer channel, and after the step of capturing the live image, the method further comprises:
    adding the material used to represent the transfer channel to the live image.
  12. The method according to claim 11, wherein adding the specified object material to the live image specifically comprises:
    displaying, based on the transfer channel material, the process of the specified object entering the live image through the transfer channel.
  13. The method according to claim 1, wherein the interactive material further comprises voice sample material recorded by the specified object;
    the method further comprising:
    obtaining user name information of the user associated with the first terminal;
    generating, for the associated user, a greeting corpus comprising the user name;
    converting the greeting corpus into speech according to the voice sample material and playing it.
  14. The method according to claim 1, wherein the specified object material comprises multiple sets of material corresponding to the same specified object, the method further comprising:
    providing an operation option for selecting among the specified object material;
    wherein adding the specified object material to the live image comprises:
    adding the selected specified object material to the live image.
  15. The method according to claim 1, further comprising:
    providing a shooting operation option during the display of the specified object material in the live image;
    receiving an operation request through the shooting operation option, and generating a corresponding image according to the image layers, the image layers comprising the live image and the image of the specified object material.
  16. The method according to claim 15, wherein generating the captured image according to the image layers comprises:
    taking a screenshot or screen recording of the image layers, removing the image layer used to display operation options, and generating the captured image.
  17. The method according to claim 15, wherein the live image further comprises an image of a person being photographed together with the specified object.
  18. The method according to claim 15, further comprising:
    providing an operation option for sharing captured images.
  19. The method according to claim 15, further comprising:
    providing a landing page for browsing and sharing captured images.
  20. The method according to any one of claims 1 to 19, wherein the specified object comprises information of a designated person.
  21. The method according to any one of claims 1 to 19, wherein the specified object comprises information of a designated commodity.
  22. The method according to claim 21, wherein after adding the specified object material to the live image, the method further comprises:
    providing an operation option for rush-purchasing the data object associated with the designated commodity;
    when a rush-purchase operation is received through the operation option, submitting it to a server, the server determining the rush-purchase result.
  23. The method according to any one of claims 1 to 19, wherein the specified object comprises prop information related to an offline game.
  24. The method according to claim 23, wherein after adding the specified object material to the live image, the method further comprises:
    when operation information on the target prop is received, submitting the operation information to a server, the server determining the reward information obtained by the operation and returning it;
    providing the obtained reward information.
  25. The method according to any one of claims 1 to 19, wherein the specified object material comprises video material obtained by filming the specified object.
  26. The method according to any one of claims 1 to 19, wherein the specified object material comprises: a cartoon image created with the appearance of the specified object as a prototype, and animation material produced based on the cartoon image.
  27. A multi-screen interaction method, comprising:
    a first server saving interactive material, the interactive material comprising specified object material created according to a specified object;
    providing the interactive material to a first terminal, the first terminal capturing a live image and, when the video in a second terminal plays to a target event corresponding to the specified object, adding the specified object material to the live image.
  28. The method according to claim 27, wherein the specified object comprises a designated person.
  29. The method according to claim 27, wherein saving the interactive material comprises:
    saving video material obtained by filming the specified object.
  30. The method according to claim 27, wherein saving the interactive material comprises:
    saving a cartoon image created with the appearance of the specified object as a prototype, and animation material produced based on the cartoon image.
  31. The method according to claim 27, wherein the specified object comprises a designated person, and saving the interactive material further comprises:
    saving voice sample material recorded by the designated person.
  32. The method according to claim 27, further comprising, beforehand:
    providing a sound wave signal of a preset frequency to a second server corresponding to the second terminal, to be added to the video when the video in the second terminal plays to the target event corresponding to the specified object, so that the first terminal learns of the occurrence of the target event by detecting the sound wave signal of the preset frequency.
  33. The method according to claim 27, further comprising:
    collecting statistics on the interactions of the first terminals;
    providing the statistics to the second server corresponding to the second terminal, the second server adding the statistics to the video played by the second terminal.
  34. A multi-screen interaction method, comprising:
    a second terminal playing a video;
    when the video plays to a target event related to a specified object, playing a sound wave signal of a preset frequency, so that a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
  35. The method according to claim 34, wherein different specified objects correspond to sound wave signals of different frequencies.
  36. A multi-screen interaction method, comprising:
    a second server receiving information on a sound wave signal of a preset frequency provided by a first server;
    inserting the sound wave signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
  37. The method according to claim 36, further comprising:
    receiving, from the first server, statistics on the interactions of the first terminals;
    adding the statistics to the video for transmission, for playback through the second terminal.
  38. A video interaction method, comprising:
    a first terminal loading interactive material, the interactive material comprising specified object material created according to a specified object;
    when the video in the first terminal plays to a target event related to the specified object, jumping to an interactive interface;
    displaying a live image capture result in the interactive interface, and adding the specified object material to the live image.
  39. A video interaction method, comprising:
    a first server saving interactive material, the interactive material comprising specified object material created according to a specified object;
    providing the interactive material to a first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays a live image capture result in the interactive interface, and adds the specified object material to the live image.
  40. An interaction method, comprising:
    loading interactive material, the interactive material comprising specified object material created according to a specified object;
    capturing a live image;
    when a target event related to the specified object is detected, adding the specified object material to the live image.
  41. A multi-screen interaction apparatus, applied to a first terminal, comprising:
    a first material loading unit, configured to load interactive material, the interactive material comprising specified object material created according to a specified object;
    a first live image acquisition unit, configured to capture a live image;
    a first material adding unit, configured to add the specified object material to the live image when the video in a second terminal plays to a target event related to the specified object.
  42. A multi-screen interaction apparatus, applied to a first server, comprising:
    a first interactive material saving unit, configured to save interactive material, the interactive material comprising specified object material created according to a specified object;
    a first interactive material providing unit, configured to provide the interactive material to a first terminal, the first terminal capturing a live image and, when the video in a second terminal plays to a target event corresponding to the specified object, adding the specified object material to the live image.
  43. A multi-screen interaction apparatus, applied to a second terminal, comprising:
    a video playing unit, configured to play a video;
    a sound wave signal playing unit, configured to play a sound wave signal of a preset frequency when the video plays to a target event related to a specified object, so that a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
  44. A multi-screen interaction apparatus, applied to a second server, comprising:
    a sound wave signal information receiving unit, configured to receive information on a sound wave signal of a preset frequency provided by a first server;
    a sound wave signal information insertion unit, configured to insert the sound wave signal of the preset frequency at the position in a video where a target event related to a specified object occurs, so that while the video is played through a second terminal, a first terminal learns of the occurrence of the target event by detecting the sound wave signal and adds specified object material to a captured live image.
  45. A video interaction apparatus, applied to a first terminal, comprising:
    a loading unit, configured to load interactive material, the interactive material comprising specified object material created according to a specified object;
    an interface jumping unit, configured to jump to an interactive interface when the video in the first terminal plays to a target event related to the specified object;
    a material adding unit, configured to display a live image capture result in the interactive interface and add the specified object material to the live image.
  46. A video interaction apparatus, applied to a first server, comprising:
    a second material saving unit, configured to save interactive material, the interactive material comprising specified object material created according to a specified object;
    a second material providing unit, configured to provide the interactive material to a first terminal, so that when the video in the first terminal plays to a target event related to the specified object, the first terminal jumps to an interactive interface, displays a live image capture result in the interactive interface, and adds the specified object material to the live image.
  47. An interaction apparatus, comprising:
    a second material loading unit, configured to load interactive material, the interactive material comprising specified object material created according to a specified object;
    a second live image acquisition unit, configured to capture a live image;
    a second material adding unit, configured to add the specified object material to the live image when a target event related to the specified object is detected.
  48. An electronic device, comprising:
    one or more processors; and
    a memory associated with the one or more processors, the memory being configured to store program instructions which, when read and executed by the one or more processors, perform the following operations:
    loading interactive material, the interactive material comprising specified object material created according to a specified object;
    capturing a live image;
    when the video in a second terminal plays to a target event related to the specified object, adding the specified object material to the live image.
PCT/CN2018/109281 2017-10-19 2018-10-08 多屏互动方法、装置及电子设备 WO2019076202A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710979621.2 2017-10-19
CN201710979621.2A CN109688347A (zh) 2017-10-19 2017-10-19 多屏互动方法、装置及电子设备

Publications (1)

Publication Number Publication Date
WO2019076202A1 true WO2019076202A1 (zh) 2019-04-25

Family

ID=66173994

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/109281 WO2019076202A1 (zh) 2017-10-19 2018-10-08 多屏互动方法、装置及电子设备

Country Status (3)

Country Link
CN (1) CN109688347A (zh)
TW (1) TW201917556A (zh)
WO (1) WO2019076202A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110062290A (zh) * 2019-04-30 2019-07-26 北京儒博科技有限公司 视频互动内容生成方法、装置、设备和介质
CN113157178B (zh) * 2021-02-26 2022-03-15 北京五八信息技术有限公司 一种信息处理方法及装置
CN113556531B (zh) * 2021-07-13 2024-06-18 Oppo广东移动通信有限公司 图像内容分享方法、装置以及头戴显示设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160260319A1 (en) * 2015-03-04 2016-09-08 Aquimo, Llc Method and system for a control device to connect to and control a display device
CN106028169A (zh) * 2016-07-04 2016-10-12 无锡天脉聚源传媒科技有限公司 一种抽奖互动的方法及装置
CN106730815A (zh) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 一种易实现的体感互动方法及系统
CN106792246A (zh) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 一种融合式虚拟场景互动的方法及系统
CN106899870A (zh) * 2017-02-23 2017-06-27 任刚 一种基于智能电视和移动终端的vr内容交互系统及方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103776458B (zh) * 2012-10-23 2017-04-12 华为终端有限公司 导航信息处理方法和车载设备
US9129430B2 (en) * 2013-06-25 2015-09-08 Microsoft Technology Licensing, Llc Indicating out-of-view augmented reality images
CN105810131A (zh) * 2014-12-31 2016-07-27 吴建伟 虚拟接待员装置
CN104794834A (zh) * 2015-04-04 2015-07-22 金琥 一种智能语音门铃系统及其实现方法
CN105392022B (zh) * 2015-11-04 2019-01-18 北京符景数据服务有限公司 基于音频水印的信息交互方法与装置
CN107016704A (zh) * 2017-03-09 2017-08-04 杭州电子科技大学 一种基于增强现实的虚拟现实实现方法
CN107172411B (zh) * 2017-04-18 2019-07-23 浙江传媒学院 一种基于家庭视频业务环境下的虚拟现实业务场景呈现方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160260319A1 (en) * 2015-03-04 2016-09-08 Aquimo, Llc Method and system for a control device to connect to and control a display device
CN106028169A (zh) * 2016-07-04 2016-10-12 无锡天脉聚源传媒科技有限公司 一种抽奖互动的方法及装置
CN106730815A (zh) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 一种易实现的体感互动方法及系统
CN106792246A (zh) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 一种融合式虚拟场景互动的方法及系统
CN106899870A (zh) * 2017-02-23 2017-06-27 任刚 一种基于智能电视和移动终端的vr内容交互系统及方法

Also Published As

Publication number Publication date
CN109688347A (zh) 2019-04-26
TW201917556A (zh) 2019-05-01

Similar Documents

Publication Publication Date Title
US11178471B2 (en) Video interaction method, terminal, and storage medium
CN109167950B (zh) 视频录制方法、视频播放方法、装置、设备及存储介质
WO2019128787A1 (zh) 网络视频直播方法、装置及电子设备
US20210306700A1 (en) Method for displaying interaction information, and terminal
CN110198484B (zh) 消息推送方法、装置及设备
CN112717423B (zh) 游戏对局的直播方法、装置、设备及存储介质
WO2020010819A1 (zh) 基于直播间的数据交互方法、装置、终端和存储介质
CN109803154B (zh) 棋类比赛的直播方法、设备及存储介质
CN114610191B (zh) 界面信息提供方法、装置及电子设备
CN109151565B (zh) 播放语音的方法、装置、电子设备及存储介质
CN114727146B (zh) 信息处理方法、装置、设备及存储介质
CN111901658A (zh) 评论信息显示方法、装置、终端及存储介质
CN114245221B (zh) 基于直播间的互动方法、装置、电子设备及存储介质
CN111327916B (zh) 基于地理对象的直播管理方法、装置、设备及存储介质
WO2023000652A1 (zh) 直播互动及虚拟资源配置方法
CN114466209A (zh) 直播互动方法、装置、电子设备、存储介质和程序产品
WO2019076202A1 (zh) 多屏互动方法、装置及电子设备
CN109729367B (zh) 提供直播媒体内容信息的方法、装置及电子设备
CN109771955B (zh) 邀请请求处理方法、装置、终端及存储介质
US20220078221A1 (en) Interactive method and apparatus for multimedia service
CN109788364B (zh) 视频通话互动方法、装置及电子设备
CN109754275B (zh) 数据对象信息提供方法、装置及电子设备
CN108449605A (zh) 信息同步播放方法、装置、设备、系统及存储介质
CN114845129A (zh) 虚拟空间中的互动方法、装置、终端以及存储介质
CN109788327B (zh) 多屏互动方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18868294

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18868294

Country of ref document: EP

Kind code of ref document: A1