WO2022213727A1 - Live broadcast interaction method and apparatus, electronic device, and storage medium

Info

Publication number: WO2022213727A1
Application number: PCT/CN2022/076542
Authority: WIPO (PCT)
Other languages: English (en), French (fr)
Inventors: 杨沐, 南天骄
Applicant: 北京字跳网络技术有限公司 (Beijing Zitiao Network Technology Co., Ltd.)


Classifications

    • H04N 21/2187 — Selective content distribution; servers for content distribution; source of audio or video content: live feed
    • H04N 21/4307 — Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N 21/431 — Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/4725 — End-user interface for requesting additional data associated with the content, using interactive regions of the image, e.g. hot spots
    • H04N 21/482 — End-user interface for program selection

Definitions

  • the present disclosure relates to the technical field of Internet live broadcast, and in particular, to a live broadcast interaction method, apparatus, electronic device and storage medium.
  • embodiments of the present disclosure provide a live broadcast interaction method, apparatus, electronic device, and storage medium.
  • an embodiment of the present disclosure provides a live broadcast interaction method, applied to a terminal that has entered a virtual object live broadcast room, including:
  • determining an interaction area on a live broadcast layer, wherein the live broadcast layer is superimposed on the live broadcast interface of the virtual object, and the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object; the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on the target user's terminal, and the interaction area and the live broadcast interface maintain frame synchronization;
  • in response to a touch operation of the target user in the interaction area, determining the interaction information corresponding to the touch operation; and
  • displaying the interaction information at a preset position on the live broadcast layer.
  • an embodiment of the present disclosure also provides a live interactive device, which is configured for a terminal entering a virtual object live broadcast room, including:
  • the interaction area determination module is used to determine the interaction area on the live broadcast layer, wherein the live broadcast layer is superimposed on the live broadcast interface of the virtual object, and the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object; the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface are jointly used as the content displayed on the target user's terminal, and the interaction area and the live broadcast interface maintain frame synchronization;
  • an interaction information determination module configured to determine the interaction information corresponding to the touch operation in response to the touch operation of the target user in the interaction area
  • An interaction information display module configured to display the interaction information at a preset position on the live broadcast layer.
  • embodiments of the present disclosure further provide an electronic device, including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the electronic device is caused to implement any of the live broadcast interaction methods provided in the embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a computing device, the computing device is caused to implement any of the live broadcast interaction methods provided in the embodiments of the present disclosure.
  • the technical solutions provided by the embodiments of the present disclosure have at least the following advantages: in the embodiments of the present disclosure, for each terminal that enters the virtual object live broadcast room, the live broadcast interface of the virtual object displayed by the terminal is superimposed with a live broadcast layer, and there is an interaction area on the live broadcast layer for receiving the touch operation of the target user.
  • the terminal determines the interaction information according to the touch operation of the target user, and then displays the interaction information at the preset position on the live broadcast layer.
  • the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface are collectively regarded as the content displayed on the target user's terminal.
  • thus, different users watching the same live broadcast can view the same display content on the live broadcast interface, while each sees personalized display content on the live broadcast layer. This solves the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect, realizes such an effect, and enables different users to see differentiated display content during the live broadcast.
  • FIG. 1 is a flowchart of a live broadcast interaction method provided by an embodiment of the present disclosure.
  • FIG. 2 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of a terminal display interface related to body interaction provided by an embodiment of the present disclosure.
  • FIG. 4 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a terminal display interface related to item making provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of another terminal display interface related to item making provided by an embodiment of the present disclosure.
  • FIG. 7 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a terminal display interface related to item finding provided by an embodiment of the present disclosure.
  • FIG. 9 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a terminal display interface related to live image generation provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of another terminal display interface related to live image generation provided by an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of a live broadcast interaction apparatus provided by an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of a live interactive method provided by an embodiment of the present disclosure.
  • the method can be applied to a live broadcast scene of a virtual object.
  • the virtual object refers to a virtual host distinct from a real person, whose behavior is controlled by the live broadcast backend device.
  • virtual objects can be 2D or 3D animated characters, animals, or objects, and the generation and driving of their images and behaviors are controlled by backend devices; the actions of a virtual object can also be driven by a real person, animal, or object equipped with a sensing device, with the control device collecting the sensing data from the sensing device and then driving the generation of the corresponding virtual object action.
  • the collection, driving and live broadcast process can be carried out in real time.
  • the live broadcast interaction method provided by the embodiments of the present disclosure may be executed by a live broadcast interaction apparatus, which may be implemented by software and/or hardware and may be integrated on an electronic device with computing capabilities, such as a terminal like a mobile phone, tablet computer, or desktop computer.
  • the live interaction method provided by the embodiment of the present disclosure may include:
  • S101: Determine the interaction area on the live broadcast layer, wherein the live broadcast layer is superimposed on the live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously in each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer together with the live broadcast content displayed on the live broadcast interface is used as the content displayed on the target user's terminal, and the interaction area and the live broadcast interface maintain frame synchronization.
  • the interaction area on the live broadcast layer (or live broadcast floating layer) is used to receive the touch operation of the target user, so that the terminal determines the interaction information according to the touch operation of the target user and then displays it at a preset position on the live broadcast layer.
  • the live broadcast interface may also be referred to as the live broadcast screen.
  • the live layer can be invisible to the user's eyes, for example, the live layer can be set to be completely transparent.
  • the interactive area on the live layer can prompt the user according to a preset prompting strategy, so that the user can identify the effective position of the interactive area.
  • after the target user's terminal enters the live broadcast room of the virtual object, it can display location prompt information about the interactive area on the live broadcast interface, or display a dynamic effect marking the location of the interactive area for a preset duration; after the preset duration is reached, the location prompt information or the dynamic effect disappears.
  • the interactive area and the live interface maintain frame synchronization, and the specific frames of the live video stream on which the interactive area appears in any terminal depend on the live scenes of different virtual objects.
  • the appearance and disappearance of the interactive area is a dynamic process. For example, as the live video stream plays, an interactive area exists on the live broadcast layer superimposed on the live broadcast interface from the Nth frame to the N+Mth frame: the interactive area begins to appear at the Nth frame of the live broadcast interface and begins to disappear at the N+Mth frame, and there is no interactive area on the N-1th frame or the N+M+1th frame of the live broadcast interface.
  • N and M are both integers, and specific values may be preset according to different live broadcast scenarios, which are not specifically limited in the embodiments of the present disclosure.
  • the specific size of the interactive area and the specific interactive methods supported can also be determined according to different live broadcast scenarios.
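  • The frame-synchronized appearance of the interaction area can be pictured with a short sketch. The following TypeScript fragment is a minimal illustration only, not taken from the disclosure; the `InteractionArea` shape and the constructor parameters are hypothetical stand-ins for values that would be preset per live scene.

```typescript
// Hypothetical sketch: show the interaction area only while the live video
// stream is between frame N and frame N + M (inclusive), keeping the overlay
// layer in frame synchronization with the live interface.
interface InteractionArea {
  x: number;      // left offset on the live layer, in pixels
  y: number;      // top offset on the live layer, in pixels
  width: number;
  height: number;
}

class LiveLayer {
  private area: InteractionArea | null = null;

  constructor(
    private readonly startFrame: number,     // N: frame where the area appears
    private readonly durationFrames: number, // M: frames the area stays visible
    private readonly areaForScene: InteractionArea,
  ) {}

  // Called once per decoded frame of the live video stream.
  onFrame(frameIndex: number): void {
    const visible =
      frameIndex >= this.startFrame &&
      frameIndex <= this.startFrame + this.durationFrames;
    this.area = visible ? this.areaForScene : null;
  }

  // Hit-test a touch against the current interaction area, if any.
  hitTest(touchX: number, touchY: number): boolean {
    const a = this.area;
    if (a === null) return false; // no interaction area on this frame
    return (
      touchX >= a.x && touchX <= a.x + a.width &&
      touchY >= a.y && touchY <= a.y + a.height
    );
  }
}
```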
  • determining the interactive area on the live broadcast layer includes: according to the live broadcast content displayed on the live broadcast interface, finding the target interaction type that matches the live broadcast content among multiple interaction types; and determining the interaction area on the live broadcast layer according to the target interaction type; wherein the multiple interaction types include body interaction, item making, item finding, and live image generation, as illustrated by the sketch below.
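  • As a rough illustration of the matching step, the sketch below keys the four interaction types to tags carried by the live content. The tag names and the `LiveContent` shape are invented for the example and are not part of the disclosure.

```typescript
// Hypothetical sketch: pick the target interaction type from tags describing
// the current live content (background, actions, expressions, language, ...).
type InteractionType =
  | "bodyInteraction"
  | "itemMaking"
  | "itemFinding"
  | "liveImageGeneration";

interface LiveContent {
  tags: string[]; // e.g. ["fingertip-extended"], ["crafting-task"], ...
}

const tagToType: Record<string, InteractionType> = {
  "fingertip-extended": "bodyInteraction",
  "crafting-task": "itemMaking",
  "search-task": "itemFinding",
  "photo-request": "liveImageGeneration",
};

function findTargetInteractionType(content: LiveContent): InteractionType | null {
  for (const tag of content.tags) {
    const type = tagToType[tag];
    if (type !== undefined) return type;
  }
  return null; // no interactive scene in the current content
}
```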
  • the live broadcast content may include, but is not limited to, the live broadcast background, as well as the actions, expressions, clothing, and language of the virtual object.
  • during the live broadcast, the current live broadcast content displayed on the live broadcast interface can be determined, and the matching target interaction type, that is, the current specific live scene, can be identified; the interaction area on the live broadcast layer is then determined accordingly.
  • for body interaction, the interaction area may be a preset area determined based on the position of the target limb part of the virtual object.
  • for example, the interaction area may be a circular area of preset size determined based on the fingertip of the virtual object, or a circular area of preset size determined based on the head of the virtual object.
  • Item making means that users can make specific items, such as clothes and backpacks of specific colors or styles, by performing touch operations on the terminal screen according to the production tasks given by virtual objects during the live broadcast.
  • the interaction area may be a preset area that supports the user's item making operation, for example, the interaction area may be an area corresponding to the upper half of the terminal screen on the live broadcast layer.
  • Item search means that users can search for specific items through touch operations on the terminal screen according to the search task given by the virtual object during the live broadcast.
  • the interaction area may be a response area determined based on the item to be found, for example, the interaction area may be an area covering a preset size of the item to be found.
  • Live image generation, or taking pictures of virtual objects, means that during the live broadcast, users can perform touch operations on the terminal screen, according to the content displayed on the live broadcast interface and the image generation requirements (or photographing requirements) of the virtual object, to generate live images (or photos of the virtual object).
  • the interaction area may be a response area capable of triggering an image generation operation (or a photographing operation).
  • the interaction area may be an area on the live layer corresponding to the entire terminal screen.
  • in different interaction scenarios, the interaction information corresponding to the user's touch operation is different.
  • the interaction information corresponding to the current target user's touch operation may be determined according to the correspondence between the user's touch operation and the interaction information in different interaction scenarios, as feedback for the target user's touch operation.
  • the interaction information can be downloaded to the terminal synchronously with the area information of the interaction area, or after receiving a preset user operation, the corresponding interaction information can be matched on the server and loaded into the terminal.
  • the display position of the interactive information on the live broadcast layer can be determined according to different live broadcast scenarios. For example, for body interaction, the preset position for displaying interaction information can be determined according to the position and action of the target limb part of the virtual object; for item making or item finding, the preset position can be determined based on the user's touch position in the interaction area; for live image generation, the preset position can be any position on the live broadcast layer corresponding to the terminal screen.
  • the live broadcast interaction method provided by the embodiment of the present disclosure may further include: displaying, on the live broadcast interface, shared information corresponding to the target interaction type, wherein the shared information is obtained based on the touch results corresponding to the touch operations of one, more, or all users in the virtual object live broadcast room.
  • specifically, each terminal entering the virtual object live broadcast room can determine the user's touch result according to the user's touch operation in the interaction area, and then send the touch result to the server along with the terminal ID or user ID; the server counts and analyzes the touch results sent by each terminal, generates the shared information, and sends it to each terminal, so that the same shared information is displayed in every terminal.
  • the touch result may include, but is not limited to: whether the user's touch pressure reaches a threshold, whether the user's touch position is within the standard touch position range, whether the user has completed the required interactive operation, whether the user has completed the item to be made, the evaluation result of the item made by the user, whether the user successfully found the item to be found, whether the user triggered the generation of the required live image, or the evaluation result of the live image triggered by the user, etc.
  • the shared information may include, but is not limited to, the server's ranking information for each user based on the received touch result, the total number of users entering the live broadcast room obtained by statistics, the total number of manufactured items obtained by statistics, and the like.
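  • The server-side aggregation described above could be sketched as follows. This is a minimal, hypothetical illustration: the `TouchResult` and `SharedInfo` shapes are invented names, and a real implementation would also handle network transport and incremental updates.

```typescript
// Hypothetical sketch: the server tallies touch results reported per user ID
// and produces shared information that is then pushed to every terminal.
interface TouchResult {
  userId: string;
  completed: boolean; // e.g. whether the required interactive operation was done
  score?: number;     // e.g. evaluation of a made item, if any
}

interface SharedInfo {
  totalUsers: number;
  totalCompleted: number;
  ranking: string[]; // user IDs ordered by score, highest first
}

function buildSharedInfo(results: TouchResult[]): SharedInfo {
  const byUser = new Map<string, TouchResult>();
  for (const r of results) byUser.set(r.userId, r); // keep latest result per user

  const all = [...byUser.values()];
  const ranking = all
    .filter((r) => r.score !== undefined)
    .sort((a, b) => (b.score ?? 0) - (a.score ?? 0))
    .map((r) => r.userId);

  return {
    totalUsers: byUser.size,
    totalCompleted: all.filter((r) => r.completed).length,
    ranking,
  };
}
```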
  • the live broadcast interface of the virtual object displayed by the terminal is superimposed with a live broadcast layer, and there is an interactive area on the live broadcast layer for receiving the touch operation of the target user.
  • the terminal determines the interactive information according to the touch operation of the target user, and then displays the interactive information at a preset position on the live broadcast layer.
  • the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface are displayed on the terminal of the target user together.
  • in this way, different users can watch the same display content on the live broadcast interface while seeing personalized display content on the live broadcast layer, thus solving the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect, realizing such an effect, and enabling different users to see differentiated display content during the live broadcast.
  • FIG. 2 is a flowchart of another live interaction method provided by an embodiment of the present disclosure, which is further optimized and expanded based on the foregoing technical solution, and may be combined with the foregoing optional implementation manners.
  • FIG. 2 specifically takes the target interaction type being body interaction as an example to illustrate the embodiment of the present disclosure, but this should not be construed as a specific limitation on the embodiment of the present disclosure.
  • the live interaction method provided by the embodiment of the present disclosure may include:
  • S202 Determine the interaction area on the live broadcast layer according to the action of the target limb part of the virtual object on the live broadcast interface and the position of the target limb part.
  • the interaction area is used for receiving the touch operation generated by the target user on the target limb part of the virtual object.
  • for example, the action of the target limb part of the virtual object may be extending a fingertip, indicating that the virtual object wishes to touch fingertips with the target user (fingertip interaction); in this case, the interaction area can be determined based on the hand movement and hand position of the virtual object, for example, a circular area of preset size centered on the fingertip of the virtual object.
  • alternatively, the action of the target limb part may be extending the palm forward, indicating that the virtual object wishes to high-five the target user; the interaction area can then be a rectangular area of preset size determined based on the palm position of the virtual object.
  • alternatively, the target limb part may be the head of the virtual object, and the action may be shaking the head, indicating that the virtual object wants the target user to pat or stroke its head; the interaction area can then be determined based on the head movement and head position of the virtual object, for example, a circular area of preset size determined based on the head-shaking direction and head position of the virtual object.
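  • A circular interaction area of this kind reduces to a simple distance check. The sketch below is illustrative only; the `Point` shape and parameter names are invented for the example.

```typescript
// Hypothetical sketch: a circular interaction area of preset radius centred on
// the target limb part (fingertip or head) of the virtual object.
interface Point { x: number; y: number; }

function inCircularInteractionArea(
  touch: Point,
  limbPosition: Point, // e.g. fingertip or head position on the live interface
  radius: number,      // preset area size for the current scene
): boolean {
  const dx = touch.x - limbPosition.x;
  const dy = touch.y - limbPosition.y;
  return dx * dx + dy * dy <= radius * radius;
}
```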
  • the live interaction method provided by the embodiment of the present disclosure further includes: determining prompt information that matches the action of the target limb part of the virtual object, and displaying the prompt information on the live broadcast interface, wherein the prompt information It is used to prompt the target user to perform a touch operation in the interactive area that matches the action of the target limb part of the virtual object.
  • the prompt information can be implemented by at least one of static image effects, dynamic image effects and text.
  • the prompt information can disappear after being displayed on the live broadcast interface for a specific duration, or can disappear after the terminal detects the user's touch operation.
  • FIG. 3 is a schematic diagram of a terminal display interface related to body interaction provided by an embodiment of the present disclosure, which is used to illustrate the embodiment of the present disclosure, but should not be construed as a specific limitation of the embodiment of the present disclosure.
  • in FIG. 3, prompt information is displayed based on the position of the virtual object's fingertip on the live broadcast interface; the prompt information can include ripples formed by dynamic circles and a dynamic virtual hand, which prompt the target user to touch fingertips with the virtual object.
  • taking as an example interaction information in the form of a dynamic heart-shaped image determined according to the target user's touch operation, FIG. 3 also exemplifies the display effect of the interaction information, whose display position is determined based on the position of the virtual object's fingertip.
  • moreover, the interaction information corresponding to the touch operation can be dynamically determined according to the target user's touch operation information, such as touch pressure and touch position, thereby making the interaction more engaging.
  • optionally, the interaction information includes a feedback dynamic effect that matches the action of the target limb part of the virtual object; for example, the interaction information may be a heart effect formed by a dynamic heart-shaped image, or a dynamically formed circular ripple effect.
  • the preset position for displaying the interaction information can still be determined according to the position of the target limb of the virtual object and the action of the target limb.
  • the display position of the interactive information may be determined based on the fingertip position of the virtual object.
  • optionally, the server can collect the user IDs (or user names) of users participating in the body interaction with the virtual object on each terminal, together with each user's touch operation time, sort the users by touch operation time from long to short, and send the sorted result to each terminal for display. The server can also send different rewards to users according to their ranking (the higher the ranking, the more rewards), so as to encourage users to participate in the body interaction.
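  • A possible shape of that ranking-and-reward step is sketched below. The `Participation` fields and the reward table are hypothetical; the sort direction follows the "from long to short" ordering stated above.

```typescript
// Hypothetical sketch: rank users by their reported touch operation time
// ("from long to short") and assign rank-based rewards.
interface Participation {
  userId: string;
  touchTimeMs: number; // touch operation time reported by the terminal
}

function rankAndReward(
  participants: Participation[],
  rewardsByRank: number[], // e.g. [100, 50, 20]; higher ranks get more
): Map<string, number> {
  const ranked = [...participants].sort((a, b) => b.touchTimeMs - a.touchTimeMs);
  const rewards = new Map<string, number>();
  ranked.forEach((p, rank) => {
    rewards.set(p.userId, rewardsByRank[rank] ?? 0); // 0 once rewards run out
  });
  return rewards;
}
```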
  • through the body interaction between the virtual object and the target user during the live broadcast, the embodiment of the present disclosure solves the problem that the existing live broadcast solution cannot realize a personalized live broadcast interaction effect, realizes such an effect, and enables users to see differentiated display content during the live broadcast.
  • FIG. 4 is a flowchart of another live interaction method provided by an embodiment of the present disclosure, which is further optimized and expanded based on the foregoing technical solution, and may be combined with the foregoing optional implementation manners.
  • FIG. 4 specifically takes the target interaction type being item making as an example to illustrate the embodiment of the present disclosure, but this should not be construed as a specific limitation on the embodiment of the present disclosure.
  • the interaction information corresponding to the user's touch operation in the interaction area includes the color selected by the user, and the preset position on the live layer for displaying the interaction information includes the filling position of the color.
  • the live interaction method provided by the embodiment of the present disclosure may include:
  • the preset drawing board area can also display the tools required for making items, such as color palettes, brushes, and erasers, which are not specifically limited in the embodiment of the present disclosure; the tools to be displayed in the preset drawing board area can be determined according to the item being made.
  • the preset artboard area is used to receive the touch operation of the target user, so that the target user can complete the item making.
  • the specific position of the preset artboard area on the live broadcast layer and the size and shape of the preset artboard area are not specifically limited in the embodiments of the present disclosure.
  • the item to be made can be any item, such as clothing, luggage, portrait, etc., which is not specifically limited in the embodiment of the present disclosure.
  • the item to be made can be determined according to the item making task proposed by the virtual object.
  • optionally, a color selection area is displayed in the preset drawing board area; after deciding on a specific color, the target user can touch the corresponding color position, and the terminal determines the selected color according to the target user's touch operation.
  • that is, the target user's first touch operation may be used to determine the selected color, and the second touch operation may be used to determine the position where the target user wishes to fill that color.
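  • This two-step painting flow can be sketched roughly as follows; the `DrawingBoard` class, the palette representation, and the cell-keyed fill map are all invented for the example and are not part of the disclosure.

```typescript
// Hypothetical sketch of the two-step painting flow: the first touch picks a
// colour from the palette, the second touch fills that colour at a position
// on the item being made.
type Color = string; // e.g. "#ff0000"

class DrawingBoard {
  private selectedColor: Color | null = null;
  // Filled positions, keyed by "x,y" cell of the item to be made.
  readonly fills = new Map<string, Color>();

  constructor(private readonly palette: Map<string, Color>) {}

  // First touch: if it lands on a palette swatch, remember the colour.
  touchPalette(swatchId: string): void {
    const color = this.palette.get(swatchId);
    if (color !== undefined) this.selectedColor = color;
  }

  // Second touch: fill the selected colour at the touched cell of the item.
  touchItem(cellX: number, cellY: number): void {
    if (this.selectedColor === null) return; // nothing selected yet
    this.fills.set(`${cellX},${cellY}`, this.selectedColor);
  }
}
```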
  • the preset drawing board area supports operations such as zooming and moving of the item to be made, providing greater flexibility for user operations.
  • after completing the color filling of some or all areas of the item to be made, the target user can submit the made item, indicating that it is finished.
  • after the made item is submitted, the terminal can also score it based on preset scoring rules. For example, the smaller the difference between the user's filling colors and the item's preset standard filling colors, the higher the score, and vice versa; the score can be shown on the live broadcast layer.
  • the scoring operation can also be performed by the server: the server scores the made items sent by the terminals based on the preset scoring rules, and then sends each score back to the corresponding terminal based on the user ID or terminal ID, so that it can be shown on the live broadcast layer.
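  • One plausible reading of the colour-difference scoring rule is sketched below, assuming colours are compared as RGB distances and averaged over the standard fill positions; the function names and the treatment of missing fills are assumptions for the example.

```typescript
// Hypothetical sketch: score a made item by comparing each filled colour with
// the preset standard colour; smaller total difference means a higher score.
type Rgb = [number, number, number];

function colorDistance(a: Rgb, b: Rgb): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

function scoreMadeItem(
  userFills: Map<string, Rgb>,     // position -> colour filled by the user
  standardFills: Map<string, Rgb>, // position -> preset standard colour
  maxScore = 100,
): number {
  const maxDiff = colorDistance([0, 0, 0], [255, 255, 255]); // largest RGB distance
  let totalDiff = 0;
  let count = 0;
  for (const [pos, standard] of standardFills) {
    const filled = userFills.get(pos);
    // A missing fill counts as the maximum possible colour difference.
    totalDiff += filled ? colorDistance(filled, standard) : maxDiff;
    count++;
  }
  if (count === 0) return maxScore;
  return Math.round(maxScore * (1 - totalDiff / count / maxDiff));
}
```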
  • FIG. 5 is a schematic diagram of a terminal display interface related to item making provided by an embodiment of the present disclosure.
  • in FIG. 5, the item to be made is a T-shirt, which is used as an example to illustrate the embodiment of the present disclosure and should not be construed as a specific limitation on it.
  • the preset drawing board area displays the T-shirt to be made, and based on the user's touch operation, the color on the T-shirt can be filled.
  • the target user can zoom in on the T-shirt and move it to the middle of the preset artboard area, so that it is convenient to fill a specific position on the T-shirt with a specific color until the T-shirt is made.
  • the live interaction method provided by the embodiment of the present disclosure may further include:
  • playing the communication voice of the virtual object about the target made item, wherein the target made item is obtained by screening the made items submitted by users in the virtual object live broadcast room according to a preset selection strategy; the preset selection strategy may include, but is not limited to: determining the made item with the highest score as the target made item, randomly selecting a made item from the submitted items as the target made item, or determining the made item with the earliest submission time as the target made item;
  • the communication voice can be pre-configured according to the actual scene, and the embodiment of the present disclosure does not specifically limit it; for example, the communication voice can be "Thank you for the items you made, I like them very much; next I will show you a target made item";
  • the manner in which the virtual object displays the target made item may be determined according to the type of the made item; for example, clothing can be displayed by putting the target made item on the virtual object's body, and non-clothing items can be displayed by placing the target made item in the virtual object's hand.
  • FIG. 6 is a schematic diagram of another terminal display interface related to item making provided by an embodiment of the present disclosure.
  • in FIG. 6, the item submitted by a user is a T-shirt, which is used as an example to illustrate the embodiment of the present disclosure and should not be construed as a specific limitation on it.
  • after the T-shirt is submitted, the terminal can display the corresponding score.
  • the server can determine, based on the scores of the T-shirts submitted by each terminal, the T-shirt with the highest score as the target T-shirt, and then use image fusion technology to fuse the target T-shirt with the virtual object in the live video stream, achieving the effect of the virtual object wearing the target T-shirt; the fusion result is sent to each terminal for display.
  • the display effect of each terminal is shown in the right diagram of FIG. 6.
  • through the user's making of items during the live broadcast and the virtual object's display of the target made item, the embodiment of the present disclosure solves the problem that the existing live broadcast solution cannot realize a personalized live broadcast interaction effect, realizes such an effect, and enables different users to see differentiated display content during the live broadcast.
  • FIG. 7 is a flowchart of another live interaction method provided by an embodiment of the present disclosure, which is further optimized and expanded based on the foregoing technical solution, and may be combined with the foregoing optional implementation manners.
  • FIG. 7 specifically takes the target interaction type as item finding as an example to illustrate the embodiment of the present disclosure, but should not be construed as a specific limitation to the embodiment of the present disclosure.
  • the live interaction method provided by the embodiment of the present disclosure may include:
  • S402. Determine an interaction area on the live broadcast layer according to the target interaction type, where the interaction area is an area in the live broadcast layer covering the item to be found.
  • the item to be found can be any type of item, determined by the item finding task given by the virtual object; for example, if during the live broadcast the virtual object says "Welcome to join the pot-finding game", the item to be found is a pot. Because the display position of the item to be found on the live broadcast interface and the shape of the item vary, the position and extent of the interaction area on the live broadcast layer can be determined according to the actual situation; for example, the interaction area can be a rectangular area covering the item to be found.
  • if the target user's touch position matches the position of the item to be found in the interaction area, it can be determined that the target user has successfully found the item; if the distance between the target user's touch position and the center of the item to be found is greater than or equal to a distance threshold, or the touch position is not within the effective interaction area, then the touch position does not match the position of the item to be found, and it can be determined that the target user has not successfully found the item.
  • the value of the distance threshold can be determined adaptively.
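  • The find-check just described amounts to a containment test plus a distance test, as in the following minimal sketch; the `Rect` and `Point` shapes are invented names for the example.

```typescript
// Hypothetical sketch: a touch counts as "found" when it lands inside the
// interaction area and within a distance threshold of the centre of the item.
interface Point { x: number; y: number; }
interface Rect { x: number; y: number; width: number; height: number; }

function isItemFound(
  touch: Point,
  interactionArea: Rect,     // area on the live layer covering the item
  itemCenter: Point,
  distanceThreshold: number, // adaptively determined per item size and shape
): boolean {
  const insideArea =
    touch.x >= interactionArea.x &&
    touch.x <= interactionArea.x + interactionArea.width &&
    touch.y >= interactionArea.y &&
    touch.y <= interactionArea.y + interactionArea.height;
  if (!insideArea) return false;
  const dx = touch.x - itemCenter.x;
  const dy = touch.y - itemCenter.y;
  return Math.hypot(dx, dy) < distanceThreshold;
}
```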
  • optionally, after the target user successfully finds the item, a corresponding dynamic effect can be generated to indicate the target user's current item finding result.
  • for example, a dynamic image frame of a specific color can be superimposed on the successfully found item based on the live broadcast layer, indicating that the user has successfully found the item in the frame.
  • alternatively, a dynamic effect in which the found item is alternately enlarged and reduced can be displayed.
  • after a preset display duration, the display of the dynamic effect can end.
  • the live interaction method provided by the embodiment of the present disclosure may further include:
  • on the live broadcast layer, the types of items to be found and the number of items successfully found under each type are displayed.
  • the type of the item to be found is related to the item finding task given by the virtual object. For example, if the item to be found proposed by the virtual object is a daily necessity, a pot, then during the live broadcast the type of item displayed on the live broadcast layer is a pot.
  • the number of successfully found items can be updated and displayed in real time according to the user's touch operations.
  • FIG. 8 is a schematic diagram of a terminal display interface related to item search provided by an embodiment of the present disclosure.
  • in FIG. 8, the item to be found is a pot, which is used as an example to illustrate the embodiment of the present disclosure and should not be construed as a specific limitation on it.
  • the target user can participate in the search for items under the guidance of the virtual object.
  • for example, when the target user successfully finds the pot, a red image frame (i.e., a display box) surrounding the pot is dynamically displayed at the pot's display position.
  • the virtual object can also communicate verbally with the target user according to the target user's item search results.
  • the virtual object can say "Congratulations on successfully finding xxx", “Don't be discouraged, continue to look for xxx”, etc., to increase the interactivity of the live broadcast.
  • through the user's item finding during the live broadcast, the embodiment of the present disclosure solves the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect, realizes such an effect, and enables different users to see differentiated display content during the live broadcast.
  • FIG. 9 is a flowchart of another live interaction method provided by an embodiment of the present disclosure, which is further optimized and expanded based on the foregoing technical solution, and may be combined with the foregoing optional implementation manners.
  • FIG. 9 specifically takes the target interaction type as live image generation as an example to illustrate the embodiment of the present disclosure, but should not be construed as a specific limitation to the embodiment of the present disclosure.
  • the live interaction method provided by the embodiment of the present disclosure may include:
  • the photographing operation area may be an area on the live broadcast layer corresponding to the entire terminal screen, or an area corresponding to part of the terminal screen, which is not specifically limited in this embodiment of the present disclosure.
  • on the live broadcast interface, photographing prompt information is displayed, wherein the photographing prompt information is used to prompt the target user about the conditions that need to be met to trigger photographing; the conditions include at least one of the live broadcast background displayed in the live broadcast interface, the posture of the virtual object, and the expression of the virtual object.
  • the photographing prompt information can be displayed directly on the live broadcast interface, or it can be displayed based on the live image generation requirements raised by the virtual object.
  • for example, during the live broadcast, the virtual object dances, performs, or assumes a preset pose while saying "Can you take a picture for me?"; the terminal detects, based on the virtual object's behavior and voice, that the virtual object has proposed a live image generation task, and displays the photographing prompt information on the live broadcast interface.
  • the live broadcast background can be used to describe the current environment of the virtual object, such as indoor or outdoor environment information; the posture and expression of the virtual object can be used to describe the behavior and state of the virtual object, and can be preset by the server.
  • FIG. 10 is a schematic diagram of a terminal display interface related to live image generation according to an embodiment of the present disclosure
  • the left diagram in FIG. 10 shows one display mode of the photographing prompt information, which is used to exemplify the embodiment of the present disclosure: a "background photo" corresponds to a pose of the virtual object, a "happy photo" corresponds to an expression of the virtual object, and a "sideways photo in the sunset" corresponds to both a pose of the virtual object and the live broadcast background.
  • the target user needs to take photos of the virtual object that meet the above three conditions.
  • FIG. 10 does not specifically show the live broadcast background. In practical applications, the corresponding live broadcast background may be displayed according to the content of the live broadcast video stream.
  • when the target user determines that the live broadcast background, the posture of the virtual object, and the expression of the virtual object meet the conditions required by the photographing prompt information, the target user can perform a touch operation in the interaction area to take a picture of the virtual object.
  • the live interaction method provided by the embodiment of the present disclosure may further include:
  • in response to the target user's focal length adjustment operation in the photographing operation area, determining the photographing focal length, so as to determine the photo of the virtual object based on the photographing focal length.
  • that is, the embodiments of the present disclosure allow the target user to adjust the focal length while taking a photo of the virtual object, for example by swiping left and right or up and down in the interaction area, thereby simulating a real shooting scene and ensuring that clear photos of the virtual object can be taken.
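  • One way such a swipe-to-focal-length mapping might look is sketched below; the sensitivity constant and the focal-length range are invented tuning values, not figures from the disclosure.

```typescript
// Hypothetical sketch: map a horizontal swipe in the photographing operation
// area to a focal-length value clamped to a preset range.
function adjustFocalLength(
  currentMm: number,
  swipeDeltaPx: number, // positive = swipe right, negative = swipe left
  mmPerPixel = 0.1,     // sensitivity, an invented tuning constant
  minMm = 24,
  maxMm = 120,
): number {
  const next = currentMm + swipeDeltaPx * mmPerPixel;
  return Math.min(maxMm, Math.max(minMm, next));
}
```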
  • FIG. 11 is a schematic diagram of another terminal display interface related to live image generation provided by an embodiment of the present disclosure. As shown in FIG. 11 , during the live broadcast, prompt information of focal length adjustment may also be displayed on the live broadcast interface to prompt the user to adjust the focal length.
  • the interaction information corresponding to the touch operation of the target user in the interaction area is the evaluation result of the target user's photographing.
  • the terminal may evaluate the photos taken by the target user for the virtual object based on the preset photographing evaluation strategy, determine and display the photographing evaluation results.
  • the terminal can also send the photo taken by the target user for the virtual object to the server, and the server evaluates the photo according to the preset photo evaluation strategy, and sends the photo evaluation result to the terminal.
  • optionally, determining the target user's photographing evaluation result based on the photo of the virtual object includes: comparing the photo information of the virtual object with standard photo information to determine the photographing evaluation result. Specifically, the smaller the difference between the photo information of the virtual object and the standard photo information, the higher the score of the photographing evaluation result; the greater the difference, the lower the score.
  • the photo information of the virtual object includes at least one of the photographing trigger moment, the live broadcast background, the posture of the virtual object, the expression of the virtual object, and the target item (for example, an ornament) on the virtual object; correspondingly, the standard photo information includes at least one of the standard photographing trigger moment, the standard live broadcast background, the standard photographing pose of the virtual object, the standard photographing expression of the virtual object, and the standard target item on the virtual object.
  • the standard photo information refers to the description information corresponding to the photo when the standard photo is taken for the virtual object.
  • the photographing trigger moment refers to the generation time of the target user's photographing trigger operation during the live broadcast, and the standard photographing trigger moment refers to the ideal generation time of that operation preset by the server; the smaller the difference between the two, the higher the score of the target user's photographing evaluation result, and vice versa.
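  • The trigger-moment component of the evaluation could be scored as in the sketch below; the tolerance window is an invented constant, and other fields (background, pose, expression) could be compared analogously and the scores combined.

```typescript
// Hypothetical sketch: score a photo by how close its trigger moment is to
// the standard trigger moment preset by the server.
function scoreTriggerMoment(
  triggerMs: number,         // when the user's photo-trigger operation occurred
  standardTriggerMs: number, // ideal moment preset by the server
  toleranceMs = 2000,        // invented tolerance window for the example
  maxScore = 100,
): number {
  const diff = Math.abs(triggerMs - standardTriggerMs);
  if (diff >= toleranceMs) return 0;
  return Math.round(maxScore * (1 - diff / toleranceMs));
}
```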
  • the photo information participating in the photographing evaluation can be determined according to the information included in the photo of the virtual object.
  • the preset position for displaying the photographing evaluation result may be reasonably determined according to the interface display layout, which is not specifically limited in the embodiment of the present disclosure.
  • for example, the photographing evaluation result can be displayed on the part of the live broadcast layer corresponding to the upper half of the terminal screen; the photographing evaluation results "Great" and "Perfect" shown are displayed on the part of the live broadcast layer corresponding to the upper third of the terminal screen.
  • the server can also determine the reward that the target user can get according to the evaluation result of the target user's photo, and synchronize the reward to the target user's account.
  • the live interaction method may further include:
  • based on the photographing prompt information, the type information of the photos to be taken is determined, and based on that type information, the target user's total number of shots is determined, wherein the total number of shots is used to determine the target user's remaining number of shots in the process of determining photos of the virtual object.
  • the live interaction method provided by the embodiment of the present disclosure may further include:
  • the target user's remaining number of shots is displayed in the shot-count display area on the live broadcast layer; each time the target user generates a photographing trigger operation, the remaining number of shots is reduced by one.
  • for example, according to the photographing prompt information in FIG. 10, the photos to be taken include 3 types, so it can be determined that the target user's total number of shots is not less than 3 (more, for example, in the case where at least 2 shots are available for each type of photo); FIG. 10 takes a total of 3 shots as an example.
  • after the target user takes a back view of the virtual object, the remaining number of shots displayed on the live broadcast layer is 2; after the happy photo is taken, the remaining number displayed is 1; the last remaining shot is used to take a sideways photo of the virtual object in the sunset.
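  • The shot accounting reduces to a small counter, sketched below under the assumption of one shot per photo type (matching FIG. 10's example of 3 shots in total); the class and parameter names are invented for the example.

```typescript
// Hypothetical sketch: track the total and remaining photographing opportunities;
// each photo-trigger operation decrements the remaining count by one.
class ShotCounter {
  private remaining: number;

  constructor(photoTypeCount: number, shotsPerType = 1) {
    // Total shots is derived from the number of photo types in the prompt info.
    this.remaining = photoTypeCount * shotsPerType;
  }

  get remainingShots(): number {
    return this.remaining;
  }

  // Returns false when no photographing opportunities are left.
  trigger(): boolean {
    if (this.remaining <= 0) return false;
    this.remaining -= 1;
    return true;
  }
}

// Usage matching FIG. 10's example of 3 total shots:
const counter = new ShotCounter(3); // 3 photo types, one shot each
counter.trigger(); // back view   -> 2 remaining
counter.trigger(); // happy photo -> 1 remaining
```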
  • it should be noted that after the photographing prompt information is displayed and before the photo of the virtual object is taken, according to live broadcast logic preset on the server, the virtual object is allowed to change clothes and switch between different live broadcast backgrounds (which can reflect the virtual object being in different environments), to make the live interaction more fun.
  • during the live broadcast, the target user interacts with the virtual object by taking pictures. The photo information captured by different users for the virtual object differs, so the photographing evaluation results displayed on the live broadcast layer differ as well; likewise, different users take photos at different speeds, so the remaining number of shots displayed on the live broadcast layer will also vary.
  • the embodiment of the present disclosure solves the problem that the existing live broadcast solution cannot realize the personalized live broadcast interaction effect, realizes the personalized live broadcast interaction effect, and achieves the effect that different users can see differentiated display content during the live broadcast process.
  • FIG. 12 is a schematic structural diagram of a live interactive device according to an embodiment of the present disclosure.
  • the device can be implemented by software and/or hardware, and can be integrated on an electronic device with computing capabilities, such as a user terminal like a mobile phone, tablet computer, or desktop computer.
  • the live interaction apparatus 600 may include an interaction area determination module 601, an interaction information determination module 602, and an interaction information display module 603, wherein:
  • the interaction area determination module 601 is used to determine the interaction area on the live broadcast layer, wherein the live broadcast layer is superimposed on the live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously in each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface are collectively regarded as the content displayed on the target user's terminal, and the interaction area and the live broadcast interface maintain frame synchronization;
  • the interaction information determination module 602 is configured to determine the interaction information corresponding to the touch operation in response to the touch operation of the target user in the interaction area;
  • the interaction information display module 603 is configured to display interaction information at a preset position on the live broadcast layer.
  • the interaction area determination module 601 includes:
  • a target interaction type determination unit, configured to search, according to the live broadcast content displayed on the live broadcast interface, for the target interaction type matching the live broadcast content among multiple interaction types;
  • an interaction area determination unit, configured to determine the interaction area on the live broadcast layer according to the target interaction type;
  • wherein the multiple interaction types include body interaction, item making, item finding, and live image generation.
  • the live interactive device 600 provided by the embodiment of the present disclosure further includes:
  • the shared information display module is used to display the shared information corresponding to the target interaction type on the live broadcast interface; wherein, the shared information is obtained based on the touch results corresponding to the user's touch operation in the virtual object live broadcast room.
  • optionally, the interaction area determination unit is specifically configured to: determine the interaction area on the live broadcast layer according to the action of the target limb part of the virtual object on the live broadcast interface and the position of the target limb part.
  • the live interactive device 600 provided by the embodiment of the present disclosure further includes:
  • the action prompt information display module is used to determine the prompt information that matches the action of the target limb part of the virtual object, and display the prompt information on the live broadcast interface, wherein the prompt information is used to prompt the target user to perform the operation with the virtual object in the interactive area. Touch operations that match the movements of the target body parts.
  • the interaction information includes feedback dynamic effects that match the movements of the target limb part of the virtual object.
  • the interaction area is a preset drawing board area, and the item to be produced is displayed in the preset drawing board area;
  • the interaction information determination module 602 is specifically configured to: determine, in response to the target user's touch operation in the preset drawing board area, the color selected by the target user;
  • the interactive information display module 603 includes:
  • a filling position determination unit configured to determine the filling position selected by the target user on the live broadcast layer in response to the target user's touch operation on the item to be made
  • the color display unit is used to display the color selected by the target user in the fill position on the live layer.
  • the live interactive device 600 provided by the embodiment of the present disclosure further includes:
  • the communication voice playback module is used to play the communication voice of the virtual object about the target production item, wherein the target production item is obtained by screening the production items submitted by the user in the virtual object live broadcast room according to the preset selection strategy;
  • the item display module is used to display the target crafting item on the virtual object.
  • the interaction area is the area in the live broadcast layer covering the item to be found
  • the interaction information determination module 602 includes:
  • a touch position determination unit configured to determine the touch position of the target user in response to the touch operation of the target user in the interaction area
  • an item finding success determining unit configured to determine that the target user has successfully found the item to be found if the touch position of the target user matches the position of the item to be found in the interaction area, and generate a dynamic effect indicating that the item is found successfully;
  • the interaction information display module 603 is specifically configured to: display, on the live broadcast layer, the dynamic effect used to indicate that the item has been found successfully.
  • the live interactive device 600 provided by the embodiment of the present disclosure further includes:
  • the item information display module is used to display the type of the item to be found and the number of items under each type that have been successfully found on the live broadcast layer.
  • the interaction area is the photo operation area in the live layer
  • the interaction information determination module 602 includes:
  • a photographing prompt information display unit, configured to display photographing prompt information on the live broadcast interface, wherein the photographing prompt information is used to prompt the target user about the conditions that need to be met to trigger photographing, and the conditions include at least one of the live broadcast background displayed in the live broadcast interface, the posture of the virtual object, and the expression of the virtual object;
  • the photo determining unit is configured to determine the photo of the virtual object in response to the target user's photo-triggering operation in the photo-taking operation area;
  • a photographing evaluation result determination unit configured to determine the photographing evaluation result of the target user based on the photograph of the virtual object
  • the interaction information display module 603 is specifically configured to: display the photographing evaluation result at a preset position on the live broadcast layer.
  • the live interactive device 600 provided by the embodiment of the present disclosure further includes:
  • the photographing focal length determination module is configured to determine the photographing focal length in response to the target user's photographing focal length adjustment operation in the photographing operation area, so as to determine the photograph of the virtual object based on the photographing focal length.
  • the photographing evaluation result determination unit is specifically used for:
  • comparing the photo information of the virtual object with standard photo information to determine the photographing evaluation result of the target user.
  • the photo information of the virtual object includes at least one of a photographing trigger moment, a live broadcast background, a pose of the virtual object, an expression of the virtual object, and a target item on the virtual object;
  • the standard photo information includes at least one of a standard photographing trigger moment, a standard live broadcast background, a standard photographing pose of the virtual object, a standard photographing expression of the virtual object, and a standard target item on the virtual object.
  • the live interactive device 600 provided by the embodiment of the present disclosure further includes:
  • a photo type information determination module configured to determine the type information of the photos to be taken based on the photographing prompt information;
  • a shot count determination module configured to determine the total number of shots of the target user based on the type information of the photos to be taken, wherein the total number of shots is used to determine the remaining number of shots of the target user in the process of determining the photos of the virtual object;
  • a remaining shot count display module configured to display the remaining number of shots of the target user in the shot-count display area on the live broadcast layer.
  • the live interactive device provided by the embodiments of the present disclosure can execute any live broadcast interaction method provided by the embodiments of the present disclosure, and has the functional modules and beneficial effects corresponding to the executed method. For content not described in detail in the apparatus embodiments of the present disclosure, reference may be made to the description in any method embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, which is used to exemplarily describe an electronic device that implements the live broadcast interaction method provided by the embodiment of the present disclosure.
  • the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as stationary terminals such as digital TVs, desktop computers, smart home devices, wearable electronic devices, servers, and the like.
  • the electronic device shown in FIG. 13 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 700 includes one or more processors 701 and a memory 702.
  • Processor 701 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in electronic device 700 to perform desired functions.
  • Memory 702 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • Volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache), among others.
  • Non-volatile memory may include, for example, read only memory (ROM), hard disk, flash memory, and the like.
  • One or more computer program instructions may be stored on a computer-readable storage medium, and the processor 701 may execute the program instructions to implement the live broadcast interaction method provided by the embodiments of the present disclosure, and may also implement other desired functions.
  • Various contents such as input signals, signal components, noise components, etc. may also be stored in the computer-readable storage medium.
  • the live broadcast interaction method provided by the embodiments of the present disclosure is applied to a terminal entering a virtual object live broadcast room, and includes: determining an interaction area on a live broadcast layer, wherein the live broadcast layer is superimposed on the live broadcast interface of the virtual object, and the live broadcast interface refers to the interface displayed synchronously in each terminal entering the live broadcast room of the virtual object;
  • the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface are taken together as the content displayed on the target user's terminal, and the interaction area and the live broadcast interface maintain frame synchronization;
  • in response to the target user's touch operation in the interaction area, the interaction information corresponding to the touch operation is determined; and the interaction information is displayed at a preset position on the live broadcast layer.
  • the electronic device 700 may also perform other optional implementations provided by the method embodiments of the present disclosure.
  • the electronic device 700 may also include an input device 703 and an output device 704 interconnected by a bus system and/or other form of connection mechanism (not shown).
  • the input device 703 may also include, for example, a keyboard, a mouse, and the like.
  • the output device 704 can output various information to the outside, including the determined distance information, direction information, and the like.
  • the output device 704 may include, for example, displays, speakers, printers, and communication networks and their connected remote output devices, among others.
  • the electronic device 700 may also include any other suitable components according to the specific application.
  • the embodiments of the present disclosure may also be computer program products, which include computer programs or computer program instructions that, when executed by a processor, cause a computing device to implement any live broadcast interaction method provided by the embodiments of the present disclosure.
  • the computer program product may write program code for performing the operations of the embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages, such as Java and C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's electronic device, partly on the user's electronic device, as a stand-alone software package, partly on the user's electronic device and partly on a remote electronic device, or entirely on a remote electronic device.
  • embodiments of the present disclosure may further provide a computer-readable storage medium on which computer program instructions are stored, and when executed by the processor, the computer program instructions enable the computing device to implement any live broadcast interaction method provided by the embodiments of the present disclosure.
  • the live broadcast interaction method provided by the embodiments of the present disclosure is applied to a terminal entering a virtual object live broadcast room, and includes: determining an interaction area on a live broadcast layer, wherein the live broadcast layer is superimposed on the live broadcast interface of the virtual object, and the live broadcast interface refers to the interface displayed synchronously in each terminal entering the live broadcast room of the virtual object;
  • the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface are taken together as the content displayed on the target user's terminal, and the interaction area and the live broadcast interface maintain frame synchronization;
  • in response to the target user's touch operation in the interaction area, the interaction information corresponding to the touch operation is determined; and the interaction information is displayed at a preset position on the live broadcast layer.
  • a computer-readable storage medium can employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are a live broadcast interaction method and apparatus, an electronic device, and a storage medium. The method includes: determining an interaction area on a live broadcast layer, where the live broadcast layer is superimposed on a live broadcast interface of a virtual object, the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on a target user's terminal, and the interaction area keeps frame synchronization with the live broadcast interface; in response to a touch operation of the target user in the interaction area, determining interaction information corresponding to the touch operation; and displaying the interaction information at a preset position on the live broadcast layer.

Description

Live broadcast interaction method and apparatus, electronic device, and storage medium
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202110370982.3, entitled "Live broadcast interaction method and apparatus, electronic device, and storage medium" and filed on April 7, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of Internet live broadcasting, and in particular to a live broadcast interaction method and apparatus, an electronic device, and a storage medium.
Background
With the development of Internet live broadcast technology, live broadcasting has become a popular form of entertainment and consumption. In the existing live broadcast mode, users in a live broadcast room can interact by chatting with the streamer, sending gifts to the streamer, liking, and the like.
However, in the existing live broadcast mode, a large number of users enter the same live broadcast room, each user's interaction information with the streamer is synchronized to the room's display picture directly through the backend server, and the live broadcast pictures seen by the users are essentially identical, so a personalized live broadcast interaction effect cannot be achieved.
Technical Solution
To solve the above technical problem, or at least partially solve it, embodiments of the present disclosure provide a live broadcast interaction method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a live broadcast interaction method, applied to a terminal entering a live broadcast room of a virtual object, including:
determining an interaction area on a live broadcast layer, where the live broadcast layer is superimposed on a live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on a target user's terminal, and the interaction area keeps frame synchronization with the live broadcast interface;
in response to a touch operation of a target user in the interaction area, determining interaction information corresponding to the touch operation; and
displaying the interaction information at a preset position on the live broadcast layer.
In a second aspect, an embodiment of the present disclosure further provides a live broadcast interaction apparatus, configured on a terminal entering a live broadcast room of a virtual object, including:
an interaction area determination module configured to determine an interaction area on a live broadcast layer, where the live broadcast layer is superimposed on the live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on the target user's terminal, and the interaction area keeps frame synchronization with the live broadcast interface;
an interaction information determination module configured to determine, in response to a touch operation of a target user in the interaction area, interaction information corresponding to the touch operation; and
an interaction information display module configured to display the interaction information at a preset position on the live broadcast layer.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including a memory and a processor, where the memory stores a computer program that, when executed by the processor, causes the electronic device to implement any live broadcast interaction method provided by the embodiments of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program that, when executed by a computing device, causes the computing device to implement any live broadcast interaction method provided by the embodiments of the present disclosure.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have at least the following advantages: for each terminal entering the live broadcast room of a virtual object, a live broadcast layer is superimposed on the live broadcast interface of the virtual object displayed by the terminal, and an interaction area on the live broadcast layer receives the target user's touch operations; the terminal determines interaction information according to the target user's touch operation and then displays the interaction information at a preset position on the live broadcast layer, and the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on the target user's terminal. Thus, during the live broadcast, different users see the same content on the live broadcast interface while seeing personalized content on the live broadcast layer, which solves the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect, achieves a personalized live broadcast interaction effect, and lets different users see differentiated display content during the live broadcast.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
To describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the following briefly introduces the drawings needed in the description of the embodiments or the prior art. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a live broadcast interaction method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a terminal display interface for limb interaction provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a terminal display interface for item making provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another terminal display interface for item making provided by an embodiment of the present disclosure;
FIG. 7 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a terminal display interface for item finding provided by an embodiment of the present disclosure;
FIG. 9 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a terminal display interface for live image generation provided by an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of another terminal display interface for live image generation provided by an embodiment of the present disclosure;
FIG. 12 is a schematic structural diagram of a live broadcast interaction apparatus provided by an embodiment of the present disclosure;
FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to understand the above objects, features, and advantages of the present disclosure more clearly, the solutions of the present disclosure will be further described below. It should be noted that, in the case of no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure may also be implemented in ways other than those described here; obviously, the embodiments in the specification are only a part, rather than all, of the embodiments of the present disclosure.
FIG. 1 is a flowchart of a live broadcast interaction method provided by an embodiment of the present disclosure. The method is applicable to live broadcast scenarios of a virtual object. A virtual object refers to a virtual streamer as distinguished from a real person; the behavior of the virtual object is controlled by the live broadcast backend device. The virtual object may be a 2D or 3D animated character, or an animal or an object, and the generation and driving of its appearance and behavior are controlled by the backend device. The actions of the virtual object may be driven by a real person, a real animal, or a real object carrying sensing devices: the control device collects the sensing data of the sensing devices and then drives the generation of the corresponding actions of the virtual object, and this collection, driving, and live broadcast process can be performed in real time. The live broadcast interaction method provided by the embodiments of the present disclosure may be executed by a live broadcast interaction apparatus, which may be implemented in software and/or hardware and integrated on an electronic device with computing capability, such as a terminal like a mobile phone, a tablet computer, or a desktop computer.
As shown in FIG. 1, the live broadcast interaction method provided by the embodiment of the present disclosure may include:
S101: Determine an interaction area on a live broadcast layer, where the live broadcast layer is superimposed on the live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on the target user's terminal, and the interaction area keeps frame synchronization with the live broadcast interface.
In the embodiments of the present disclosure, the interaction area on the live broadcast layer (also called a live broadcast floating layer) receives the target user's touch operations, so that the terminal determines interaction information according to the target user's touch operation and then displays it at a preset position on the live broadcast layer. During the live broadcast, different users see the same content on the live broadcast interface (also called the live broadcast picture) while seeing personalized content on the live broadcast layer, which solves the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect and achieves a personalized live broadcast interaction effect.
The live broadcast layer may be invisible to the user's naked eye; for example, it may be set to a fully transparent state. The interaction area on the live broadcast layer may prompt the user according to a preset prompt strategy, so that the user knows the effective position of the interaction area. For example, after the target user's terminal enters the live broadcast room of the virtual object, position prompt information about the interaction area may be displayed on the live broadcast interface, or a dynamic effect may be displayed at the position of the interaction area for a preset duration; after the preset duration is reached, the position prompt information of the interaction area or the dynamic effect at its position disappears.
The interaction area keeps frame synchronization with the live broadcast interface and appears on specific frames of the live video stream displayed by any terminal, depending on the live broadcast scenario of the particular virtual object. The appearance and disappearance of the interaction area is a dynamic process. For example, as the live video stream plays, the interaction area exists on the live broadcast layer superimposed on the Nth to (N+M)th frames of the live broadcast interface: the interaction area starts to appear on the Nth frame and starts to disappear on the (N+M)th frame, and it exists on neither the (N-1)th frame nor the (N+M+1)th frame. Here N and M are both integers whose specific values may be preset for different live broadcast scenarios and are not specifically limited in the embodiments of the present disclosure. The specific size of the interaction area and the specific interaction methods it supports may also depend on the live broadcast scenario.
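By way of a non-limiting illustration only, the frame-window rule above can be sketched in a few lines of TypeScript. This is a hypothetical client-side check, not part of the disclosure; the identifiers (InteractionWindow, isInteractionAreaActive) and the concrete frame numbers are assumptions made for the example.
```typescript
// Minimal sketch (assumed names): an interaction area that is active only on
// frames N through N+M of the live video stream, keeping it frame-synchronized
// with the live broadcast interface.
interface InteractionWindow {
  startFrame: number; // frame N, on which the area starts to appear
  endFrame: number;   // frame N+M, on which the area starts to disappear
}

function isInteractionAreaActive(currentFrame: number, w: InteractionWindow): boolean {
  return currentFrame >= w.startFrame && currentFrame <= w.endFrame;
}

// Example: the area exists on frames 300..420 of the stream.
const window: InteractionWindow = { startFrame: 300, endFrame: 420 };
console.log(isInteractionAreaActive(299, window)); // false (frame N-1)
console.log(isInteractionAreaActive(300, window)); // true  (frame N)
console.log(isInteractionAreaActive(421, window)); // false (frame N+M+1)
```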
In an optional implementation, determining the interaction area on the live broadcast layer includes:
searching, according to the live broadcast content displayed on the live broadcast interface, multiple interaction types for a target interaction type matching the live broadcast content, and determining the interaction area on the live broadcast layer according to the target interaction type, where the multiple interaction types include limb interaction, item making, item finding, and live image generation.
The live broadcast content displayed on the live broadcast interface differs under different interaction types. The live broadcast content may include, but is not limited to, the live broadcast background and the virtual object's actions, expressions, clothing, speech, and the like. The current target interaction type, that is, the current specific live broadcast scenario, can be determined according to the live broadcast content currently displayed on the live broadcast interface, and the interaction area on the live broadcast layer can then be determined.
Specifically, limb interaction means that during the live broadcast the user can interact physically with the virtual object through the terminal screen, for example touching fingertips with or high-fiving the virtual object through the terminal screen, or stroking the virtual object's head through the terminal screen. In this case, the interaction area may be a preset region determined based on the position of the virtual object's target body part; for example, for fingertip touching, the interaction area may be a circular region of preset size determined based on the virtual object's fingertip, and for head stroking, a circular region of preset size determined based on the virtual object's head.
Item making means that during the live broadcast the user can make a specific item, according to a making task given by the virtual object, by performing touch operations on the terminal screen, for example making clothes or backpacks of a specific color or style. In this case, the interaction area may be a preset region supporting the user's item-making operations; for example, the interaction area may be the region of the live broadcast layer corresponding to the upper half of the terminal screen.
Item finding means that during the live broadcast the user can look for a specific item, according to a finding task given by the virtual object, through touch operations on the terminal screen. In this case, the interaction area may be a response region determined based on the item to be found; for example, the interaction area may be a region of preset size covering the item to be found.
Live image generation, also called taking photos of the virtual object, means that during the live broadcast the user can generate a live image (or a photo of the virtual object) through touch operations on the terminal screen, according to the content displayed on the live broadcast interface and the virtual object's image generation requirements (or photographing requirements). In this case, the interaction area may be a response region that can trigger the image generation operation (or photographing operation); for example, the interaction area may be the region of the live broadcast layer corresponding to the entire terminal screen.
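The mapping from the currently displayed live broadcast content to a target interaction type and its interaction area might be organized as in the following TypeScript sketch. It is purely illustrative: the sceneTag field, the concrete region sizes, and the idea that the stream carries a scene tag are assumptions made for the example rather than requirements of the disclosure.
```typescript
// Hypothetical sketch: map the live content currently shown on the live
// broadcast interface to a target interaction type, then derive the
// interaction area on the live broadcast layer. All names are illustrative.
type InteractionType = "limb" | "crafting" | "finding" | "photo";

interface Rect { x: number; y: number; width: number; height: number; }

interface LiveContent {
  sceneTag: InteractionType;              // assumed to be carried with the stream
  anchorPoint?: { x: number; y: number }; // e.g. the virtual object's fingertip
  itemBounds?: Rect;                      // e.g. bounds of the item to be found
  screen: Rect;                           // full terminal screen in layer coordinates
}

function resolveInteractionArea(content: LiveContent): Rect {
  switch (content.sceneTag) {
    case "limb": {
      // a preset-size region centered on the target body part
      const r = 60;
      const p = content.anchorPoint ?? { x: 0, y: 0 };
      return { x: p.x - r, y: p.y - r, width: 2 * r, height: 2 * r };
    }
    case "crafting": // upper half of the screen as the drawing board area
      return { ...content.screen, height: content.screen.height / 2 };
    case "finding":  // region covering the item to be found
      return content.itemBounds ?? content.screen;
    case "photo":    // the whole screen acts as the photographing area
      return content.screen;
  }
}
```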
S102: In response to a touch operation of the target user in the interaction area, determine interaction information corresponding to the touch operation.
The interaction information corresponding to the user's touch operation differs for different interaction scenarios. For example, the interaction information corresponding to the current target user's touch operation may be determined according to the correspondence between user touch operations and interaction information in different interaction scenarios, as feedback on the target user's touch operation. The interaction information may be downloaded to the terminal together with the region information of the interaction area, or, after a preset user operation is received, matched on the server side and loaded to the terminal.
S103: Display the interaction information at a preset position on the live broadcast layer.
The display position of the interaction information on the live broadcast layer may be determined according to the live broadcast scenario. For example, for limb interaction, the preset position for displaying the interaction information may depend on the position and the action of the virtual object's target body part; for item making or item finding, the preset position may depend on the user's touch position in the interaction area; for live image generation, the preset position may be any position on the live broadcast layer corresponding to the terminal screen.
Displaying the interaction information enriches what the terminal can display during the live broadcast, makes the interaction between the user and the virtual object more interesting, and improves the user's experience of watching the live broadcast.
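As a non-limiting sketch of the overall terminal-side flow of S101 to S103, the following TypeScript fragment shows a transparent live layer above the shared live interface: touches inside the interaction area produce per-user interaction information rendered at a preset layer position. The area coordinates, the string-valued interaction information, and renderOnLiveLayer are all assumptions for illustration.
```typescript
// Illustrative sketch: a transparent live broadcast layer sits above the
// shared live broadcast interface; touches inside the interaction area yield
// per-user interaction information shown at a preset position on the layer.
interface Point { x: number; y: number; }
interface Area { x: number; y: number; width: number; height: number; }

function contains(area: Area, p: Point): boolean {
  return p.x >= area.x && p.x <= area.x + area.width &&
         p.y >= area.y && p.y <= area.y + area.height;
}

function renderOnLiveLayer(info: string, at: Point): void {
  // Placeholder: a real client would draw on the overlay canvas here.
  console.log(`show "${info}" at (${at.x}, ${at.y}) on the live layer`);
}

function onTouch(p: Point): void {
  const area: Area = { x: 100, y: 200, width: 120, height: 120 }; // assumed area
  if (!contains(area, p)) return;          // touches outside the area are ignored
  const info = "heart-animation";          // interaction info for this touch
  renderOnLiveLayer(info, { x: area.x, y: area.y }); // preset display position
}

onTouch({ x: 150, y: 250 }); // personalized content only this user sees
```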
Optionally, the live broadcast interaction method provided by the embodiments of the present disclosure may further include: displaying, on the live broadcast interface, shared information corresponding to the target interaction type, where the shared information is obtained based on the touch results corresponding to the touch operations of one, more, or all users in the live broadcast room of the virtual object.
For example, each terminal entering the live broadcast room of the virtual object may determine the user's touch result according to the user's touch operation in the interaction area and then send the touch result, keyed by terminal identifier or user identifier, to the server; the server collects and analyzes the touch results sent by the terminals, generates the shared information, and sends it to the terminals, so that the same shared information can be displayed on each terminal.
The touch results may include, but are not limited to: whether the user's touch pressure reaches a threshold, whether the user's touch position is within the standard touch position range, whether the user completes the required interaction operation, whether the user completes the item to be made, the evaluation result of the item made by the user, whether the user successfully finds the item to be found, whether the user triggers the generation of the required live image, or the evaluation result of the live image the user triggered, and so on. Correspondingly, the shared information may include, but is not limited to, the server's ranking of users based on the received touch results, the counted total number of users who have entered the live broadcast room, the counted total number of made items, and so on. Displaying shared information organically combines the personalized display content on each terminal with the content displayed synchronously on all terminals in the live video stream, enriching the ways in which live broadcast interaction can be implemented.
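A hypothetical server-side aggregation of such touch results could look like the following sketch: terminals report per-user results, and the server ranks users and broadcasts the same shared information back to every terminal in the room. TouchResult and the ranking criterion are assumptions for the example.
```typescript
// Hypothetical server-side aggregation: terminals report per-user touch
// results; the server ranks users and broadcasts identical shared information
// back to every terminal in the live broadcast room.
interface TouchResult { userId: string; success: boolean; elapsedMs: number; }

function buildSharedRanking(results: TouchResult[]): string[] {
  return results
    .filter(r => r.success)
    .sort((a, b) => a.elapsedMs - b.elapsedMs) // faster interactions rank higher
    .map(r => r.userId);
}

const ranking = buildSharedRanking([
  { userId: "u1", success: true, elapsedMs: 820 },
  { userId: "u2", success: false, elapsedMs: 0 },
  { userId: "u3", success: true, elapsedMs: 410 },
]);
console.log(ranking); // ["u3", "u1"], broadcast to all terminals for display
```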
In the embodiments of the present disclosure, for each terminal entering the live broadcast room of a virtual object, a live broadcast layer is superimposed on the live broadcast interface of the virtual object displayed by the terminal, an interaction area on the live broadcast layer receives the target user's touch operations, the terminal determines interaction information according to the target user's touch operation and then displays it at a preset position on the live broadcast layer, and the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on the target user's terminal. During the live broadcast, different users see the same content on the live broadcast interface while seeing personalized content on the live broadcast layer, which solves the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect, achieves a personalized live broadcast interaction effect, and lets different users see differentiated display content during the live broadcast.
FIG. 2 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure, which further optimizes and extends the above technical solutions and can be combined with each of the above optional implementations. FIG. 2 illustrates the embodiments of the present disclosure by taking the target interaction type being limb interaction as an example, which should not be understood as a specific limitation on the embodiments of the present disclosure.
As shown in FIG. 2, the live broadcast interaction method provided by the embodiment of the present disclosure may include:
S201: According to the live broadcast content displayed on the live broadcast interface, search multiple interaction types for a target interaction type matching the live broadcast content, where the target interaction type is limb interaction.
S202: Determine the interaction area on the live broadcast layer according to the action and the position of the target body part of the virtual object on the live broadcast interface.
The interaction area receives touch operations produced by the target user for the target body part of the virtual object. Taking the target body part being the virtual object's hand as an example, the action of the target body part may be extending a fingertip, indicating that the virtual object wants to touch fingertips with the target user (fingertip interaction); in this case, the interaction area may be determined based on the virtual object's hand action and hand position, for example a circular region of preset size determined based on the virtual object's fingertip. Alternatively, the action of the target body part may be extending a palm forward, indicating that the virtual object wants to high-five the target user; in this case, the interaction area may be a rectangular region of preset size determined based on the position of the virtual object's palm. Or, taking the target body part being the virtual object's head as an example, the action may be shaking the head, indicating that the virtual object wants the target user to pat or stroke its head; in this case, the interaction area may be determined based on the head action and head position, for example a circular region of preset size determined based on the direction of the head shake and the head position.
In an optional implementation, the live broadcast interaction method provided by the embodiments of the present disclosure further includes: determining prompt information matching the action of the virtual object's target body part, and displaying the prompt information on the live broadcast interface, where the prompt information prompts the target user to perform, in the interaction area, a touch operation matching the action of the virtual object's target body part. The prompt information may be implemented as at least one of a static image effect, a dynamic image effect, and text. The prompt information may disappear from the live broadcast interface after being displayed for a specific time, or after the terminal detects the user's touch operation.
FIG. 3 is a schematic diagram of a terminal display interface for limb interaction provided by an embodiment of the present disclosure, used to illustrate the embodiments of the present disclosure and not to be understood as a specific limitation on them. As shown in the left part of FIG. 3, after the virtual object raises its hand and extends a fingertip, prompt information may be displayed based on the position of the virtual object's fingertip on the live broadcast interface; the prompt information may include ripples formed by dynamic circles and a dynamic virtual hand, prompting the target user to touch fingertips with the virtual object. The right part of FIG. 3 takes the interaction information determined from the target user's touch operation being a dynamic heart-shaped image as an example of the display effect of the interaction information, with the display position of the interaction information determined based on the position of the virtual object's fingertip.
S203: In response to a touch operation of the target user in the interaction area, determine interaction information corresponding to the touch operation.
For the limb interaction live broadcast scenario, the interaction information corresponding to the touch operation may be determined dynamically according to the target user's touch operation information, such as touch pressure and touch position, making the interaction more interesting. Optionally, the interaction information includes a feedback dynamic effect matching the action of the virtual object's target body part. For example, if the distance between the target user's touch position and the center of the interaction area is smaller than a first distance threshold, or the target user's touch pressure is greater than a first pressure threshold, the interaction information may be a heartbeat effect formed by a dynamic heart-shaped image; if the distance between the target user's touch position and the center of the interaction area is greater than or equal to the first distance threshold, or the target user's touch pressure is less than or equal to the first pressure threshold, the interaction information may be a ripple effect formed by dynamic circles. The values of the above thresholds can all be determined flexibly.
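The threshold logic just described can be sketched as follows; the concrete threshold values and the normalized pressure scale are assumptions for the example, since the disclosure leaves them to be determined flexibly.
```typescript
// Sketch of the feedback selection above (all constants assumed): a touch
// close to the area center or with firm pressure yields the heartbeat effect,
// otherwise a ripple effect.
interface Touch { x: number; y: number; pressure: number; } // pressure in [0, 1]

function feedbackEffect(touch: Touch, center: { x: number; y: number },
                        distThreshold = 30, pressureThreshold = 0.5): "heart" | "ripple" {
  const d = Math.hypot(touch.x - center.x, touch.y - center.y);
  return d < distThreshold || touch.pressure > pressureThreshold ? "heart" : "ripple";
}

console.log(feedbackEffect({ x: 102, y: 98, pressure: 0.2 }, { x: 100, y: 100 })); // "heart"
console.log(feedbackEffect({ x: 160, y: 100, pressure: 0.2 }, { x: 100, y: 100 })); // "ripple"
```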
S204: Display the interaction information at a preset position on the live broadcast layer.
The preset position for displaying the interaction information may still depend on the position and the action of the virtual object's target body part. For example, for fingertip touching, the display position of the interaction information may be determined based on the position of the virtual object's fingertip.
In addition, during the live broadcast, the server may collect, from each terminal, the identifiers (or names) of the users participating in limb interaction with the virtual object together with the users' touch operation times, sort the user identifiers by touch operation time from longest to shortest, and send the sorted result to the terminals for display. The server may also send different rewards to users according to their rankings (the higher the ranking, the greater the reward may be), so as to encourage users to participate in limb interaction.
Through the limb interaction between the virtual object and the target user during the live broadcast, the embodiments of the present disclosure solve the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect, achieve a personalized live broadcast interaction effect, and let different users see differentiated display content during the live broadcast.
FIG. 4 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure, which further optimizes and extends the above technical solutions and can be combined with each of the above optional implementations. FIG. 4 illustrates the embodiments of the present disclosure by taking the target interaction type being item making as an example, which should not be understood as a specific limitation on them. For item making, the interaction information corresponding to the user's touch operation in the interaction area includes the color selected by the user, and the preset position on the live broadcast layer for displaying the interaction information includes the fill position of the color.
As shown in FIG. 4, the live broadcast interaction method provided by the embodiment of the present disclosure may include:
S301: According to the live broadcast content displayed on the live broadcast interface, search multiple interaction types for a target interaction type matching the live broadcast content, where the target interaction type is item making.
S302: Determine the interaction area on the live broadcast layer according to the target interaction type, where the interaction area is a preset drawing board area in which the item to be made is displayed.
The preset drawing board area may also display the tools needed for making the item, such as a color palette, brushes, and erasers; the embodiments of the present disclosure do not specifically limit this, and the tools to be displayed in the preset drawing board area may be determined according to the item being made. The preset drawing board area receives the target user's touch operations so that the target user can complete the item making. The specific position of the preset drawing board area on the live broadcast layer, as well as its size and shape, are not specifically limited in the embodiments of the present disclosure. The item to be made may be any item, such as clothing, bags, or portraits, which is likewise not specifically limited in the embodiments of the present disclosure; during the live broadcast, the item to be made may be determined according to the item-making task given by the virtual object.
S303: In response to a touch operation of the target user in the preset drawing board area, determine the color selected by the target user for filling the item to be made.
A color selection area is displayed in the preset drawing board area. After choosing a particular color, the target user can touch the corresponding color position, and the terminal determines the color selected by the target user according to the target user's touch operation.
S304: In response to a touch operation of the target user on the item to be made, determine the fill position selected by the target user on the live broadcast layer.
In the embodiments of the present disclosure, the target user's first touch operation may be used to determine the selected color, and the target user's second touch operation may be used to determine the position where the target user wants to fill that color.
It should be noted that, during item making, different users select different fill colors; therefore, the item fill colors displayed on different users' terminals differ, and different users' terminals display differentiated item-making effects.
Moreover, during item making, the preset drawing board area supports operations such as zooming and moving the item to be made, giving the user greater flexibility of operation.
S305: Display the color selected by the target user at the fill position on the live broadcast layer.
After filling some or all regions of the item to be made with color, the target user can submit the made item, indicating that the item making is complete. After the made item is submitted, the terminal may also score it based on preset scoring rules; for example, the smaller the difference between the user's fill colors and the preset standard fill colors of the item, the higher the score, and the larger the difference, the lower the score, and the score may be displayed on the live broadcast layer. The scoring operation may also be performed by the server: the server may score the made item sent by the terminal based on the preset scoring rules and then send the score to the terminal, keyed by user identifier or terminal identifier, for display on the live broadcast layer.
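An illustrative scoring rule of the kind described above could be sketched as follows. Measuring color difference as RGB distance and averaging over filled regions are assumptions for the example; the disclosure does not fix a concrete metric.
```typescript
// Illustrative scoring sketch: the smaller the difference between the user's
// fill colors and the preset standard fill colors, the higher the score.
type RGB = [number, number, number];

function colorDistance(a: RGB, b: RGB): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

function craftingScore(filled: RGB[], standard: RGB[]): number {
  const maxD = Math.hypot(255, 255, 255);        // largest possible RGB distance
  const avg = filled.reduce((s, c, i) => s + colorDistance(c, standard[i]), 0) / filled.length;
  return Math.round(100 * (1 - avg / maxD));     // 100 means a perfect match
}

console.log(craftingScore([[250, 10, 10]], [[255, 0, 0]])); // close to 100
```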
FIG. 5 is a schematic diagram of a terminal display interface for item making provided by an embodiment of the present disclosure, which takes the item to be made being a T-shirt as an example to illustrate the embodiments of the present disclosure and should not be understood as a specific limitation on them. As shown in FIG. 5, the preset drawing board area displays the T-shirt to be made, and the T-shirt can be filled with color based on the user's touch operations. While filling colors, the target user can zoom in on the T-shirt and move it to the middle of the preset drawing board area, making it easier to fill specific positions on the T-shirt with specific colors until the T-shirt is finished.
In an optional implementation, the live broadcast interaction method provided by the embodiments of the present disclosure may further include:
playing the virtual object's communication voice about the target made item, where the target made item is selected, according to a preset selection strategy, from the made items submitted by one, more, or all users in the live broadcast room of the virtual object; the preset selection strategy may include, but is not limited to: determining the highest-scored made item as the target made item, randomly selecting a made item as the target made item, or determining the earliest-submitted made item as the target made item; the communication voice may be preconfigured according to the actual scenario and is not specifically limited in the embodiments of the present disclosure; for example, the communication voice may be "Thank you all for the items you made, I like them very much; next I will show you a target made item";
displaying the target made item on the virtual object. For example, the way the virtual object displays the target made item may be determined according to the type of the made item: for clothing items, the target made item may be displayed by having the virtual object wear it, and for other non-clothing items, by having the virtual object hold it in its hand.
FIG. 6 is a schematic diagram of another terminal display interface for item making provided by an embodiment of the present disclosure, which takes the made item submitted by the user being a T-shirt as an example to illustrate the embodiments of the present disclosure and should not be understood as a specific limitation on them. As shown in the left part of FIG. 6, after the target user submits the finished T-shirt, the terminal can display the corresponding score. Based on the scores of the T-shirts submitted by the terminals, the server can determine the highest-scored T-shirt as the target T-shirt, then use image fusion technology to fuse the target T-shirt with the virtual object in the live video stream so that the virtual object appears to be wearing the target T-shirt, and send the fusion result to the terminals for display; the display effect on each terminal is shown in the right part of FIG. 6.
Through users making items during the live broadcast and the virtual object displaying the target made item, the embodiments of the present disclosure solve the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect, achieve a personalized live broadcast interaction effect, and let different users see differentiated display content during the live broadcast.
FIG. 7 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure, which further optimizes and extends the above technical solutions and can be combined with each of the above optional implementations. FIG. 7 illustrates the embodiments of the present disclosure by taking the target interaction type being item finding as an example, which should not be understood as a specific limitation on them.
As shown in FIG. 7, the live broadcast interaction method provided by the embodiment of the present disclosure may include:
S401: According to the live broadcast content displayed on the live broadcast interface, search multiple interaction types for a target interaction type matching the live broadcast content, where the target interaction type is item finding.
S402: Determine the interaction area on the live broadcast layer according to the target interaction type, where the interaction area is the region of the live broadcast layer covering the item to be found.
The item to be found may be an item of any type, determined specifically by the item-finding task given by the virtual object; for example, if the virtual object says "Welcome to the pot-finding game" during the live broadcast, the item to be found is a pot. Since factors such as the display position of the item to be found on the live broadcast interface and the shape of the item differ, the position and area of the interaction area on the live broadcast layer may depend on the actual situation; for example, the interaction area may be a rectangular region covering the item to be found.
S403: In response to a touch operation of the target user in the interaction area, determine the touch position of the target user.
S404: If the touch position of the target user matches the position of the item to be found in the interaction area, determine that the target user has successfully found the item to be found, and generate a dynamic effect indicating successful item finding.
For example, if the distance between the target user's touch position and the center position of the item to be found in the interaction area is smaller than a distance threshold (whose value may be determined adaptively), or the user's touch position is within the effective interaction area, the target user's touch position matches the position of the item to be found in the interaction area, and it can be determined that the target user has successfully found the item to be found; if the distance between the target user's touch position and the center position of the item to be found is greater than or equal to the distance threshold, or the user's touch position is not within the effective interaction area, the positions do not match, and it can be determined that the target user has not successfully found the item to be found.
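The matching rule above amounts to a simple hit test; a minimal sketch, with the rectangle coordinates and the 40-pixel threshold assumed purely for illustration, follows.
```typescript
// Sketch of the matching rule: a touch counts as finding the item if it falls
// inside the effective interaction area or is close enough to the item center.
interface Rect { x: number; y: number; width: number; height: number; }

function foundItem(touch: { x: number; y: number }, area: Rect, maxDist = 40): boolean {
  const cx = area.x + area.width / 2;   // center of the item to be found
  const cy = area.y + area.height / 2;
  const inside = touch.x >= area.x && touch.x <= area.x + area.width &&
                 touch.y >= area.y && touch.y <= area.y + area.height;
  return inside || Math.hypot(touch.x - cx, touch.y - cy) < maxDist;
}

const potArea: Rect = { x: 200, y: 300, width: 80, height: 60 };
console.log(foundItem({ x: 230, y: 320 }, potArea)); // true: show success effect
console.log(foundItem({ x: 10, y: 10 }, potArea));   // false: item not found
```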
Whether the user successfully finds the item to be found or not, a corresponding dynamic effect can be generated to prompt the target user with the current item-finding result.
S405: On the live broadcast layer, display the dynamic effect indicating successful item finding based on the position of the item to be found.
For example, when an item is found successfully, a dynamic image frame of a specific color may be superimposed on the found item via the live broadcast layer, indicating that the user has successfully found the item inside the frame; at the same time, for the found item, a dynamic effect of the item being alternately enlarged and shrunk may be displayed on the live broadcast layer based on the item's position. When the display time reaches the corresponding time threshold, the dynamic effect may stop being displayed.
Optionally, the live broadcast interaction method provided by the embodiments of the present disclosure may further include:
displaying, on the live broadcast layer, the type of the item to be found and the number of successfully found items of each type. The type of the item to be found is related to the item-finding task given by the virtual object; for example, if the item to be found given by the virtual object is the household item "pot", then during the live broadcast the type of the item to be found displayed on the live broadcast layer is "pot", and the number of successfully found items can be updated and displayed in real time according to the user's touch operations.
FIG. 8 is a schematic diagram of a terminal display interface for item finding provided by an embodiment of the present disclosure, which takes the item to be found being a pot as an example to illustrate the embodiments of the present disclosure and should not be understood as a specific limitation on them. As shown in FIG. 8, during the live broadcast, the target user can participate in item finding under the guidance of the virtual object; after it is determined, based on the target user's touch operation, that the target user has successfully found the pot, a red image frame (display frame) surrounding the pot can be displayed dynamically on the live broadcast layer based on the pot's display position.
In addition, the virtual object may also communicate verbally with the target user according to the target user's item-finding result; for example, the virtual object may say "Congratulations on finding xxx" or "Don't lose heart, keep looking for xxx", to make the live broadcast more interactive.
Since different users' item-finding results differ, the content displayed on different users' terminals during the live broadcast differs. The embodiments of the present disclosure solve the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect, achieve a personalized live broadcast interaction effect, and let different users see differentiated display content during the live broadcast.
FIG. 9 is a flowchart of another live broadcast interaction method provided by an embodiment of the present disclosure, which further optimizes and extends the above technical solutions and can be combined with each of the above optional implementations. FIG. 9 illustrates the embodiments of the present disclosure by taking the target interaction type being live image generation as an example, which should not be understood as a specific limitation on them.
As shown in FIG. 9, the live broadcast interaction method provided by the embodiment of the present disclosure may include:
S501: According to the live broadcast content displayed on the live broadcast interface, search multiple interaction types for a target interaction type matching the live broadcast content, where the target interaction type is live image generation.
S502: Determine the interaction area on the live broadcast layer according to the target interaction type, where the interaction area is the photographing operation area in the live broadcast layer.
For example, the photographing operation area may be the region of the live broadcast layer corresponding to the entire terminal screen, or a region corresponding to part of the terminal screen; the embodiments of the present disclosure do not specifically limit this.
S503: Display photographing prompt information on the live broadcast interface, where the photographing prompt information prompts the target user about the conditions that must be met to trigger photographing, and the conditions include at least one of the live broadcast background displayed on the live broadcast interface, the pose of the virtual object, and the expression of the virtual object.
For example, during the live broadcast, the photographing prompt information may be displayed on the live broadcast interface when the duration of the live broadcast reaches the trigger moment of the live image generation task (or photographing task), or when, based on the virtual object's live image generation requirements, it is detected that the virtual object has given the target user a live image generation task. For example, during the live broadcast the virtual object dances, performs, or strikes a preset pose while saying "Could you take a photo of me?"; based on the virtual object's behavior and speech, the terminal detects that the virtual object has given a live image generation task and displays the photographing prompt information on the live broadcast interface.
The live broadcast background may be used to describe information about the environment in which the virtual object is currently located, such as indoor or outdoor environment information; the pose and expression of the virtual object may be used to describe the virtual object's behavior and state and may be preset by the server.
FIG. 10 is a schematic diagram of a terminal display interface for live image generation provided by an embodiment of the present disclosure; the left part of FIG. 10 shows one way of displaying photographing prompt information, used to illustrate the embodiments of the present disclosure and not to be understood as a specific limitation on them. As shown in the left part of FIG. 10, "back-view photo" corresponds to a pose of the virtual object, "happy photo" corresponds to an expression of the virtual object, and "side-view photo at sunset" corresponds to a pose of the virtual object together with a live broadcast background; the target user needs to take photos of the virtual object that satisfy these three conditions. In addition, FIG. 10 does not specifically show the live broadcast background; in practical applications, the corresponding live broadcast background may be displayed according to the content of the live video stream.
S504: In response to a photographing trigger operation of the target user in the photographing operation area, determine the photo of the virtual object.
Following the progress of the live broadcast, when the target user determines that the live broadcast background, the pose of the virtual object, and the expression of the virtual object meet the conditions required by the photographing prompt information, the target user can take a photo of the virtual object through a touch operation in the interaction area.
In an optional implementation, during the process of determining the photo of the virtual object, the live broadcast interaction method provided by the embodiments of the present disclosure may further include:
in response to a photographing focal length adjustment operation of the target user in the photographing operation area, determining the photographing focal length, so as to determine the photo of the virtual object based on the photographing focal length. The embodiments of the present disclosure allow the target user to adjust the focal length while photographing the virtual object, for example by swiping left and right or up and down in the interaction area, thereby simulating a real shooting scenario and ensuring a clear photo of the virtual object.
FIG. 11 is a schematic diagram of another terminal display interface for live image generation provided by an embodiment of the present disclosure. As shown in FIG. 11, during the live broadcast, prompt information for focal length adjustment may also be displayed on the live broadcast interface to prompt the user to adjust the focal length.
S505: Determine the photographing evaluation result of the target user based on the photo of the virtual object.
For the live image generation scenario, the interaction information corresponding to the target user's touch operation in the interaction area is the target user's photographing evaluation result. The terminal may evaluate the photo the target user took of the virtual object based on a preset photographing evaluation strategy, determine the photographing evaluation result, and display it. Of course, the terminal may also send the photo the target user took of the virtual object to the server, which evaluates the photo according to the preset photographing evaluation strategy and sends the photographing evaluation result back to the terminal.
In an optional implementation, determining the photographing evaluation result of the target user based on the photo of the virtual object includes: comparing the photo information of the virtual object with standard photo information to determine the photographing evaluation result of the target user. Specifically, the smaller the difference between the photo information of the virtual object and the standard photo information, the higher the score of the photographing evaluation result; the larger the difference, the lower the score.
The photo information of the virtual object includes at least one of the photographing trigger moment, the live broadcast background, the pose of the virtual object, the expression of the virtual object, and the target item on the virtual object (which may be, for example, an accessory); the standard photo information includes at least one of a standard photographing trigger moment, a standard live broadcast background, a standard photographing pose of the virtual object, a standard photographing expression of the virtual object, and a standard target item on the virtual object. The standard photo information refers to the descriptive information corresponding to a standard photo taken of the virtual object.
Specifically, the photographing trigger moment refers to the time at which the target user's photographing trigger operation occurs during the live broadcast, and the standard photographing trigger moment refers to the ideal occurrence time of the target user's photographing trigger operation preset by the server; the smaller the time difference between the two, the higher the score of the target user's photographing evaluation result, and vice versa. Similarly, the smaller the difference between the live broadcast background shown in the photo of the virtual object and the standard live broadcast background, between the pose of the virtual object shown in the photo and the standard photographing pose of the virtual object, between the expression of the virtual object shown in the photo and the standard photographing expression of the virtual object, or between the target item on the virtual object shown in the photo and the standard target item on the virtual object, the higher the score of the target user's photographing evaluation result, and vice versa. In specific applications, the photo information participating in the photographing evaluation may be determined according to the information included in the photo of the virtual object.
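A hedged sketch of such a comparison follows: each matching field between the photo information and the standard photo information raises the score. The equal weights, the field set, and the time-gap normalization are assumptions for illustration only.
```typescript
// Illustrative comparison of photo information against standard photo
// information; weights and fields are assumed, not fixed by the disclosure.
interface PhotoInfo {
  triggerMs: number;   // photographing trigger moment
  background: string;  // live broadcast background
  pose: string;        // pose of the virtual object
  expression: string;  // expression of the virtual object
}

function photoScore(photo: PhotoInfo, standard: PhotoInfo, maxTimeGapMs = 2000): number {
  let score = 0;
  const gap = Math.abs(photo.triggerMs - standard.triggerMs);
  score += Math.max(0, 25 * (1 - gap / maxTimeGapMs));      // smaller gap, higher score
  if (photo.background === standard.background) score += 25;
  if (photo.pose === standard.pose) score += 25;
  if (photo.expression === standard.expression) score += 25;
  return Math.round(score);
}

const result = photoScore(
  { triggerMs: 61_200, background: "sunset", pose: "side", expression: "smile" },
  { triggerMs: 61_000, background: "sunset", pose: "side", expression: "smile" },
);
console.log(result >= 90 ? "Perfect" : result >= 70 ? "Great" : "Keep trying");
```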
S506: Display the photographing evaluation result at a preset position on the live broadcast layer.
The preset position for displaying the photographing evaluation result may be reasonably determined according to the interface display layout and is not specifically limited in the embodiments of the present disclosure. For example, the photographing evaluation result may be displayed in the region of the live broadcast layer corresponding to the upper half of the terminal screen; as shown in the right sub-figures of FIG. 10, the photographing evaluation results "Great" and "Perfect" are each displayed in the region of the live broadcast layer corresponding to the upper third of the terminal screen.
In addition, the server may also determine, according to the target user's photographing evaluation result, the reward the target user can obtain and synchronize the reward to the target user's account. The higher the target user's photographing evaluation result, or the higher the user ranks based on the photographing evaluation results, the greater the reward the target user can obtain, thereby increasing users' enthusiasm for participating in live broadcast interaction.
Optionally, before determining the photo of the virtual object in response to the target user's photographing trigger operation in the photographing operation area, the live broadcast interaction method provided by the embodiments of the present disclosure may further include:
determining the type information of the photos to be taken based on the photographing prompt information;
determining the total number of shots of the target user based on the type information of the photos to be taken, where the total number of shots is used to determine the target user's remaining number of shots in the process of determining the photos of the virtual object.
After determining the photo of the virtual object in response to the target user's photographing trigger operation in the photographing operation area, the live broadcast interaction method provided by the embodiments of the present disclosure may further include:
displaying the target user's remaining number of shots in the shot-count display area on the live broadcast layer. Each time the target user produces a photographing trigger operation, the remaining number of shots decreases by one.
Continuing with FIG. 10, based on the photographing prompt information it can be determined that the photos to be taken include three types, and accordingly the target user's total number of shots is no fewer than three (for example, at least two photos of each type may be required); FIG. 10 takes a total of three shots as an example. After the target user takes the back-view photo of the virtual object, the remaining number of shots displayed on the live broadcast layer is two; after the target user goes on to take the happy photo of the virtual object, the remaining number of shots displayed on the live broadcast layer is one. The remaining shot is used to take the side-view photo of the virtual object at sunset. In addition, it should be noted that after the photographing prompt information is displayed and before photos of the virtual object are taken, according to the live broadcast logic preset by the server, the virtual object is allowed to change clothes and switch between different live broadcast backgrounds (which can reflect the virtual object being in different environments), making the live broadcast interaction more interesting.
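A minimal sketch of the shot counting just walked through follows; assuming one shot per photo type is an illustrative simplification (the disclosure also allows multiple shots per type).
```typescript
// Minimal sketch: derive the total number of shots from the prompt's photo
// types (one shot per type assumed here) and decrement on each trigger.
class ShotCounter {
  private remaining: number;
  constructor(photoTypes: string[], shotsPerType = 1) {
    this.remaining = photoTypes.length * shotsPerType; // total number of shots
  }
  trigger(): number {
    if (this.remaining > 0) this.remaining -= 1; // one shot consumed
    return this.remaining; // shown in the shot-count display area on the layer
  }
}

const counter = new ShotCounter(["back view", "happy", "sunset side view"]);
console.log(counter.trigger()); // 2 remaining after the back-view photo
console.log(counter.trigger()); // 1 remaining after the happy photo
```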
In the embodiments of the present disclosure, the target user interacts with the virtual object by photographing it. Different users take photos of the virtual object with different information, so the photographing evaluation results displayed on the live broadcast layer differ; and different users photograph at different speeds, so the remaining numbers of shots displayed on the live broadcast layer also differ. The embodiments of the present disclosure solve the problem that existing live broadcast solutions cannot achieve a personalized live broadcast interaction effect, achieve a personalized live broadcast interaction effect, and let different users see differentiated display content during the live broadcast.
FIG. 12 is a schematic structural diagram of a live broadcast interaction apparatus provided by an embodiment of the present disclosure. The apparatus may be implemented in software and/or hardware and integrated on an electronic device with computing capability, such as a user terminal like a mobile phone, a tablet computer, or a desktop computer.
As shown in FIG. 12, the live broadcast interaction apparatus 600 provided by the embodiment of the present disclosure may include an interaction area determination module 601, an interaction information determination module 602, and an interaction information display module 603, where:
the interaction area determination module 601 is configured to determine an interaction area on a live broadcast layer, where the live broadcast layer is superimposed on the live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on the target user's terminal, and the interaction area keeps frame synchronization with the live broadcast interface;
the interaction information determination module 602 is configured to determine, in response to a touch operation of the target user in the interaction area, interaction information corresponding to the touch operation;
the interaction information display module 603 is configured to display the interaction information at a preset position on the live broadcast layer.
Optionally, the interaction area determination module 601 includes:
a target interaction type determination unit configured to search, according to the live broadcast content displayed on the live broadcast interface, multiple interaction types for a target interaction type matching the live broadcast content;
an interaction area determination unit configured to determine the interaction area on the live broadcast layer according to the target interaction type;
where the multiple interaction types include limb interaction, item making, item finding, and live image generation.
Optionally, the live broadcast interaction apparatus 600 provided by the embodiment of the present disclosure further includes:
a shared information display module configured to display, on the live broadcast interface, shared information corresponding to the target interaction type, where the shared information is obtained based on the touch results corresponding to the touch operations of users in the live broadcast room of the virtual object.
Optionally, if the target interaction type is limb interaction, the interaction area determination unit is specifically configured to:
determine the interaction area on the live broadcast layer according to the action and the position of the target body part of the virtual object on the live broadcast interface.
Optionally, the live broadcast interaction apparatus 600 provided by the embodiment of the present disclosure further includes:
an action prompt information display module configured to determine prompt information matching the action of the virtual object's target body part and display the prompt information on the live broadcast interface, where the prompt information prompts the target user to perform, in the interaction area, a touch operation matching the action of the virtual object's target body part.
Optionally, the interaction information includes a feedback dynamic effect matching the action of the virtual object's target body part.
Optionally, if the target interaction type is item making, the interaction area is a preset drawing board area in which the item to be made is displayed;
the interaction information determination module 602 is specifically configured to:
determine, in response to a touch operation of the target user in the preset drawing board area, the color selected by the target user for filling the item to be made;
the interaction information display module 603 includes:
a fill position determination unit configured to determine, in response to a touch operation of the target user on the item to be made, the fill position selected by the target user on the live broadcast layer;
a color display unit configured to display the color selected by the target user at the fill position on the live broadcast layer.
Optionally, the live broadcast interaction apparatus 600 provided by the embodiment of the present disclosure further includes:
a communication voice playing module configured to play the virtual object's communication voice about the target made item, where the target made item is selected, according to a preset selection strategy, from the made items submitted by users in the live broadcast room of the virtual object;
an item display module configured to display the target made item on the virtual object.
Optionally, if the target interaction type is item finding, the interaction area is the region of the live broadcast layer covering the item to be found;
the interaction information determination module 602 includes:
a touch position determination unit configured to determine, in response to a touch operation of the target user in the interaction area, the touch position of the target user;
an item finding success determination unit configured to determine, if the touch position of the target user matches the position of the item to be found in the interaction area, that the target user has successfully found the item to be found, and generate a dynamic effect indicating successful item finding;
the interaction information display module 603 is specifically configured to:
display, on the live broadcast layer, the dynamic effect indicating successful item finding based on the position of the item to be found.
Optionally, the live broadcast interaction apparatus 600 provided by the embodiment of the present disclosure further includes:
an item information display module configured to display, on the live broadcast layer, the type of the item to be found and the number of successfully found items of each type.
Optionally, if the target interaction type is live image generation, the interaction area is the photographing operation area in the live broadcast layer;
the interaction information determination module 602 includes:
a photographing prompt information display unit configured to display photographing prompt information on the live broadcast interface, where the photographing prompt information prompts the target user about the conditions that must be met to trigger photographing, and the conditions include at least one of the live broadcast background displayed on the live broadcast interface, the pose of the virtual object, and the expression of the virtual object;
a photo determination unit configured to determine the photo of the virtual object in response to a photographing trigger operation of the target user in the photographing operation area;
a photographing evaluation result determination unit configured to determine the photographing evaluation result of the target user based on the photo of the virtual object;
the interaction information display module 603 is specifically configured to:
display the photographing evaluation result at a preset position on the live broadcast layer.
Optionally, the live broadcast interaction apparatus 600 provided by the embodiment of the present disclosure further includes:
a photographing focal length determination module configured to determine, in response to a photographing focal length adjustment operation of the target user in the photographing operation area, the photographing focal length, so as to determine the photo of the virtual object based on the photographing focal length.
Optionally, the photographing evaluation result determination unit is specifically configured to:
compare the photo information of the virtual object with standard photo information to determine the photographing evaluation result of the target user.
Optionally, the photo information of the virtual object includes at least one of the photographing trigger moment, the live broadcast background, the pose of the virtual object, the expression of the virtual object, and the target item on the virtual object;
the standard photo information includes at least one of a standard photographing trigger moment, a standard live broadcast background, a standard photographing pose of the virtual object, a standard photographing expression of the virtual object, and a standard target item on the virtual object.
Optionally, the live broadcast interaction apparatus 600 provided by the embodiment of the present disclosure further includes:
a photo type information determination module configured to determine the type information of the photos to be taken based on the photographing prompt information;
a shot count determination module configured to determine the total number of shots of the target user based on the type information of the photos to be taken, where the total number of shots is used to determine the target user's remaining number of shots in the process of determining the photos of the virtual object;
a remaining shot count display module configured to display the target user's remaining number of shots in the shot-count display area on the live broadcast layer.
The live broadcast interaction apparatus provided by the embodiments of the present disclosure can execute any live broadcast interaction method provided by the embodiments of the present disclosure and has the functional modules and beneficial effects corresponding to the executed method. For content not described in detail in the apparatus embodiments of the present disclosure, reference may be made to the description in any method embodiment of the present disclosure.
FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure, used to exemplarily describe an electronic device that implements the live broadcast interaction method provided by the embodiments of the present disclosure. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as stationary terminals such as digital TVs, desktop computers, smart home devices, wearable electronic devices, and servers. The electronic device shown in FIG. 13 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 13, the electronic device 700 includes one or more processors 701 and a memory 702.
The processor 701 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 700 to perform desired functions.
The memory 702 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory. Non-volatile memory may include, for example, read-only memory (ROM), hard disks, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 701 may execute the program instructions to implement the live broadcast interaction method provided by the embodiments of the present disclosure and other desired functions. Various contents such as input signals, signal components, and noise components may also be stored in the computer-readable storage medium.
The live broadcast interaction method provided by the embodiments of the present disclosure is applied to a terminal entering a live broadcast room of a virtual object and includes: determining an interaction area on a live broadcast layer, where the live broadcast layer is superimposed on the live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on the target user's terminal, and the interaction area keeps frame synchronization with the live broadcast interface; in response to a touch operation of the target user in the interaction area, determining interaction information corresponding to the touch operation; and displaying the interaction information at a preset position on the live broadcast layer. It should be understood that the electronic device 700 may also perform other optional implementations provided by the method embodiments of the present disclosure.
In one example, the electronic device 700 may further include an input apparatus 703 and an output apparatus 704, which are interconnected through a bus system and/or another form of connection mechanism (not shown).
In addition, the input apparatus 703 may include, for example, a keyboard, a mouse, and the like.
The output apparatus 704 may output various information to the outside, including determined distance information, direction information, and the like, and may include, for example, a display, speakers, a printer, a communication network and the remote output devices connected to it, and the like.
Of course, for simplicity, FIG. 13 shows only some of the components in the electronic device 700 that are relevant to the present disclosure, omitting components such as buses and input/output interfaces. In addition, the electronic device 700 may further include any other appropriate components according to the specific application.
In addition to the above methods and devices, embodiments of the present disclosure may also be a computer program product, which includes a computer program or computer program instructions that, when executed by a processor, cause a computing device to implement any live broadcast interaction method provided by the embodiments of the present disclosure.
The computer program product may write program code for performing the operations of the embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages, such as Java and C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages. The program code may execute entirely on the user's electronic device, partly on the user's electronic device, as a stand-alone software package, partly on the user's electronic device and partly on a remote electronic device, or entirely on a remote electronic device.
Furthermore, embodiments of the present disclosure may also provide a computer-readable storage medium on which computer program instructions are stored; the computer program instructions, when executed by a processor, cause a computing device to implement any live broadcast interaction method provided by the embodiments of the present disclosure.
The live broadcast interaction method provided by the embodiments of the present disclosure is applied to a terminal entering a live broadcast room of a virtual object and includes: determining an interaction area on a live broadcast layer, where the live broadcast layer is superimposed on the live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on the target user's terminal, and the interaction area keeps frame synchronization with the live broadcast interface; in response to a touch operation of the target user in the interaction area, determining interaction information corresponding to the touch operation; and displaying the interaction information at a preset position on the live broadcast layer. It should be understood that, when executed by a processor, the computer program instructions may also cause the computing device to implement other optional implementations provided by the method embodiments of the present disclosure.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above are only specific implementations of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure will not be limited to the embodiments described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (18)

  1. A live broadcast interaction method, applied to a terminal entering a live broadcast room of a virtual object, comprising:
    determining an interaction area on a live broadcast layer, wherein the live broadcast layer is superimposed on a live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on a target user's terminal, and the interaction area keeps frame synchronization with the live broadcast interface;
    in response to a touch operation of the target user in the interaction area, determining interaction information corresponding to the touch operation; and
    displaying the interaction information at a preset position on the live broadcast layer.
  2. The method according to claim 1, wherein the determining an interaction area on a live broadcast layer comprises:
    searching, according to the live broadcast content displayed on the live broadcast interface, multiple interaction types for a target interaction type matching the live broadcast content; and
    determining the interaction area on the live broadcast layer according to the target interaction type;
    wherein the multiple interaction types comprise limb interaction, item making, item finding, and live image generation.
  3. The method according to claim 2, further comprising:
    displaying, on the live broadcast interface, shared information corresponding to the target interaction type, wherein the shared information is obtained based on touch results corresponding to touch operations of users in the live broadcast room of the virtual object.
  4. The method according to claim 2, wherein, if the target interaction type is the limb interaction, the determining the interaction area on the live broadcast layer according to the target interaction type comprises:
    determining the interaction area on the live broadcast layer according to an action of a target body part of the virtual object on the live broadcast interface and a position of the target body part.
  5. The method according to claim 4, further comprising:
    determining prompt information matching the action of the target body part of the virtual object, and displaying the prompt information on the live broadcast interface, wherein the prompt information is used to prompt the target user to perform, in the interaction area, a touch operation matching the action of the target body part of the virtual object.
  6. The method according to claim 4 or 5, wherein the interaction information comprises a feedback dynamic effect matching the action of the target body part of the virtual object.
  7. The method according to claim 2, wherein, if the target interaction type is the item making, the interaction area is a preset drawing board area in which an item to be made is displayed;
    the determining, in response to a touch operation of the target user in the interaction area, interaction information corresponding to the touch operation comprises:
    determining, in response to a touch operation of the target user in the preset drawing board area, a color selected by the target user for filling the item to be made; and
    the displaying the interaction information at a preset position on the live broadcast layer comprises:
    determining, in response to a touch operation of the target user on the item to be made, a fill position selected by the target user on the live broadcast layer; and
    displaying the color selected by the target user at the fill position on the live broadcast layer.
  8. The method according to claim 7, further comprising:
    playing a communication voice of the virtual object about a target made item, wherein the target made item is selected, according to a preset selection strategy, from made items submitted by users in the live broadcast room of the virtual object; and
    displaying the target made item on the virtual object.
  9. The method according to claim 2, wherein, if the target interaction type is the item finding, the interaction area is a region of the live broadcast layer covering an item to be found;
    the determining, in response to a touch operation of the target user in the interaction area, interaction information corresponding to the touch operation comprises:
    determining, in response to a touch operation of the target user in the interaction area, a touch position of the target user; and
    if the touch position of the target user matches the position of the item to be found in the interaction area, determining that the target user has successfully found the item to be found, and generating a dynamic effect indicating successful item finding; and
    the displaying the interaction information at a preset position on the live broadcast layer comprises:
    displaying, on the live broadcast layer, the dynamic effect indicating successful item finding based on the position of the item to be found.
  10. The method according to claim 9, further comprising:
    displaying, on the live broadcast layer, the type of the item to be found and the number of successfully found items of each type.
  11. The method according to claim 2, wherein, if the target interaction type is the live image generation, the interaction area is a photographing operation area in the live broadcast layer;
    the determining, in response to a touch operation of the target user in the interaction area, interaction information corresponding to the touch operation comprises:
    displaying photographing prompt information on the live broadcast interface, wherein the photographing prompt information is used to prompt the target user about conditions that need to be met to trigger photographing, and the conditions comprise at least one of a live broadcast background displayed on the live broadcast interface, a pose of the virtual object, and an expression of the virtual object;
    determining, in response to a photographing trigger operation of the target user in the photographing operation area, a photo of the virtual object; and
    determining a photographing evaluation result of the target user based on the photo of the virtual object; and
    the displaying the interaction information at a preset position on the live broadcast layer comprises:
    displaying the photographing evaluation result at a preset position on the live broadcast layer.
  12. The method according to claim 11, wherein, in the process of determining the photo of the virtual object, the method further comprises:
    determining, in response to a photographing focal length adjustment operation of the target user in the photographing operation area, a photographing focal length, so as to determine the photo of the virtual object based on the photographing focal length.
  13. The method according to claim 11, wherein the determining a photographing evaluation result of the target user based on the photo of the virtual object comprises:
    comparing photo information of the virtual object with standard photo information to determine the photographing evaluation result of the target user.
  14. The method according to claim 13, wherein the photo information of the virtual object comprises at least one of the photographing trigger moment, the live broadcast background, the pose of the virtual object, the expression of the virtual object, and a target item on the virtual object; and
    the standard photo information comprises at least one of a standard photographing trigger moment, a standard live broadcast background, a standard photographing pose of the virtual object, a standard photographing expression of the virtual object, and a standard target item on the virtual object.
  15. The method according to claim 11, further comprising:
    determining type information of photos to be taken based on the photographing prompt information; and
    determining a total number of shots of the target user based on the type information of the photos to be taken, wherein the total number of shots is used to determine a remaining number of shots of the target user in the process of determining the photos of the virtual object;
    the method further comprising: displaying the remaining number of shots of the target user in a shot-count display area on the live broadcast layer.
  16. A live broadcast interaction apparatus, configured on a terminal entering a live broadcast room of a virtual object, comprising:
    an interaction area determination module configured to determine an interaction area on a live broadcast layer, wherein the live broadcast layer is superimposed on a live broadcast interface of the virtual object, the live broadcast interface refers to the interface displayed synchronously on each terminal entering the live broadcast room of the virtual object, the content displayed on the live broadcast layer and the live broadcast content displayed on the live broadcast interface together constitute the content displayed on a target user's terminal, and the interaction area keeps frame synchronization with the live broadcast interface;
    an interaction information determination module configured to determine, in response to a touch operation of the target user in the interaction area, interaction information corresponding to the touch operation; and
    an interaction information display module configured to display the interaction information at a preset position on the live broadcast layer.
  17. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the processor performs the live broadcast interaction method according to any one of claims 1 to 15.
  18. A computer-readable storage medium storing a computer program that, when executed by a computing device, causes the computing device to implement the live broadcast interaction method according to any one of claims 1 to 15.
PCT/CN2022/076542 2021-04-07 2022-02-17 Live broadcast interaction method and apparatus, electronic device, and storage medium WO2022213727A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110370982.3 2021-04-07
CN202110370982.3A CN113115061B (zh) 2021-04-07 2021-07-13 Live broadcast interaction method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022213727A1 true WO2022213727A1 (zh) 2022-10-13

Family

ID=76714563

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/076542 WO2022213727A1 (zh) 2021-04-07 2022-02-17 直播交互方法、装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN113115061B (zh)
WO (1) WO2022213727A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937430A (zh) * 2022-12-21 2023-04-07 北京百度网讯科技有限公司 Method, apparatus, device, and medium for displaying a virtual object
CN116030191A (zh) * 2022-12-21 2023-04-28 北京百度网讯科技有限公司 Method, apparatus, device, and medium for displaying a virtual object
CN116843800A (zh) * 2023-08-29 2023-10-03 深圳有咖互动科技有限公司 Animation information sending method and apparatus, electronic device, and computer-readable medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115061B (zh) 2021-04-07 2023-03-10 北京字跳网络技术有限公司 Live broadcast interaction method and apparatus, electronic device, and storage medium
CN113596496A (zh) 2021-07-28 2021-11-02 广州博冠信息科技有限公司 Interaction control method and apparatus for a virtual live broadcast room, medium, and electronic device
CN115278285B (zh) 2022-07-26 2024-01-30 北京字跳网络技术有限公司 Method and apparatus for displaying a live broadcast picture, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020221186A1 (zh) * 2019-04-30 2020-11-05 广州虎牙信息科技有限公司 Virtual avatar control method and apparatus, electronic device, and storage medium
CN112135154A (zh) * 2020-09-08 2020-12-25 网易(杭州)网络有限公司 Live broadcast room interaction method, electronic device, and storage medium
CN112330819A (zh) * 2020-11-04 2021-02-05 腾讯科技(深圳)有限公司 Virtual item-based interaction method and apparatus, and storage medium
CN112601100A (zh) * 2020-12-11 2021-04-02 北京字跳网络技术有限公司 Live broadcast interaction method, apparatus, device, and medium
CN112616063A (zh) * 2020-12-11 2021-04-06 北京字跳网络技术有限公司 Live broadcast interaction method, apparatus, device, and medium
CN113115061A (zh) * 2021-04-07 2021-07-13 北京字跳网络技术有限公司 Live broadcast interaction method and apparatus, electronic device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109618177B (zh) * 2018-12-26 2020-02-28 北京微播视界科技有限公司 Video processing method and apparatus, electronic device, and computer-readable storage medium
CN110634483B (zh) * 2019-09-03 2021-06-18 北京达佳互联信息技术有限公司 Human-computer interaction method and apparatus, electronic device, and storage medium
CN110850983B (zh) * 2019-11-13 2020-11-24 腾讯科技(深圳)有限公司 Virtual object control method and apparatus in live video streaming, and storage medium
CN112135160A (zh) * 2020-09-24 2020-12-25 广州博冠信息科技有限公司 Method and apparatus for controlling a virtual object in live streaming, storage medium, and electronic device
CN112533017B (zh) * 2020-12-01 2023-04-11 广州繁星互娱信息科技有限公司 Live streaming method and apparatus, terminal, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020221186A1 (zh) * 2019-04-30 2020-11-05 广州虎牙信息科技有限公司 Virtual avatar control method and apparatus, electronic device, and storage medium
CN112135154A (zh) * 2020-09-08 2020-12-25 网易(杭州)网络有限公司 Live broadcast room interaction method, electronic device, and storage medium
CN112330819A (zh) * 2020-11-04 2021-02-05 腾讯科技(深圳)有限公司 Virtual item-based interaction method and apparatus, and storage medium
CN112601100A (zh) * 2020-12-11 2021-04-02 北京字跳网络技术有限公司 Live broadcast interaction method, apparatus, device, and medium
CN112616063A (zh) * 2020-12-11 2021-04-06 北京字跳网络技术有限公司 Live broadcast interaction method, apparatus, device, and medium
CN113115061A (zh) * 2021-04-07 2021-07-13 北京字跳网络技术有限公司 Live broadcast interaction method and apparatus, electronic device, and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937430A (zh) * 2022-12-21 2023-04-07 北京百度网讯科技有限公司 Method, apparatus, device, and medium for displaying a virtual object
CN116030191A (zh) * 2022-12-21 2023-04-28 北京百度网讯科技有限公司 Method, apparatus, device, and medium for displaying a virtual object
CN115937430B (zh) * 2022-12-21 2023-10-10 北京百度网讯科技有限公司 Method, apparatus, device, and medium for displaying a virtual object
CN116030191B (zh) * 2022-12-21 2023-11-10 北京百度网讯科技有限公司 Method, apparatus, device, and medium for displaying a virtual object
CN116843800A (zh) * 2023-08-29 2023-10-03 深圳有咖互动科技有限公司 Animation information sending method and apparatus, electronic device, and computer-readable medium
CN116843800B (zh) * 2023-08-29 2023-11-24 深圳有咖互动科技有限公司 Animation information sending method and apparatus, electronic device, and computer-readable medium

Also Published As

Publication number Publication date
CN113115061B (zh) 2023-03-10
CN113115061A (zh) 2021-07-13

Similar Documents

Publication Publication Date Title
WO2022213727A1 (zh) Live broadcast interaction method and apparatus, electronic device, and storage medium
US10511833B2 (en) Controls and interfaces for user interactions in virtual spaces
EP3306444A1 (en) Controls and interfaces for user interactions in virtual spaces using gaze tracking
US20180096507A1 (en) Controls and Interfaces for User Interactions in Virtual Spaces
US20180095636A1 (en) Controls and Interfaces for User Interactions in Virtual Spaces
US10545339B2 (en) Information processing method and information processing system
TWI775134B (zh) Interaction method, apparatus, device, and recording medium
JP6941245B1 (ja) Information processing system, information processing method, and computer program
WO2022151882A1 (zh) Virtual reality device
JP7301263B1 (ja) Information processing system, information processing method, and computer program
US20230334790A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
US20230334792A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
JP7253597B1 (ja) Information processing system, information processing method, and computer program
JP7333564B2 (ja) Information processing system, information processing method, and computer program
US20230334791A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth
JP7455300B2 (ja) Information processing system, information processing method, and computer program
JP7253025B1 (ja) Information processing system, information processing method, and computer program
US11704854B2 (en) Information processing system, information processing method, and computer program
WO2023215637A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
WO2024039887A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
WO2024039885A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
WO2023205145A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth
Moore MAKING THINGS PERFECTLY SKETCH
JP2023098893A (ja) Information processing system, information processing method, and computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22783798

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/02/2024)