WO2022121593A1 - Live broadcast interaction method, apparatus, electronic device and storage medium - Google Patents

Live broadcast interaction method, apparatus, electronic device and storage medium

Info

Publication number
WO2022121593A1
Authority
WO
WIPO (PCT)
Prior art keywords
target virtual
resource
virtual resource
dynamic effect
live broadcast
Prior art date
Application number
PCT/CN2021/129238
Other languages
English (en)
French (fr)
Inventor
王骁玮
Original Assignee
北京字跳网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2022121593A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration

Definitions

  • the present disclosure relates to the technical field of Internet live broadcast, and in particular, to a live broadcast interaction method, apparatus, electronic device and storage medium.
  • embodiments of the present disclosure provide a live broadcast interaction method, apparatus, electronic device, and storage medium.
  • an embodiment of the present disclosure provides a live broadcast interaction method, which is applied to a terminal entering a virtual object live broadcast room, including:
  • in response to a triggering operation for at least one virtual resource in an interactive area of the live broadcast interface, at least one first target virtual resource is determined; wherein each of the virtual resources is configured with resource data required for triggering;
  • within a preset time of the live broadcast, a dynamic effect corresponding to at least one second target virtual resource is displayed on the virtual object; wherein the preset time is related to the second target virtual resource, and the second target virtual resource is determined based on the first target virtual resources triggered by a plurality of the terminals.
  • the embodiments of the present disclosure also provide a live broadcast interaction method, which is applied to the server, including:
  • the preset time information is used to determine the time to display the dynamic effect corresponding to the at least one second target virtual resource on the virtual object;
  • an embodiment of the present disclosure further provides a live broadcast interaction device, which is configured for a terminal entering a virtual object live broadcast room, including:
  • a first target virtual resource determination module configured to determine at least one first target virtual resource in response to a triggering operation for at least one virtual resource in the interactive area in the live broadcast interface; wherein each of the virtual resources is configured with resource data required for triggering;
  • the dynamic effect display module is used to display the dynamic effect corresponding to at least one second target virtual resource on the virtual object within a preset time of the live broadcast; wherein, the preset time is related to the second target virtual resource,
  • the second target virtual resource is determined based on the first target virtual resource triggered by a plurality of the terminals.
  • an embodiment of the present disclosure further provides a live interactive device, which is configured on a server and includes:
  • a second target virtual resource determining module configured to determine at least one second target virtual resource according to at least one first target virtual resource triggered by multiple terminals
  • a preset time information determination module configured to determine preset time information of the live broadcast according to the at least one second target virtual resource; wherein the preset time information is used to determine the time to display, on the virtual object, the dynamic effect corresponding to the at least one second target virtual resource;
  • a data sending module configured to send the preset time information, the at least one second target virtual resource and the live video stream to the multiple terminals.
  • an embodiment of the present disclosure further provides an electronic device, including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the electronic device implements any of the live broadcast interaction methods provided by the embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, the computing device is enabled to implement any of the live broadcast interaction methods provided by the embodiments of the present disclosure.
  • based on the resource data held, the user can generate a trigger operation for at least one virtual resource in the interactive area of the live broadcast interface displayed by the terminal; the terminal determines at least one first target virtual resource in response to the trigger operation; the server then determines at least one second target virtual resource from the at least one first target virtual resource; after that, within the preset time of the live broadcast, the terminal can display the dynamic effect corresponding to the at least one second target virtual resource on the virtual object in the live broadcast room, which realizes the transformation of the resource data held by the user into a specific dynamic effect displayed in the live broadcast interface.
  • in this way, the resource data changes the display state of the virtual object, which enriches the interactive ways of live broadcast in the virtual live broadcast scene and makes the live broadcast more interesting.
  • FIG. 1 is a flowchart of a live interaction method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of an interactive area displaying multiple virtual resources according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a live interface for displaying a dynamic effect corresponding to a second target virtual resource on a virtual object according to an embodiment of the present disclosure
  • FIG. 4 is another schematic diagram of a live interface for displaying a dynamic effect corresponding to a second target virtual resource on a virtual object according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of another interactive area displaying multiple virtual resources according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a resource receiving area provided by an embodiment of the present disclosure.
  • FIG. 7 is a flowchart of another live interaction method provided by an embodiment of the present disclosure.
  • FIG. 8 is a flowchart of another live interaction method provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of a live interactive device according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of another live broadcast interaction apparatus provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • the live broadcast interaction method may be executed by a live broadcast interaction apparatus configured on a terminal, which may be implemented by software and/or hardware, and the terminal may include any terminal entering the virtual object live broadcast room.
  • the live interaction method provided by the embodiment of the present disclosure may include:
  • in response to a triggering operation for at least one virtual resource in the interactive area, determine at least one first target virtual resource; wherein each virtual resource is configured with resource data required for triggering.
  • the terminal may display the interactive area superimposed on the live broadcast interface in response to a touch operation on a control having an interface calling function on the live broadcast interface.
  • Each virtual resource displayed in the interactive area corresponds to a dynamic effect, which can be used to change the display state of virtual objects in the live broadcast interface.
  • the virtual resources may be displayed in the interactive area by means of image logos of preset shapes, for example, multiple virtual resources displayed in the interactive area may be logos of different special effects.
  • the interactive area supports the user's touch operation, such as clicking, long pressing, etc., so that the user can generate a trigger operation for at least one virtual resource.
  • each virtual resource is configured with resource data required for triggering, that is, corresponding resource data needs to be consumed for triggering operations on the virtual resource.
  • the resource data may be user points, virtual currency, or user level. For example, each time a user triggers (i.e., selects) a virtual resource in the interactive area, a certain amount of user points or virtual currency needs to be consumed.
  • the server receives the first target virtual resources triggered by the terminals, counts the number of times each first target virtual resource has been triggered, and takes the first target virtual resource with the largest count, the first target virtual resources ranked within a first preset ranking, or the first target virtual resources whose counts exceed a preset threshold as the second target virtual resource(s).
  • for example, the server receives first target virtual resources A and D triggered by terminal A, first target virtual resources A and B triggered by terminal B, first target virtual resources A and B triggered by terminal C, and first target virtual resources A and C triggered by terminal D.
  • after counting, the number of triggers of first target virtual resource A is 4, that of B is 2, and those of C and D are both 1. At this point, A, with the largest number of triggers, can be selected as the second target virtual resource; or A and B, whose trigger counts rank in the top 2, can each be determined as second target virtual resources; or A, whose trigger count exceeds the preset threshold of 2, can be determined as the second target virtual resource.
  • the terminal may also send resource data corresponding to each first target virtual resource to the server, so that the server uses the resource data as a screening condition for determining the second target virtual resource.
  • the server may determine the second target virtual resource from the plurality of first target virtual resources based on any one or several types of resource data, in combination with the number of times each first target virtual resource has been triggered by the terminals. Taking user points as the resource data, for example, the server may consider both the points and the trigger count corresponding to each first target virtual resource, and determine a first target virtual resource whose points exceed a points threshold and whose trigger count exceeds a preset threshold as the second target virtual resource.
  • the value of each threshold can be set flexibly.
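  • The selection logic described above can be viewed as a small vote-aggregation step on the server. The following is a minimal sketch, assuming simplified report and threshold structures that are illustrative only and not part of the interfaces defined in this disclosure:

```python
from collections import defaultdict

def select_second_target_resources(reports, count_threshold=2, points_threshold=0, top_n=None):
    """Aggregate the first target virtual resources reported by terminals and pick
    the second target virtual resource(s).

    reports: iterable of (resource_id, points) pairs, one per trigger operation.
    Returns resource ids that clear both thresholds, ranked by trigger count.
    """
    counts = defaultdict(int)
    points = defaultdict(int)
    for resource_id, pts in reports:
        counts[resource_id] += 1     # number of triggers across all terminals
        points[resource_id] += pts   # accumulated resource data (e.g. user points)

    candidates = [r for r in counts
                  if counts[r] > count_threshold and points[r] >= points_threshold]
    candidates.sort(key=lambda r: counts[r], reverse=True)
    return candidates[:top_n] if top_n else candidates

# Matches the example above: A triggered 4 times, B twice, C and D once each,
# with a trigger-count threshold of 2, so only A is selected.
reports = [("A", 10), ("D", 10), ("A", 10), ("B", 10),
           ("A", 10), ("B", 10), ("A", 10), ("C", 10)]
print(select_second_target_resources(reports, count_threshold=2))  # ['A']
```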
  • the terminal may determine the resource data provided by the user according to the duration of the user's touch on the virtual resource in the interactive area; for example, the longer the touch, the more resource data the user provides. The more resource data the user provides, the higher the probability that the dynamic effect corresponding to the virtual resource selected by the user is displayed on the live broadcast interface. Dynamically controlling the resource data provided by the user according to the touch duration helps to make the interaction more engaging.
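  • A minimal sketch of such a touch-duration mapping is shown below; the rate and cap values are illustrative assumptions, since the disclosure does not specify concrete amounts:

```python
def points_from_touch(duration_s: float, rate_per_second: int = 10, cap: int = 100) -> int:
    """Map the duration of a press on a virtual resource to the resource data
    (user points) the user provides: longer presses provide more points, up to a cap."""
    return min(cap, int(duration_s * rate_per_second))

print(points_from_touch(0.3))  # 3 points for a quick tap
print(points_from_touch(2.5))  # 25 points for a long press
```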
  • after the terminal detects that the user has completed the trigger operation for the virtual resource (for example, the user's fingertip leaves the screen), the terminal can hide the interactive area; it can also hide the interactive area according to an exit operation triggered by the user on the interactive area (for example, touching an exit control in the interactive area, or tapping a preset area of the live broadcast interface to trigger an exit request), so as to prevent the interactive area from continuously blocking the live broadcast interface.
  • the terminal may display a dynamic effect corresponding to at least one second target virtual resource on the virtual object.
  • the terminal may display a dynamic effect corresponding to at least one second target virtual resource on the virtual object at a preset time of the live broadcast.
  • the preset time of the live broadcast is determined based on attribute features corresponding to the second target virtual resource. The attribute features may include, but are not limited to, the resource ID or resource identifier of the second target virtual resource (used to uniquely identify the second target virtual resource), the display duration of the corresponding dynamic effect, the type of the virtual resource, the popularity of the virtual resource among users, the display frequency of the dynamic effect within a preset historical time, and other information.
  • the relationship between the attribute features and the display duration of the dynamic effect corresponding to the second target virtual resource can be predefined. For example, it can be preset that the higher the popularity of a virtual resource, the longer the display duration of its corresponding dynamic effect, or that the display frequency of a virtual resource's dynamic effect within the preset historical time determines the display duration of the dynamic effect, so as to dynamically determine the preset time of the live broadcast.
  • the preset time refers to the duration of displaying the dynamic effect corresponding to the at least one second target virtual resource on the virtual object.
  • the terminal may take the reception time of the multimedia data (including audio data and video data) corresponding to the second target virtual resource as the time starting point, and determine the time end point based on the display duration of the dynamic effect corresponding to the second target virtual resource.
  • within this time segment, the virtual object and the dynamic effect corresponding to the second target virtual resource are displayed in superposition.
  • a duration period may be determined as the preset time of the live broadcast based on the receiving time of the multimedia data corresponding to each second virtual resource and the display duration corresponding to each second virtual resource.
  • for example, the multimedia data corresponding to second virtual resource a, whose dynamic effect is displayed first, is received at time t1, the display duration of the corresponding dynamic effect is x1, and the display end time of the dynamic effect corresponding to second virtual resource a is t2, obtained by adding x1 to t1. The multimedia data corresponding to second virtual resource b, whose dynamic effect is displayed next, is received at time t3, its display duration is x2, and its display end time is t4, obtained by adding x2 to t3; the duration between t1 and t4 can be used as the preset time of the live broadcast. If the dynamic effects corresponding to second virtual resource a and second virtual resource b support simultaneous superimposed display on the virtual object, the multimedia data corresponding to both are received at time t1, a superimposed display duration x3 is determined based on their respective display durations, and the display end time of the two dynamic effects is t5, obtained by adding x3 to t1; the duration between t1 and t5 may be used as the preset time of the live broadcast.
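  • A minimal sketch of this window computation is given below, covering both the sequential and the superimposed cases; it assumes the superimposed display duration is taken as the maximum of the individual durations, which is only one of the options described in this disclosure:

```python
def preset_time_window(effects, superimposed=False):
    """Compute the preset time window for a list of dynamic effects.

    effects: list of (receive_time, display_duration) pairs, one per second
             target virtual resource, e.g. [(t1, x1), (t3, x2)].
    Returns (window_start, window_end).
    """
    if not effects:
        raise ValueError("at least one dynamic effect is required")

    start = effects[0][0]
    if superimposed:
        # All multimedia data arrive together; the superimposed display duration
        # is derived from the individual durations (here: their maximum).
        return start, start + max(duration for _, duration in effects)

    # Sequential display: the window runs from the first receive time to the
    # latest end time (receive time plus display duration) among the effects.
    return start, max(t + x for t, x in effects)

# Resource a received at t1=0 with x1=5, resource b received at t3=5 with x2=4.
print(preset_time_window([(0, 5), (5, 4)]))                     # (0, 9)
print(preset_time_window([(0, 5), (0, 4)], superimposed=True))  # (0, 5)
```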
  • displaying the dynamic effect corresponding to the at least one second target virtual resource on the virtual object within the preset time of the live broadcast may include: obtaining, respectively, the multimedia data corresponding to the at least one second target virtual resource and the live video stream; generating the dynamic effect corresponding to the at least one second target virtual resource based on the multimedia data; and generating the live content based on the live video stream.
  • alternatively, displaying the dynamic effect corresponding to the at least one second target virtual resource on the virtual object may include:
  • acquiring the superimposed live video stream within the preset time of the live broadcast; wherein the superimposed live video stream is obtained by the server by superimposing the live video stream and the multimedia data corresponding to the at least one second target virtual resource according to the display position, on the virtual object, of the dynamic effect corresponding to the at least one second target virtual resource;
  • the live content is generated based on the superimposed live video stream, and the virtual objects in the live content display dynamic effects.
  • after the server determines the at least one second target virtual resource, the multimedia data corresponding to each second target virtual resource and the live video stream can be sent separately; after receiving the multimedia data and the live video stream, the terminal completes the superimposed display of the two, for example, displaying the generated dynamic effect on the virtual object according to the display position information, included in the multimedia data, of the dynamic effect on the virtual object. Alternatively, the server can directly superimpose the multimedia data and the live video stream in advance, and the superimposed live video stream is then pushed to the terminal for display. The technical solutions of the embodiments of the present disclosure therefore offer flexibility in implementation.
  • the server can carry the multimedia data in a display instruction and send it to the terminal; the display instruction is used to instruct the terminal to display, on the virtual object, the dynamic effect corresponding to the at least one second target virtual resource.
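  • As a rough illustration of the two delivery modes, the sketch below models a display instruction whose field names are hypothetical; the actual message format is not defined by this disclosure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EffectMultimedia:
    resource_id: str           # identifies the second target virtual resource
    media_url: str             # audio/video data for the dynamic effect
    display_position: str      # anchor on the virtual object, e.g. "head"
    display_duration_s: float  # how long the dynamic effect stays visible

@dataclass
class DisplayInstruction:
    """Server-to-terminal message instructing the client to render the effects itself
    (the alternative is a pre-superimposed live video stream pushed by the server)."""
    preset_start: float
    preset_end: float
    effects: List[EffectMultimedia] = field(default_factory=list)

def apply_display_instruction(instruction: DisplayInstruction) -> None:
    # Terminal-side handling: render each effect at its anchor position.
    for effect in instruction.effects:
        print(f"render {effect.resource_id} at {effect.display_position} "
              f"for {effect.display_duration_s}s")
```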
  • based on the resource data held, the user may generate a trigger operation for at least one virtual resource in the interactive area of the live broadcast interface displayed by the terminal, and the terminal determines at least one first target virtual resource in response to the trigger operation; the server then determines at least one second target virtual resource from the at least one first target virtual resource; after that, within the preset time of the live broadcast, the terminal can display the dynamic effect corresponding to the at least one second target virtual resource on the virtual object in the live broadcast room, which realizes the transformation of the resource data held by the user into a specific dynamic effect displayed in the live broadcast interface.
  • the user can thus change the display state of the virtual object through the resource data held, which enriches the interactive ways of live broadcast in the virtual live broadcast scene and makes the live broadcast more interesting.
  • the changes in the display state of the virtual objects may include, but are not limited to, changes in physical characteristics, changes in dressing styles, changes in clothing, changes in actions, changes in expressions, changes in the environment, and the like.
  • when the multimedia data corresponding to the virtual resources is special effect data, the types of special effects that can be displayed in the interactive area of the live broadcast interface can include at least one of the following: clothing transformation special effects, dress-up transformation special effects, body transformation special effects, and scene prop transformation special effects.
  • the special effects of clothing transformation include dress-up; for example, after displaying the dynamic effect corresponding to the special effects of dress-up on the virtual object, the user can see that the clothing of the virtual object has changed from clothing A to clothing B;
  • dress-up transformation special effects include at least one of animal ears and garlands; for example, after the dynamic effect corresponding to the animal-ear special effect (such as cat ears) is displayed on the virtual object, the user can see animal ears growing on the head of the virtual object; after the dynamic effect corresponding to the garland special effect is displayed on the virtual object, the user can see a garland on the head of the virtual object;
  • body transformation special effects include at least one of leg lengthening and hand slimming; for example, after the dynamic effect corresponding to the leg-lengthening or hand-slimming special effect is displayed on the virtual object, the user can see that the legs of the virtual object become longer or the hands become thinner;
  • the special effects of scene prop transformation include at least one of freezing, feeding snacks, drinking beverages, blowing air, presenting desserts, and trailing;
  • the freezing special effect refers to displaying effects related to a frozen state, for example, displaying falling snow on the upper body of the virtual object (such as the head);
  • the snack-feeding or beverage-drinking special effect refers to displaying effects related to eating snacks or drinking beverages; for example, after the corresponding dynamic effect is displayed on the virtual object, the user can see the virtual object in a state of eating snacks or drinking a beverage. The dessert-presenting special effect refers to displaying effects related to the host's performance after receiving a dessert; for example, after the corresponding dynamic effect is displayed on the virtual object, the user can see the virtual object reaching out to take the dessert, and its face can also show a satisfied, happy expression;
  • the blowing special effect refers to displaying effects related to wind; for example, after the corresponding dynamic effect is displayed on the virtual object, the user can see the hair or clothes of the virtual object swaying in the wind.
  • the trailing special effect refers to effects displayed around the host's feet or legs when walking; for example, after the corresponding dynamic effect is displayed on the virtual object, the user can see the virtual object trailing petals at its feet as it walks.
  • FIG. 2 shows a schematic diagram of an interactive area displaying a plurality of virtual resources provided by an embodiment of the present disclosure by taking the virtual resources displayed in the interactive area as a special effect, which should not be construed as a specific limitation of the embodiment of the present disclosure.
  • the terminal can determine the special effect logo selected by the user according to the user's touch operation on any special effect logo, and at the same time determine the resource data provided by the user, such as user points.
  • FIG. 3 is a schematic diagram of a live interface for displaying a dynamic effect corresponding to a second target virtual resource on a virtual object according to an embodiment of the present disclosure.
  • taking the dynamic effect corresponding to the freezing special effect as an example, the state of the virtual object changes to a frozen state, snow falls on the upper body of the virtual object, and further, the expression of the virtual object can change to a cold expression.
  • FIG. 4 is another schematic diagram of a live interface for displaying a dynamic effect corresponding to a second target virtual resource on a virtual object according to an embodiment of the present disclosure. Specifically, taking the dynamic effect corresponding to the cat-ear special effect as an example, it shows a change in the dress-up state of the virtual object.
  • the live interaction method provided by the embodiment of the present disclosure further includes: receiving dynamic effect display prompt information; displaying the dynamic effect display prompt information on the live broadcast interface within a preset time of the live broadcast; wherein , the dynamic effect display prompt information is generated by the server based on the attribute characteristics corresponding to each second target virtual resource.
  • the attribute feature may include, but is not limited to, the resource ID of the second target virtual resource, the resource identifier, the display duration of the corresponding dynamic effect, the type of the virtual resource, the popularity of the virtual resource, and the display frequency of the dynamic effect within a preset historical time. and other information.
  • the server can not only determine the icon of each second target virtual resource on the live broadcast interface or in the interactive area, but can also determine, according to the relationship between the attribute features and the display duration of the dynamic effect, the display duration of the dynamic effect corresponding to each second target virtual resource. Based on the icon of the second target virtual resource and/or the display duration of the corresponding dynamic effect, the dynamic effect display prompt information is generated and displayed by the terminal on the live broadcast interface, so that the user can intuitively know which virtual resource the currently displayed dynamic effect corresponds to, and can also clearly understand information such as the display duration or the remaining display duration of the dynamic effect.
  • the dynamic effect display prompt information can be realized in the form of an image logo or text. Displaying the dynamic effect display prompt information improves the friendliness of the live broadcast interaction.
  • the dynamic effect display prompt information includes a dynamic effect display prompt identifier, which is used to represent the icon corresponding to each second target virtual resource and/or the remaining display duration of the dynamic effect. Displaying the dynamic effect display prompt information on the live broadcast interface within the preset time of the live broadcast includes: within the preset time of the live broadcast, in response to the remaining display duration of the dynamic effect corresponding to any second target virtual resource reaching zero, hiding the dynamic effect display prompt identifier corresponding to that second target virtual resource on the live broadcast interface.
  • for example, during the live broadcast, a preset area on the dynamic effect display prompt identifier is continuously filled to indicate that the dynamic effect is approaching the end of its display duration, and when the remaining display duration reaches zero, the live broadcast interface hides the dynamic effect display prompt identifier.
  • FIG. 3 shows the dynamic effect display prompt identifier corresponding to the freezing special effect, generated based on the freezing special effect icon and the display duration of the freezing dynamic effect; FIG. 4 shows the dynamic effect display prompt identifier corresponding to the cat-ear special effect, generated based on the cat-ear special effect icon and the display duration of the cat-ear dynamic effect.
  • for example, the circular area on the dynamic effect display prompt identifier can be continuously filled clockwise; when the display duration of the dynamic effect is reached, the circular area appears completely filled, and the live broadcast interface hides the dynamic effect display prompt identifier.
  • the specific display position of the dynamic effect display prompt identification on the live broadcast interface is not specifically limited in the embodiment of the present disclosure.
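  • A minimal countdown model for such a prompt identifier might look like the sketch below (the class and method names are illustrative):

```python
import time

class EffectPrompt:
    """Countdown state behind a dynamic effect display prompt identifier."""

    def __init__(self, icon: str, display_duration_s: float):
        self.icon = icon
        self.display_duration_s = display_duration_s
        self.start = time.monotonic()

    def remaining_s(self) -> float:
        elapsed = time.monotonic() - self.start
        return max(0.0, self.display_duration_s - elapsed)

    def fill_fraction(self) -> float:
        # Fraction of the circular area filled clockwise; 1.0 means the display
        # duration has been reached and the prompt identifier should be hidden.
        return 1.0 - self.remaining_s() / self.display_duration_s

    def should_hide(self) -> bool:
        return self.remaining_s() == 0.0
```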
  • the live interaction method provided by the embodiment of the present disclosure further includes: receiving a sorting result of each virtual resource in the interactive area; wherein the sorting result is determined according to resource data corresponding to each virtual resource; and displaying the sorting result in the interactive area.
  • the server can count the resource data corresponding to each virtual resource in the interactive area according to the resource data submitted by the multiple terminals for at least one virtual resource in the interactive area of the live broadcast interface; then determine the sorting result of each virtual resource in the interactive area according to the statistical result of the resource data; and finally send the sorting result to the multiple terminals, so that the multiple terminals display the sorting result in the interactive area.
  • the sorting results are displayed in the interactive area, including:
  • the display position of the sorting result of each virtual resource is determined, and the sorting result is displayed.
  • the specific display of the sorting result may be determined according to the page layout, which is not specifically limited in the embodiment of the present disclosure.
  • the sorting result may be displayed in the upper left corner of the display area corresponding to each virtual resource, etc.
  • FIG. 5 is a schematic diagram of another interactive area displaying multiple virtual resources according to an embodiment of the present disclosure, again taking special effects as the virtual resources. As shown in FIG. 5, the ranking corresponding to each virtual resource is displayed in the upper left corner of the display area corresponding to that virtual resource. With the continuous interaction between the server and the terminals, the ranking corresponding to each virtual resource can be updated in real time.
  • the server determines the total number of votes corresponding to each virtual resource according to the votes cast by the multiple terminals for at least one virtual resource in the interactive area of the live broadcast interface, and then sends it to the multiple terminals for display. As shown in FIG. 5, the number of votes corresponding to each virtual resource is also displayed in the interactive area.
  • the live interaction method provided by the embodiment of the present disclosure may further include: receiving display timing prompt information and displaying it in the interactive area of the live broadcast interface; wherein the display timing prompt information is generated by the server according to the display sequence of the dynamic effects corresponding to the second target virtual resources, and is used to prompt the user with sequence information about the dynamic effects to be displayed.
  • for example, the display timing prompt information can be "The No. 1 interaction will be shown in xx seconds" (that is, the first-ranked dynamic effect will be displayed).
  • the live interaction method provided by the embodiment of the present disclosure further includes:
  • resource data receiving prompt information is displayed in the resource receiving area of the live broadcast interface. Exemplarily, after the terminal obtains the user's live broadcast interaction behavior, it can send the live broadcast interaction behavior to the server; the server determines the resource data corresponding to each live broadcast interaction behavior, generates the resource data receiving prompt information, and then sends it to the terminal for display. In addition, the terminal can also directly determine, based on the user's live broadcast interaction behavior, the resource data that the user can receive and the corresponding resource data receiving prompt information;
  • a resource synchronization request is generated
  • the interactive behavior of live broadcast includes at least one of the following: performing tasks in the live broadcast room, watching the live broadcast, sending gifts to the host, liking the live broadcast, and logging in to the live broadcast room. That is, in the embodiment of the present disclosure, the user can continuously acquire and accumulate resource data through various live broadcast interactive behaviors in the live broadcast room, and then use the resource data to exchange dynamic effects that can be displayed on the live broadcast interface.
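  • A minimal sketch of accumulating resource data from these behaviors is shown below; the point values per behavior are illustrative assumptions, since the disclosure does not specify amounts:

```python
# Illustrative point values only.
POINTS_PER_BEHAVIOR = {
    "complete_room_task": 50,
    "watch_live": 10,
    "send_gift": 30,
    "like_live": 5,
    "login_room": 20,
}

def accumulate_resource_data(behaviors):
    """Sum the resource data (user points) earned from live interaction behaviors."""
    return sum(POINTS_PER_BEHAVIOR.get(b, 0) for b in behaviors)

# A viewer who logs in, watches, and completes one room task earns 80 points here.
print(accumulate_resource_data(["login_room", "watch_live", "complete_room_task"]))
```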
  • FIG. 6 is a schematic diagram of a resource claim area provided by an embodiment of the present disclosure, which is used to exemplarily illustrate the embodiment of the present disclosure, and specifically takes resource data as user points as an example.
  • the user can click "Claim reward" to receive the corresponding points. For live broadcast room task 1, because the user has not completed the task, the corresponding points cannot be collected in the current state.
  • the user can click "Do a task" to switch the terminal from the resource receiving area to the task execution area or interface, and after completing the corresponding task, switch back to the resource receiving area to collect the corresponding points. For live broadcast room task 2, since the user has completed the task, the points can be collected directly.
  • a leaderboard area can also be displayed on the live broadcast interface in the terminal.
  • the leaderboard area can display information such as a live broadcast viewing duration list, a gift-giving list, a like list, and a ranking of each user's favorability (or intimacy) with the host. The relevant ranking information is obtained by the server by counting the relevant information of each user and is sent to each terminal for display. Displaying a variety of ranking information helps to make the live broadcast interaction more engaging.
  • the live interaction method provided by the embodiment of the present disclosure further includes:
  • the interactive effect of the virtual object in response to the third target virtual resource is displayed, and/or the interactive effect of the virtual pet of the virtual object in response to the third target virtual resource is displayed.
  • the third target virtual resource includes any virtual resource that can trigger a virtual object or a virtual pet to display an interactive effect, for example, a virtual gift or a specific special effect in the live broadcast room.
  • taking a virtual gift as an example, a user can generate a trigger operation for giving a gift to the virtual object in the terminal, and the terminal can send the trigger operation to the server, or generate an interactive data acquisition request based on the trigger operation and then send the request to the server.
  • the server sends the interactive data of the virtual object in response to the virtual gift (for example, thank-you animation data) and/or the interactive data of the virtual object's virtual pet in response to the virtual gift (for example, animation data of swinging the gift); after receiving the interactive data, the terminal generates the interactive effect and displays it on the live broadcast interface. In addition, after determining the interactive data, the server can also superimpose the interactive data and the live video stream according to the display position, relative to the virtual object or relative to the virtual pet, of the interactive effect corresponding to the interactive data, and send the result to the terminal; the terminal then generates the live content based on the superimposed live video stream, in which case the interactive effect of the virtual object in response to the virtual gift, and/or the interactive effect of the virtual pet displayed with the virtual object in response to the virtual gift, is directly displayed in the live content.
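  • The request/response exchange for such a gift trigger could be modeled roughly as follows; the message and field names are hypothetical and only meant to illustrate the flow:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionDataRequest:
    """Terminal-to-server request generated from the gift trigger operation."""
    room_id: str
    user_id: str
    gift_id: str  # the third target virtual resource that was triggered

@dataclass
class InteractionData:
    """Server-to-terminal response carrying the animation data to render."""
    object_animation: Optional[str] = None  # e.g. thank-you animation of the virtual object
    pet_animation: Optional[str] = None     # e.g. animation of the virtual pet with the gift

def handle_gift_request(req: InteractionDataRequest) -> InteractionData:
    # Minimal server-side stub: look up the animations configured for this gift.
    return InteractionData(object_animation=f"thanks_for_{req.gift_id}",
                           pet_animation=f"pet_plays_with_{req.gift_id}")
```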
  • FIG. 7 is a flowchart of another live interaction method provided by an embodiment of the present disclosure, which is further optimized and expanded based on the foregoing technical solutions, and may be combined with the foregoing optional implementation manners.
  • the live interaction method provided by the embodiment of the present disclosure may include:
  • in response to a triggering operation for at least one virtual resource in the interactive area, determine at least one first target virtual resource; wherein each virtual resource is configured with resource data required for triggering.
  • the first superimposed dynamic effect refers to the superposition of the dynamic effects corresponding to each of the at least two second target virtual resources. Whether the dynamic effect corresponding to each second target virtual resource supports superimposed display is determined by the server. After the server determines that the dynamic effects corresponding to the second target virtual resources can be displayed in a superimposed manner, it can send the multimedia data corresponding to the second target virtual resources to the terminal at the same time, for example, packaging the multimedia data corresponding to each second target virtual resource as one data packet and sending it to the terminal together with the live video stream; or it can superimpose the multimedia data corresponding to each second target virtual resource and the live video stream according to the display position, on the virtual object, of the dynamic effect corresponding to each second target virtual resource, obtain the superimposed live video stream, and then send it to the terminal.
  • the preset time of the live broadcast, that is, the superimposed display duration of the first superimposed dynamic effect, can be determined based on the display durations of the dynamic effects corresponding to the second target virtual resources. For example, the average of the display durations can be used as the preset time for displaying the first superimposed dynamic effect during the live broadcast, or the maximum of the display durations can be used as that preset time.
  • each dynamic effect can also be displayed according to its own display time, that is, after reaching its own display time, the corresponding dynamic effect can be hidden in the live broadcast interface.
  • the dynamic effect display prompt information of each second target virtual resource displayed on the live interface may be generated based on the icon and/or the superimposed display duration of each second target virtual resource.
  • if the superimposed display duration is the average of the display durations, that is, the display duration of each second target virtual resource is the same, only one piece of dynamic effect display prompt information may be displayed on the live broadcast interface, the prompt information including an icon and the remaining display duration of the dynamic effect, where the icon includes or represents each of the second target virtual resources. If the superimposed display duration is the maximum of the display durations, the dynamic effect display prompt information corresponding to each second target virtual resource can be displayed separately on the live broadcast interface, each piece of prompt information including the icon of that second target virtual resource and the remaining display duration of its dynamic effect.
  • the server can also first determine a superimposed resource icon corresponding to the second target virtual resources, then generate the dynamic effect display prompt information based on the superimposed resource icon and/or the superimposed display duration, and finally one piece of dynamic effect display prompt information is displayed on the live broadcast interface; for example, a dynamic effect display prompt identifier is displayed on the live broadcast interface, which is used to represent the superimposed resource icon corresponding to the second target virtual resources and the remaining display duration of the superimposed dynamic effect.
  • the superimposed resource icon may be a preset icon, a new icon generated based on the icons corresponding to the second target virtual resources (the icon generation method can be customized), or a combination of the icons corresponding to the second target virtual resources arranged in a preset manner, for example, the icons corresponding to the second target virtual resources arranged vertically or horizontally to form the superimposed resource icon. Of course, other methods of generating the superimposed resource icon can also be used.
  • the preset time of the live broadcast can also be obtained based on the cumulative value of the display durations of the dynamic effects corresponding to the second target virtual resources, that is, the preset time of the live broadcast is the cumulative value of the individual display durations.
  • the second superimposed dynamic effect refers to the superposition of the dynamic effects corresponding to at least two of the at least three second target virtual resources, and the non-superimposed dynamic effects refer to the dynamic effects corresponding to the remaining target virtual resources, among the at least three second target virtual resources, other than those at least two.
  • the preset time of the live broadcast can be determined based on the superimposed display duration of the second superimposed dynamic effect and the respective display durations of the non-superimposed dynamic effects; for example, the cumulative value of the superimposed display duration of the second superimposed dynamic effect and the display durations of the non-superimposed dynamic effects is used as the preset time for displaying the second superimposed dynamic effect and the non-superimposed dynamic effects during the live broadcast.
  • the superimposed display duration of the second superimposed dynamic effect may be determined with reference to the method for determining the superimposed display duration of the first superimposed dynamic effect.
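  • A minimal sketch of this mixed case is shown below; it assumes the superimposed display duration is the maximum of the overlaid effects' durations, which is only one of the options described above:

```python
def mixed_preset_time(superimposed_durations, non_superimposed_durations):
    """Preset time when some dynamic effects are overlaid and the rest are shown separately.

    superimposed_durations: display durations of the effects forming the second
        superimposed dynamic effect (their maximum is taken as the superimposed
        display duration here).
    non_superimposed_durations: display durations of the remaining effects.
    """
    superimposed = max(superimposed_durations) if superimposed_durations else 0
    return superimposed + sum(non_superimposed_durations)

# Effects a (5 s) and b (4 s) are overlaid and effect c (3 s) is shown separately,
# so the preset time is max(5, 4) + 3 = 8 seconds.
print(mixed_preset_time([5, 4], [3]))
```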
  • displaying the dynamic effects corresponding to the second target virtual resources on the virtual object includes:
  • displaying, on the virtual object, the second superimposed dynamic effect and the non-superimposed dynamic effects corresponding to the at least three second target virtual resources, that is, displaying the second superimposed dynamic effect and the non-superimposed dynamic effects on the virtual object respectively;
  • the acquisition order of the multimedia data is determined according to the resource data corresponding to each second target virtual resource; correspondingly, the server determines the sending order of each piece of multimedia data according to the resource data corresponding to each second target virtual resource. The second target virtual resources are determined based on the resource data corresponding to the first target virtual resources triggered by the multiple terminals.
  • for second target virtual resources whose dynamic effects can be displayed superimposed, the sending order (or acquisition order) of the corresponding multimedia data is the same; the server can take the sum of the resource data corresponding to these second target virtual resources as the resource data of each of them, and then compare it with the resource data corresponding to the second target virtual resources whose dynamic effects cannot be displayed superimposed, thereby determining the sending order of the multimedia data corresponding to each second target virtual resource.
  • the terminal may also obtain the sequence of acquiring multimedia data corresponding to each second target virtual resource.
  • for example, the server determines three special effects, special effect a, special effect b, and special effect c, and determines that the dynamic effects corresponding to special effect a and special effect b can be displayed in a superimposed manner. The server can then, according to the resource data corresponding to special effect a, special effect b, and special effect c respectively, determine the sending order of the multimedia data corresponding to special effect a, special effect b, and special effect c: first send the multimedia data corresponding to special effect c, and then send the multimedia data corresponding to special effect a and special effect b together. Correspondingly, the terminal first obtains the multimedia data corresponding to special effect c, then obtains the multimedia data corresponding to special effect a and special effect b, and displays the dynamic effects on the live broadcast interface in sequence according to the acquisition order.
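  • The ordering rule in this example can be sketched as follows; sending the unit with more accumulated resource data first is only an illustrative assumption, since the disclosure states only that the order follows the resource data:

```python
def multimedia_sending_order(resource_points, overlay_groups):
    """Order in which the server sends multimedia data for second target virtual resources.

    resource_points: dict mapping resource id to its resource data (e.g. accumulated points).
    overlay_groups: list of lists; resources in the same inner list can have their
        dynamic effects displayed superimposed, so their multimedia data is sent together.
    Resources not listed in any group are sent individually. A group's weight is the
    sum of its members' resource data; heavier units are sent first (illustrative rule).
    """
    grouped = {r for group in overlay_groups for r in group}
    units = [list(group) for group in overlay_groups]
    units += [[r] for r in resource_points if r not in grouped]
    units.sort(key=lambda unit: sum(resource_points[r] for r in unit), reverse=True)
    return units

# Effects a and b can be superimposed; c is sent on its own. With these point values,
# c's multimedia data is sent first, then the data for a and b together.
print(multimedia_sending_order({"a": 30, "b": 20, "c": 60}, overlay_groups=[["a", "b"]]))
# [['c'], ['a', 'b']]
```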
  • taking special effect a as the animal-ear special effect and special effect b as the blowing special effect, after the dynamic effects corresponding to the animal-ear special effect and the blowing special effect are displayed on the virtual object at the same time, the user will see animal ears growing on the head of the virtual object and, at the same time, the animal ears swaying in the wind.
  • in this way, the superimposition of multiple dynamic effects can be displayed on the virtual object, or each dynamic effect can be displayed separately, and the user can watch a live broadcast interface with different effects, which enriches the interactive ways of live broadcast in the virtual live broadcast scene and makes the live broadcast more interesting.
  • FIG. 8 is a flowchart of another live interaction method provided by an embodiment of the present disclosure.
  • the method can be applied to a server and executed by a live interaction device configured on the server, and the device can be implemented by software and/or hardware.
  • the live broadcast interaction method applied to the server provided by the embodiment of the present disclosure is executed in conjunction with the live broadcast interaction method applied to the client provided by the embodiment of the present disclosure.
  • the live interaction method provided by the embodiment of the present disclosure may include:
  • the server can determine at least one second target virtual resource from the at least one first target virtual resource according to a preset determination rule. The determination rule can be set flexibly and is not specifically limited in this embodiment of the present disclosure; for example, a plurality of second target virtual resources can be determined according to the resource data corresponding to each first target virtual resource.
  • S302. Determine preset time information of the live broadcast according to the at least one second target virtual resource; wherein the preset time information is used to determine the time to display the dynamic effect corresponding to the at least one second target virtual resource on the virtual object.
  • the preset time information of the live broadcast may be determined according to the attribute feature corresponding to each second target virtual resource, and the attribute feature may include the display duration of the dynamic effect corresponding to the second target virtual resource.
  • S303. Send the preset time information, the at least one second target virtual resource, and the live video stream to multiple terminals.
  • determining at least one second target virtual resource according to at least one first target virtual resource triggered by multiple terminals, including:
  • At least one second target virtual resource is determined from the at least one first target virtual resource.
  • determining at least one second target virtual resource from at least one first target virtual resource according to a statistical result of resource data, including:
  • the first target virtual resources are sorted, and a preset number of first target virtual resources ranked first are determined as at least one second target virtual resource; or,
  • a first target virtual resource whose statistical result of resource data exceeds the data threshold is determined as at least one second target virtual resource.
  • the method further includes:
  • the live interaction method provided by the embodiment of the present disclosure further includes:
  • the sending order of the multimedia data corresponding to each second target virtual resource is determined according to the resource data corresponding to the target virtual resources whose dynamic effects can be displayed superimposed and the resource data corresponding to the target virtual resources whose dynamic effects cannot be displayed superimposed.
  • the multimedia data and live video streams corresponding to at least one second target virtual resource are respectively sent to multiple terminals, including:
  • the multimedia data corresponding to each second target virtual resource is sequentially sent to the multiple terminals, and the live video stream is sent to the multiple terminals.
  • performing overlay processing on the multimedia data corresponding to the at least one second target virtual resource and the live video stream according to the display position of the dynamic effect corresponding to the at least one second target virtual resource on the virtual object, to obtain a superimposed live video stream, including:
  • the multimedia data corresponding to each second target virtual resource and the live video stream are sequentially superimposed to obtain the superimposed live video stream.
  • the preset time information of the live broadcast is determined according to at least one second target virtual resource, including:
  • the second preset time information of the live broadcast is determined according to the respective display durations of the target virtual resources whose dynamic effects cannot be displayed in a superimposed manner in the second target virtual resources.
  • the live interaction method provided by the embodiment of the present disclosure further includes:
  • the dynamic effect display prompt information is sent to multiple terminals, so that the multiple terminals display the dynamic effect display prompt information on the live broadcast interface.
  • the dynamic effect display prompt information includes a dynamic effect display prompt identifier, and based on the attribute characteristics corresponding to each second target virtual resource, generating dynamic effect display prompt information corresponding to each second target virtual resource, including:
  • a dynamic effect display prompt identifier corresponding to each second target virtual resource is generated.
  • the live interaction method provided by the embodiment of the present disclosure further includes:
  • according to the resource data submitted by the multiple terminals for at least one virtual resource in the interactive area of the live broadcast interface, counting the resource data corresponding to each virtual resource in the interactive area;
  • the live interaction method provided by the embodiment of the present disclosure further includes:
  • the resource data received by the user in multiple terminals is synchronized to the user account.
  • the live interaction method provided by the embodiment of the present disclosure further includes:
  • the interactive data and the live video stream are superimposed according to the display position, relative to the virtual object or relative to the virtual pet, of the interactive effect corresponding to the interactive data, and are then sent to the multiple terminals.
  • in this way, the transformation of the user's resource data into the dynamic effect displayed in the live video frame is realized, and the user can change the display state of the virtual object through the virtual resource data held, which enriches the interactive ways of live broadcast in the virtual live broadcast scene and makes the live broadcast more interesting.
  • FIG. 9 is a schematic structural diagram of a live interactive device according to an embodiment of the present disclosure.
  • the device can be configured on a terminal entering a virtual object live broadcast room, and can be implemented by software and/or hardware.
  • the live broadcast interaction apparatus configured on the terminal provided by the embodiment of the present disclosure and the live broadcast interaction method applied to the terminal provided by the embodiment of the present disclosure belong to the same inventive concept.
  • the live broadcast interaction apparatus 400 may include a first target virtual resource determination module 401 and a dynamic effect display module 402, wherein:
  • the first target virtual resource determination module 401 is configured to determine, in the live broadcast interface, at least one first target virtual resource in response to a triggering operation for at least one virtual resource in the interactive area; wherein each virtual resource is configured with resource data required for triggering;
  • the dynamic effect display module 402 is used for displaying the dynamic effect corresponding to at least one second target virtual resource on the virtual object within a preset time of the live broadcast; wherein the preset time is related to the second target virtual resource, and the second target virtual resource is determined based on the first target virtual resources triggered by multiple terminals.
  • the dynamic effect display module 402 includes:
  • the first obtaining unit is used for obtaining the multimedia data corresponding to the at least one second target virtual resource and the live video stream respectively within a preset time of the live broadcast;
  • a first display unit configured to generate a dynamic effect corresponding to at least one second target virtual resource based on the multimedia data, and generate live content based on the live video stream;
  • the dynamic effect display module 402 includes:
  • the second obtaining unit is used to obtain a superimposed live video stream within the preset time of the live broadcast; wherein the superimposed live video stream is obtained by the server by superimposing the multimedia data corresponding to the at least one second target virtual resource and the live video stream according to the display position, on the virtual object, of the dynamic effect corresponding to the at least one second target virtual resource;
  • the second display unit is used for generating live content based on the superimposed live video stream, and the virtual objects in the live content display dynamic effects.
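  • The following toy sketch illustrates only the superimposition idea: an effect patch is pasted onto a frame at the display position associated with the virtual object. A real implementation would blend decoded video frames; the 2D-list frames here are a stand-in.

```python
def overlay_effect(frame, effect, anchor):
    """Paste the effect patch onto the frame at the given (row, col) anchor;
    frames are plain 2D lists of pixel values in this toy example."""
    r0, c0 = anchor
    out = [row[:] for row in frame]
    for dr, row in enumerate(effect):
        for dc, px in enumerate(row):
            if 0 <= r0 + dr < len(out) and 0 <= c0 + dc < len(out[0]):
                out[r0 + dr][c0 + dc] = px
    return out

frame = [[0] * 6 for _ in range(4)]   # a blank "video frame"
snowflake = [[1, 1], [1, 1]]          # a tiny "dynamic effect" patch
for row in overlay_effect(frame, snowflake, anchor=(1, 2)):
    print(row)
```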
  • the dynamic effect display module 402 includes:
  • the third display unit is used to display the first superimposed dynamic effects corresponding to the at least two second target virtual resources on the virtual object within the preset time of the live broadcast; or,
  • the fourth display unit is used for respectively displaying the dynamic effect corresponding to each second target virtual resource on the virtual object within the preset time of the live broadcast; or,
  • a fifth display unit used for displaying the second superimposed dynamic effect and the non-superimposed dynamic effect corresponding to the at least three second target virtual resources on the virtual object respectively within a preset time of the live broadcast;
  • the first superimposed dynamic effect refers to the superposition of the dynamic effects corresponding to the target virtual resources in the at least two second target virtual resources; the second superimposed dynamic effect refers to the superposition of the dynamic effects corresponding to at least two target virtual resources among the at least three second target virtual resources; and the non-superimposed dynamic effect refers to the dynamic effects corresponding to the remaining target virtual resources, among the at least three second target virtual resources, other than those at least two target virtual resources.
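  • A minimal sketch of how effects might be split into a group that can be shown together and effects that must be shown on their own, given a pairwise conflict set; the conflict pairs below are assumptions for the example.

```python
def split_by_overlay_conflict(effects, conflicts):
    """Partition effects into one stackable group and standalone effects,
    using a set of pairs that cannot be shown at the same time."""
    stackable, standalone = [], []
    for effect in effects:
        clashes = any((effect, other) in conflicts or (other, effect) in conflicts
                      for other in stackable)
        (standalone if clashes else stackable).append(effect)
    return stackable, standalone

# Assume the "frost" effect clashes with "wind"; the ear effect stacks with both.
effects = ["cat_ears", "wind", "frost"]
conflicts = {("frost", "wind")}
print(split_by_overlay_conflict(effects, conflicts))
# (['cat_ears', 'wind'], ['frost'])
```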
  • the preset time of the live broadcast is determined based on attributes corresponding to the second target virtual resource.
  • the fourth display unit is specifically used for:
  • displaying, within the preset time of the live broadcast, the dynamic effects corresponding to the second target virtual resources on the virtual object in sequence, according to the acquisition order of the multimedia data corresponding to each second target virtual resource;
  • the fifth display unit is specifically used for:
  • displaying, within the preset time of the live broadcast, the second superimposed dynamic effect and the non-superimposed dynamic effects on the virtual object respectively, according to the acquisition order of the multimedia data participating in the second superimposed dynamic effect and the acquisition order of the multimedia data participating in the non-superimposed dynamic effects, among the multimedia data corresponding to the at least three second target virtual resources;
  • wherein the acquisition order of the multimedia data is determined according to the resource data corresponding to each second target virtual resource, and the second target virtual resources are determined based on the resource data corresponding to the first target virtual resources triggered by multiple terminals.
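  • As one possible reading of this ordering rule (a sketch under assumptions, not the authoritative scheme), the snippet below ranks a stackable group by the sum of its members' resource data, ranks the remaining resources by their own totals, and returns the resulting sending/acquisition order.

```python
def sending_order(resource_totals, stackable):
    """Order the second target virtual resources for sending their multimedia
    data: a stackable group shares one slot ranked by its combined resource
    data; everything else is ranked individually; larger totals go first."""
    entries = []
    if stackable:
        entries.append((sum(resource_totals[r] for r in stackable), tuple(stackable)))
    entries += [(total, (r,)) for r, total in resource_totals.items() if r not in stackable]
    entries.sort(key=lambda e: e[0], reverse=True)
    return [group for _, group in entries]

totals = {"cat_ears": 40, "wind": 30, "frost": 90}
print(sending_order(totals, stackable=["cat_ears", "wind"]))
# [('frost',), ('cat_ears', 'wind')]
```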
  • the live interactive device 400 provided by the embodiment of the present disclosure further includes:
  • the prompt information receiving module is configured to receive the dynamic effect display prompt information; wherein, the dynamic effect display prompt information is generated by the server based on the attribute characteristics corresponding to each second target virtual resource;
  • the prompt information display module is used to display the dynamic effect display prompt information on the live broadcast interface within the preset time of the live broadcast.
  • the dynamic effect display prompt information includes a dynamic effect display prompt identifier, and the dynamic effect display prompt identifier is used to represent the icon corresponding to each second target virtual resource and the remaining display duration of the dynamic effect;
  • the prompt information display module is specifically used for:
  • hiding, within the preset time of the live broadcast and in response to the remaining display duration of the dynamic effect corresponding to any second target virtual resource reaching zero, the dynamic effect display prompt identifier corresponding to that second target virtual resource on the live broadcast interface.
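  • Purely as an illustration of this hide-on-expiry behaviour, the sketch below counts down the remaining display duration of one effect and hides its prompt identifier when the duration reaches zero; the callback and tick length are placeholders.

```python
import time

def run_prompt_countdown(resource_id, display_seconds, tick=1.0, hide=print):
    """Tick down the remaining display duration of one dynamic effect and hide
    its prompt identifier once the remaining duration reaches zero."""
    remaining = display_seconds
    while remaining > 0:
        # a real client would redraw the prompt identifier with `remaining` here
        time.sleep(tick)
        remaining -= tick
    hide(f"hide prompt identifier for {resource_id}")

run_prompt_countdown("frost", display_seconds=2, tick=1.0)
```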
  • the live interactive device 400 provided by the embodiment of the present disclosure further includes:
  • a sorting result receiving module configured to receive the sorting result of each virtual resource in the interactive area; wherein, the sorting result is determined according to the resource data corresponding to each virtual resource;
  • the sorting result display module is used to display the sorting results in the interactive area.
  • the sorting result display module is specifically used for:
  • determining the display position of the sorting result of each virtual resource according to the display position of that virtual resource in the interactive area, and displaying the sorting results.
  • the live interactive device 400 provided by the embodiment of the present disclosure further includes:
  • the receiving prompt information display module is used to display the receiving prompt information of resource data in the resource receiving area of the live broadcast interface according to the user's live broadcast interaction behavior;
  • the resource synchronization request generation module is used to generate a resource synchronization request in response to the user's resource receiving operation;
  • the resource synchronization request sending module is used to send the resource synchronization request to the server, so that the server synchronizes the resource data received by the user to the user account.
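  • A small sketch of the request a terminal might assemble when the user claims resource data; the JSON field names are invented for the example and are not defined by the disclosure.

```python
import json
import uuid

def build_sync_request(user_id, claimed):
    """Build the resource synchronization request a terminal could send after
    the user's claim operation in the resource receiving area."""
    return json.dumps({
        "request_id": str(uuid.uuid4()),
        "user_id": user_id,
        "claimed": claimed,   # e.g. {"watch_reward": 20, "login_reward": 5}
    })

print(build_sync_request("user_42", {"watch_reward": 20, "login_reward": 5}))
```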
  • the live interactive device 400 provided by the embodiment of the present disclosure further includes:
  • the interactive effect display module is used to, in response to a trigger operation for a third target virtual resource, display the interactive effect of the virtual object in response to the third target virtual resource, and/or display the interactive effect of the virtual pet of the virtual object in response to the third target virtual resource.
  • the live broadcast interaction apparatus configured on the terminal provided by the embodiment of the present disclosure can execute the live broadcast interaction method applied to the terminal provided by the embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • for content not described in detail in this apparatus embodiment, reference may be made to the description in any method embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of another live interactive device according to an embodiment of the present disclosure.
  • the device may be configured on a server and implemented by software and/or hardware.
  • the live broadcast interaction apparatus configured on the server provided by the embodiment of the present disclosure and the live broadcast interaction method applied to the server provided by the embodiment of the present disclosure belong to the same inventive concept.
  • the live interactive device 500 may include a second target virtual resource determination module 501, a preset time information determination module 502, and a data transmission module 503, wherein:
  • a second target virtual resource determining module 501 configured to determine at least one second target virtual resource according to at least one first target virtual resource triggered by multiple terminals;
  • the preset time information determination module 502 is configured to determine the preset time information of the live broadcast according to the at least one second target virtual resource; wherein the preset time information is used to determine the time for displaying the dynamic effect corresponding to the at least one second target virtual resource on the virtual object;
  • the data sending module 503 is configured to send preset time information, at least one second target virtual resource and live video stream to multiple terminals.
  • the second target virtual resource determining module 501 includes:
  • a resource data statistics unit configured to count resource data corresponding to each first target virtual resource according to resource data corresponding to at least one first target virtual resource determined by multiple terminals;
  • the second target virtual resource determining unit is configured to determine at least one second target virtual resource from the at least one first target virtual resource according to the statistical result of the resource data.
  • the second target virtual resource determining unit is specifically configured to:
  • sorting the first target virtual resources according to the statistical result of the resource data, and determining a preset number of top-ranked first target virtual resources as the at least one second target virtual resource; or,
  • a first target virtual resource whose statistical result of resource data exceeds the data threshold is determined as at least one second target virtual resource.
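  • The two selection rules above can be illustrated with a short sketch; the totals and threshold values are made up for the example.

```python
def pick_second_targets(totals, top_n=None, threshold=None):
    """Choose second target virtual resources from the triggered first target
    virtual resources, either by rank or by a resource-data threshold."""
    ranked = sorted(totals, key=totals.get, reverse=True)
    if top_n is not None:
        return ranked[:top_n]
    return [r for r in ranked if totals[r] > threshold]

totals = {"frost": 90, "cat_ears": 40, "wind": 30, "dessert": 5}
print(pick_second_targets(totals, top_n=2))        # ['frost', 'cat_ears']
print(pick_second_targets(totals, threshold=25))   # ['frost', 'cat_ears', 'wind']
```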
  • the data sending module 503 includes:
  • a first sending unit configured to send the preset time information to multiple terminals;
  • a second sending unit configured to send multimedia data and live video streams corresponding to at least one second target virtual resource to multiple terminals respectively;
  • the superimposed live video stream determination unit is configured to superimpose the multimedia data corresponding to the at least one second target virtual resource and the live video stream according to the display position, on the virtual object, of the dynamic effect corresponding to the at least one second target virtual resource, to obtain a superimposed live video stream;
  • the third sending unit is configured to send the superimposed live video stream to multiple terminals.
  • the live interactive device 500 provided by the embodiment of the present disclosure further includes:
  • a superimposed display determination module configured to determine whether there are superimposed display conflicts among the dynamic effects corresponding to the second target virtual resources;
  • the virtual resource distinguishing module is used to determine, according to the determination result, the target virtual resources whose dynamic effects can be displayed in a superimposed manner and the target virtual resources whose dynamic effects cannot be displayed in a superimposed manner among the second target virtual resources, so that the multiple terminals display the superimposed dynamic effect corresponding to the target virtual resources whose dynamic effects can be superimposed, and respectively display the dynamic effects corresponding to the target virtual resources whose dynamic effects cannot be superimposed.
  • the live interactive device 500 provided by the embodiment of the present disclosure further includes:
  • a first sending order determining module configured to determine the sending order of multimedia data corresponding to each second target virtual resource according to the resource data corresponding to each second target virtual resource if there is a superimposed display conflict;
  • the second sending order determination module is configured to, if there are target virtual resources whose dynamic effects can be superimposed and displayed, determine the sending order of the multimedia data corresponding to each second target virtual resource according to the resource data corresponding to the target virtual resources whose dynamic effects can be superimposed and displayed and the resource data corresponding to the target virtual resources whose dynamic effects cannot be superimposed and displayed;
  • the second sending unit is specifically used for:
  • sequentially sending, in the sending order, the multimedia data corresponding to each second target virtual resource to the multiple terminals, and sending the live video stream to the multiple terminals.
  • the superimposed live video stream determination unit is specifically used for:
  • sequentially superimposing, in the sending order and according to the display position of each dynamic effect on the virtual object, the multimedia data corresponding to each second target virtual resource and the live video stream, to obtain the superimposed live video stream.
  • the preset time information determination module 502 includes:
  • a first preset time information determining unit configured to determine the first preset time information of the live broadcast according to the superimposed display duration corresponding to the target virtual resources whose dynamic effects can be superimposedly displayed in each second target virtual resource;
  • the second preset time information determining unit is configured to determine the second preset time information of the live broadcast according to the respective display durations of the target virtual resources whose dynamic effects cannot be displayed in a superimposed manner in the second target virtual resources.
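  • As a sketch of how the two kinds of preset time information might be derived (taking the longest member's duration for the stackable group is an assumption, not the only rule the disclosure allows):

```python
def preset_time_info(durations, stackable, stack_rule=max):
    """Derive the first preset time (shared by the stackable group) and the
    second preset times (one per effect shown on its own)."""
    first = stack_rule(durations[r] for r in stackable) if stackable else 0
    second = {r: d for r, d in durations.items() if r not in stackable}
    return {"superimposed_duration": first, "individual_durations": second}

durations = {"cat_ears": 10, "wind": 6, "frost": 8}
print(preset_time_info(durations, stackable=["cat_ears", "wind"]))
# {'superimposed_duration': 10, 'individual_durations': {'frost': 8}}
```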
  • the live interactive device 500 provided by the embodiment of the present disclosure further includes:
  • a prompt information generation module configured to generate prompt information for dynamic effect display corresponding to each second target virtual resource based on the attribute characteristics corresponding to each second target virtual resource;
  • the prompt information sending module is used to send the dynamic effect display prompt information to multiple terminals, so that the multiple terminals display the dynamic effect display prompt information on the live broadcast interface.
  • the dynamic effect display prompt information includes a dynamic effect display prompt identifier, and the prompt information generation module is specifically used for:
  • generating the dynamic effect display prompt identifier corresponding to each second target virtual resource based on the icon and/or display duration corresponding to that second target virtual resource.
  • the live interactive device 500 provided by the embodiment of the present disclosure further includes:
  • a resource data statistics module configured to count the resource data corresponding to each virtual resource in the interactive area according to the resource data provided by multiple terminals for at least one virtual resource in the interactive area of the live broadcast interface;
  • the sorting result determination module is used to determine the sorting result of each virtual resource in the interactive area according to the statistical result of the resource data
  • the sorting result sending module is used for sending the sorting results to multiple terminals, so that the multiple terminals display the sorting results in the interactive area.
  • the live interactive device 500 provided by the embodiment of the present disclosure further includes:
  • the receiving prompt information sending module is used to determine the resource data to be received by the user according to the live interaction behaviors of users on the multiple terminals, and to send the receiving prompt information of the resource data to the multiple terminals, so that the multiple terminals display the receiving prompt information of the resource data in the resource receiving area of the live broadcast interface; the resource data to be received by the user mentioned here can be understood as the resource data available to the user;
  • a resource synchronization request receiving module configured to receive resource synchronization requests sent by multiple terminals; wherein, the resource synchronization request is generated according to a user's resource receiving operation;
  • the resource data synchronization module is used for synchronizing the resource data received by the user in the multiple terminals to the user account according to the resource synchronization request.
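  • The server-side half of this claim-and-sync flow could look like the sketch below; the reward table mapping live interaction behaviours to resource data is a hypothetical example, not taken from the disclosure.

```python
BEHAVIOR_REWARDS = {"watch_live": 10, "send_gift": 30, "like": 2, "enter_room": 5}

def receivable_resource_data(behaviors):
    """Turn a user's live interaction behaviours into the resource data the
    user can claim (shown as receiving prompt information on each terminal)."""
    return sum(BEHAVIOR_REWARDS.get(b, 0) for b in behaviors)

def sync_claimed_data(accounts, user_id, claimed_amount):
    """Credit the claimed resource data to the user's account."""
    accounts[user_id] = accounts.get(user_id, 0) + claimed_amount
    return accounts

amount = receivable_resource_data(["enter_room", "watch_live", "like", "like"])  # 19
print(sync_claimed_data({}, "user_42", amount))   # {'user_42': 19}
```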
  • the live interactive device 500 provided by the embodiment of the present disclosure further includes:
  • the interaction data determination module is configured to determine, according to trigger operations for a third target virtual resource on the multiple terminals, interactive data of the virtual object in response to the third target virtual resource, and/or interactive data of the virtual pet of the virtual object in response to the third target virtual resource;
  • the first interactive data sending module 503 is used to send interactive data and live video streams to multiple terminals respectively; or,
  • the second interactive data sending module 503 is configured to superimpose the interactive data and the live video stream according to the display position, relative to the virtual object, of the interactive effect corresponding to the interactive data, or according to the display position of that interactive effect relative to the virtual pet, and to send the superimposed result to the multiple terminals.
  • the live broadcast interaction apparatus configured on the server provided by the embodiment of the present disclosure can execute the live broadcast interaction method applied to the server provided by the embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • for content not described in detail in this apparatus embodiment, reference may be made to the description in any method embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. It is used to exemplarily describe the electronic device that implements any live broadcast interaction method provided by the embodiments of the present disclosure.
  • the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (e.g., car navigation terminals), and fixed terminals such as digital TVs, desktop computers, smart home devices, wearable electronic devices, servers, and the like.
  • the electronic device shown in FIG. 11 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • electronic device 600 includes one or more processors 601 and memory 602 .
  • Processor 601 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in electronic device 600 to perform desired functions.
  • Memory 602 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • Volatile memory may include, for example, random access memory (RAM) and/or cache memory, among others.
  • Non-volatile memory may include, for example, read only memory (ROM), hard disk, flash memory, and the like.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 601 may execute the program instructions to implement the live broadcast interaction method provided by the embodiments of the present disclosure, and may also implement other desired functions.
  • Various contents such as input signals, signal components, noise components, etc. may also be stored in the computer-readable storage medium.
  • the live interaction method provided by the embodiment of the present disclosure is applied to a terminal entering a virtual object live room.
  • the method may include: in the live broadcast interface, in response to a trigger operation for at least one virtual resource in the interactive area, determining at least one first target virtual resource; wherein each of the virtual resources is configured with resource data required for triggering; and, within a preset time of the live broadcast, displaying a dynamic effect corresponding to at least one second target virtual resource on the virtual object; wherein the preset time is related to the second target virtual resource, and the second target virtual resource is determined based on the first target virtual resources triggered by a plurality of the terminals.
  • the live interaction method applied to the server may include: determining at least one second target virtual resource according to at least one first target virtual resource triggered by multiple terminals; determining preset time information of the live broadcast according to the at least one second target virtual resource, wherein the preset time information is used to determine the time for displaying the dynamic effect corresponding to the at least one second target virtual resource on the virtual object; and sending the preset time information, the at least one second target virtual resource and the live video stream to the plurality of terminals.
  • the electronic device 600 may also perform other optional implementations provided by the method embodiments of the present disclosure.
  • the electronic device 600 may also include an input device 603 and an output device 604 interconnected by a bus system and/or other form of connection mechanism (not shown).
  • the input device 603 may also include, for example, a keyboard, a mouse, and the like.
  • the output device 604 can output various information to the outside, including the determined distance information, direction information, and the like.
  • the output device 604 may include, for example, displays, speakers, printers, and communication networks and their connected remote output devices, among others.
  • the electronic device 600 may also include any other suitable components according to the specific application.
  • the embodiments of the present disclosure may also be computer program products, which include computer program instructions that, when executed by the computing device, enable the computing device to implement any live interaction method provided by the embodiments of the present disclosure.
  • the computer program product may write program code for performing operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages, such as Java, C++, etc., as well as conventional procedural programming language, such as "C" language or similar programming language.
  • the program code may execute entirely on the user's electronic device, partly on the user's electronic device, as a stand-alone software package, partly on the user's electronic device and partly on a remote electronic device, or entirely on a remote electronic device.
  • embodiments of the present disclosure may further provide a computer-readable storage medium on which computer program instructions are stored.
  • when executed by a computing device, the computer program instructions enable the computing device to implement any live interaction method provided by the embodiments of the present disclosure.
  • the live interaction method provided by the embodiment of the present disclosure is applied to a terminal entering a virtual object live room.
  • the method may include: in the live broadcast interface, in response to a trigger operation for at least one virtual resource in the interactive area, determining at least one first target virtual resource; wherein each of the virtual resources is configured with resource data required for triggering; and, within a preset time of the live broadcast, displaying a dynamic effect corresponding to at least one second target virtual resource on the virtual object; wherein the preset time is related to the second target virtual resource, and the second target virtual resource is determined based on the first target virtual resources triggered by a plurality of the terminals.
  • the live interaction method applied to the server may include: determining at least one second target virtual resource according to at least one first target virtual resource triggered by multiple terminals; determining preset time information of the live broadcast according to the at least one second target virtual resource, wherein the preset time information is used to determine the time for displaying the dynamic effect corresponding to the at least one second target virtual resource on the virtual object; and sending the preset time information, the at least one second target virtual resource and the live video stream to the plurality of terminals.
  • when executed by a processor, the computer program instructions may also cause the processor to execute other optional implementations provided by the method embodiments of the present disclosure.
  • a computer-readable storage medium can employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • a readable storage medium may include, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present disclosure relate to a live broadcast interaction method and apparatus, an electronic device, and a storage medium. The method may include: in response to a trigger operation for at least one virtual resource in an interactive area of a live broadcast interface, determining at least one first target virtual resource corresponding to the trigger operation, wherein each virtual resource is configured with resource data required for triggering; and displaying, on a virtual object, a dynamic effect corresponding to at least one second target virtual resource, wherein the second target virtual resource is determined based on the first target virtual resources corresponding to multiple terminals. The embodiments of the present disclosure realize the transformation of resource data held by users into the display of specific dynamic effects in the live broadcast interface; a user can change the display state of a virtual object through the resource data the user holds, which enriches live broadcast interaction modes in virtual live broadcast scenarios and improves the fun of live broadcasting.

Description

直播交互方法、装置、电子设备和存储介质
本申请要求于2020年12月11日提交中国国家知识产权局、申请号为202011463611.1、申请名称为“直播交互方法、装置、电子设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本公开涉及互联网直播技术领域,尤其涉及一种直播交互方法、装置、电子设备和存储介质。
背景技术
互联网直播技术的发展,使得直播成为一种流行的娱乐、消费方式。在现有直播模式下,观众在直播间可以通过与主播聊天、向主播赠送礼物、点赞等方式进行直播交互。
然而,在虚拟直播场景下,即采用基于人工智能的虚拟主播(或称为虚拟对象)替代真人主播,观众与虚拟主播的交互方式仍然比较单一,直播趣味性较低。
发明内容
为了解决上述技术问题或者至少部分地解决上述技术问题,本公开实施例提供了一种直播交互方法、装置、电子设备和存储介质。
第一方面,本公开实施例提供了一种直播交互方法,应用于进入虚拟对象直播间的终端,包括:
在直播界面中,响应于针对互动区域中至少一个虚拟资源的触发操作,确定至少一个第一目标虚拟资源;其中,各所述虚拟资源配置有触发所需的资源数据;
直播的预设时间内,在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果;其中,所述预设时间与所述第二目标虚拟资源相关,所述第二目标虚拟资源基于多个所述终端触发的所述第一目标虚拟资源确定。
第二方面,本公开实施例还提供了直播交互方法,应用于服务端,包括:
根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源;
根据所述至少一个第二目标虚拟资源,确定直播的预设时间信息;其中,所述预设时间信息用于确定在虚拟对象上展示所述至少一个第二目标虚拟资源对应的动态效果的时间;
将所述预设时间信息、所述至少一个第二目标虚拟资源和直播视频流发送至所述多个终端。
第三方面,本公开实施例还提供了一种直播交互装置,配置于进入虚拟对象直播间的终端,包括:
第一目标虚拟资源确定模块,用于在直播界面中,响应于针对互动区域中至少一个虚拟资源的触发操作,确定至少一个第一目标虚拟资源;其中,各所述虚拟资源配置有触发所需的资源数据;
动态效果展示模块,用于直播的预设时间内,在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果;其中,所述预设时间与所述第二目标虚拟资源相关,所述第二目标虚拟资源基于多个所述终端触发的所述第一目标虚拟资源确定。
第四方面,本公开实施例还提供了一种直播交互装置,配置于服务端,包括:
第二目标虚拟资源确定模块,用于根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源;
预设时间信息确定模块,用于根据所述至少一个第二目标虚拟资源,确定直播的预设时间信息;其中,所述预设时间信息用于确定在虚拟对象上展示所述至少一个第二目标虚拟资源对应的动态效果的时间;
数据发送模块,用于将所述预设时间信息、所述至少一个第二目标虚拟资源和直播视频流发送至所述多个终端。
第五方面,本公开实施例还提供了一种电子设备,包括存储器和处理器,其中:所述存储器中存储有计算机程序,当所述计算机程序被所述处理器执行 时,使得所述电子设备实现本公开实施例提供的任一直播交互方法。
第六方面,本公开实施例还提供了一种计算机可读存储介质,所述存储介质中存储有计算机程序,当所述计算机程序被处理器执行时,使得所述计算设备实现本公开实施例提供的任一直播交互方法。
本公开实施例提供的技术方案与现有技术相比至少具有如下优点:在本公开实施例中,用户可以基于持有的资源数据,在终端展示的直播界面的互动区域上,产生针对至少一个虚拟资源的触发操作,终端响应于该触发操作,确定至少一个第一目标虚拟资源;然后由服务端从至少一个第一目标虚拟资源确定至少一个第二目标虚拟资源;其后终端可以在直播的预设时间内,在直播间的虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果,实现了用户持有的资源数据到直播界面中展示特定动态效果的转化,用户可以通过持有的资源数据让虚拟对象的展示状态发生变化,丰富了虚拟直播场景下的直播互动方式,提高了直播趣味性。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。
为了更清楚地说明本公开实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本公开实施例提供的一种直播交互方法的流程图;
图2为本公开实施例提供的一种展示有多个虚拟资源的互动区域的示意图;
图3为本公开实施例提供的一种在虚拟对象上展示第二目标虚拟资源对应的动态效果的一种直播界面示意图;
图4为本公开实施例提供的另一种在虚拟对象上展示第二目标虚拟资源对应的动态效果的一种直播界面示意图;
图5为本公开实施例提供的另一种展示有多个虚拟资源的互动区域的示意图;
图6为本公开实施例提供的一种资源领取区域的示意图;
图7为本公开实施例提供的另一种直播交互方法的流程图;
图8为本公开实施例提供的另一种直播交互方法的流程图;
图9为本公开实施例提供的一种直播交互装置的结构示意图;
图10为本公开实施例提供的另一种直播交互装置的结构示意图;
图11为本公开实施例提供的一种电子设备的结构示意图。
具体实施方式
为了能够更清楚地理解本公开的上述目的、特征和优点,下面将对本公开的方案进行进一步描述。需要说明的是,在不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本公开,但本公开还可以采用其他不同于在此描述的方式来实施;显然,说明书中的实施例只是本公开的一部分实施例,而不是全部的实施例。
图1为本公开实施例提供的一种直播交互方法的流程图,该方法可以适用于在虚拟直播场景下用户(即观众)与虚拟对象(即虚拟主播)如何进行交互的情况。该直播交互方法可以由配置于终端的直播交互装置执行,该装置可以采用软件和/或硬件实现,该终端可以包括进入虚拟对象直播间的任意终端。
如图1所示,本公开实施例提供的直播交互方法可以包括:
S101、在直播界面中,响应于针对互动区域中至少一个虚拟资源的触发操作,确定至少一个第一目标虚拟资源;其中,各虚拟资源配置有触发所需的资源数据。
示例性的,终端进入虚拟对象的直播间后,可以响应于针对直播界面上具有界面调用功能的控件的触控操作,将互动区域叠加展示在直播界面上。互动区域中展示的每个虚拟资源对应一种动态效果,可以用于改变直播界面中虚拟对象的展示状态。虚拟资源可以采用预设形状的图像标识等方式展示在互动区域中,例如互动区域中展示的多个虚拟资源可以是不同特效的标识。互动区域 支持用户的触控操作,例如点击、长按等,从而使得用户产生针对至少一个虚拟资源的触发操作。
在一个示例中,各虚拟资源配置有触发所需的资源数据,即针对虚拟资源触发操作需消耗对应的资源数据。该资源数据可以是用户积分、虚拟币或者用户等级。例如,用户在互动区域中每触发(即选中)一个虚拟资源,需要消耗一定的用户积分或者虚拟币,或者,当用户等级达到预设等级时,用户才有权限选中虚拟资源,即资源数据是一种用于触发虚拟资源的交换资源。
进一步地,服务端接收各终端触发的第一目标虚拟资源,并统计各第一目标虚拟资源被终端触发的数量,将数量最多或数量排在前预设名次或数量超过预设阈值的第一目标虚拟资源作为第二目标虚拟资源。例如,服务端接收到终端A触发的第一目标虚拟资源甲和丁、终端B触发的第一目标虚拟资源甲和乙、终端C触发的第一目标虚拟资源甲和乙、终端D触发的第一目标虚拟资源甲和丙,经过统计后确定第一目标虚拟资源甲的触发数量为4、第一目标虚拟资源乙的触发数量为2、第一目标虚拟资源丙和丁的触发数量均为1,此时,可以选择触发数量最多的甲作为第二目标虚拟资源,也可以将触发数量排名前2的甲和乙分别确定为第二目标虚拟资源,也可以将触发数量超过预设阈值2的甲作为第二目标虚拟资源。
可选地,终端也可以将各个第一目标虚拟资源对应的资源数据发送至服务端,以使得服务端将资源数据作为确定第二目标虚拟资源的一种筛选条件。
在本公开实施例中,服务端可以基于任一种或任几种类型的资源数据,结合各第一目标虚拟资源被终端触发的数量,从多个第一目标虚拟资源中确定第二目标虚拟资源。以资源数据为用户积分为例,服务端也可以同时考虑各第一目标虚拟资源对应的积分量和触发数量,将积分量超过积分阈值且触发数量超过预设阈值的第一目标虚拟资源确定为第二目标虚拟资源。其中,各阈值取值可以灵活设置。
以用户积分或虚拟币为例,可选地,终端可以根据用户在交互区域中针对虚拟资源的触控时长,确定用户提供的资源数据。例如,触控时间越长,用户 提供的资源数据越多。用户提供的资源数据越多,则直播界面上展示用户选择的虚拟资源对应的动态效果的概率越大。通过按照用户的触控时长,动态控制用户提供的资源数据,有助于提高交互趣味性。
终端监测到用户完成针对虚拟资源的触发操作后(例如用户指尖离开屏幕等),可以将互动区域隐藏,也可以根据用户触发的互动区域退出操作(例如触控交互区域中的退出控件,或者点击直播界面的预设区域(用于触发退出请求)等),将互动区域隐藏,从而避免互动区域持续遮挡直播界面。
S102、直播的预设时间内,在虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果;其中,预设时间与第二目标虚拟资源相关,第二目标虚拟资源基于多个终端触发的第一目标虚拟资源确定。
用户针对直播界面中的互动区域中至少一个虚拟资源的触发操作之后,所述终端可以在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果。在一个示例中,所述终端可以在直播的预设时间,在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果。
其中,直播的预设时间基于第二目标虚拟资源对应的属性特征确定,该属性特征可以包括但不限于第二目标虚拟资源的资源ID、资源标识(用于唯一标识该第二目标虚拟资源)、对应的动态效果的展示时长、虚拟资源的类型、以及虚拟资源的热度(受用户欢迎程度)、预设历史时间内的动态效果的展示频率等信息。可以预先定义属性特征与第二目标虚拟资源对应的动态效果展示时长的关联关系,例如可以预先设定虚拟资源的热度越高,对应的动态效果展示时长越长,或者预设历史时间内的虚拟资源动态效果的展示频率越大,则动态效果的当前展示时长越长,或者为不同类型的虚拟资源预先设定不同的动态效果展示时长等,从而根据属性特征动态确定第二目标虚拟资源对应的动态效果的展示时长,进而动态确定直播的预设时间。该预设时间是指在虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果的持续时间。
例如,终端可以将第二目标虚拟资源对应的多媒体数据(包括音频数据和视频数据)的接收时间作为时间起点,基于第二目标虚拟资源对应的动态效果 的展示时长,确定时间终点,在该时间段内叠加展示虚拟对象和第二目标虚拟资源对应的动态效果。如果第二目标虚拟资源包括至少两个,则可以基于各第二虚拟资源对应的多媒体数据的接收时间以及各第二虚拟资源对应的展示时长,确定一个持续时间段作为前述直播的预设时间。
进一步地例如,动态效果在先展示的第二虚拟资源a对应的多媒体数据的接收时间为t1,对应的动态效果的展示时长为x1,则第二虚拟资源a对应的动态效果展示结束时间为t1加x1得到的时间t2;动态效果在后展示的第二虚拟资源b对应的多媒体数据的接收时间为t3,对应的动态效果的展示时长为x2,则第二虚拟资源b对应的动态效果展示结束时间为t3加x2得到的时间t4,可以将t1至t4之间的持续时间作为直播的预设时间;如果第二虚拟资源a和第二虚拟资源b对应的动态效果支持在虚拟对象上同时叠加展示,第二虚拟资源a和第二虚拟资源b对应的多媒体数据的接收时间均为t1,基于各自的展示时长确定叠加展示时长x3,两个动态效果的展示结束时间为t1加x3得到的时间t5,可以将t1至t5之间的持续时间作为直播的预设时间。
在一种可选实施方式中,直播的预设时间内,在虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果,包括:
直播的预设时间内,分别获取直播视频流和至少一个第二目标虚拟资源对应的多媒体数据;
基于多媒体数据生成至少一个第二目标虚拟资源对应的动态效果,并基于直播视频流生成直播内容;
在直播内容中的虚拟对象上展示动态效果。
在另一种可选实施方式中,直播的预设时间内,在虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果,包括:
直播的预设时间内,获取叠加直播视频流;其中,叠加直播视频流由服务端按照至少一个第二目标虚拟资源对应的动态效果在虚拟对象上的展示位置,将直播视频流和所述至少一个第二目标虚拟资源对应的多媒体数据进行叠加处理后得到;
基于叠加直播视频流生成直播内容,直播内容中虚拟对象上展示有动态效果。
即服务端确定出至少一个第二目标虚拟资源后,可以将各第二目标虚拟资源对应的多媒体数据和直播视频流进行单独发送,终端收到多媒体数据和直播视频流后,完成两者的叠加展示,例如按照多媒体数据中包括的动态效果在虚拟对象上的展示位置信息,将生成的动态效果展示在虚拟对象上;服务端也可以直接预先将多媒体数据和直播视频流进行叠加处理,得到叠加直播视频流,然后推送至终端中进行展示,本公开实施例的技术方案在实现上具有较多的灵活性。
并且,针对多媒体数据和直播视频流单独发送的情况,服务端可以将多媒体数据携带在展示指令中发送至终端,展示指令用于指示终端在在虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果。
在本公开实施例中,用户可以基于持有的资源数据,在终端展示的直播界面的互动区域上,产生针对至少一个虚拟资源的触发操作,终端响应于该触发操作,确定至少一个第一目标虚拟资源;然后由服务端从至少一个第一目标虚拟资源确定至少一个第二目标虚拟资源;其后终端可以在直播的预设时间内,在直播间的虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果,实现了用户持有的资源数据到直播界面中展示特定动态效果的转化,用户可以通过持有的资源数据让虚拟对象的展示状态发生变化,丰富了虚拟直播场景下的直播互动方式,提高了直播趣味性。
其中,虚拟对象的展示状态的变化可以包括但不限于:身体特征的变化、装扮风格的变化、服装的变化、动作的变化、表情的变化以及所处环境的变化等。
以直播界面的互动区域中虚拟资源为特效为例,虚拟资源对应的多媒体数据即特效数据,直播界面的互动区域中可以展示的特效类型可以包括以下至少之一:服装变换类特效、装扮变换类特效、身体变换类特效和场景道具变换类特效。
示例性的,服装变换类特效包括变装;例如,在虚拟对象上展示变装特效对应的动态效果后,用户可以看到虚拟对象的服装由服装A变为了服装B;
装扮变换类特效包括兽耳和花环中的至少之一;例如,在虚拟对象上展示兽耳(具体例如猫耳)特效对应的动态效果后,用户可以看到虚拟对象头上长出了兽耳;在虚拟对象上展示花环特效对应的动态效果后,用户可以看到虚拟对象头上戴有花环;
身体变换类特效包括变长腿和变细手中的至少之一;例如,在虚拟对象上展示变长腿或变细手特效对应的动态效果后,用户可以看到虚拟对象的腿变长或者手变细;
场景道具变换类特效包括冰冻、零食投喂、喝饮料、吹风、赠送甜品和拖尾中的至少之一;示例性的,冰冻特效是指与冰冻状态相关的效果展示,例如在虚拟对象上展示冰冻特效对应的动态效果后,用户可以看到虚拟对象呈现冰冻状态,虚拟对象上身(例如头部)还可以展示雪花等,并且,此时虚拟对象的动作相比于冰冻特效展示之前的动作减缓;零食投喂或喝饮料特效是指与吃零食或喝饮料相关的效果展示,例如在虚拟对象上展示零食投喂或喝饮料特效对应的动态效果后,用户可以看到虚拟对象呈现吃零食或者喝饮料的状态;赠送甜品是指与主播收到甜品后的表现相关的效果展示,例如在虚拟对象上展示赠送甜品(例如赠送巧克力)特效对应的动态效果后,用户可以看到虚拟对象呈现手拿甜品的动作,并且,脸部还可以呈现满足、开心的表情;吹风特效是指与风相关的效果展示,例如在虚拟对象上展示吹风特效对应的动态效果后,用户可以看到虚拟对象的头发或者衣服等呈现随风摆动的状态,同时直播界面中的其他可随风摆动的物品也可以同时摆动,并且,还可以播放风声的特效音频;拖尾特效是指主播走路时脚上或者腿部会出现预设装饰物的效果展示,例如在虚拟对象上展示拖尾特效对应的动态效果后,用户可以看到虚拟对象走路时脚上呈现拖动一些花瓣的状态。
需要说明的是,上述关于特效的示例,用于对本公开实施例进行示例性说明,不应理解为对本公开实施例的具体限定。特效数据的具体类型可以根据业 务需求灵活设置。
图2以互动区域中展示的虚拟资源为特效为例,示出了本公开实施例提供的一种展示有多个虚拟资源的互动区域的示意图,不应理解为对本公开实施例的具体限定。如图2所示,终端可以根据用户在任一特效标识上的触控操作,确定用户选择的特效标识,同时确定用户提供的资源数据,例如用户积分。
图3为本公开实施例提供的一种在虚拟对象上展示第二目标虚拟资源对应的动态效果的一种直播界面示意图,具体以冰冻特效对应的动态效果为例,示出了虚拟对象的状态变化为冰冻状态,并且虚拟对象的上身展示有雪花飘落,进一步地,虚拟对象的表情可以变化为受冷的表情。
图4为本公开实施例提供的另一种在虚拟对象上展示第二目标虚拟资源对应的动态效果的一种直播界面示意图,具体以猫耳特效对应的动态效果为例,示出了虚拟对象的装扮状态的变化。
在上述技术方案的基础上,可选地,本公开实施例提供的直播交互方法还包括:接收动态效果展示提示信息;直播的预设时间内,将动态效果展示提示信息展示在直播界面;其中,动态效果展示提示信息由服务端基于各第二目标虚拟资源对应的属性特征生成。该属性特征可以包括但不限于第二目标虚拟资源的资源ID、资源标识、对应的动态效果的展示时长、虚拟资源的类型、以及虚拟资源的热度、预设历史时间内的动态效果的展示频率等信息。服务端根据各第二目标虚拟资源对应的属性特征不仅可以确定出各第二目标虚拟资源在直播界面或互动区域的图标,还可以根据属性特征与动态效果展示时长的关联关系,确定各第二目标虚拟资源对应的动态效果的展示时长,进而基于第二目标虚拟资源的图标和/或对应的动态效果的展示时长,生成动态效果展示提示信息,并由终端展示在直播界面上,可以使得用户直观化知晓当前展示的动态效果是哪种虚拟资源,同时还可以清楚了解动态效果的展示时长,或者剩余展示时长等信息。动态效果展示提示信息可以采用图像标识或文本等形式实现。通过展示动态效果展示提示信息,可以提高直播交互的友好性。
进一步地,动态效果展示提示信息包括动态效果展示提示标识,动态效果 展示提示标识用于表征各第二目标虚拟资源对应的图标和/或动态效果的剩余展示时长;直播的预设时间内,将动态效果展示提示信息展示在直播界面,包括:直播的预设时间内,响应于任一第二目标虚拟资源对应的动态效果的剩余展示时长为零,在直播界面隐藏该第二目标虚拟资源对应的动态效果展示提示标识。例如,在直播过程中,动态效果展示提示标识上的预设区域被不断填充,以表示动态效果即将达到展示时长,并在剩余展示时长为零时,直播界面隐藏该动态效果展示提示标识。
例如图3中展示了与冰冻特效对应的动态效果展示提示标识,基于猫耳特效图标和猫耳动态效果的展示时长生成,或者例如图4中展示了与猫耳特效对应的动态效果展示提示标识,基于冰冻特效图标和猫耳动态效果的展示时长生成,在直播过程中,动态效果展示提示标识上的圆形区域可以被不断地顺时针填充,达到动态效果的展示时长后,圆形区域呈现完全填充状态,并且,直播界面隐藏该动态效果展示提示标识。动态效果展示提示标识在直播界面的具体展示位置本公开实施例不作具体限定。
可选地,本公开实施例提供的直播交互方法还包括:接收互动区域中各虚拟资源的排序结果;其中,排序结果根据各虚拟资源对应的资源数据确定;将排序结果展示在互动区域。
示例性的,服务端可以根据多个终端针对直播界面的互动区域中至少一个虚拟资源的资源数据,统计互动区域中每个虚拟资源对应的资源数据;然后根据资源数据的统计结果,确定互动区域中各虚拟资源的排序结果;最后将排序结果发送至多个终端,以使多个终端在互动区域展示排序结果。通过展示各个虚拟资源的排序结果,用户可以直观的看到每个虚拟资源的受欢迎程度,提高直播交互趣味性。
进一步地,将排序结果展示在互动区域,包括:
根据互动区域中各虚拟资源的展示位置,确定各虚拟资源的排序结果的展示位置,并展示排序结果。排序结果的具体展示可以根据页面布局而定,本公开实施例不作具体限定,例如,排序结果可以展示在每个虚拟资源对应的展示 区域的左上角等。
图5为本公开实施例提供的另一种展示有多个虚拟资源的互动区域的示意图,具体以虚拟资源为特效为例,如图5所示,每个虚拟资源对应的展示区域的左上角展示有该虚拟资源对应的排名。随着服务端与终端的不断交互,每个虚拟资源对应的排名可以实时更新。
以资源数据可以转化为投票数量为例,服务端根据多个终端针对直播界面的互动区域中至少一个虚拟资源的投票数量,确定出每个虚拟资源对应的总投票数量,然后发送至多个终端,以进行展示。如图5所示,交互区域中同时展示了每个虚拟资源对应的投票数量。
并且,本公开实施例提供的直播交互方法还可以包括:接收展示时序提示信息,并展示在直播界面的互动区域中;其中,展示时序提示信息由服务端根据各第二目标虚拟资源对应的动态效果的展示顺序生成,用于提示用户关于待展示动态效果的顺序信息。继续如图5所示,以待展示的动态效果对应的第二目标虚拟资源是排名第一的目标虚拟资源为例,展示时序提示信息可以是“xx秒之后展示排名第一的互动(即展示排名第一的动态效果)”。
可选地,本公开实施例提供的直播交互方法还包括:
根据用户的直播交互行为,在直播界面的资源领取区域展示资源数据的领取提示信息;示例性,终端获取到用户的直播交互行为后,可以将该直播交互行为发生至服务端,由服务端确定每种直播交互行为对应的资源数据,并生成资源数据的领取提示信息,然后发送至终端中进行展示;此外,终端也可以直接根据用户的直播交互行为,确定用户可领取的资源数据以及资源数据的领取提示信息;
响应于用户的资源领取操作,生成资源同步请求;
将资源同步请求发送至服务端,以使服务端将用户领取的资源数据同步至用户账户中。
其中,直播交互行为包括以下至少之一:执行直播间的任务、观看直播、向主播送出礼物、直播点赞和登录直播间。即在本公开实施例中,用户可以通 过在直播间的多种直播交互行为,不断获取并积累资源数据,然后利用资源数据交换直播界面可以展示的动态效果。
图6为本公开实施例提供的一种资源领取区域的示意图,用于对本公开实施例进行示例性说明,并且具体以资源数据为用户积分为例。如图6所示,在资源领取区域上,针对观看直播以及登录直播间的用户行为,用户可以通过点击“领奖励”领取相应的积分;针对直播间任务1,由于用户未完成该任务,因此当前状态下不能领取相应的积分,用户可以通过点击“做任务”,使得终端由资源领取区域切换至任务执行区域或界面,完成相应任务后,再切换至资源领取区域领取相应的积分;针对直播间任务2,由于用户完成该任务,因此,也可以直接进行积分领取。
此外,终端中的直播界面上还可以展示排行榜区域,该排行榜区域上可以展示直播用户的直播观看时长排行榜、礼物赠送排行榜、点赞排行榜、用户与直播间的好感度(或亲密度)排行榜等信息,相关排行榜信息具体由服务端统计各个用户的相关信息得到排行榜信息并发送至各个终端中展示。通过展示多种排行榜信息,有助于提升直播交互的趣味性。
可选地,本公开实施例提供的直播交互方法还包括:
响应于针对第三目标虚拟资源的触发操作,展示虚拟对象响应于第三目标虚拟资源的互动效果,和/或,展示虚拟对象的虚拟宠物响应于第三目标虚拟资源的互动效果。
其中,第三目标虚拟资源包括任意的能够触发虚拟对象或虚拟宠物展示互动效果的虚拟资源,例如可以是直播间的虚拟礼物或者特定特效等。以虚拟礼物为例,用户可以在终端中产生向虚拟对象赠送礼物的触发操作,终端可以将该触发操作发送至服务端或者基于该触发操作生成互动数据获取请求后再发送至服务端,以请求服务端发送虚拟对象响应于该虚拟礼物的互动数据(例如答谢动画数据),和/或,虚拟对象的虚拟宠物响应于该虚拟礼物的互动数据(例如摆动礼物的动画数据);终端收到互动数据后生成互动效果,在直播界面上展示;此外,服务端在确定出互动数据后,也可以根据互动数据对应的互动效 果相对虚拟对象的展示位置,或者互动数据对应的互动效果相对虚拟宠物的展示位置,将互动数据和直播视频流叠加处理后发送至终端,终端基于叠加处理后的直播视频流生成直播内容,此时直播内容上直接展示有虚拟对象响应于该虚拟礼物的互动效果,和/或,展示有虚拟对象的虚拟宠物响应于该虚拟礼物的互动效果。
图7为本公开实施例提供的另一种直播交互方法的流程图,基于上述技术方案进一步优化与扩展,并可以与上述各个可选实施方式进行结合。如图7所示,本公开实施例提供的直播交互方法可以包括:
S201、在直播界面中,响应于针对互动区域中至少一个虚拟资源的触发操作,确定至少一个第一目标虚拟资源;其中,各虚拟资源配置有触发所需的资源数据。
S202、直播的预设时间内,在虚拟对象上展示至少两个第二目标虚拟资源对应的第一叠加动态效果;其中,第二目标虚拟资源基于多个终端触发的第一目标虚拟资源确定。
其中,第一叠加动态效果是指至少两个第二目标虚拟资源中各目标虚拟资源对应的动态效果的叠加。各第二目标虚拟资源对应的动态效果是否支持叠加展示由服务端确定。服务端确定出各第二目标虚拟资源对应的动态效果可叠加展示后,可以将各第二目标虚拟资源对应的多媒体数据同时发送至终端,例如,将各第二目标虚拟资源对应的多媒体数据作为一个数据包,与直播视频流一起发送至终端;或者按照各第二目标虚拟资源对应的动态效果在虚拟对象上的展示位置,将各第二目标虚拟资源对应的多媒体数据和直播视频流进行叠加处理,得到叠加直播视频流,然后发送至终端。
此时,直播的预设时间,即第一叠加动态效果的叠加展示时长,可以基于各第二目标虚拟资源对应的动态效果的展示时长确定,例如将各展示时长的平均时长作为直播过程中展示第一叠加动态效果的预设时间,或者将各展示时长中的最大值作为直播过程中展示第一叠加动态效果的预设时间。
此外,在展示第一叠加动态效果的过程中,各个动态效果还可以按照各自 的展示时长进行展示,即达到自己的展示时长后,可以在直播界面隐藏相应的动态效果。
针对存在叠加动态效果的情况,直播界面展示的各第二目标虚拟资源的动态效果展示提示信息,可以是基于各第二目标虚拟资源的图标和/或叠加展示时长生成。例如,如果叠加展示时长为各展示时长的平均时长,即每个第二目标虚拟资源的展示时长是相同的,则可以在直播界面上仅展示一个动态效果展示提示信息,该提示信息中包括图标以及动态效果的剩余展示时长,需要说明的是,上述图标中包括各上述第二目标虚拟资源,或表征各上述第二目标虚拟资源;如果叠加展示时长为各展示时长中的最大值,则可以在直播界面上分别展示各第二目标虚拟资源对应的动态效果展示提示信息,提示信息包括各第二目标虚拟资源的图标以及动态效果的剩余展示时长。
具体来说,服务端还可以首先确定各第二目标虚拟资源对应的叠加资源图标,然后基于该叠加资源图标和/或叠加展示时长生成动态效果展示提示信息,最终直播界面上将会展示一个动态效果展示提示信息,例如直播界面上展示了一个动态效果展示提示标识,用于表征各第二目标虚拟资源对应的叠加资源图标和叠加动态效果的剩余展示时长。其中,叠加资源图标可以是一个预设的图标,也可以是基于各第二目标虚拟资源对应的图标生成的新图标(图标生成方式可以自定义设置),还可以是将各第二目标虚拟资源对应的图标按照预设方式进行组合排列形成的图标,例如将各第二目标虚拟资源对应的图标进行上下或左右排列形成叠加资源图标等。当然,其他能够用于生成叠加资源图标的方式也可以采用。
S203、直播的预设时间内,在虚拟对象上分别展示各第二目标虚拟资源对应的动态效果;其中,第二目标虚拟资源基于多个终端触发的第一目标虚拟资源确定。
此时,直播的预设时间可以基于各第二目标虚拟资源对应的动态效果的展示时长的累计值得到,在相邻动态效果之间不存在展示时间间隔时,直播的预设时间即各展示时长的累计值。
S204、直播的预设时间内,在虚拟对象上分别展示至少三个第二目标虚拟资源对应的第二叠加动态效果和非叠加动态效果;其中,第二目标虚拟资源基于多个终端触发的第一目标虚拟资源确定。
其中,第二叠加动态效果是指至少三个第二目标虚拟资源中至少两个目标虚拟资源对应的动态效果的叠加,非叠加动态效果是指至少三个第二目标虚拟资源中除去至少两个目标虚拟资源外的剩余目标虚拟资源对应的动态效果。
此时,直播的预设时间可以基于第二叠加动态效果的叠加展示时长和各非叠加动态效果各自的展示时长确定,例如将第二叠加动态效果的叠加展示时长和各非叠加动态效果的展示时长的累计值,作为直播过程中展示的第二叠加动态效果和各非叠加动态效果的预设时间。第二叠加动态效果的叠加展示时长可以参考第一叠加动态效果的叠加展示时长的确定方式来确定。
在上述技术方案的基础上,可选地,在虚拟对象上分别展示各第二目标虚拟资源对应的动态效果,包括:
按照各第二目标虚拟资源对应的多媒体数据的获取顺序,在虚拟对象上依次展示各第二目标虚拟资源对应的动态效果;
在虚拟对象上分别展示至少三个第二目标虚拟资源对应的第二叠加动态效果和非叠加动态效果,包括:
按照至少三个第二目标虚拟资源对应的多媒体数据中参与第二叠加动态效果的多媒体数据的获取顺序、以及参与非叠加动态效果的多媒体数据的获取顺序,在虚拟对象上分别展示第二叠加动态效果和非叠加动态效果;
其中,多媒体数据的获取顺序根据各第二目标虚拟资源对应的资源数据确定,相应的,服务端根据各第二目标虚拟资源对应的资源数据确定各多媒体数据的发送顺序;第二目标虚拟资源基于多个终端触发的第一目标虚拟资源对应的资源数据确定。
第二目标虚拟资源对应的资源数据越多,第二目标虚拟资源对应的多媒体数据的发送顺序或获取顺序越靠前。针对动态效果可叠加展示的第二目标虚拟资源,各第二目标虚拟资源对应的多媒体数据的发送顺序或获取顺序相同,服 务端可以将各第二目标虚拟资源对应的资源数据的总和作为每个第二目标虚拟资源对应的资源数据,然后与动态效果不可叠加展示的第二目标虚拟资源各自对应的资源数据进行比较,从而确定各第二目标虚拟资源对应的多媒体数据的发送顺序。对应于该发送顺序,终端也可以各第二目标虚拟资源对应的多媒体数据的获取顺序。
以第二目标虚拟资源为特效为例,例如,服务端确定出三个特效:特效a、特效b和特效c,并且确定出特效a和特效b对应的动态效果可以叠加展示,服务端根据特效a、特效b和特效c分别对应的资源数据(例如用户积分),确定特效a、特效b和特效c分别对应的多媒体数据的发送顺序为:首先发送特效c对应的多媒体数据,然后一起发送特效a和特效b对应的多媒体数据;相应的,终端首先获取特效c对应的多媒体数据,然后获取特效a和特效b对应的多媒体数据,并按照该获取顺序依次在直播界面展示动态效果。
进一步地,以特效a为兽耳特效,特效b为吹风特效,将兽耳特效和吹风特效对应的动态效果同时在虚拟对象上展示后,用户将会看到虚拟对象的头上长有兽耳,同时兽耳随风晃动的效果。
在本公开实施例中,直播过程中虚拟对象上可以叠加展示多个动态效果的叠加效果,也可以单独展示各个动态效果,用户可以观看到不同效果的直播界面,丰富了虚拟直播场景下的直播互动方式,提高了直播趣味性。
图8为本公开实施例提供的另一种直播交互方法的流程图,该方法可以应用于服务端,并可以由配置于服务端的直播交互装置执行,该装置可以采用软件和/或硬件实现。
本公开实施例提供的应用于服务端的直播交互方法,与本公开实施例提供的应用于客户端的直播交互方法配合执行,以下未详细解释的内容可以参考上述实施例中的解释。
如图8所示,本公开实施例提供的直播交互方法可以包括:
S301、根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源。
服务端可以根据预设的确定规则,从至少一个第一目标虚拟资源中确定至少一个第二目标虚拟资源,该确定规则可以灵活设置,本公开实施例不作具体限定,例如可以根据各第一目标虚拟资源对应的资源数据,从中确定多个第二目标虚拟资源。
S302、根据至少一个第二目标虚拟资源,确定直播的预设时间信息;其中,预设时间信息用于确定在虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果的时间。
示例性的,可以根据各第二目标虚拟资源对应的属性特征确定直播的预设时间信息,属性特征可以包括第二目标虚拟资源对应的动态效果的展示时长。
S303、将预设时间信息、至少一个第二目标虚拟资源和直播视频流发送至多个终端。
可选地,根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源,包括:
根据多个终端确定的至少一个第一目标虚拟资源对应的资源数据,统计每个第一目标虚拟资源对应的资源数据;
根据资源数据的统计结果,从至少一个第一目标虚拟资源中确定至少一个第二目标虚拟资源。
可选地,根据资源数据的统计结果,从至少一个第一目标虚拟资源中确定至少一个第二目标虚拟资源,包括:
根据资源数据的统计结果,对各第一目标虚拟资源进行排序,并将排名在前的预设数量的第一目标虚拟资源,确定为至少一个第二目标虚拟资源;或者,
将资源数据的统计结果超过数据阈值的第一目标虚拟资源,确定为至少一个第二目标虚拟资源。
可选地,将预设时间信息、至少一个第二目标虚拟资源和直播视频流发送至多个终端,包括:
将预设时间信息发送至多个终端;
分别将至少一个第二目标虚拟资源对应的多媒体数据和直播视频流发送 至多个终端;或者,
按照至少一个第二目标虚拟资源对应的动态效果在虚拟对象上的展示位置,将至少一个第二目标虚拟资源对应的多媒体数据和直播视频流进行叠加处理,得到叠加直播视频流;
将叠加直播视频流发送至多个终端。
可选地,在根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源之后,还包括:
确定各第二目标虚拟资源对应的动态效果是否存在叠加展示冲突;
根据确定结果各第二目标虚拟资源中动态效果可叠加展示的目标虚拟资源和不可叠加展示的目标虚拟资源,以使得多个终端展示动态效果可叠加展示的目标虚拟资源对应的叠加动态效果,以及分别展示动态效果不可叠加展示的目标虚拟资源各自对应的动态效果。
可选地,本公开实施例提供的直播交互方法还包括:
如果均存在叠加展示冲突,则根据各第二目标虚拟资源对应的资源数据,确定各第二目标虚拟资源对应的多媒体数据的发送顺序;或者,
如果存在动态效果可叠加展示的目标虚拟资源,则根据动态效果可叠加展示的目标虚拟资源对应的资源数据,以及动态效果不可叠加展示的目标虚拟资源对应的资源数据,确定各第二目标虚拟资源对应的多媒体数据的发送顺序;
相应的,分别将至少一个第二目标虚拟资源对应的多媒体数据和直播视频流发送至多个终端,包括:
按照发送顺序,依次将各第二目标虚拟资源对应的多媒体数据发送至多个终端,并将直播视频流发送至多个终端。
可选地,按照至少一个第二目标虚拟资源对应的动态效果在虚拟对象上的展示位置,将至少一个第二目标虚拟资源对应的多媒体数据和直播视频流进行叠加处理,得到叠加直播视频流,包括:
按照发送顺序,以及各第二目标虚拟资源对应的动态效果在虚拟对象上的展示位置,依次将各第二目标虚拟资源对应的多媒体数据和直播视频流进行叠 加处理,得到叠加直播视频流。
可选地,根据至少一个第二目标虚拟资源,确定直播的预设时间信息,包括:
根据各第二目标虚拟资源中动态效果可叠加展示的目标虚拟资源对应的叠加展示时长,确定直播的第一预设时间信息;
根据各第二目标虚拟资源中动态效果不可叠加展示的目标虚拟资源各自对应的展示时长,确定直播的第二预设时间信息。
可选地,本公开实施例提供的直播交互方法还包括:
基于各第二目标虚拟资源对应的属性特征,生成各第二目标虚拟资源对应的动态效果展示提示信息;
将动态效果展示提示信息发送至多个终端,以使多个终端在直播界面展示动态效果展示提示信息。
可选地,动态效果展示提示信息包括动态效果展示提示标识,基于各第二目标虚拟资源对应的属性特征,生成各第二目标虚拟资源对应的动态效果展示提示信息,包括:
基于各第二目标虚拟资源对应的图标和展示时长,生成各第二目标虚拟资源对应的动态效果展示提示标识。
可选地,本公开实施例提供的直播交互方法还包括:
根据多个终端针对直播界面的互动区域中至少一个虚拟资源的资源数据,统计互动区域中每个虚拟资源对应的资源数据;
根据资源数据的统计结果,确定互动区域中各虚拟资源的排序结果;
将排序结果发送至多个终端,以使多个终端在互动区域展示排序结果。
可选地,本公开实施例提供的直播交互方法还包括:
根据多个终端中用户的直播交互行为,确定用户可得到的资源数据,并向多个终端发送资源数据的领取提示信息,以使多个终端在直播界面的资源领取区域展示资源数据的领取提示信息;
接收多个终端发送的资源同步请求;其中,资源同步请求根据用户的资源 领取操作生成;
根据资源同步请求,将多个终端中用户领取的资源数据同步至用户账户中。
可选地,本公开实施例提供的直播交互方法还包括:
根据多个终端中针对第三目标虚拟资源的触发操作,确定虚拟对象响应于第三目标虚拟资源的互动数据,和/或,确定虚拟对象的虚拟宠物响应于第三目标虚拟资源的互动数据;
分别将互动数据和直播视频流发送至多个终端;或者,
根据互动数据对应的互动效果相对虚拟对象的展示位置,或者根据互动数据对应的互动效果相对虚拟宠物的展示位置,将互动数据和直播视频流叠加处理后发送至多个终端。
在本公开实施例中,通过终端与服务端之间的交互,实现了用户资源数据到直播视频帧中展示动态效果的转化,用户可以通过持有的虚资源数据让虚拟对象的展示状态发生变化,丰富了虚拟直播场景下的直播互动方式,提高了直播趣味性。
图9为本公开实施例提供的一种直播交互装置的结构示意图,该装置可以配置于进入虚拟对象直播间的终端,并可以采用软件和/或硬件实现。
本公开实施例提供的配置于终端的直播交互装置与本公开实施例提供的应用于终端的直播交互方法属于相同的发明构思,以下未详细描述的内容可以参考前述实施例中的描述。
如图9所示,本公开实施例提供的直播交互装置400可以包括第一目标虚拟资源确定模块401和动态效果展示模块402,其中:
第一目标虚拟资源确定模块401,用于在直播界面中,响应于针对互动区域中至少一个虚拟资源的触发操作,确定至少一个第一目标虚拟资源;其中,各虚拟资源配置有触发所需的资源数据;
动态效果展示模块402,用于直播的预设时间内,在虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果;其中,预设时间与第二目标虚拟资源相关,第二目标虚拟资源基于多个终端触发的第一目标虚拟资源确定。
可选地,动态效果展示模块402包括:
第一获取单元,用于直播的预设时间内,分别获取至少一个第二目标虚拟资源对应的多媒体数据和直播视频流;
第一展示单元,用于基于多媒体数据生成至少一个第二目标虚拟资源对应的动态效果,并基于直播视频流生成直播内容;
在直播内容中的虚拟对象上展示动态效果。
可选地,动态效果展示模块402包括:
第二获取单元,用于直播的预设时间内,获取叠加直播视频流;其中,叠加直播视频流由服务端按照至少一个第二目标虚拟资源对应的动态效果在虚拟对象上的展示位置,将至少一个第二目标虚拟资源对应的多媒体数据和直播视频流进行叠加处理后得到;
第二展示单元,用于基于叠加直播视频流生成直播内容,直播内容中虚拟对象上展示有动态效果。
可选地,动态效果展示模块402包括:
第三展示单元,用于直播的预设时间内,在虚拟对象上展示至少两个第二目标虚拟资源对应的第一叠加动态效果;或者,
第四展示单元,用于直播的预设时间内,在虚拟对象上分别展示各第二目标虚拟资源对应的动态效果;或者,
第五展示单元,用于直播的预设时间内,在虚拟对象上分别展示至少三个第二目标虚拟资源对应的第二叠加动态效果和非叠加动态效果;
其中,第一叠加动态效果是指至少两个第二目标虚拟资源中各目标虚拟资源对应的动态效果的叠加,第二叠加动态效果是指至少三个第二目标虚拟资源中至少两个目标虚拟资源对应的动态效果的叠加,非叠加动态效果是指至少三个第二目标虚拟资源中除去至少两个目标虚拟资源外的剩余目标虚拟资源对应的动态效果。
可选地,直播的预设时间基于第二目标虚拟资源对应的属性特征确定。可选地,第四展示单元具体用于:
直播的预设时间内,按照各第二目标虚拟资源对应的多媒体数据的获取顺序,在虚拟对象上依次展示各第二目标虚拟资源对应的动态效果;
可选地,第五展示单元具体用于:
直播的预设时间内,按照至少三个第二目标虚拟资源对应的多媒体数据中参与第二叠加动态效果的多媒体数据的获取顺序、以及参与非叠加动态效果的多媒体数据的获取顺序,在虚拟对象上分别展示第二叠加动态效果和非叠加动态效果;
其中,多媒体数据的获取顺序根据各第二目标虚拟资源对应的资源数据确定,第二目标虚拟资源基于多个终端触发的第一目标虚拟资源对应的资源数据确定。
可选地,本公开实施例提供的直播交互装置400还包括:
提示信息接收模块,用于接收动态效果展示提示信息;其中,动态效果展示提示信息由服务端基于各第二目标虚拟资源对应的属性特征生成;
提示信息展示模块,用于直播的预设时间内,将动态效果展示提示信息展示在直播界面。
可选地,动态效果展示提示信息包括动态效果展示提示标识,动态效果展示提示标识用于表征各第二目标虚拟资源对应的图标和动态效果的剩余展示时长;
提示信息展示模块具体用于:
直播的预设时间内,响应于任一第二目标虚拟资源对应的动态效果的剩余展示时长为零,在直播界面隐藏该第二目标虚拟资源对应的动态效果展示提示标识。
可选地,本公开实施例提供的直播交互装置400还包括:
排序结果接收模块,用于接收互动区域中各虚拟资源的排序结果;其中,排序结果根据各虚拟资源对应的资源数据确定;
排序结果展示模块,用于将排序结果展示在互动区域。
可选地,排序结果展示模块具体用于:
根据互动区域中各虚拟资源的展示位置,确定各虚拟资源的排序结果的展示位置,并展示排序结果。
可选地,本公开实施例提供的直播交互装置400还包括:
领取提示信息展示模块,用于根据用户的直播交互行为,在直播界面的资源领取区域展示资源数据的领取提示信息;
资源同步请求生成模块,用于响应于用户的资源领取操作,生成资源同步请求;
资源同步请求发送模块,用于将资源同步请求发送至服务端,以使服务端将用户领取的资源数据同步至用户账户中。
可选地,本公开实施例提供的直播交互装置400还包括:
互动效果展示模块,用于响应于针对第三目标虚拟资源的触发操作,展示虚拟对象响应于第三目标虚拟资源的互动效果,和/或,展示虚拟对象的虚拟宠物响应于第三目标虚拟资源的互动效果。
本公开实施例所提供的配置于终端的直播交互装置可执行本公开实施例所提供的应用于终端的直播交互方法,具备执行方法相应的功能模块和有益效果。本公开装置实施例中未详尽描述的内容可以参考本公开任意方法实施例中的描述。
图10为本公开实施例提供的另一种直播交互装置的结构示意图,该装置可以配置于服务端,并可采用软件和/或硬件实现。
本公开实施例提供的配置于服务端的直播交互装置与本公开实施例提供的应用于服务端的直播交互方法属于相同的发明构思,以下未详细的内容可以参考前述实施例中的描述。
如图10所示,本公开实施例提供的直播交互装置500可以包括第二目标虚拟资源确定模块501、预设时间信息确定模块502和数据发送模块503,其中:
第二目标虚拟资源确定模块501,用于根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源;
预设时间信息确定模块502,用于根据至少一个第二目标虚拟资源,确定直播的预设时间信息;其中,预设时间信息用于确定在虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果的时间;
数据发送模块503,用于将预设时间信息、至少一个第二目标虚拟资源和直播视频流发送至多个终端。
可选地,第二目标虚拟资源确定模块501包括:
资源数据统计单元,用于根据多个终端确定的至少一个第一目标虚拟资源对应的资源数据,统计每个第一目标虚拟资源对应的资源数据;
第二目标虚拟资源确定单元,用于根据资源数据的统计结果,从至少一个第一目标虚拟资源中确定至少一个第二目标虚拟资源。
可选地,第二目标虚拟资源确定单元具体用于:
根据资源数据的统计结果,对各第一目标虚拟资源进行排序,并将排名在前的预设数量的第一目标虚拟资源,确定为至少一个第二目标虚拟资源;或者,
将资源数据的统计结果超过数据阈值的第一目标虚拟资源,确定为至少一个第二目标虚拟资源。
可选地,数据发送模块503包括:
第一发送单元,用于将预设时间信息发送至多个终端;
第二发送单元,用于分别将至少一个第二目标虚拟资源对应的多媒体数据和直播视频流发送至多个终端;或者,
叠加直播视频流确定单元,用于按照至少一个第二目标虚拟资源对应的动态效果在虚拟对象上的展示位置,将至少一个第二目标虚拟资源对应的多媒体数据和直播视频流进行叠加处理,得到叠加直播视频流;
第三发送单元,用于将叠加直播视频流发送至多个终端。
可选地,本公开实施例提供的直播交互装置500还包括:
叠加展示确定模块,用于确定各第二目标虚拟资源对应的动态效果是否存在叠加展示冲突;
虚拟资源区分模块,用于根据确定结果各第二目标虚拟资源中动态效果可 叠加展示的目标虚拟资源和不可叠加展示的目标虚拟资源,以使得多个终端展示动态效果可叠加展示的目标虚拟资源对应的叠加动态效果,以及分别展示动态效果不可叠加展示的目标虚拟资源各自对应的动态效果。
可选地,本公开实施例提供的直播交互装置500还包括:
第一发送顺序确定模块,用于如果均存在叠加展示冲突,则根据各第二目标虚拟资源对应的资源数据,确定各第二目标虚拟资源对应的多媒体数据的发送顺序;或者,
第二发送顺序确定模块,用于如果存在动态效果可叠加展示的目标虚拟资源,则根据动态效果可叠加展示的目标虚拟资源对应的资源数据,以及动态效果不可叠加展示的目标虚拟资源对应的资源数据,确定各第二目标虚拟资源对应的多媒体数据的发送顺序;
相应的,第二发送单元具体用于:
按照发送顺序,依次将各第二目标虚拟资源对应的多媒体数据发送至多个终端,并将直播视频流发送至多个终端。
可选地,叠加直播视频流确定单元具体用于:
按照发送顺序,以及各第二目标虚拟资源对应的动态效果在虚拟对象上的展示位置,依次将各第二目标虚拟资源对应的多媒体数据和直播视频流进行叠加处理,得到叠加直播视频流。
可选地,预设时间信息确定模块502包括:
第一预设时间信息确定单元,用于根据各第二目标虚拟资源中动态效果可叠加展示的目标虚拟资源对应的叠加展示时长,确定直播的第一预设时间信息;
第二预设时间信息确定单元,用于根据各第二目标虚拟资源中动态效果不可叠加展示的目标虚拟资源各自对应的展示时长,确定直播的第二预设时间信息。
可选地,本公开实施例提供的直播交互装置500还包括:
提示信息生成模块,用于基于各第二目标虚拟资源对应的属性特征,生成各第二目标虚拟资源对应的动态效果展示提示信息;
提示信息发送模块,用于将动态效果展示提示信息发送至多个终端,以使多个终端在直播界面展示动态效果展示提示信息。
可选地,动态效果展示提示信息包括动态效果展示提示标识,提示信息生成模块具体用于:
基于各第二目标虚拟资源对应的图标和展示时长,生成各第二目标虚拟资源对应的动态效果展示提示标识。
可选地,本公开实施例提供的直播交互装置500还包括:
资源数据统计模块,用于根据多个终端针对直播界面的互动区域中至少一个虚拟资源的资源数据,统计互动区域中每个虚拟资源对应的资源数据;
排序结果确定模块,用于根据资源数据的统计结果,确定互动区域中各虚拟资源的排序结果;
排序结果发送模块,用于将排序结果发送至多个终端,以使多个终端在互动区域展示排序结果。
可选地,本公开实施例提供的直播交互装置500还包括:
领取提示信息发送模块,用于根据多个终端中用户的直播交互行为,确定用户待领取的资源数据,并向多个终端发送资源数据的领取提示信息,以使多个终端在直播界面的资源领取区域展示资源数据的领取提示信息;此处提及的用户待领取的资源数据,可以理解为用户可得到的资源数据。
资源同步请求接收模块,用于接收多个终端发送的资源同步请求;其中,资源同步请求根据用户的资源领取操作生成;
资源数据同步模块,用于根据资源同步请求,将多个终端中用户领取的资源数据同步至用户账户中。
可选地,本公开实施例提供的直播交互装置500还包括:
互动数据确定模块,用于根据多个终端中针对第三目标虚拟资源的触发操作,确定虚拟对象响应于第三目标虚拟资源的互动数据,和/或,确定虚拟对象的虚拟宠物响应于第三目标虚拟资源的互动数据;
第一互动数据发送模块503,用于分别将互动数据和直播视频流发送至多 个终端;或者,
第二互动数据发送模块503,用于根据互动数据对应的互动效果相对虚拟对象的展示位置,或者根据互动数据对应的互动效果相对虚拟宠物的展示位置,将互动数据和直播视频流叠加处理后发送至多个终端。
本公开实施例所提供的配置于服务端的直播交互装置可执行本公开实施例所提供的应用于服务端的直播交互方法,具备执行方法相应的功能模块和有益效果。本公开装置实施例中未详尽描述的内容可以参考本公开任意方法实施例中的描述。
图11为本公开实施例提供的一种电子设备的结构示意图。用于对实现本公开实施例提供的任意直播交互方法的电子设备进行示例性说明。本公开实施例中的电子设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机、智能家居设备、可穿戴电子设备、服务器等等的固定终端。图11示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和占用范围带来任何限制。
如图11所示,电子设备600包括一个或多个处理器601和存储器602。
处理器601可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其他形式的处理单元,并且可以控制电子设备600中的其他组件以执行期望的功能。
存储器602可以包括一个或多个计算机程序产品,计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。在计算机可读存储介质上可以存储一个或多个计算机程序指令,处理器601可以运行程序指令,以实现本公开实施例提供的直播交互方法,还可以实现其他期望的功能。在计算机可读存储介质中还可以存储诸如输入信号、信号 分量、噪声分量等各种内容。
一方面,本公开实施例提供的直播交互方法,应用于进入虚拟对象直播间的终端的,该方法可以包括:在直播界面中,响应于针对互动区域中至少一个虚拟资源的触发操作,确定至少一个第一目标虚拟资源;其中,各所述虚拟资源配置有触发所需的资源数据;直播的预设时间内,在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果;其中,所述预设时间与所述第二目标虚拟资源相关,所述第二目标虚拟资源基于多个所述终端触发的所述第一目标虚拟资源确定。
另一方面,本公开实施例提供的应用于服务端的直播交互方法可以包括:根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源;根据所述至少一个第二目标虚拟资源,确定直播的预设时间信息;其中,所述预设时间信息用于确定在虚拟对象上展示所述至少一个第二目标虚拟资源对应的动态效果的时间;将所述预设时间信息、所述至少一个第二目标虚拟资源和直播视频流发送至所述多个终端。
应当理解,电子设备600还可以执行本公开方法实施例提供的其他可选实施方案。
在一个示例中,电子设备600还可以包括:输入装置603和输出装置604,这些组件通过总线系统和/或其他形式的连接机构(未示出)互连。
此外,该输入装置603还可以包括例如键盘、鼠标等等。
该输出装置604可以向外部输出各种信息,包括确定出的距离信息、方向信息等。该输出装置604可以包括例如显示器、扬声器、打印机、以及通信网络及其所连接的远程输出设备等等。
当然,为了简化,图11中仅示出了该电子设备600中与本公开有关的组件中的一些,省略了诸如总线、输入/输出接口等等的组件。除此之外,根据具体应用情况,电子设备600还可以包括任何其他适当的组件。
除了上述方法和设备以外,本公开的实施例还可以是计算机程序产品,其包括计算机程序指令,计算机程序指令在被计算设备执行时使得计算设备实现 本公开实施例所提供的任意直播交互方法。
计算机程序产品可以以一种或多种程序设计语言的任意组合来编写用于执行本公开实施例操作的程序代码,程序设计语言包括面向对象的程序设计语言,诸如Java、C++等,还包括常规的过程式程序设计语言,诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户电子设备上执行、部分地在用户电子设备上执行、作为一个独立的软件包执行、部分在用户电子设备上且部分在远程电子设备上执行、或者完全在远程电子设备上执行。
此外,本公开实施例还可以提供一种计算机可读存储介质,其上存储有计算机程序指令,计算机程序指令在被计算设备执行时,使得计算设备实现本公开实施例所提供的任意直播交互方法。
一方面,本公开实施例提供的直播交互方法,应用于进入虚拟对象直播间的终端的,该方法可以包括:在直播界面中,响应于针对互动区域中至少一个虚拟资源的触发操作,确定至少一个第一目标虚拟资源;其中,各所述虚拟资源配置有触发所需的资源数据;直播的预设时间内,在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果;其中,所述预设时间与所述第二目标虚拟资源相关,所述第二目标虚拟资源基于多个所述终端触发的所述第一目标虚拟资源确定。
另一方面,本公开实施例提供的应用于服务端的直播交互方法可以包括:根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源;根据所述至少一个第二目标虚拟资源,确定直播的预设时间信息;其中,所述预设时间信息用于确定在虚拟对象上展示所述至少一个第二目标虚拟资源对应的动态效果的时间;将所述预设时间信息、所述至少一个第二目标虚拟资源和直播视频流发送至所述多个终端。
应当理解,计算机程序指令在被处理器运行时,还可以使得处理器执行本公开方法实施例提供的其他可选实施方案。
计算机可读存储介质可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以包括但不限于 电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。
需要说明的是,在本文中,诸如“第一”和“第二”等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上仅是本公开的具体实施方式,使本领域技术人员能够理解或实现本公开。对这些实施例的多种修改对本领域的技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本公开的精神或范围的情况下,在其它实施例中实现。因此,本公开将不会被限制于本文的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (35)

  1. 一种直播交互方法,其特征在于,应用于进入虚拟对象的直播间的终端,包括:
    响应于针对直播界面中的互动区域中至少一个虚拟资源的触发操作,确定所述触发操作对应的至少一个第一目标虚拟资源;其中,各所述虚拟资源配置有触发所需的资源数据;
    在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果;其中,所述第二目标虚拟资源基于多个所述终端对应的所述第一目标虚拟资源确定。
  2. 根据权利要求1所述的方法,其特征在于,在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果,包括:
    在直播的预设时间内,在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果,其中,所述预设时间根据所述至少一个第二目标虚拟资源确定。
  3. 根据权利要求1所述的方法,其特征在于,所述在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果,包括:
    获取直播视频流和所述至少一个第二目标虚拟资源对应的多媒体数据;
    基于所述多媒体数据生成所述至少一个第二目标虚拟资源对应的动态效果,并基于所述直播视频流生成直播内容;
    在所述直播内容中的所述虚拟对象上展示所述动态效果。
  4. 根据权利要求1所述的方法,其特征在于,所述在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果,包括:
    获取叠加直播视频流;其中,所述叠加直播视频流由服务端按照所述至少一个第二目标虚拟资源对应的动态效果在所述虚拟对象上的展示位置,将直播视频流和所述至少一个第二目标虚拟资源对应的多媒体数据进行叠加处理后得到;
    基于所述叠加直播视频流生成直播内容,所述直播内容中所述虚拟对象上展示有所述动态效果。
  5. 根据权利要求1所述的方法,其特征在于,所述在所述虚拟对象上展 示至少一个第二目标虚拟资源对应的动态效果,包括:
    在所述虚拟对象上展示至少两个第二目标虚拟资源对应的第一叠加动态效果;或者,
    在所述虚拟对象上分别展示各第二目标虚拟资源对应的动态效果;或者,
    在所述虚拟对象上分别展示至少三个第二目标虚拟资源对应的第二叠加动态效果和非叠加动态效果;
    其中,所述第一叠加动态效果是指所述至少两个第二目标虚拟资源中各目标虚拟资源对应的动态效果的叠加,所述第二叠加动态效果是指所述至少三个第二目标虚拟资源中至少两个目标虚拟资源对应的动态效果的叠加,所述非叠加动态效果是指所述至少三个第二目标虚拟资源中除去所述至少两个目标虚拟资源外的剩余目标虚拟资源对应的动态效果。
  6. 根据权利要求5所述的方法,其特征在于,所述在所述虚拟对象上分别展示各第二目标虚拟资源对应的动态效果,包括:
    按照所述各第二目标虚拟资源对应的多媒体数据的获取顺序,在所述虚拟对象上依次展示所述各第二目标虚拟资源对应的动态效果。
  7. 根据权利要求5所述的方法,其特征在于,所述在所述虚拟对象上分别展示至少三个第二目标虚拟资源对应的第二叠加动态效果和非叠加动态效果,包括:
    按照所述至少三个第二目标虚拟资源对应的多媒体数据中参与所述第二叠加动态效果的多媒体数据的获取顺序、以及参与所述非叠加动态效果的多媒体数据的获取顺序,在所述虚拟对象上分别展示所述第二叠加动态效果和所述非叠加动态效果。
  8. 根据权利要求6或7所述的方法,其特征在于,所述各第二目标虚拟资源对应的多媒体数据的获取顺序,根据所述各第二目标虚拟资源对应的资源数据确定,所述第二目标虚拟资源对应的资源数据基于所述多个终端触发所述第一目标虚拟资源对应的资源数据确定。
  9. 根据权利要求2所述的方法,其特征在于,所述直播的预设时间基于 所述第二目标虚拟资源对应的属性特征确定。
  10. 根据权利要求9所述的方法,其特征在于,还包括:
    所述直播的预设时间内,将动态效果展示提示信息展示在所述直播界面,所述动态效果展示提示信息由所述服务端基于所述各第二目标虚拟资源对应的属性特征确定。
  11. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    接收所述服务端发送的所述动态效果展示提示信息。
  12. 根据权利要求10或11所述的方法,其特征在于,所述动态效果展示提示信息包括动态效果展示提示标识,所述动态效果展示提示标识用于表征所述各第二目标虚拟资源对应的图标和/或动态效果的剩余展示时长;
    所述直播的预设时间内,将动态效果展示提示信息展示在所述直播界面,包括:
    所述直播的预设时间内,响应于任一所述第二目标虚拟资源对应的动态效果的剩余展示时长为零,在所述直播界面隐藏该第二目标虚拟资源对应的动态效果展示提示标识。
  13. 根据权利要求1所述的方法,其特征在于,还包括:
    在所述互动区域显示各虚拟资源的排序结果,所述排序结果根据所述各虚拟资源对应的资源数据确定。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    接收所述服务端发送的所述各虚拟资源的排序结果。
  15. 根据权利要求13或14所述的方法,其特征在于,将所述排序结果展示在所述互动区域显示各虚拟资源的排序结果,包括:
    根据所述互动区域中各虚拟资源的展示位置,确定所述各虚拟资源的排序结果的展示位置,并展示所述排序结果。
  16. 根据权利要求1所述的方法,其特征在于,还包括:
    根据用户的直播交互行为,在所述直播界面的资源领取区域展示所述资源数据的领取提示信息;
    响应于用户的资源领取操作,生成资源同步请求;
    将所述资源同步请求发送至服务端,以使所述服务端将用户领取的资源数据同步至用户账户中。
  17. 根据权利要求1所述的方法,其特征在于,还包括:
    响应于针对第三目标虚拟资源的触发操作,展示所述虚拟对象响应于所述第三目标虚拟资源的互动效果,和/或,展示所述虚拟对象的虚拟宠物响应于所述第三目标虚拟资源的互动效果。
  18. 一种直播交互方法,其特征在于,应用于服务端,包括:
    根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源;
    将所述至少一个第二目标虚拟资源和直播视频流发送至所述多个终端。
  19. 根据权利要求18所述的方法,其特征在于,所述方法还包括:
    根据所述至少一个第二目标虚拟资源,确定直播的预设时间信息;其中,所述预设时间信息用于确定在虚拟对象上展示所述至少一个第二目标虚拟资源对应的动态效果的时间;
    将所述预设时间信息发送至所述多个终端。
  20. 根据权利要求19所述的方法,其特征在于,所述根据多个终端触发的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源,包括:
    根据所述多个终端触发的至少一个第一目标虚拟资源对应的资源数据,统计每个第一目标虚拟资源对应的资源数据;
    根据所述资源数据的统计结果,从所述至少一个第一目标虚拟资源中确定所述至少一个第二目标虚拟资源。
  21. 根据权利要求20所述的方法,其特征在于,所述根据所述资源数据的统计结果,从所述至少一个第一目标虚拟资源中确定所述至少一个第二目标虚拟资源,包括:
    根据所述资源数据的统计结果,对各第一目标虚拟资源进行排序,并将排名在前的预设数量的第一目标虚拟资源,确定为所述至少一个第二目标虚拟资 源;或者,
    将所述资源数据的统计结果超过数据阈值的第一目标虚拟资源,确定为所述至少一个第二目标虚拟资源。
  22. 根据权利要求19所述的方法,其特征在于,所述将所述至少一个第二目标虚拟资源和直播视频流发送至所述多个终端,包括:
    分别将所述至少一个第二目标虚拟资源对应的多媒体数据和所述直播视频流发送至所述多个终端;或者,
    按照所述至少一个第二目标虚拟资源对应的动态效果在所述虚拟对象上的展示位置,将所述至少一个第二目标虚拟资源对应的多媒体数据和所述直播视频流进行叠加处理,得到叠加直播视频流;
    将所述叠加直播视频流发送至所述多个终端。
  23. 根据权利要求22所述的方法,其特征在于,在所述根据多个终端对应的触发操作对应的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源之后,还包括:
    确定各第二目标虚拟资源对应的动态效果是否存在叠加展示冲突;
    根据确定结果确定所述各第二目标虚拟资源中动态效果可叠加展示的目标虚拟资源和不可叠加展示的目标虚拟资源,以使得所述多个终端展示所述动态效果可叠加展示的目标虚拟资源对应的叠加动态效果,以及分别展示所述动态效果不可叠加展示的目标虚拟资源各自对应的动态效果。
  24. 根据权利要求23所述的方法,其特征在于,还包括:
    如果所述至少一个第二目标虚拟资源中的任意两个第二目标虚拟资源均存在叠加展示冲突,则根据所述各第二目标虚拟资源对应的资源数据,确定所述各第二目标虚拟资源对应的多媒体数据的发送顺序;或者,
    如果存在动态效果可叠加展示的目标虚拟资源,则根据所述动态效果可叠加展示的目标虚拟资源对应的资源数据,以及所述动态效果不可叠加展示的目标虚拟资源对应的资源数据,确定所述各第二目标虚拟资源对应的多媒体数据的发送顺序;
    相应的,所述分别将所述至少一个第二目标虚拟资源对应的多媒体数据和所述直播视频流发送至所述多个终端,包括:
    按照所述发送顺序,依次将所述各第二目标虚拟资源对应的多媒体数据发送至所述多个终端,并将所述直播视频流发送至所述多个终端。
  25. 根据权利要求24所述的方法,其特征在于,所述按照所述至少一个第二目标虚拟资源对应的动态效果在所述虚拟对象上的展示位置,将所述至少一个第二目标虚拟资源对应的多媒体数据和所述直播视频流进行叠加处理,得到叠加直播视频流,包括:
    按照所述发送顺序,以及所述各第二目标虚拟资源对应的动态效果在所述虚拟对象上的展示位置,依次将所述各第二目标虚拟资源对应的多媒体数据和所述直播视频流进行叠加处理,得到所述叠加直播视频流。
  26. 根据权利要求23所述的方法,其特征在于,所述根据所述至少一个第二目标虚拟资源,确定直播的预设时间信息,包括:
    根据所述各第二目标虚拟资源中动态效果可叠加展示的目标虚拟资源对应的叠加展示时长,确定所述直播的第一预设时间信息;
    根据所述各第二目标虚拟资源中动态效果不可叠加展示的目标虚拟资源各自对应的展示时长,确定所述直播的第二预设时间信息。
  27. 根据权利要求18所述的方法,其特征在于,还包括:
    基于各第二目标虚拟资源对应的属性特征,生成所述各第二目标虚拟资源对应的动态效果展示提示信息;
    将所述动态效果展示提示信息发送至所述多个终端,以使所述多个终端在直播界面展示所述动态效果展示提示信息。
  28. 根据权利要求27所述的方法,其特征在于,所述动态效果展示提示信息包括动态效果展示提示标识,所述基于各第二目标虚拟资源对应的属性特征,生成所述各第二目标虚拟资源对应的动态效果展示提示信息,包括:
    基于所述各第二目标虚拟资源对应的图标和/或展示时长,生成所述各第二目标虚拟资源对应的动态效果展示提示标识。
  29. 根据权利要求18所述的方法,其特征在于,还包括:
    根据所述多个终端针对直播界面的互动区域中至少一个虚拟资源的资源数据,统计所述互动区域中每个虚拟资源对应的资源数据;
    根据所述资源数据的统计结果,确定所述互动区域中各虚拟资源的排序结果;
    将所述排序结果发送至所述多个终端,以使所述多个终端在所述互动区域展示所述排序结果。
  30. 根据权利要求18所述的方法,其特征在于,还包括:
    根据所述多个终端中用户的直播交互行为,确定用户待领取的资源数据,并向所述多个终端发送所述资源数据的领取提示信息,以使所述多个终端在直播界面的资源领取区域展示所述资源数据的领取提示信息;
    接收所述多个终端发送的资源同步请求;其中,所述资源同步请求根据用户的资源领取操作生成;
    根据所述资源同步请求,将所述多个终端中用户领取的资源数据同步至用户账户中。
  31. 根据权利要求18所述的方法,其特征在于,还包括:
    根据所述多个终端中针对第三目标虚拟资源的触发操作,确定所述虚拟对象响应于所述第三目标虚拟资源的互动数据,和/或,确定所述虚拟对象的虚拟宠物响应于所述第三目标虚拟资源的互动数据;
    分别将所述互动数据和所述直播视频流发送至所述多个终端;或者,根据所述互动数据对应的互动效果相对所述虚拟对象的展示位置,或者根据所述互动数据对应的互动效果相对所述虚拟宠物的展示位置,将所述互动数据和所述直播视频流叠加处理后发送至所述多个终端。
  32. 一种直播交互装置,其特征在于,配置于进入虚拟对象的直播间的终端,包括:
    第一目标虚拟资源确定模块,用于响应于针对直播界面中的互动区域中至少一个虚拟资源的触发操作,确定所述触发操作对应的至少一个第一目标虚拟 资源;
    动态效果展示模块,用于在所述虚拟对象上展示至少一个第二目标虚拟资源对应的动态效果;其中,所述第二目标虚拟资源基于多个所述终端对应的所述第一目标虚拟资源确定。
  33. 一种直播交互装置,其特征在于,配置于服务端,包括:
    第二目标虚拟资源确定模块,用于根据多个终端对应的触发操作对应的至少一个第一目标虚拟资源,确定至少一个第二目标虚拟资源;
    数据发送模块,用于将所述至少一个第二目标虚拟资源和直播视频流发送至所述多个终端。
  34. 一种电子设备,其特征在于,包括存储器和处理器,其中,所述存储器中存储有计算机程序,当所述计算机程序被所述处理器执行时,使得所述电子设备实现权利要求1-17中任一项所述的直播交互方法,或者实现权利要求18-31中任一项所述的直播交互方法。
  35. 一种计算机可读存储介质,其特征在于,所述存储介质中存储有计算机程序,当所述计算机程序被计算设备执行时,使得所述计算设备实现权利要求1-17中任一项所述的直播交互方法,或者实现权利要求18-31中任一项所述的直播交互方法。
PCT/CN2021/129238 2020-12-11 2021-11-08 直播交互方法、装置、电子设备和存储介质 WO2022121593A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011463611.1A CN112672175A (zh) 2020-12-11 2020-12-11 直播交互方法、装置、电子设备和存储介质
CN202011463611.1 2020-12-11

Publications (1)

Publication Number Publication Date
WO2022121593A1 true WO2022121593A1 (zh) 2022-06-16

Family

ID=75405471

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129238 WO2022121593A1 (zh) 2020-12-11 2021-11-08 直播交互方法、装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN112672175A (zh)
WO (1) WO2022121593A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115097967A (zh) * 2022-06-24 2022-09-23 北京达佳互联信息技术有限公司 互动信息设置方法、装置、电子设备及存储介质
CN115484489A (zh) * 2022-09-14 2022-12-16 抖音视界有限公司 资源处理方法、装置、电子设备、存储介质及程序产品
CN116668796A (zh) * 2023-07-03 2023-08-29 佛山市炫新智能科技有限公司 一种互动式仿真人直播信息管理系统

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672175A (zh) * 2020-12-11 2021-04-16 北京字跳网络技术有限公司 直播交互方法、装置、电子设备和存储介质
CN113034208A (zh) * 2021-04-21 2021-06-25 腾讯科技(深圳)有限公司 虚拟资源的处理方法、装置、电子设备及存储介质
CN113518240B (zh) * 2021-07-20 2023-08-08 北京达佳互联信息技术有限公司 直播互动、虚拟资源配置、虚拟资源处理方法及装置
CN113515212A (zh) * 2021-07-28 2021-10-19 北京字节跳动网络技术有限公司 交互方法、装置、计算机设备以及计算机可读存储介质
CN113873270A (zh) * 2021-08-30 2021-12-31 北京达佳互联信息技术有限公司 一种游戏直播方法、装置、系统、电子设备及存储介质
CN113852839B (zh) * 2021-09-26 2024-01-26 游艺星际(北京)科技有限公司 虚拟资源分配方法、装置及电子设备
CN113873314A (zh) * 2021-09-30 2021-12-31 北京有竹居网络技术有限公司 直播互动方法、装置、可读介质及电子设备
CN113965770B (zh) * 2021-10-22 2023-08-22 北京达佳互联信息技术有限公司 虚拟资源的处理方法、装置、服务器和存储介质
CN114237792A (zh) * 2021-12-13 2022-03-25 广州繁星互娱信息科技有限公司 虚拟对象展示方法和装置、存储介质及电子设备
CN114237450B (zh) * 2021-12-17 2022-12-13 北京字跳网络技术有限公司 虚拟资源转移方法、装置、设备、可读存储介质及产品
CN116521038A (zh) * 2022-01-24 2023-08-01 腾讯科技(深圳)有限公司 一种数据处理方法、装置、计算机设备以及可读存储介质
CN114860148B (zh) * 2022-04-19 2024-01-16 北京字跳网络技术有限公司 一种交互方法、装置、计算机设备及存储介质
CN115086693A (zh) * 2022-05-07 2022-09-20 北京达佳互联信息技术有限公司 虚拟对象交互方法、装置、电子设备和存储介质
CN115243096A (zh) * 2022-07-27 2022-10-25 北京字跳网络技术有限公司 直播间展示方法、装置、电子设备及存储介质
CN115314728A (zh) * 2022-07-29 2022-11-08 北京达佳互联信息技术有限公司 信息展示方法、系统、装置、电子设备及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599396B (zh) * 2019-09-19 2024-02-02 网易(杭州)网络有限公司 信息处理方法及装置
CN111541928B (zh) * 2020-04-20 2022-11-18 广州酷狗计算机科技有限公司 直播显示方法、装置、设备及存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200143447A1 (en) * 2016-12-26 2020-05-07 Hong Kong Liveme Corporation Limited Method and device for recommending gift and mobile terminal
EP3343447A1 (en) * 2016-12-30 2018-07-04 Facebook, Inc. Systems and methods to transition between media content items
CN109874021A (zh) * 2017-12-04 2019-06-11 腾讯科技(深圳)有限公司 直播互动方法、装置及系统
CN110519611A (zh) * 2019-08-23 2019-11-29 腾讯科技(深圳)有限公司 直播互动方法、装置、电子设备及存储介质
CN110850983A (zh) * 2019-11-13 2020-02-28 腾讯科技(深圳)有限公司 视频直播中的虚拟对象控制方法、装置和存储介质
CN110856032A (zh) * 2019-11-27 2020-02-28 广州虎牙科技有限公司 一种直播方法、装置、设备及存储介质
CN111083509A (zh) * 2019-12-16 2020-04-28 腾讯科技(深圳)有限公司 交互任务执行方法、装置、存储介质和计算机设备
CN112672175A (zh) * 2020-12-11 2021-04-16 北京字跳网络技术有限公司 直播交互方法、装置、电子设备和存储介质

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115097967A (zh) * 2022-06-24 2022-09-23 北京达佳互联信息技术有限公司 互动信息设置方法、装置、电子设备及存储介质
CN115097967B (zh) * 2022-06-24 2024-03-01 北京达佳互联信息技术有限公司 互动信息设置方法、装置、电子设备及存储介质
CN115484489A (zh) * 2022-09-14 2022-12-16 抖音视界有限公司 资源处理方法、装置、电子设备、存储介质及程序产品
CN116668796A (zh) * 2023-07-03 2023-08-29 佛山市炫新智能科技有限公司 一种互动式仿真人直播信息管理系统
CN116668796B (zh) * 2023-07-03 2024-01-23 佛山市炫新智能科技有限公司 一种互动式仿真人直播信息管理系统

Also Published As

Publication number Publication date
CN112672175A (zh) 2021-04-16

Similar Documents

Publication Publication Date Title
WO2022121593A1 (zh) 直播交互方法、装置、电子设备和存储介质
CN108924661B (zh) 基于直播间的数据交互方法、装置、终端和存储介质
KR101994565B1 (ko) 정보 처리 방법 및 장치, 단말기 및 기억 매체
US20090064017A1 (en) Tuning/customization
US8887185B2 (en) Method and system for providing virtual co-presence to broadcast audiences in an online broadcasting system
TW201141213A (en) System and method for identifying, providing, and presenting supplemental content on a mobile device
KR20160003837A (ko) 통합형 상호 작용 텔레비전 엔터테인먼트 시스템
WO2022156638A1 (zh) 互动方法、装置、电子设备和存储介质
US9665965B2 (en) Video-associated objects
WO2023186107A1 (zh) 信息处理方法、装置、设备及存储介质
CN103151057A (zh) 音乐播放方法及第三方应用
CN111277852A (zh) 动态提醒方法、装置、设备及存储介质
WO2022032916A1 (zh) 一种显示系统
CN115190366B (zh) 一种信息展示方法、装置、电子设备、计算机可读介质
WO2022252920A1 (zh) 应用程序的页面显示方法、装置和电子设备
WO2024012392A1 (zh) 交互方法、装置、电子设备和存储介质
CN108174227B (zh) 虚拟物品的显示方法、装置及存储介质
CN105721904B (zh) 显示装置和控制显示装置的内容输出的方法
CN114302160B (zh) 信息显示方法、装置、计算机设备及介质
US10594762B2 (en) Associating users based on user-access positions in sequential content
WO2018157554A1 (zh) 同步播放方法、系统及设备
CN113242453B (zh) 弹幕播放方法、服务器以及计算机可读存储介质
CN115291829A (zh) 显示设备及订阅消息提醒方法
CN114125476A (zh) 显示界面的显示处理方法、电子设备及存储介质
CN107015980B (zh) 一种进行信息展示的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21902300

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21902300

Country of ref document: EP

Kind code of ref document: A1