CN115604499A - Live broadcast room gift control method and device, electronic equipment and storage medium - Google Patents

Live broadcast room gift control method and device, electronic equipment and storage medium

Info

Publication number
CN115604499A
CN115604499A
Authority
CN
China
Prior art keywords
avatar
gift
display effect
control parameter
avatars
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211282707.7A
Other languages
Chinese (zh)
Inventor
刘家诚
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202211282707.7A
Publication of CN115604499A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a live broadcast room gift control method and device, electronic equipment and a storage medium. The live broadcast room gift control method is applied to a server and includes: receiving one or more gift-giving instructions from a viewer client; generating one or more first avatars corresponding to the one or more gift-giving instructions; for each of the one or more first avatars, determining whether the position of the first avatar overlaps with the position of a second avatar of the anchor user; in response to determining that the position of the first avatar overlaps with the position of the second avatar of the anchor user, configuring a first display effect for the second avatar; and providing a live stream to the anchor client and the viewer client such that the live room is displayed at the anchor client and the viewer client as a live room scene containing the one or more first avatars and the second avatar having the first display effect.

Description

Live broadcast room gift control method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method and an apparatus for controlling a gift in a live broadcast room, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the continuous development of internet technology and the progress of streaming media technology, webcast live streaming has attracted more and more attention from users. Currently, a common live broadcast mode is real-person live broadcast, in which a real-person anchor interacts with viewers in the live room through captured camera footage of the anchor. In addition, with the diversification of live broadcast modes, virtual live broadcast based on avatars has been widely applied. Compared with real-person live broadcast, virtual live broadcast can virtualize both the anchor and the live broadcast scene.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides a live room gift control method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a live room gift control method applied to a server, including: receiving one or more gift-giving instructions from a viewer client; generating one or more first avatars corresponding to the one or more gift-giving instructions; for each of the one or more first avatars, determining whether the position of the first avatar overlaps with the position of a second avatar of the anchor user; in response to determining that the position of the first avatar overlaps with the position of a second avatar of the anchor user, configuring a first display effect for the second avatar; and providing the live stream to the anchor client and the viewer client such that the live room is displayed at the anchor client and the viewer client as a live room scene containing the one or more first avatars and a second avatar having a first display effect.
According to another aspect of the present disclosure, there is also provided a live room gift control method applied to a server, including: receiving one or more gift-giving instructions from a viewer client; generating one or more first avatars corresponding to the one or more gift-giving instructions; for each of the one or more first avatars, determining whether the position of the first avatar overlaps with the position of a second avatar of the anchor user; and in response to determining that the position of the first avatar overlaps with the position of the anchor user's second avatar, providing configuration information including a vibration effect to the anchor client and/or the viewer client.
According to another aspect of the present disclosure, there is also provided a live room gift control method applied to a viewer client, including: sending one or more gift-giving instructions; receiving a live stream of a live broadcast room from a server, wherein the live stream comprises one or more first avatars corresponding to the one or more gift-giving instructions, a second avatar of an anchor user, and a display effect of the second avatar; and displaying the one or more first avatars and the second avatar having the display effect based on the live stream, wherein the one or more first avatars are generated according to the live room gift control method applied to the server, and wherein the display effect is determined according to the live room gift control method applied to the server.
According to another aspect of the present disclosure, there is also provided a live room gift control method applied to a viewer client, including: sending one or more gift-giving instructions; receiving a live stream of a live broadcast room and configuration information associated with a vibration effect from a server, the live stream including one or more first avatars corresponding to the one or more gift-giving instructions and a second avatar of an anchor user; and configuring the associated vibration effect for the viewer client based on the configuration information, wherein the one or more first avatars are generated according to the live room gift control method applied to the server, and wherein the configuration information is provided according to the live room gift control method applied to the server.
According to another aspect of the present disclosure, there is also provided a live room gift control apparatus applied to a server, including: a receiving unit configured to receive one or more gift-giving instructions from a viewer client; a generating unit configured to generate one or more first avatars corresponding to the one or more gift giving instructions; a determination unit configured to determine, for each of the one or more first avatars, whether a position of the first avatar overlaps with a position of a second avatar of the anchor user; a configuration unit configured to configure a display effect for a second avatar of the anchor user in response to determining that the position of the first avatar overlaps with the position of the second avatar; and a providing unit configured to provide the live stream to the anchor client and the viewer client such that the live room is displayed at the anchor client and the viewer client as a live room scene including one or more first avatars and a second avatar having the display effect.
According to another aspect of the present disclosure, there is also provided a live room gift control apparatus applied to a server, including: a receiving unit configured to receive one or more gift-giving instructions from a viewer client; a generating unit configured to generate one or more first avatars corresponding to the one or more gift-giving instructions; a determination unit configured to determine, for each of the one or more first avatars, whether a position of the first avatar overlaps with a position of a second avatar of the anchor user; and a providing unit configured to provide configuration information including a vibration effect to the anchor client and/or the viewer client in response to determining that the position of the first avatar overlaps the position of the second avatar of the anchor user.
According to another aspect of the present disclosure, there is also provided a live room gift control apparatus applied to a viewer client, including: a transmission unit configured to transmit one or more gift-giving instructions; a receiving unit configured to receive a live stream of a live room from a server, the live stream including one or more first avatars corresponding to the one or more gift-giving instructions, a second avatar of an anchor user, and a display effect thereof; and a display unit configured to display the one or more first avatars and the second avatar having the display effect based on the live stream, wherein the one or more first avatars are generated according to the live room gift control method applied to the server, and wherein the display effect is determined according to the live room gift control method applied to the server.
According to another aspect of the present disclosure, there is also provided a live room gift control apparatus applied to a viewer client, including: a transmitting unit configured to transmit one or more gift-giving instructions; a receiving unit configured to receive a live stream of a live room and configuration information associated with a vibration effect from a server, the live stream including one or more first avatars corresponding to the one or more gift-giving instructions and a second avatar of an anchor user; and a configuration unit configured to configure the associated vibration effect for the viewer client based on the configuration information, wherein the one or more first avatars are generated according to the live room gift control method applied to the server, and wherein the configuration information is provided according to the live room gift control method applied to the server.
According to another aspect of the present disclosure, there is also provided an electronic device including: at least one processor; and at least one memory communicatively coupled to the at least one processor, wherein the at least one memory stores a computer program that, when executed by the at least one processor, implements the live gift control method applied to the server and the live gift control method applied to the viewer client as described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the live room gift control method applied to a server and the live room gift control method applied to a viewer client described above.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program when executed by a processor implements the above-described live room gift control method applied to a server and the live room gift control method applied to a viewer client.
According to one or more embodiments of the present disclosure, by configuring a display effect for the anchor user's avatar when a virtual gift avatar in the live room overlaps with it, interaction between the virtual gift and the anchor user's avatar can be simulated and the operability of the virtual gift is extended, so that live content is more diversified. Direct interaction between the anchor user and audience users is thereby achieved, the users' immersion during the live broadcast is improved, and the user experience of the live broadcast platform is improved. Alternatively or additionally, according to one or more embodiments of the present disclosure, by providing a vibration effect when a virtual gift in the live room overlaps with the anchor user's avatar, live content can be enriched along multiple dimensions, further expanding the interactive diversity between audience users and the anchor user and creating a more realistic virtual live scene for audience users.
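As a rough illustration of the vibration-effect variant, the configuration information pushed to the clients could be serialized as a small message. The disclosure does not specify any message format, so every field name and value below is an assumption made only for the sketch:

```python
import json

def make_vibration_config(duration_ms: int = 200, amplitude: float = 0.5) -> str:
    """Build a hypothetical configuration message a server could push to the
    anchor and/or viewer clients when a gift avatar overlaps the anchor's
    avatar. The schema (field names, units, targets) is illustrative only."""
    return json.dumps({
        "type": "vibration_effect",
        "duration_ms": duration_ms,   # how long the client device vibrates
        "amplitude": amplitude,       # normalized vibration strength 0.0-1.0
        "targets": ["anchor_client", "viewer_client"],
    })
```

A client receiving such a message would parse it and trigger the platform's haptic API accordingly; the actual transport (e.g. alongside the live stream or over a separate signaling channel) is likewise left open by the disclosure.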
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and, together with the description, serve to explain exemplary implementations of those embodiments. The illustrated embodiments are for purposes of example only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
Fig. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with embodiments of the present disclosure;
fig. 2 illustrates a flow chart of a live room gift control method applied to a server according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart for determining whether the position of a gift avatar overlaps with the position of an anchor avatar according to an embodiment of the present disclosure;
FIG. 4 illustrates a flow chart for determining whether the motion trajectory of a gift avatar overlaps with the position of an avatar of an anchor user in accordance with an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of generating collision volumes for a gift avatar and an avatar of an anchor user, respectively, in accordance with an embodiment of the present disclosure;
FIG. 6 illustrates a flow chart for configuring display effects for an avatar of an anchor user in accordance with an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of configuring display effects for a gift avatar and an avatar of an anchor user in response to determining that the position of the gift avatar overlaps the position of the anchor user's avatar, in accordance with an embodiment of the present disclosure;
FIG. 8 illustrates a schematic diagram of configuring another different display effect for an avatar of an anchor user after stopping displaying a gift avatar according to an embodiment of the present disclosure;
fig. 9 shows a flow chart of another live room gift control method applied to a server according to an embodiment of the present disclosure;
fig. 10 illustrates a flow chart of a live room gift control method applied to a viewer client in accordance with an embodiment of the present disclosure;
FIG. 11 illustrates a flow chart of another live room gift control method applied to a viewer client in accordance with an embodiment of the present disclosure;
fig. 12 shows a block diagram of a live room gift control apparatus applied to a server according to an embodiment of the present disclosure;
fig. 13 is a block diagram illustrating another live room gift control apparatus applied to a server according to an embodiment of the present disclosure;
FIG. 14 shows a block diagram of a live room gift control device applied to a viewer client, in accordance with an embodiment of the present disclosure;
fig. 15 shows a block diagram of another live room gift control device applied to a viewer client in accordance with an embodiment of the present disclosure;
FIG. 16 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing the particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
With the continuous enrichment of live broadcast categories, live broadcast is no longer limited to real-person live broadcast: a virtual anchor can be generated based on various virtualization technologies, and virtual live broadcast can be performed based on the virtual anchor. The inventor has noticed that a virtual live room created based on virtualization technology only virtualizes the anchor and the live scene; the gifts given by audience users during the live broadcast are merely displayed in a specific gift display area and do not affect the live content itself. This results in less interaction between audience users and the anchor user, thereby reducing the users' immersion during the live broadcast.
In view of this, embodiments of the present disclosure provide a live broadcast room gift control method that can simulate interaction between a virtual gift and the avatar of the anchor user and expand the operability of the virtual gift, so that live content is more diversified. Direct interaction between the anchor user and audience users is thereby achieved, the users' immersion during the live broadcast is improved, and the user experience of the live broadcast platform is improved. In addition, by providing a vibration effect when simulating the interaction between the virtual gift and the anchor user's avatar, live content can be enriched along multiple dimensions, further expanding the interactive diversity between audience users and anchor users.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications. It should be understood that although fig. 1 depicts only six client devices, the present disclosure may support any number of client devices.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the live room gift control methods described in the present disclosure to be performed.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein, and is not intended to be limiting.
A user may initiate communication with server 120 using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head-mounted displays (such as smart glasses) and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or a smart cloud computing server or smart cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the shortcomings of traditional physical hosts and Virtual Private Server (VPS) services, namely high management difficulty and weak service scalability.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as gift-giving instructions and preset configurations. The database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The database 130 may be of different types. In certain embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or conventional stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with this disclosure.
Fig. 2 shows a flow diagram of a live room gift control method 200 applied to a server according to an embodiment of the present disclosure. As shown in fig. 2, method 200 may include: step S210, receiving one or more gift-giving instructions from the viewer client; step S220, generating one or more first avatars corresponding to the one or more gift-giving instructions; step S230, determining, for each of the one or more first avatars, whether the position of the first avatar overlaps with the position of a second avatar of the anchor user; step S240, in response to determining that the position of the first avatar overlaps with the position of the second avatar of the anchor user, configuring a first display effect for the second avatar; and step S250, providing a live stream to the anchor client and the viewer client such that the live room is displayed at the anchor client and the viewer client as a live room scene containing the one or more first avatars and the second avatar having the first display effect.
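The server-side flow of steps S220 to S240 can be sketched as follows. This is a minimal illustration assuming axis-aligned bounding boxes for the overlap test of step S230; the `Avatar` fields, the `spawn_gift_avatar` placement rule, and the effect name are all hypothetical, not details taken from the disclosure:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Avatar:
    """Axis-aligned bounding box for a rendered avatar; a simplified
    stand-in for the collision volumes described in the disclosure."""
    x: float
    y: float
    width: float
    height: float
    display_effect: Optional[str] = None

def overlaps(a: Avatar, b: Avatar) -> bool:
    """True if the two avatars' bounding boxes intersect."""
    return (a.x < b.x + b.width and b.x < a.x + a.width and
            a.y < b.y + b.height and b.y < a.y + a.height)

def spawn_gift_avatar(instruction: dict) -> Avatar:
    # Hypothetical: place the gift avatar at a position carried in the
    # instruction; the real placement logic is not specified here.
    x, y = instruction["position"]
    return Avatar(x=x, y=y, width=1.0, height=1.0)

def handle_gift_instructions(instructions: List[dict], anchor: Avatar) -> List[Avatar]:
    """Steps S220-S240: generate one gift avatar per instruction and, if
    any gift avatar overlaps the anchor's avatar, configure a display
    effect on the anchor's avatar."""
    gift_avatars = [spawn_gift_avatar(i) for i in instructions]   # S220
    for gift in gift_avatars:                                     # S230
        if overlaps(gift, anchor):                                # S240
            anchor.display_effect = "first_display_effect"
            break
    return gift_avatars
```

Step S250 (compositing the avatars into the live stream) is omitted, since it depends on the rendering pipeline rather than the control logic sketched here.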
By generating an avatar corresponding to a gift-giving instruction and configuring a display effect for the anchor user's avatar when the gift avatar overlaps with it, the interaction between the virtual gift and the anchor user can be simulated, the operability of the virtual gift is expanded, and the live content is diversified. Furthermore, direct interaction between the virtual anchor and audience users can be realized, and the audience users' demand for participating in the creation of live content is met, thereby effectively shortening the distance between audience users and the anchor user and increasing the interaction rate.
It should be understood that the method according to the present disclosure may also be applied to a real person displayed in a live broadcast page by a live anchor; that is, when it is determined that the position of the virtual gift avatar overlaps with the position of the real person displayed in the live broadcast page, a corresponding display effect is configured for the real person, such as maps or animation effects corresponding to tearing, blushing, a surprised expression, and the like.
According to some embodiments of the present disclosure, the method 200 may further comprise: determining, for each of the one or more gift giving instructions, a gift category corresponding to the gift giving instruction; and determining the number of gift giving instructions under each gift category, and wherein the generating of the one or more first avatars corresponding to the one or more gift giving instructions at step S220 may include: for each gift category, in response to determining that the number of gift giving instructions under the gift category exceeds a predetermined threshold, generating a first avatar corresponding to the gift category.
According to some embodiments, the gift categories may be specific to each live platform, such as rocket, love heart, yacht, and the like. Each time a gift-giving instruction is received, the count for the corresponding gift category may be incremented by 1. When the number of gift-giving instructions under a gift category exceeds a predetermined threshold (e.g., 20, 50, 100, etc.), a corresponding gift avatar is generated. In some examples, the gift avatar may be an avatar with a high correspondence to the gift category, such as a rocket avatar generated for the rocket category, a love-heart avatar for the love-heart category, a yacht avatar for the yacht category, and the like. In other examples, the gift avatar may instead have a lower correspondence to the gift category, such as a spark avatar generated for the rocket category, a Cupid avatar for the love-heart category, a wave avatar for the yacht category, and so forth.
According to other embodiments, the number of gift-giving instructions may also be counted without gift classification, and when the number of gift-giving instructions exceeds a predetermined threshold, a corresponding gift avatar is generated.
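As a rough illustration, the per-category counting and threshold check described above might be sketched as follows. The class name, category strings, and threshold value are illustrative assumptions, not part of the disclosure, and a real server would persist counts rather than hold them in memory.

```python
from collections import Counter

class GiftCounter:
    """Hypothetical sketch: count gift-giving instructions per category and
    signal when a gift avatar should be generated for a category."""

    def __init__(self, threshold=20):  # threshold could be 20, 50, 100, etc.
        self.threshold = threshold
        self.counts = Counter()

    def on_gift_instruction(self, category):
        """Increment the category count; return the category exactly once,
        when the count reaches the threshold, else return None."""
        self.counts[category] += 1
        if self.counts[category] == self.threshold:
            return category  # e.g., spawn a rocket avatar for "rocket"
        return None
```

The variant without gift classification would simply use a single counter instead of one per category.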
By setting the gift giving instruction quantity threshold value aiming at each gift category, the display of too many gift avatars on the live broadcast interface can be prevented, and the influence on the live broadcast content due to disorder of the live broadcast interface is avoided.
Fig. 3 shows a flow chart for determining whether a position of a gift avatar overlaps with a position of an anchor avatar according to an embodiment of the present disclosure. As shown in fig. 3, the step S230 of determining, for each of the one or more first avatars, whether the position of the first avatar overlaps with the position of the second avatar of the anchor user may include: step S310, determining the motion trajectory of the first avatar according to a first preset configuration; and step S320, determining whether the motion trajectory of the first avatar overlaps with the position of the second avatar of the anchor user. In this way, any motion trajectory matching the preferences of the anchor user or audience users can be configured for the virtual gift, enhancing the interest of the live content while improving user immersion during the live broadcast.
According to some embodiments of the present disclosure, the first preset configuration may include a movement speed of the one or more first avatars. For example, the moving speed of each gift avatar may be set to be constant to display the gift avatars moving at a uniform speed with the moving trajectory being a straight line.
According to some embodiments of the present disclosure, the first preset configuration may include a motion acceleration of the one or more first avatars. For example, the motion acceleration of each gift avatar may be set to the gravity acceleration to generate a motion trajectory having a free-fall characteristic.
According to some embodiments of the present disclosure, the first preset configuration may include a movement direction of the one or more first avatars. For example, the direction of movement of each gift avatar may be set to be toward the anchor user's avatar to simulate the animation effect of the gift avatar pounding against the anchor user's avatar.
According to some embodiments of the present disclosure, the first preset configuration may include a starting position of the one or more first avatars. For example, the starting position of the gift avatar may be any position such as the top, bottom, left side, or right side of the live interface. After the starting position of the gift avatar is obtained, its motion trajectory may further be controlled by configuring a motion acceleration for it. For example, the acceleration may be the gravitational acceleration of the earth, so that the gift avatar falls as it would on earth and the motion trajectory is a falling trajectory. The acceleration may also be the gravitational acceleration of the moon, so that the gift avatar moves as it would on the moon, again along a falling trajectory. Alternatively, the acceleration may be zero, so that the gift avatar appears weightless and the motion trajectory is a floating trajectory.
According to some embodiments of the present disclosure, the first preset configuration may include a target position of one or more first avatars. For example, the target position of the gift avatar may be the avatar of the anchor user to simulate the animation effect of the gift avatar pounding onto the avatar of the anchor user.
Through the first preset configuration, customized movement tracks can be configured for each gift virtual image, so that the diversity of display effects of the live broadcast room is enriched. For example, the gift avatar can be moved in the virtual scene of the live broadcast room according to the actual requirement, such as horizontal movement, diagonal movement and the like, so as to further expand the operability of the virtual gift. It should be understood that although the above examples are shown separately for illustrative and non-limiting purposes, they may be combined in any manner, for example, the first preset configuration may include one or more of a moving speed, a moving acceleration, a moving direction, a starting position, and a target position of the one or more first avatars.
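A minimal sketch of how the first preset configuration might drive a motion trajectory, combining starting position, movement speed/direction, and acceleration under the standard kinematics equation. The field names and values are illustrative assumptions, not the disclosure's data model.

```python
from dataclasses import dataclass

@dataclass
class TrajectoryConfig:
    """Hypothetical first preset configuration for a gift avatar."""
    start: tuple         # (x, y) starting position on the live interface
    velocity: tuple      # (vx, vy) encodes movement speed and direction
    acceleration: tuple  # e.g., (0, -9.8) earth, (0, -1.62) moon, (0, 0) floating

def position_at(cfg, t):
    """Position after t seconds: p = p0 + v*t + 0.5*a*t^2 per component."""
    return tuple(p + v * t + 0.5 * a * t * t
                 for p, v, a in zip(cfg.start, cfg.velocity, cfg.acceleration))
```

Sampling `position_at` each frame yields the falling or floating trajectories described above; a target position (e.g., the anchor avatar) would instead be reached by aiming the velocity vector at it.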
According to further embodiments of the present disclosure, the configuration of the target position, direction of movement of the gift avatar by the viewer user may also be received from the viewer client.
Fig. 4 shows a flow chart for determining whether a motion trajectory of a gift avatar overlaps with a position of an avatar of an anchor user according to an embodiment of the present disclosure. As shown in fig. 4, the step S320 of determining whether the motion trajectory of the first avatar overlaps with the position of the second avatar of the anchor user may include: step S410, generating corresponding collision volumes for the first avatar and the second avatar respectively; step S420, determining one or more first coordinates of the contour of the collision volume corresponding to the first avatar; step S430, determining one or more second coordinates of the contour of the collision volume corresponding to the second avatar; and step S440, determining that the motion trajectory of the first avatar overlaps the position of the second avatar in response to determining that at least one of the one or more first coordinates is the same as at least one of the one or more second coordinates.
According to some embodiments of the present disclosure, a Unity engine may be utilized to generate a respective collision volume for each of the gift avatar and the anchor user's avatar. For example, a collision volume corresponding to the gift avatar may be set as a sphere with the Unity engine, in which case the coordinates of the outline of the spherical collision volume may be determined based on the center of the sphere and the radius of the sphere. For another example, a collision volume corresponding to the gift avatar may be set as a cylinder with the Unity engine, in which case the coordinates of the outline of the cylinder collision volume may be determined based on the radius and height. For another example, a collision volume corresponding to the avatar of the anchor user may be set to an ellipsoid with the Unity engine, in which case the coordinates of the outline of the ellipsoid collision volume may be determined based on the focal point, major axis and minor axis.
Fig. 5 shows collision volumes 530 and 540 generated for a gift avatar 510 and an avatar 520 of an anchor user, respectively, in a 2D scene. As shown in fig. 5, the collision volume 530 of the gift avatar 510 may have the outline of a cylinder and the collision volume 540 of the avatar 520 may have the outline of an ellipsoid. When the gift avatar 510 moves along its motion trajectory, if any point coordinate on the outline of the collision volume 530 is the same as any point coordinate on the outline of the collision volume 540, it may be determined that the motion trajectory of the gift avatar overlaps with the position of the avatar of the anchor user.
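In 2D, the contour-coordinate comparison of steps S410-S440 could be sketched as below: sample the outline of a circular collision volume (the gift avatar) and test whether any sampled point lies on the outline of an elliptical collision volume (the anchor avatar). The shapes, sample count, and the distance tolerance standing in for exact coordinate equality are illustrative assumptions; a production engine (e.g., Unity's built-in colliders) performs this test natively.

```python
import math

def circle_outline(cx, cy, r, n=360):
    """Sample n points on the outline of a circle centered at (cx, cy)."""
    for i in range(n):
        a = 2 * math.pi * i / n
        yield cx + r * math.cos(a), cy + r * math.sin(a)

def on_ellipse(x, y, ex, ey, a, b, tol=0.05):
    """True if (x, y) lies approximately on the ellipse centered at (ex, ey)
    with semi-axes a, b, i.e., ((x-ex)/a)^2 + ((y-ey)/b)^2 = 1."""
    return abs(((x - ex) / a) ** 2 + ((y - ey) / b) ** 2 - 1.0) <= tol

def outlines_overlap(circle, ellipse):
    """circle = (cx, cy, r); ellipse = (ex, ey, a, b)."""
    cx, cy, r = circle
    return any(on_ellipse(x, y, *ellipse) for x, y in circle_outline(cx, cy, r))
```

Running this check once per frame along the gift avatar's trajectory corresponds to triggering the collision effect at the moment the two contours first share a coordinate.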
By generating a corresponding collision volume for each gift avatar and for the avatar of the anchor user, a collision effect can be triggered when the motion trajectory of the gift avatar overlaps with the position of the avatar of the anchor user, so that direct interaction between the anchor user and the audience users can be realized according to the characteristics of the collision volumes. In addition, based on the characteristics of the collision volumes of the gift avatar and the anchor user's avatar, a variety of collision effects may be conveniently implemented for display.
It should be appreciated that the respective collision volumes may also be generated for the gift avatar and the anchor user's avatar by other suitable methods, and are not limited to the Unity engine in the above example. It should also be understood that the collision volumes of the gift avatar and the anchor user's avatar are shown in 2D mode in fig. 5 for illustrative purposes only; in actual operation, the collision volumes are implemented in 3D mode. The anchor user can adjust the display effect by switching the camera between the 2D mode (e.g., orthographic mode) and the 3D mode (e.g., perspective mode). Furthermore, the collision volumes generated for the gift avatar and the avatar of the anchor user are not displayed in the live interface. It should also be appreciated that the 2D or 3D gift avatar model and anchor user avatar model may also be generated by other suitable algorithms such that, when the contours of the two overlap, a corresponding collision effect is triggered.
Fig. 6 illustrates a flow chart for configuring display effects for an avatar of an anchor user when a gift avatar overlaps with the position of the anchor user's avatar according to an embodiment of the present disclosure. As shown in fig. 6, the step S240 of configuring, in response to determining that the position of the first avatar overlaps with the position of the second avatar, a first display effect for the second avatar of the anchor user may include: step S610, determining one or more first control parameters associated with the first display effect; step S620, determining one or more second control parameters associated with a second display effect, wherein the second display effect is associated with a currently captured action of the anchor user, and the action includes an expressive action and/or a physical action; step S630, determining, for each of the one or more first control parameters, whether the first control parameter corresponds to one or more second control parameters; and step S640, in response to determining that the first control parameter corresponds to any of the one or more second control parameters: assigning a first weight to the first control parameter and a second weight to the corresponding second control parameter; and configuring the first display effect and the second display effect according to the first control parameter and its first weight, and the corresponding second control parameter and its second weight.
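The correspondence check and weight assignment of steps S630-S640 could be sketched as below, keying each control parameter by the avatar feature it drives (a first, collision-driven parameter "corresponds" to a second, capture-driven parameter when both drive the same feature). The dictionary shapes and feature names are assumptions; the weights shown are for the non-superimposable case (1 and 0) discussed later in the disclosure.

```python
def resolve_parameters(first_params, second_params):
    """first_params / second_params map an avatar feature (e.g. "eyes",
    "body_tilt") to a parameter value. Returns, per feature, a list of
    (value, weight) contributions after the correspondence check."""
    resolved = {}
    for feature, v1 in first_params.items():
        if feature in second_params:
            # Conflict: the collision effect takes priority
            # (weights 1 and 0 when the effects cannot be superimposed).
            resolved[feature] = [(v1, 1.0), (second_params[feature], 0.0)]
        else:
            # No corresponding second parameter: apply the first as-is.
            resolved[feature] = [(v1, 1.0)]
    for feature, v2 in second_params.items():
        if feature not in first_params:
            # Capture-driven effects on untouched features stay active.
            resolved[feature] = [(v2, 1.0)]
    return resolved
```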
According to some embodiments of the present disclosure, the first display effect may include, but is not limited to, lacrimation, blush, surprised expression, and the like, and accordingly, the first control parameter may include, but is not limited to, a lacrimation parameter, a blush parameter, an eye size parameter, and the like.
According to some embodiments, face capture techniques may be utilized to capture any action of a anchor user, such as squinting, laughing, and the like, and based thereon control the avatar of the anchor user to achieve various display effects. The second display effect may be the same as the first display effect or may be different from the first display effect. For example, the second display effect may include tearing, flushing, surprise, forward leaning of the body, falling, and the like. Accordingly, the second control parameter may be the same as the first control parameter, or may be different from the first control parameter. For example, the second control parameter may include a lacrimation parameter, a blush parameter, an eye size parameter, a body tilt angle parameter, and the like.
It should be understood that the above examples are shown for illustrative purposes only and are not limiting, and that the first control parameter and the second control parameter may also be a combination of two or more thereof, and the scope of the present disclosure is not limited in this respect.
According to some embodiments, the first control parameter is determined to correspond to the second control parameter (i.e., the first display effect conflicts with the second display effect) if the feature of the anchor user's avatar (i.e., the body part) controlled by the first control parameter is the same as the feature controlled by the second control parameter. In some examples, the first display effect is the same as the second display effect (e.g., both are squinting); in that case both control the eyes of the anchor user's avatar, and the first control parameter may be determined to correspond to the second control parameter. In other examples, the first display effect differs from the second display effect (e.g., the first display effect is glaring, but the second display effect associated with the currently captured action of the anchor user is squinting); in that case both still control the eyes of the anchor user's avatar, and the first control parameter may be determined to correspond to the second control parameter even though the two display effects differ. How weights are assigned to the first control parameter and the second control parameter in this case, and how the first display effect and the second display effect are configured based thereon, is described in detail below.
When the gift avatar overlaps with the avatar of the anchor user and the first control parameter corresponds to any one of the second control parameters, different priority combinations for the display results can be obtained by setting weight values for the first control parameter and the second control parameter, so that different display results are obtained, and the interaction diversity between the anchor user and the audience user is expanded.
According to some embodiments of the present disclosure, with continued reference to fig. 6, the step S240, in response to determining that the position of the first avatar overlaps with the position of a second avatar of the anchor user, configuring a first display effect for the second avatar may further include: step S650, in response to determining that the first control parameter does not correspond to any of the one or more second control parameters: configuring a first display effect according to the first control parameter; and configuring a second display effect according to the one or more second control parameters.
According to some embodiments, if it is determined that the first display effect is a map effect, it may be determined that the first control parameter does not correspond to any of the second control parameters (i.e., the first display effect does not conflict with the second display effect). For example, the first display effect is tearing in a map effect, and since the map effect does not involve control over the body part characteristics of the avatar of the anchor user, it may be determined that the first control parameter controlling the first display effect does not correspond to any of the second control parameters. In this case, the tearing effect may be displayed in a map manner according to the corresponding first control parameter, and the corresponding second display effect may be configured for simultaneous display according to the second control parameter. For example, the avatar of the anchor user displays the lacrimation map effect and squinting expressions simultaneously.
According to some embodiments of the present disclosure, the method 200 may further comprise: in response to determining that the first control parameter corresponds to any of the one or more second control parameters, determining whether the first display effect associated with the first control parameter and the second display effect associated with the corresponding second control parameter are capable of being superimposed, and wherein assigning a first weight to the first control parameter and a second weight to the corresponding second control parameter in step S640 may include: in response to determining that the first display effect associated with the first control parameter and the second display effect associated with the corresponding second control parameter cannot be superimposed, setting the first weight to 1 and the second weight to 0.
In some embodiments, if the first control parameter corresponds to one or more second control parameters and the first display effect completely changes the expression and/or action of the anchor user's avatar, it may be determined that the first display effect and the second display effect cannot be superimposed. Accordingly, the first display effect may be configured and displayed according to the first control parameter while forgoing the second display effect associated with the currently captured expression and/or action of the anchor user. In other embodiments, if the first control parameter corresponds to one or more second control parameters and the first display effect differs from the second display effect, it may be determined that the two cannot be superimposed. For example, if the first display effect is a surprised glaring effect in the "collision effect", and the second display effect is determined to be a squinting effect because the currently captured action of the anchor user is squinting, it may be determined that the first display effect and the second display effect cannot be superimposed.
In some embodiments, if the first control parameter corresponds to one or more second control parameters, and the expression and/or action corresponding to the first display effect is determined to be the same as the expression and/or action corresponding to the second display effect, it may be determined that the two display effects can be superimposed. For example, if the expression and/or action corresponding to the first display effect belongs to a first category and the expression and/or action corresponding to the second display effect also belongs to that first category, it may be determined that the first display effect and the second display effect can be superimposed. As a concrete example, if the first display effect corresponds to lowering the head in the action category, and the second display effect is determined to also correspond to lowering the head because the currently captured action of the anchor user is lowering the head, it may be determined that the first display effect and the second display effect can be superimposed.
By setting a larger weight for the first display effect than for the second display effect when the two cannot be superimposed, a higher priority can be given to the "collision effect", so that it is displayed in preference to the display effect determined from the currently captured action of the anchor user. This simulates the interaction between the virtual gift and the anchor user's avatar more realistically and improves the audience users' immersion in the live content.
According to further embodiments of the present disclosure, assigning the first weight to the first control parameter and the second weight to the corresponding second control parameter in step S640 may further include: in response to determining that the first display effect associated with the first control parameter and the second display effect associated with the corresponding second control parameter are capable of being superimposed, setting the first weight and the second weight to arbitrary values, for example, both set to 50%, or the first weight set to 30% and the second weight set to 70%, and so on.
According to some embodiments of the present disclosure, the method 200 for controlling a gift of a live broadcast room may further include: acquiring the display duration of the first display effect; and wherein configuring the first display effect and the second display effect for display according to the first control parameter and the first weight thereof, and the corresponding second control parameter and the second weight thereof in step S640 may comprise: in response to determining that a first display effect associated with the first control parameter and a second display effect associated with a corresponding second control parameter cannot be superimposed, during a display duration: configuring a first display effect according to the first control parameter; and forgoing configuring the second display effect; and in response to determining that a first display effect associated with the first control parameter and a second display effect associated with a corresponding second control parameter are capable of being superimposed, during a display duration: and superposing the first display effect and the second display effect according to the first control parameter and the corresponding second control parameter.
In some examples, continuing the example above in which the first display effect is a glaring effect of the "collision effect" and the second display effect is determined to be a squinting effect because the currently captured action of the anchor user is squinting, since the weight assigned to the first control parameter is 1 and the weight assigned to the second control parameter is 0, the second display effect (i.e., the squinting effect) is forgone and only the first display effect (i.e., the glaring effect) is displayed.
In other examples, consider the example in which the first display effect is a head-lowering effect in the "collision effect" and the second display effect is also determined to be a head-lowering effect because the currently captured action of the anchor user is lowering the head. If the head-lowering control parameter associated with the first display effect is, for example, 50%, and the head-lowering control parameter associated with the second display effect is, for example, 25%, then the first display effect and the second display effect may be superimposed in combination with these head-lowering control parameters and their corresponding weights. For example, in a case where the weight of the head-lowering control parameter associated with the first display effect is 50% and the weight of the head-lowering control parameter associated with the second display effect is also 50%, a head-lowering effect of 75% relative to the normal head-lowering level may be obtained.
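The arithmetic of the head-lowering example above can be reproduced with a small blend function. The disclosure does not state the exact combination formula, so the normalization used here (dividing by the mean weight, so that equal weights make the parameter values simply add: 50% + 25% = 75%) is an assumption chosen purely to match the stated result.

```python
def superimpose(p1, w1, p2, w2):
    """Hedged sketch: combine two same-category control parameter values
    p1, p2 with weights w1, w2. Normalizing by the mean weight is an
    assumption made so that equal weights reproduce the 75% example
    (0.50 and 0.25 simply add)."""
    mean_w = (w1 + w2) / 2.0
    return (w1 * p1 + w2 * p2) / mean_w
```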
Therefore, the interaction between the gift virtual image and the virtual image of the anchor user and the current expression action or body action of the anchor user can be comprehensively considered, so that the animation effect is more real. Meanwhile, the interaction diversity between the anchor user and the audience user can be expanded, and the immersion of the audience user on the live content is further improved.
Fig. 7 schematically illustrates a display effect configured for the avatar 720 of the anchor user on the live interface 700 when the position of the gift avatar 710 overlaps with the position of the avatar 720 of the anchor user. As shown in fig. 7, the display effect may be a surprise expression 730. It should be understood that the display effects configured for the avatar of the anchor user may include eye masquerading, body tilt, squinting, anger, etc., in addition to the example shown in fig. 7. In addition, in order to further enhance the audience user's sense of immersion in live content, it is also possible to play an associated sound effect, such as a sound of "hello" or the like, while displaying an animation effect of the avatar of the anchor user.
According to some embodiments of the present disclosure, the method for controlling a gift of a live broadcast room may further include: determining whether a motion trajectory of another first avatar identical to the first avatar overlaps with the position of the second avatar during the display duration; and in response to determining that such a motion trajectory overlaps with the position of the second avatar, increasing the display duration.
The server may continue to receive gift-giving instructions transmitted by the viewer client during the display period. Accordingly, it is possible to receive the same gift-giving instruction transmitted by the viewer client as the gift-giving instruction corresponding to the currently displayed gift avatar, or the number of the same gift-giving instructions exceeds a predetermined threshold, and generate the same gift avatar for display. In this case, if the motion trajectory of the newly generated gift avatar overlaps with the position of the anchor user's avatar again, the display time period of the first display effect configured for the anchor user's avatar may be increased. For example, with continued reference to fig. 7, when the positions of the gift avatar 710 (pineapple) and the anchor avatar 720 overlap, the anchor avatar 720 is configured with a surprise expression as an animation effect and displayed during the display duration. When another gift avatar pineapple is newly generated and collides with the head of the anchor avatar 720, the duration of the surprised expression may be extended (e.g., extending the display duration from 5 seconds to 10 seconds) without having to wait for the facial action of the anchor avatar to resume before reconfiguring the surprised expression again.
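The duration-extension behavior above might be tracked with a simple timer. The base duration and the extension amount (5 seconds each, giving the 5-to-10-second example) are illustrative, and the class is an assumption rather than the disclosure's implementation.

```python
class EffectTimer:
    """Hypothetical timer for the first display effect's display duration."""

    def __init__(self, base_duration=5.0):
        self.base = base_duration
        self.remaining = 0.0

    def on_collision(self):
        """A collision with the same gift avatar extends a running effect
        (e.g., 5s -> 10s) instead of waiting for it to finish and restart."""
        if self.remaining > 0:
            self.remaining += self.base
        else:
            self.remaining = self.base

    def tick(self, dt):
        """Advance time by dt seconds; the effect ends at zero."""
        self.remaining = max(0.0, self.remaining - dt)
```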
Therefore, the diversity of the display effect of the live broadcast room is further enriched, the reaction of the anchor user to the gifts given by the audience users is more real, and the interestingness of the live broadcast content is increased.
According to some embodiments of the present disclosure, assigning a first weight to the first control parameter and assigning a second weight to the corresponding second control parameter in step S640 may further include: the first weight is gradually decreased and the second weight is gradually increased for a predetermined period of time after the display period. By gradually decreasing the first weight and gradually increasing the second weight during the transition, the priority of the "collision effect" may be adjusted, simulating a transition animation effect in which the avatar of the anchor user is restored from an animation effect due to the "collision" to an animation effect based on the currently captured action of the anchor user.
According to some embodiments, the predetermined period of time may be any period of time, such as 3 seconds, 5 seconds, 10 seconds, etc., and the scope of the claimed disclosure is not limited in this respect.
According to some embodiments, if the first display effect associated with the first control parameter and the second display effect associated with the corresponding second control parameter cannot be superimposed, then during the predetermined period of time, when the first weight decreases below the second weight, the first display effect is forgone and the second display effect is configured for display according to the corresponding second control parameter. Continuing the example above in which the first display effect is a surprised glaring effect and the second display effect is a squinting effect, so that superimposition cannot be performed, the weight assigned to the first control parameter gradually decreases from 1, and the weight assigned to the second control parameter gradually increases from 0. When the weight assigned to the first control parameter falls below the weight assigned to the second control parameter, the first display effect (i.e., the glaring effect) is forgone and only the second display effect (i.e., the squinting effect) is displayed, thereby simulating the anchor user's realistic reaction to the "collision".
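The gradual weight handover during the recovery period might look like the following linear ramp. Linear interpolation, and switching the displayed effect at the crossover point for non-superimposable effects, are assumptions consistent with the behavior described above; the effect names are placeholders.

```python
def transition_weights(t, duration):
    """Weights (w1, w2) at time t seconds into a transition of the given
    duration: w1 ramps 1 -> 0 while w2 ramps 0 -> 1."""
    frac = min(max(t / duration, 0.0), 1.0)
    return 1.0 - frac, frac

def displayed_effect(t, duration, first="glare", second="squint"):
    """For non-superimposable effects, show the collision effect until its
    weight drops below the capture-driven effect's weight."""
    w1, w2 = transition_weights(t, duration)
    return first if w1 >= w2 else second
```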
According to further embodiments, if the first display effect associated with the first control parameter and the second display effect associated with the corresponding second control parameter are capable of being superimposed, the first display effect and the second display effect are superimposed for display during the predetermined period of time according to the first control parameter and its first weight, and the corresponding second control parameter and its second weight. Continuing the example above in which the first display effect is a head-lowering effect in the "collision effect" and the second display effect is also determined to be a head-lowering effect because the currently captured action of the anchor user is lowering the head, the two can be superimposed: with a head-lowering control parameter of, for example, 50% associated with the first display effect and 25% associated with the second display effect, the two effects may be superimposed in combination with these control parameters and their corresponding weights. For example, where both weights are 50%, a head-lowering effect of 75% relative to the normal head-lowering level may be obtained.
By varying the priority of the collision effect over time, the interaction between the anchor user's avatar and the gift avatar can be simulated more realistically, enhancing audience users' sense of immersion in the live broadcast.
According to some embodiments of the present disclosure, the method for controlling a gift of a live broadcast room may further include: after a predetermined period of time, a third display effect different from the first display effect and the second display effect is configured for the second avatar.
According to some embodiments, the third display effect may comprise any expressive or physical action of the anchor user's avatar. For example, fig. 8 schematically illustrates configuring another, different display effect for the anchor user's avatar on the live interface 800 after the gift avatar stops being displayed. As shown in fig. 8, the avatar 810 of the anchor user changes from the surprised expression 730 in fig. 7 to the sweating expression 820 and the blushing expression 830, and is also configured with a tie-pulling action 840.
After the 'collision effect' ends, configuring another, different display effect for the anchor user's avatar further expands the diversity of interaction between audience users and the anchor user, while creating a more realistic virtual live scene for audience users.
According to some embodiments of the disclosure, the first display effect is determined according to a first preset configuration.
For example, if the starting position of the gift avatar is determined according to the first preset configuration to be far from the anchor user's avatar (e.g., above a preset threshold), the first display effect may be set to tearing and squinting. If the starting position of the gift avatar is determined according to the first preset configuration to be close to the anchor user's avatar (e.g., below the preset threshold), the first display effect may be set to tearing only.
Similarly, if the movement speed of the gift avatar is determined according to the first preset configuration to be high (e.g., exceeding a preset threshold), the first display effect may be set to glaring with a body tilt. If the movement speed of the gift avatar is determined according to the first preset configuration to be low (e.g., below the preset threshold), the first display effect may be set to glaring only.
It should be understood that the above examples are shown for illustrative purposes only, and the corresponding first display effect may also be determined according to the first preset configuration including, for example, the motion acceleration, the motion direction, the target position, and the like of the gift avatar.
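The threshold rules above can be sketched as a simple mapping from the preset configuration to effect tags. This is an illustrative assumption; the threshold values, units, and tag names are hypothetical.

```python
def first_display_effect(start_distance, speed,
                         dist_threshold=200.0, speed_threshold=5.0):
    """Choose first-display-effect tags from the gift avatar's starting
    distance and movement speed (thresholds are hypothetical values)."""
    # Far start position -> tearing and squinting; near -> tearing only.
    if start_distance >= dist_threshold:
        effects = ["tearing", "squinting"]
    else:
        effects = ["tearing"]
    # High speed -> glaring with body tilt; low speed -> glaring only.
    if speed > speed_threshold:
        effects += ["glaring", "body_tilt"]
    else:
        effects += ["glaring"]
    return effects
```

Further inputs such as motion acceleration, motion direction, or target position could extend the same mapping.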
Determining the display effect of the anchor user's avatar according to the first preset configuration, which indicates the motion information of the gift avatar, further enhances the realism and interest of the interaction between the virtual gift and the anchor user, which in turn helps increase audience users' stickiness to the live content.
According to some embodiments of the present disclosure, the motion trajectory of the gift avatar may also be changed in response to determining that the position of the gift avatar overlaps with the position of the anchor user's avatar. For example, suppose the motion trajectory is a falling trajectory and the gift avatar, with free-fall characteristics, collides with the head of the anchor user's avatar during the fall. In such a scenario, the gift avatar may be displayed on the live interface as deviating from its original motion trajectory due to the collision, for example leaning to one side and exhibiting a bounce effect (associated with the gift avatar's current speed, acceleration, etc.), and then continuing its free fall. If the subsequent free fall again overlaps and collides with the outline of the anchor user's avatar, the motion trajectory of the gift avatar can be changed further, until the gift avatar reaches the bottom of the live interface.
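The fall-and-bounce behavior can be sketched as a per-frame physics step. This is a minimal sketch under assumed conventions (y grows downward, a hypothetical restitution factor damps the rebound), not the disclosed trajectory model.

```python
def step(pos_y, vel_y, dt, gravity=9.8, head_y=0.0, restitution=0.4):
    """Advance a falling gift avatar one frame; if it reaches the anchor
    avatar's head, rebound with damped velocity, then keep falling."""
    vel_y += gravity * dt              # free fall (y increases downward)
    pos_y += vel_y * dt
    bounced = False
    if pos_y >= head_y:                # overlap with the anchor avatar's head
        pos_y = head_y
        vel_y = -vel_y * restitution   # bounce, damped by restitution
        bounced = True
    return pos_y, vel_y, bounced
```

Repeated calls reproduce the described behavior: fall, bounce off the avatar, and continue until the bottom of the interface.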
According to some embodiments of the present disclosure, a display effect may also be configured for the gift avatar in response to determining that the position of the gift avatar overlaps with the position of the anchor user's avatar. With continued reference to fig. 7, schematically illustrated is the display effect of a collision spark configured for the gift avatar 710 when the position of the gift avatar 710 overlaps with the position of the anchor user's avatar 720. It should be understood that, in addition to the example shown in fig. 7, the display effects configured for the gift avatar may include effects such as the gift avatar cracking, emitting colored light spots, and the like. In addition, to further enhance audience users' sense of immersion in the live content, associated sound effects, such as collision or explosion sounds, may also be played while the animation effect of the gift avatar is displayed.
By configuring a corresponding display effect for the gift avatar, or changing its motion trajectory, when the position of the gift avatar overlaps with the position of the anchor user's avatar, the collision effect can be simulated fairly realistically, satisfying audience users' expectations for the live content and increasing their sense of participation in the live broadcast. This improves users' immersion during the live broadcast and adds interest to it, thereby enhancing audience users' stickiness to the live content.
It should be appreciated that each of the two operations described above may be performed separately or may be performed simultaneously when the position of the gift avatar overlaps with the position of the anchor user's avatar.
Fig. 9 shows a flow diagram of another live room gift control method 900 applied to a server according to an embodiment of the present disclosure. As shown in fig. 9, method 900 may include: step S910, receiving one or more gift giving instructions from the audience client; step S920 of generating one or more first avatars corresponding to the one or more gift-giving instructions; step S930, determining, for each of the one or more first avatars, whether a position of the first avatar overlaps a position of a second avatar of the anchor user; and step S940 of providing configuration information associated with the vibration effect to the anchor client and the spectator client in response to determining that the position of the first avatar overlaps with the position of the second avatar of the anchor user.
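Steps S910 through S940 can be sketched as a single server-side pass. The message shape, the `collides` callback, and the default vibration values are assumptions for illustration only.

```python
def process_gifts(gift_instructions, anchor_avatar_pos, collides):
    """S910-S940: spawn a first avatar per gift-giving instruction and emit
    vibration configuration for each avatar overlapping the anchor avatar."""
    # S920: generate one first avatar per received instruction (S910).
    avatars = [{"gift": g} for g in gift_instructions]
    messages = []
    for avatar in avatars:                           # S930: overlap check
        if collides(avatar, anchor_avatar_pos):
            # S940: provide vibration configuration to the clients
            # (60 ms / 10% strength are the defaults named later in the text).
            messages.append({"type": "vibration_config",
                             "duration_ms": 60, "strength": 0.1})
    return avatars, messages
```

In practice `collides` would be the trajectory-overlap determination of step S930; here it is injected as a callable so the flow is testable in isolation.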
By configuring a vibration effect for the 'collision effect', live content can be enriched along multiple dimensions, further expanding the diversity of interaction between audience users and anchor users, while creating a more realistic virtual live scene for audience users.
It should be understood that the live room gift control method according to the present disclosure may also be used with a real person displayed on the live page by the anchor, i.e., providing the configuration information associated with the vibration effect upon determining that the position of the gift avatar overlaps with the position of the real person displayed on the live page.
It should also be understood that the operations, features, elements, etc. contained in steps S910, S920, and S930 of the live room gift control method 900 described above may correspond to the operations, features, elements, etc. contained in corresponding steps S210, S220, and S230 of the live room gift control method 200, and have been described with reference to fig. 2-4 and 6. Thus, the advantages described above with respect to these steps in the live room gift control method 200 are equally applicable to the corresponding steps in the live room gift control method 900. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
According to some embodiments, upon determining via the server that the position of the gift avatar overlaps with the position of the anchor user's avatar, the determination may be transmitted to the anchor client, and a vibration start instruction may be generated via the anchor client. The vibration start instruction may then be transmitted by a live streaming tool (e.g., OBS software) to the server together with the live stream (video stream) as a whole. Upon receiving the vibration start instruction together with the live stream, the server demodulates the vibration start instruction from the stream and provides the vibration parameters associated with the vibration effect to each client that supports the vibration effect. For example, if the client is a mobile phone, vibration of the phone may be triggered; if the client is a tablet computer, notebook computer, or the like, vibration of a gamepad may be triggered when the client is detected to be connected to an input device such as a gamepad.
According to some embodiments of the present disclosure, the configuration information associated with the vibration effect may include a preset duration for the one or more first avatars. For example, the duration of the vibration effect may be configured by the anchor client or the viewer client, or may take the live platform's default duration. For example, the platform's default vibration duration each time the gift avatar 'collides' with the anchor user's avatar may be 1 frame (60 milliseconds) or the like. When the duration ends, a vibration end instruction may again be transmitted by the live streaming tool (e.g., OBS software) to the server together with the live stream (video stream) as a whole. Upon receiving the vibration end instruction together with the live stream, the server demodulates the vibration end instruction and notifies each client supporting the vibration effect to end the vibration effect.
According to further embodiments of the present disclosure, the configuration information associated with the vibration effect may include a preset vibration strength for the one or more first avatars, used to determine the vibration effect. For example, the vibration strength may be configured by the anchor client or the viewer client, or may be configured as the live platform's default vibration strength. For example, the platform's default vibration strength each time the gift avatar 'collides' with the anchor user's avatar may be 10% intensity, etc.
In some examples, the vibration effect may be provided via left and right motors in the client device or in an external input device (e.g., a gamepad) of the client device. In this case, the configuration information associated with the vibration effect may further include information indicating whether the left motor, the right motor, or both motors vibrate simultaneously, together with the corresponding vibration strength.
According to some embodiments of the present disclosure, the configuration information associated with the shock effect may be determined based on a preset configuration, wherein the preset configuration comprises one or more of: a starting position, a target position, a movement speed, a movement acceleration, and a movement direction of the one or more first avatars. In some examples, the vibration effect may be determined based on a starting position of one or more first avatars included in the first preset configuration. For example, if the starting location is farther from the location of the avatar of the anchor user, a longer duration and greater shock strength may be configured. In other examples, the vibration effect may be determined based on a movement speed of one or more first avatars included in the first preset configuration. For example, if the movement speed is greater, a longer duration and a greater shock strength may be configured. In still other examples, the shake effect may be determined based on a motion acceleration of one or more first avatars included in the first preset configuration. For example, if the motion acceleration is large, a longer duration and a larger shock force can be configured. In still other examples, the vibration effect may also be determined based on two or more of the start position, the movement velocity, and the movement acceleration described above.
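The scaling described above can be sketched as a function that grows the base duration and strength with the gift avatar's starting distance, speed, and acceleration. The scaling coefficients and the linear form are hypothetical; only the monotonic relationship ("farther, faster, harder → longer and stronger") comes from the text.

```python
def vibration_config(start_distance, speed, acceleration,
                     base_ms=60, base_strength=0.1):
    """Scale the base vibration (60 ms, 10% strength, per the examples above)
    by the gift avatar's motion; coefficients are illustrative assumptions."""
    factor = 1.0 + 0.001 * start_distance + 0.05 * speed + 0.02 * acceleration
    return {"duration_ms": round(base_ms * factor),
            "strength": min(base_strength * factor, 1.0)}
```

A stationary, adjacent gift avatar keeps the defaults; a distant, fast, accelerating one produces a longer and stronger vibration, capped at full intensity.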
Determining the configuration information associated with the vibration effect through the preset configuration, such as a preset duration and a preset vibration strength, allows a specific vibration effect to be configured for each 'collision effect' and associated with the gift avatar, increasing audience users' sense of realism in the live room and further enriching the diversity of interaction in the live room.
Fig. 10 illustrates a flow chart of a live room gift control method 1000 applied to a viewer client in accordance with an embodiment of the present disclosure. As shown in fig. 10, the live room gift control method 1000 may include: step S1010, one or more gift giving instructions are sent; step S1020, receiving a live stream of a live broadcast room from a server, the live stream including one or more first avatars corresponding to one or more gift giving instructions, a second avatar of a host user, and a display effect thereof; and a step S1030 of displaying the one or more first avatars and the second avatar having the display effect based on the live stream, wherein the one or more first avatars are generated according to the live room gift control method 200 applied to the server of the present disclosure, and wherein the display effect is determined according to the live room gift control method 200 applied to the server of the present disclosure.
It should be understood that the operations, features, elements, etc. included in the various steps of the live room gift control method 1000 may correspond to the operations, features, elements, etc. included in the various steps of the live room gift control method 200 and have been described with reference to fig. 2-4 and 6. Thus, the advantages described above for the live room gift control method 200 are equally applicable to the live room gift control method 1000. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
Fig. 11 shows a flow diagram of a live room gift control method 1100 applied to a viewer client, in accordance with an embodiment of the present disclosure. As shown in fig. 11, the live room gift control method 1100 may include: step S1110 of transmitting one or more gift-giving instructions; step S1120 of receiving a live stream of a live broadcast room and configuration information associated with a vibration effect from a server, the live stream including one or more first avatars corresponding to the one or more gift giving instructions and a second avatar of an anchor user; and a step S1130 of configuring the associated shock effect for the viewer client based on the configuration information, wherein the one or more first avatars are generated according to the live room gift control method 900 applied to the server of the present disclosure, and wherein the configuration information is provided according to the live room gift control method 900 applied to the server of the present disclosure.
It should be understood that the operations, features, elements, etc. included in the various steps of the live room gift control method 1100 may correspond to the operations, features, elements, etc. included in the various steps of the live room gift control method 900 and have been described with reference to fig. 9. Thus, the advantages described above for the live room gift control method 900 apply equally to the live room gift control method 1100. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
Fig. 12 shows a block diagram of a live room gift control apparatus 1200 applied to a server according to an embodiment of the present disclosure. As shown in fig. 12, the live room gift control apparatus 1200 may include: a receiving unit 1210 configured to receive one or more gift-giving instructions from a viewer client; a generating unit 1220 configured to generate one or more first avatars corresponding to the one or more gift-giving instructions; a determining unit 1230 configured to determine, for each of the one or more first avatars, whether the position of the first avatar overlaps with the position of a second avatar of the anchor user; a configuration unit 1240 configured to configure a display effect for a second avatar of the anchor user in response to determining that the position of the first avatar overlaps with the position of the second avatar; and a providing unit 1250 configured to provide the live streams to the anchor client and the viewer client such that the live room is displayed at the anchor client and the viewer client as a live room scene including the one or more first avatars and the second avatar having the display effect.
According to some embodiments of the present disclosure, the configuration unit 1240 may include: a first control parameter determination unit configured to determine one or more first control parameters associated with a first display effect; a second control parameter determination unit configured to determine one or more second control parameters associated with a second display effect, wherein the second display effect is associated with a currently captured action of the anchor user, the action comprising an expressive action and/or a physical action; a corresponding unit configured to determine, for each of the one or more first control parameters, whether the first control parameter corresponds to one or more second control parameters; a weight assignment unit configured to assign a first weight to the first control parameter and a second weight to the corresponding second control parameter in response to determining that the first control parameter corresponds to any of the one or more second control parameters; and a first configuration subunit configured to, in response to determining that the first control parameter corresponds to any of one or more second control parameters, configure a first display effect and a second display effect according to the first control parameter and a first weight thereof, and the corresponding second control parameter and a second weight thereof.
According to some embodiments of the present disclosure, the configuration unit 1240 may further include: a second configuration subunit configured to, in response to determining that the first control parameter does not correspond to any of the one or more second control parameters, configure a first display effect according to the first control parameter; and a third configuration subunit, configured to, in response to determining that the first control parameter does not correspond to any of the one or more second control parameters, configure a second display effect according to the one or more second control parameters.
According to some embodiments of the present disclosure, the live room gift control apparatus 1200 may further include: a superimposing unit configured to determine whether a first display effect associated with the first control parameter and a second display effect associated with a corresponding second control parameter are able to be superimposed in response to determining that the first control parameter corresponds to any of the one or more second control parameters; and wherein the weight assigning unit may include: means configured to set the first weight to 1 and the second weight to 0 in response to determining that the first display effect associated with the first control parameter and the second display effect associated with the corresponding second control parameter cannot be superimposed.
According to some embodiments of the present disclosure, the live room gift control apparatus 1200 may further include: a unit configured to acquire a display duration of the first display effect, and wherein the first configuration subunit may include: a unit configured to, in response to determining that a first display effect associated with the first control parameter and a second display effect associated with a corresponding second control parameter cannot be superimposed, configure the first display effect in accordance with the first control parameter during a display duration, and forgo configuring the second display effect; and means configured to, in response to determining that a first display effect associated with the first control parameter and a second display effect associated with a corresponding second control parameter are capable of being superimposed, superimpose the first display effect and the second display effect during the display duration in accordance with the first control parameter and its weight and the corresponding second control parameter and its weight.
According to some embodiments of the present disclosure, the live room gift control apparatus 1200 may further include: a unit configured to determine whether a position of another first avatar identical to the first avatar overlaps a position of a second avatar during the display period; and a unit configured to increase a display time period in response to determining that a position of another first avatar identical to the first avatar overlaps with a position of a second avatar.
According to some embodiments of the disclosure, the weight assignment unit may further include: a unit configured to gradually decrease the first weight and gradually increase the second weight during a predetermined period of time after the display duration.
According to some embodiments of the disclosure, the first configuration subunit may comprise: a unit configured to, in response to determining that a first display effect associated with the first control parameter and a second display effect associated with a corresponding second control parameter cannot be superimposed, forgo configuring the first display effect when the first weight decreases to less than the second weight during a predetermined period of time, and configure the second display effect according to the corresponding second control parameter; and means configured to, in response to determining that a first display effect associated with the first control parameter and a second display effect associated with a corresponding second control parameter are capable of being superimposed, superimpose the first display effect and the second display effect according to the first control parameter and a first weight thereof, and the corresponding second control parameter and a second weight thereof, during a predetermined period of time.
According to some embodiments of the present disclosure, the live room gift control apparatus 1200 may further include: a unit configured to configure a third display effect different from the first display effect and the second display effect for the second avatar after a predetermined period of time.
According to some embodiments of the present disclosure, the live room gift control apparatus 1200 may further include: means configured to determine, for each of the one or more gift-giving instructions, a gift category corresponding to the gift-giving instruction; and a unit configured to determine the number of gift giving instructions under each gift category, and wherein the generating unit 1220 may include: means configured to generate, for each gift class, a first avatar corresponding to the gift class in response to determining that a number of gift-giving instructions under the gift class exceeds a predetermined threshold.
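The category counting and thresholding above can be sketched as follows. The instruction shape and the threshold value are assumptions for illustration.

```python
from collections import Counter

def avatars_to_generate(gift_instructions, threshold=5):
    """Count gift-giving instructions per gift category and generate one
    first avatar per category whose count exceeds the predetermined threshold."""
    counts = Counter(inst["category"] for inst in gift_instructions)
    return [category for category, n in counts.items() if n > threshold]
```

This batching means a shower of identical small gifts produces one shared avatar rather than one avatar per instruction.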
According to some embodiments of the present disclosure, the determining unit 1230 may include: a motion trajectory determining unit configured to determine a motion trajectory of the first avatar according to a first preset configuration; and an overlapping unit configured to determine whether a motion trajectory of the first avatar overlaps with a position of a second avatar of the anchor user.
According to some embodiments of the disclosure, the first preset configuration may comprise one or more of: a starting position of one or more first avatars; a target position of one or more first avatars; a speed of movement of the one or more first avatars; motion acceleration of one or more first avatars; and a direction of movement of the one or more first avatars.
According to some embodiments of the disclosure, the overlapping unit may include: a unit configured to generate respective collision volumes for the first avatar and the second avatar, respectively; means for determining one or more first coordinates of an outline of a collision volume corresponding to the first avatar; a unit configured to determine one or more second coordinates of the outline of the collision volume corresponding to the second avatar; and means configured to determine that a motion trajectory of the first avatar overlaps a position of a second avatar in response to determining that at least one of the one or more first coordinates is the same as at least one of the one or more second coordinates.
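The coordinate-matching test performed by the overlapping unit can be sketched with axis-aligned collision boxes on an integer grid. Representing a collision volume as an `(x, y, w, h)` box is an assumption; the disclosure only requires that overlap be declared when the two outlines share at least one coordinate.

```python
def outline(box):
    """Integer grid coordinates covered by an axis-aligned collision box
    given as (x, y, width, height)."""
    x, y, w, h = box
    return {(i, j) for i in range(x, x + w) for j in range(y, y + h)}

def overlaps(box_a, box_b):
    """Declare overlap when the first avatar's collision volume and the
    second avatar's collision volume share at least one coordinate."""
    return not outline(box_a).isdisjoint(outline(box_b))
```

For large volumes a direct interval comparison of the box edges would avoid enumerating coordinates, but the set intersection mirrors the claim language most literally.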
According to some embodiments of the present disclosure, the first display effect is determined according to a first preset configuration.
According to some embodiments of the present disclosure, the live room gift control apparatus 1200 may further include: means for changing a motion trajectory of the first avatar in response to determining that the position of the first avatar overlaps with a position of a second avatar of the anchor user; and/or a unit configured to configure a fourth display effect for the first avatar in response to determining that the position of the first avatar overlaps with the position of the second avatar of the anchor user.
It should be understood that the various units 1210-1250 of the apparatus 1200 shown in fig. 12 may correspond to the various steps S210-S250 in the method 200 described with reference to fig. 2. Thus, the operations, features and advantages described above with respect to method 200 are equally applicable to apparatus 1200 and the units included therein. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.

Fig. 13 shows a block diagram of another live room gift control apparatus 1300 applied to a server according to an embodiment of the present disclosure. As shown in fig. 13, the live room gift control apparatus 1300 may include: a receiving unit 1310 configured to receive one or more gift-giving instructions from a viewer client; a generating unit 1320 configured to generate one or more first avatars corresponding to the one or more gift-giving instructions; a determining unit 1330 configured to determine, for each of the one or more first avatars, whether a position of the first avatar overlaps with a position of a second avatar of the anchor user; and a providing unit 1340 configured to provide configuration information including a vibration effect to the anchor client and/or the viewer client in response to determining that the position of the first avatar overlaps with the position of the second avatar of the anchor user.
According to some embodiments of the disclosure, wherein the configuration information associated with the shock effect comprises: a preset duration for one or more first avatars; and/or a preset vibration strength for the one or more first avatars.
According to some embodiments of the disclosure, wherein the configuration information associated with the shock effect is determined based on a preset configuration, and wherein the preset configuration comprises one or more of: a starting position, a target position, a movement speed, a movement acceleration, and a movement direction of the one or more first avatars.
It should be understood that the various units 1310-1340 of the apparatus 1300 shown in fig. 13 may correspond to the various steps S910-S940 of the method 900 described with reference to fig. 9. Thus, the operations, features and advantages described above with respect to the method 900 are equally applicable to the apparatus 1300 and the units included therein. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
Fig. 14 shows a block diagram of a live room gift control apparatus 1400 applied to a viewer client according to an embodiment of the present disclosure. As shown in fig. 14, the live room gift control apparatus 1400 may include: a transmitting unit 1410 configured to transmit one or more gift-giving instructions; a receiving unit 1420 configured to receive a live stream of a live room from a server, the live stream including one or more first avatars corresponding to the one or more gift-giving instructions, a second avatar of an anchor user, and a display effect thereof; and a display unit 1430 configured to display the one or more first avatars and a second avatar having the display effect based on the live stream, wherein the one or more first avatars are generated according to the live room gift control method 200 applied to the server of the present disclosure, and wherein the display effect is determined according to the live room gift control method 200 applied to the server of the present disclosure.
It should be understood that the various units 1410-1430 of the apparatus 1400 shown in FIG. 14 may correspond to the various steps S1010-S1030 of the method 1000 described with reference to FIG. 10. Thus, the operations, features and advantages described above with respect to method 1000 are equally applicable to apparatus 1400 and the units included therein. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
Fig. 15 shows a block diagram of a live room gift control apparatus 1500 applied to a viewer client according to an embodiment of the present disclosure. As shown in fig. 15, the live room gift control apparatus 1500 may include: a transmitting unit 1510 configured to transmit one or more gift giving instructions; a receiving unit 1520 configured to receive a live stream of a live room and configuration information associated with a shake effect from a server, the live stream including one or more first avatars corresponding to the one or more gift-giving instructions and a second avatar of an anchor user; and a configuration unit 1530 configured to configure the associated shock effect for the viewer client based on the configuration information, wherein the one or more first avatars are generated by the live room gift control method 900 applied to the server according to the present disclosure, and wherein the configuration information is provided by the live room gift control method 900 applied to the server according to the present disclosure.
It should also be appreciated that various techniques may be described herein in the general context of software, hardware elements, or program modules. The various units described above with respect to figs. 12-15 may be implemented in hardware or in hardware combined with software and/or firmware. For example, these units may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer readable storage medium. Alternatively, these units may be implemented as hardware logic/circuits. For example, in some embodiments, one or more of the receiving unit 1210, the generating unit 1220, the determining unit 1230, the configuring unit 1240 and the providing unit 1250, or the receiving unit 1310, the generating unit 1320, the determining unit 1330 and the providing unit 1340, or the transmitting unit 1410, the receiving unit 1420 and the display unit 1430, or the transmitting unit 1510, the receiving unit 1520 and the configuring unit 1530 may be implemented together in a System on Chip (SoC). The SoC may include an integrated circuit chip (which includes one or more components of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry), and may optionally execute received program code and/or include embedded firmware to perform functions.
According to another aspect of the present disclosure, there is also provided an electronic device including: at least one processor; and at least one memory communicatively coupled to the at least one processor; wherein the at least one memory stores a computer program that, when executed by the at least one processor, implements the live room gift control method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the live room gift control method described above.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program when executed by a processor implements the above-described live room gift control method.
Referring to fig. 16, a block diagram of an electronic device 1600, which may serve as the server of the present disclosure, will now be described; it is an example of a hardware device that may be applied to aspects of the present disclosure. The electronic device may be any of various types of computer devices, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 16, the electronic device 1600 may include at least one processor 1610, a working memory 1620, an input unit 1640, a display unit 1650, a speaker 1660, a storage unit 1670, a communication unit 1680, and other output units 1690, which can communicate with each other through a system bus 1630.
Processor 1610 may be a single processing unit or multiple processing units, all of which may include single or multiple computing units or multiple cores. Processor 1610 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry, and/or any devices that manipulate signals based on operational instructions. Processor 1610 may be configured to retrieve and execute computer-readable instructions, such as program code for operating system 1620a, program code for application programs 1620b, and the like, stored in working memory 1620, storage unit 1670, or other computer-readable medium.
Working memory 1620 and storage unit 1670 are examples of computer-readable storage media for storing instructions that are executed by processor 1610 to implement the various functions described above. The working memory 1620 may include both volatile and non-volatile memory (e.g., RAM, ROM, etc.). Further, storage unit 1670 may include a hard disk drive, solid state drive, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CDs, DVDs), storage arrays, network attached storage, storage area networks, and so forth. Both working memory 1620 and storage unit 1670 may be referred to collectively herein as memory or computer-readable storage medium and may be a non-transitory medium capable of storing computer-readable, processor-executable program instructions as computer program code, which may be executed by processor 1610 as a particular machine configured to implement the operations and functions described in the examples herein.
The input unit 1640 may be any type of device capable of inputting information to the electronic device 1600. The input unit 1640 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. The output units may be any type of device capable of presenting information, and may include, but are not limited to, the display unit 1650, the speaker 1660, and the other output units 1690, which may include, but are not limited to, video/audio output terminals, vibrators, and/or printers. The communication unit 1680 allows the electronic device 1600 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as Bluetooth™ devices, 802.11 devices, Wi-Fi devices, WiMAX devices, cellular communication devices, and/or the like.
The application program 1620b in the working memory 1620 may be loaded to perform the various methods and processes described above, such as steps S210-S250 in fig. 2, steps S310-S320 in fig. 3, steps S410-S440 in fig. 4, steps S610-S650 in fig. 6, steps S910-S940 in fig. 9, steps S1010-S1030 in fig. 10, and steps S1110-S1130 in fig. 11. For example, in some embodiments, the various methods described above may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1670. In some embodiments, part or all of the computer program may be loaded and/or installed onto electronic device 1600 via storage unit 1670 and/or communication unit 1680. When the computer program is loaded and executed by processor 1610, it may perform one or more steps of the methods 200, 900, 1000, 1100 described above. Alternatively, in other embodiments, the processor 1610 may be configured to perform the methods 200, 900, 1000, 1100 by any other suitable means (e.g., by way of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical aspects of the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatuses are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, the various elements in the embodiments or examples may be combined in various ways. Notably, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (27)

1. A live broadcast room gift control method is applied to a server and comprises the following steps:
receiving one or more gift-giving instructions from a viewer client;
generating one or more first avatars corresponding to the one or more gift giving instructions;
for each of the one or more first avatars, determining whether the position of the first avatar overlaps with the position of a second avatar of the anchor user;
in response to determining that the position of the first avatar overlaps with the position of a second avatar of the anchor user, configuring a first display effect for the second avatar; and
providing a live stream to an anchor client and a viewer client such that the live room is displayed at the anchor client and the viewer client as a live room scene containing the one or more first avatars and the second avatar having the first display effect.
2. The method of claim 1 wherein configuring a first display effect for a second avatar of an anchor user in response to determining that the position of the first avatar overlaps the position of the second avatar comprises:
determining one or more first control parameters associated with the first display effect;
determining one or more second control parameters associated with a second display effect, wherein the second display effect is associated with a currently captured action of the anchor user, the action comprising an expressive action and/or a physical action;
determining, for each of the one or more first control parameters, whether the first control parameter corresponds to the one or more second control parameters;
in response to determining that the first control parameter corresponds to any of the one or more second control parameters:
assigning a first weight to the first control parameter and a second weight to a corresponding second control parameter; and
configuring the first display effect and the second display effect according to the first control parameter and the first weight thereof, and the corresponding second control parameter and the second weight thereof.
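By way of a purely illustrative sketch (not part of the claimed subject matter), the weighted configuration of claim 2 could be modeled as follows. Representing control parameters as a name-to-value mapping and combining corresponding parameters by a linear blend are both assumptions; the claim fixes neither a data model nor a blend rule:

```python
def configure_blended_effects(first_params, second_params, first_weight, second_weight):
    """Combine a gift-triggered first display effect with a motion-capture
    second display effect. Parameters sharing a name are treated as
    'corresponding' and blended by the assigned weights."""
    blended = {}
    for name, value in first_params.items():
        if name in second_params:
            # corresponding parameter: blend by the first and second weights
            blended[name] = first_weight * value + second_weight * second_params[name]
        else:
            # no corresponding second parameter: use the first effect as-is
            blended[name] = value
    for name, value in second_params.items():
        # second-effect parameters with no counterpart pass through unchanged
        blended.setdefault(name, value)
    return blended
```

For example, a gift effect that tilts the avatar's head by 30 degrees, blended at weight 0.75 against a motion-capture tilt of 10 degrees at weight 0.25, yields a configured tilt of 25 degrees.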
3. The method of claim 2 wherein configuring a first display effect for a second avatar of an anchor user in response to determining that the position of the first avatar overlaps the position of the second avatar further comprises:
in response to determining that the first control parameter does not correspond to any of the one or more second control parameters:
configuring the first display effect according to the first control parameter; and
configuring the second display effect according to the one or more second control parameters.
4. The method of claim 2 or 3, further comprising:
in response to determining that the first control parameter corresponds to any of the one or more second control parameters, determining whether a first display effect associated with the first control parameter and a second display effect associated with the corresponding second control parameter are capable of being superimposed; and
wherein assigning a first weight to the first control parameter and a second weight to the corresponding second control parameter comprises:
in response to determining that a first display effect associated with the first control parameter and a second display effect associated with the corresponding second control parameter cannot be superimposed, setting the first weight to 1 and the second weight to 0.
5. The method of claim 4, further comprising:
acquiring a display duration of the first display effect; and
wherein configuring the first display effect and the second display effect according to the first control parameter and the first weight thereof, and the corresponding second control parameter and the second weight thereof comprises:
in response to determining that the first display effect associated with the first control parameter and the second display effect associated with the corresponding second control parameter cannot be superimposed, during the display duration:
configuring the first display effect according to the first control parameter; and
forgoing configuring the second display effect; and
in response to determining that a first display effect associated with the first control parameter and a second display effect associated with the corresponding second control parameter are capable of being superimposed, during the display duration:
superimposing the first display effect and the second display effect according to the first control parameter and the first weight thereof, and the corresponding second control parameter and the second weight thereof.
6. The method of claim 5, further comprising:
determining whether a position of another first avatar identical to the first avatar overlaps with a position of the second avatar during the display duration; and
in response to determining that the position of another first avatar identical to the first avatar overlaps the position of the second avatar, increasing the display duration.
7. The method of claim 5 or 6, wherein assigning a first weight to the first control parameter and a second weight to the corresponding second control parameter further comprises:
gradually decreasing the first weight and gradually increasing the second weight during a predetermined period of time after the display duration.
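As an illustrative sketch (not part of the claimed subject matter), the weight schedule of claims 5 and 7 can be expressed as a simple function of time. The linear ramp is an assumption; claim 7 only requires that the first weight gradually decrease and the second gradually increase:

```python
def effect_weights(t, display_duration, fade_period):
    """Return (first_weight, second_weight) at time t.

    During the display duration the gift-triggered first effect dominates
    (weights 1 and 0); afterwards the weights cross-fade linearly over the
    predetermined fade period, so the motion-capture second effect takes over.
    """
    if t <= display_duration:
        return 1.0, 0.0
    progress = min((t - display_duration) / fade_period, 1.0)
    return 1.0 - progress, progress
```

Once the first weight drops below the second, claim 8's non-superimposable branch would forgo the first effect and configure only the second.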
8. The method of claim 7, wherein configuring the first and second display effects according to the first control parameter and its first weight, and the corresponding second control parameter and its second weight comprises:
in response to determining that the first display effect associated with the first control parameter and the second display effect associated with the corresponding second control parameter cannot be superimposed, during the predetermined period of time:
when the first weight decreases below the second weight:
forgoing configuring the first display effect; and
configuring the second display effect according to the corresponding second control parameter; and
in response to determining that a first display effect associated with the first control parameter and a second display effect associated with the corresponding second control parameter are capable of being superimposed, during the predetermined period of time:
superimposing the first display effect and the second display effect according to the first control parameter and the first weight thereof, and the corresponding second control parameter and the second weight thereof.
9. The method of claim 7 or 8, further comprising: configuring a third display effect different from the first display effect and the second display effect for the second avatar after the predetermined period of time.
10. The method according to any one of claims 1-9, further comprising:
determining, for each of the one or more gift giving instructions, a gift category corresponding to the gift giving instruction; and
determining the number of gift-giving instructions under each gift category, and
wherein generating one or more first avatars corresponding to the one or more gift-giving instructions comprises:
for each gift category, in response to determining that the number of gift-giving instructions under the gift category exceeds a predetermined threshold, generating a first avatar corresponding to the gift category.
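A minimal sketch of claim 10's aggregation, offered purely for illustration: gift-giving instructions are counted per category, and a first avatar is generated only for categories whose count exceeds the threshold. Representing instructions as a list of category strings is an assumption about the data model:

```python
from collections import Counter

def first_avatar_categories(gift_instruction_categories, threshold):
    """Return the gift categories for which a first avatar should be
    generated: those whose instruction count exceeds the predetermined
    threshold. Sorted for deterministic output."""
    counts = Counter(gift_instruction_categories)
    return sorted(cat for cat, n in counts.items() if n > threshold)
```

With a threshold of 2, three "rocket" instructions produce a rocket avatar while a single "heart" instruction produces none.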
11. The method of any one of claims 1-9, wherein determining, for each of the one or more first avatars, whether the position of the first avatar overlaps with a position of a second avatar of the anchor user includes:
determining a motion trajectory of the first avatar according to a first preset configuration; and
determining whether the motion trajectory of the first avatar overlaps with a position of the second avatar of the anchor user.
12. The method of claim 11, wherein the first preset configuration comprises one or more of:
a movement speed of the one or more first avatars;
a motion acceleration of the one or more first avatars;
a direction of movement of the one or more first avatars;
a starting position of the one or more first avatars; and
a target position of the one or more first avatars.
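To illustrate how the parameters of claim 12 could define a trajectory (this is a sketch, not the claimed method): assume a straight-line path from the starting position to the target position under constant acceleration. The straight path is an assumption; the claims leave the curve shape to the first preset configuration:

```python
import math

def sample_trajectory(start, target, speed, accel, dt, steps):
    """Sample 2-D positions of a first avatar moving from start toward
    target with the given initial speed and constant acceleration,
    clamped so it stops at the target."""
    sx, sy = start
    tx, ty = target
    dist = math.hypot(tx - sx, ty - sy)
    ux, uy = (tx - sx) / dist, (ty - sy) / dist  # unit direction of movement
    points = []
    for i in range(steps + 1):
        t = i * dt
        s = min(speed * t + 0.5 * accel * t * t, dist)  # distance traveled, clamped
        points.append((sx + ux * s, sy + uy * s))
    return points
```

Overlap with the second avatar (claim 11) can then be tested against each sampled point or, as in claim 13, against collision volumes swept along this path.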
13. The method of claim 11, wherein determining whether the motion trajectory of the first avatar overlaps the position of the second avatar of the anchor user comprises:
generating corresponding collision volumes for the first avatar and the second avatar, respectively;
determining one or more first coordinates of the outline of the collision volume corresponding to the first avatar;
determining one or more second coordinates of the outline of the collision volume corresponding to the second avatar; and
in response to determining that at least one of the one or more first coordinates is the same as at least one of the one or more second coordinates, determining that the motion trajectory of the first avatar overlaps the position of the second avatar of the anchor user.
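As a purely illustrative sketch of claim 13's overlap test: model each collision volume as a 2-D axis-aligned bounding box. The claim compares outline coordinates directly and fixes no volume shape, so the AABB model is an assumption:

```python
def collision_volumes_overlap(box_a, box_b):
    """Return True if two axis-aligned bounding boxes share at least one
    point. Each box is (min_x, min_y, max_x, max_y). Sharing a point is
    the AABB analogue of claim 13's 'at least one first coordinate is
    the same as at least one second coordinate'."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # boxes intersect iff their projections overlap on both axes
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1
```

In practice this interval test is preferred over enumerating outline coordinates, since it reaches the same yes/no answer in constant time.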
14. The method of claim 11, wherein the first display effect is determined according to the first preset configuration.
15. The method of claim 11, further comprising:
in response to determining that the position of the first avatar overlaps the position of the second avatar of the anchor user:
changing the motion trajectory of the first avatar; and/or
configuring a fourth display effect for the first avatar.
16. A live broadcast room gift control method is applied to a server and comprises the following steps:
receiving one or more gift-giving instructions from a viewer client;
generating one or more first avatars corresponding to the one or more gift giving instructions;
for each of the one or more first avatars, determining whether the position of the first avatar overlaps with the position of a second avatar of the anchor user; and
in response to determining that the position of the first avatar overlaps the position of a second avatar of a anchor user, providing configuration information associated with a vibration effect to the anchor client and/or the viewer client.
17. The method of claim 16, wherein the configuration information associated with the vibration effect comprises:
a preset vibration duration for the one or more first avatars; and/or
a preset vibration force for the one or more first avatars.
18. The method of claim 16, wherein the configuration information associated with the vibration effect is determined based on a preset configuration, and wherein the preset configuration comprises one or more of: a starting position, a target position, a movement speed, a movement acceleration, and a movement direction of the one or more first avatars.
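A hypothetical sketch of how claim 18's preset configuration could determine claim 17's vibration configuration. The specific mapping (a default duration, force scaled by movement speed and capped at 1.0) is purely illustrative; the claims only state that the configuration is determined based on the preset:

```python
def vibration_config_from_preset(preset):
    """Derive a vibration configuration from an avatar preset configuration.

    `preset` is assumed to be a dict that may carry 'duration_ms' and must
    carry 'movement_speed'; a faster-moving gift avatar yields a stronger
    vibration, capped at full force."""
    return {
        "duration_ms": preset.get("duration_ms", 500),  # default is an assumption
        "force": min(1.0, preset["movement_speed"] / 10.0),
    }
```

The server would send the resulting configuration to the anchor client and/or viewer client, which applies it as in claim 20.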
19. A method for controlling a gift of a live broadcast room is applied to a client of a viewer and comprises the following steps:
sending one or more gift-giving instructions;
receiving a live stream of the live broadcast room from a server, wherein the live stream comprises one or more first avatars corresponding to the one or more gift giving instructions, a second avatar of an anchor user and a display effect of the second avatar; and
displaying the one or more first avatars and the second avatar having the display effect based on the live stream,
wherein the one or more first avatars are generated according to the method of any one of claims 1-15, and wherein the display effect is determined according to the method of any one of claims 1-15.
20. A method for controlling a gift of a live broadcast room is applied to a client of a viewer and comprises the following steps:
sending one or more gift-giving instructions;
receiving a live stream of the live broadcast room and configuration information associated with a vibration effect from a server, the live stream including one or more first avatars corresponding to the one or more gift-giving instructions and a second avatar of an anchor user; and
configuring the viewer client with the associated vibration effect based on the configuration information,
wherein the one or more first avatars are generated according to the method of any one of claims 16-18, and wherein the configuration information is provided according to the method of any one of claims 16-18.
21. A live broadcast room gift control device is applied to a server and comprises:
a receiving unit configured to receive one or more gift-giving instructions from a viewer client;
a generating unit configured to generate one or more first avatars corresponding to the one or more gift giving instructions;
a determination unit configured to determine, for each of the one or more first avatars, whether a position of the first avatar overlaps with a position of a second avatar of the anchor user;
a configuration unit configured to configure a display effect for a second avatar of an anchor user in response to determining that the position of the first avatar overlaps with the position of the second avatar; and
a providing unit configured to provide a live stream to an anchor client and a viewer client such that the live room is displayed at the anchor client and the viewer client as a live room scene containing the one or more first avatars and the second avatar having the display effect.
22. A live broadcast room gift control device is applied to a server and comprises:
a receiving unit configured to receive one or more gift-giving instructions from a viewer client;
a generating unit configured to generate one or more first avatars corresponding to the one or more gift giving instructions;
a determination unit configured to determine, for each of the one or more first avatars, whether a position of the first avatar overlaps with a position of a second avatar of the anchor user; and
a providing unit configured to provide configuration information associated with a vibration effect to the anchor client and/or the viewer client in response to determining that the position of the first avatar overlaps the position of the second avatar of the anchor user.
23. A live room gift control apparatus for use at a viewer client, comprising:
a transmitting unit configured to transmit one or more gift-giving instructions;
a receiving unit configured to receive a live stream of the live broadcast room from a server, the live stream including one or more first avatars corresponding to the one or more gift giving instructions, a second avatar of an anchor user, and a display effect thereof; and
a display unit configured to display the one or more first avatars and the second avatar having the display effect based on the live stream,
wherein the one or more first avatars are generated according to the method of any one of claims 1-15, and wherein the display effect is determined according to the method of any one of claims 1-15.
24. A live room gift control apparatus for use at a viewer client, comprising:
a transmitting unit configured to transmit one or more gift-giving instructions;
a receiving unit configured to receive a live stream of the live broadcast room and configuration information associated with a vibration effect from a server, the live stream including one or more first avatars corresponding to the one or more gift-giving instructions and a second avatar of an anchor user; and
a configuration unit configured to configure the viewer client with the associated vibration effect based on the configuration information,
wherein the one or more first avatars are generated according to the method of any one of claims 16-18, and wherein the configuration information is provided according to the method of any one of claims 16-18.
25. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the at least one processor,
wherein the at least one memory stores a computer program that, when executed by the at least one processor, implements the method of any one of claims 1-20.
26. A non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-20.
27. A computer program product comprising a computer program, wherein the computer program realizes the method according to any of claims 1-20 when executed by a processor.
CN202211282707.7A 2022-10-19 2022-10-19 Live broadcast room gift control method and device, electronic equipment and storage medium Pending CN115604499A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211282707.7A CN115604499A (en) 2022-10-19 2022-10-19 Live broadcast room gift control method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115604499A true CN115604499A (en) 2023-01-13

Family

ID=84849120



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination