CN114885199A - Real-time interaction method, device, electronic equipment, storage medium and system


Info

Publication number: CN114885199A
Authority: CN (China)
Prior art keywords: avatar, user account, target, preset, virtual image
Legal status: Granted, currently active
Application number: CN202210405864.6A
Other languages: Chinese (zh)
Other versions: CN114885199B
Inventor: 冯雨南
Current assignee: Beijing Dajia Internet Information Technology Co Ltd
Original assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210405864.6A
Publication of CN114885199A
Application granted
Publication of CN114885199B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The present disclosure relates to a real-time interaction method, apparatus, electronic device, storage medium, and system, and relates to the field of live streaming technology. An avatar corresponding to a user account that meets a preset condition is displayed in an avatar area, which provides a way for audience user accounts to interact with the anchor user account, increases the sense of participation of audience user accounts, effectively raises the popularity of the live room, and improves the audience's live viewing experience.

Description

Real-time interaction method, device, electronic equipment, storage medium and system
Technical Field
The present disclosure relates to the field of live broadcast technologies, and in particular, to a real-time interaction method, apparatus, electronic device, storage medium, and system.
Background
At present, viewers in a webcast live room mainly express their support for the anchor by liking, commenting, transferring virtual resources, joining the fan club, and the like.
However, these real-time interaction modes between viewers and the anchor are limited and fail to spark the viewers' enthusiasm, so the stickiness between viewers and the live room is poor and the viewers' experience suffers.
Disclosure of Invention
The present disclosure provides a real-time interaction method, apparatus, electronic device, storage medium, and system that can effectively improve the stickiness between viewers and the live room, raise the popularity of the live room, and improve the viewers' live viewing experience.
The technical solutions of the embodiments of the present disclosure are as follows:
according to a first aspect of the embodiments of the present disclosure, a real-time interaction method is provided, which is applied to an electronic device where a target user account is located, and includes: and displaying a virtual space interface of the anchor user account, wherein the virtual space interface comprises at least one virtual image area, and different virtual image areas correspond to different preset conditions respectively. When the target user account meets a first preset condition corresponding to the first virtual image area, displaying the target virtual image in the first virtual image area, wherein the first virtual image area is any virtual image area, and the target virtual image is the virtual image of the target user account.
Optionally, the virtual space interface further includes an anchor avatar area, and an avatar of the anchor user account is displayed in the anchor avatar area.
Optionally, after the target avatar is displayed in the first avatar area, the real-time interaction method further includes: in response to a preset operation input by the target user account, playing an animation resource corresponding to the target avatar.
Optionally, there are multiple types of preset operations, the target avatar corresponds to multiple animation resources, and different types of preset operations are associated with different animation resources; playing the animation resource corresponding to the target avatar in response to the preset operation input by the target user account includes: playing the animation resource that corresponds to the target avatar and is associated with the type of the preset operation.
Optionally, the target avatar corresponds to multiple animation resources, different animation resources are associated with different operation situations, and an operation situation is related to the frequency and the type of the preset operations input by the target user account; playing the animation resource corresponding to the target avatar in response to the preset operation input by the target user account includes: playing the animation resource that corresponds to the target avatar and is associated with the operation situation.
Optionally, after the target avatar is displayed in the first avatar area, the real-time interaction method further includes: canceling the display of the target avatar in the first avatar area when the duration for which the target user account has not input the preset operation exceeds a preset duration.
Optionally, after the target avatar is displayed in the first avatar area, the real-time interaction method further includes: canceling the display of the target avatar in the first avatar area when the target user account no longer meets the first preset condition.
Optionally, after canceling the display of the target avatar in the first avatar area, the real-time interaction method further includes: displaying first information, where the first information prompts that the display of the target avatar of the target user account in the first avatar area has been canceled.
Optionally, the real-time interaction method further includes: in response to a preset operation input by the target user account, sending a message indicating that the preset operation has been received to a server; and receiving a processing result of the message returned by the server, where the processing result represents the preset condition met by the target user account and/or is an animation resource playing instruction that causes the electronic device to play the animation resource corresponding to the target avatar.
Optionally, the preset operation includes an operation of transferring a virtual resource to the anchor user account, an operation of interacting with the anchor user account, and/or a control operation for the target avatar.
Optionally, the preset condition is that the rank of the weight value corresponding to the target user account among all user accounts falls within a preset interval, where all user accounts are the user accounts that entered the virtual space of the anchor user account within a preset time period; the weight value of a user account is determined according to the virtual resources the user account has transferred to the anchor user account within the preset time period and/or the interactions of the user account with the anchor user account.
Optionally, after canceling the display of the target avatar in the first avatar area, the real-time interaction method further includes: displaying second information, where the second information includes the weight value corresponding to the target user account and/or the difference between the weight value corresponding to the target user account and a target weight value, and the target weight value is the minimum weight value that meets the first preset condition.
Optionally, the virtual space interface includes at least two avatar regions, and after the target avatar is displayed in the first avatar region, the real-time interaction method further includes: when the preset condition met by the target user account is changed from a first preset condition to a second preset condition, displaying the target avatar in the second avatar area, and canceling the display of the target avatar in the first avatar area; the second avatar area is any one of the avatar areas except the first avatar area, and the second preset condition is a preset condition corresponding to the second avatar area.
Optionally, the virtual space interface includes two avatar regions, wherein a right end point value of a preset interval corresponding to one avatar region is smaller than a left end point value of a preset interval corresponding to another avatar region.
Optionally, the virtual space interface further includes an information display area, and the real-time interaction method further includes: when the target avatar is displayed in the first avatar area, displaying third information in the information display area, where the third information indicates that the target avatar is displayed in the first avatar area.
According to a second aspect of the embodiments of the present disclosure, a real-time interaction apparatus is provided, including an interface display unit and an avatar display unit, where the interface display unit is configured to display a virtual space interface of the anchor user account, the virtual space interface includes at least one avatar area, and different avatar areas correspond to different preset conditions; and the avatar display unit is configured to display the target avatar in the first avatar area when the target user account meets a first preset condition corresponding to the first avatar area, where the first avatar area is any one of the avatar areas and the target avatar is the avatar of the target user account.
Optionally, the virtual space interface further includes an anchor avatar area, and an avatar of the anchor user account is displayed in the anchor avatar area.
Optionally, after the target avatar is displayed in the first avatar area, the avatar display unit is further configured to play, in response to a preset operation input by the target user account, an animation resource corresponding to the target avatar.
Optionally, there are multiple types of preset operations, the target avatar corresponds to multiple animation resources, and different types of preset operations are associated with different animation resources; the avatar display unit is further configured to play the animation resource that corresponds to the target avatar and is associated with the type of the preset operation.
Optionally, the target avatar corresponds to multiple animation resources, different animation resources are associated with different operation situations, and an operation situation is related to the frequency and the type of the preset operations input by the target user account; the avatar display unit is further configured to play the animation resource that corresponds to the target avatar and is associated with the operation situation.
Optionally, after the target avatar is displayed in the first avatar area, the avatar display unit is further configured to cancel the display of the target avatar in the first avatar area when the duration for which the target user account has not input the preset operation exceeds a preset duration.
Optionally, after the target avatar is displayed in the first avatar area, the avatar display unit is further configured to cancel the display of the target avatar in the first avatar area when the target user account no longer meets the first preset condition.
Optionally, after canceling the display of the target avatar in the first avatar area, the avatar display unit is further configured to display first information, where the first information prompts that the display of the target avatar of the target user account in the first avatar area has been canceled.
Optionally, the avatar display unit is further configured to send, in response to a preset operation input by the target user account, a message indicating that the preset operation has been received to a server; and to receive a processing result of the message returned by the server, where the processing result represents the preset condition met by the target user account and/or is an animation resource playing instruction that causes the electronic device to play the animation resource corresponding to the target avatar.
Optionally, the preset operation includes an operation of transferring a virtual resource to the anchor user account, an operation of interacting with the anchor user account, and/or a control operation for the target avatar.
Optionally, the preset condition is that the rank of the weight value corresponding to the target user account among all user accounts falls within a preset interval, where all user accounts are the user accounts that entered the virtual space of the anchor user account within a preset time period; the weight value of a user account is determined according to the virtual resources the user account has transferred to the anchor user account within the preset time period and/or the interactions of the user account with the anchor user account.
Optionally, after canceling the display of the target avatar in the first avatar area, the avatar display unit is further configured to perform displaying of second information, where the second information includes a weight value corresponding to the target user account and/or a difference between the weight value corresponding to the target user account and the target weight value, and the target weight value is a minimum weight value meeting a first preset condition.
Optionally, the virtual space interface includes at least two avatar areas, after the target avatar is displayed in the first avatar area, the avatar display unit is further configured to perform displaying the target avatar in the second avatar area and canceling the display of the target avatar in the first avatar area when the preset condition met by the target user account is changed from the first preset condition to a second preset condition; the second avatar area is any one of the avatar areas except the first avatar area, and the second preset condition is a preset condition corresponding to the second avatar area.
Optionally, the virtual space interface includes two avatar regions, wherein a right end point value of a preset interval corresponding to one avatar region is smaller than a left end point value of a preset interval corresponding to another avatar region.
Optionally, the virtual space interface further includes an information display area, and the avatar display unit is further configured to display third information in the information display area when the target avatar is displayed in the first avatar area, where the third information is used to indicate that the target avatar is displayed in the first avatar area.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, which may include: a processor and a memory for storing processor-executable instructions; where the processor is configured to execute the instructions to implement the real-time interaction method according to any one of the optional implementations of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, having instructions stored thereon, where the instructions, when executed by a processor of an electronic device, enable the electronic device to perform the real-time interaction method according to any one of the optional implementations of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, which when executed by a processor, implements a real-time interaction method as any one of the optional implementations of the first aspect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
Based on any one of the above aspects of the present disclosure, a new mode of interaction between a user account and the anchor user account is provided, where the virtual space interface of the anchor user account includes at least one avatar area and different avatar areas correspond to different preset conditions; when the target user account meets a first preset condition corresponding to a first avatar area, a target avatar is displayed in the first avatar area, the first avatar area being any one of the avatar areas and the target avatar being the avatar of the target user account. This interaction mode makes interaction between the audience and the anchor more engaging, stimulates active interaction between audience user accounts and the anchor user account in the live room, improves the stickiness of audience users in the live room, raises the content popularity of the live room, and improves the audience's live viewing experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram of a real-time interactive system provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating a real-time interaction method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a virtual space interface provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating yet another virtual space interface provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating yet another virtual space interface provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating yet another virtual space interface provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating yet another virtual space interface provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating yet another virtual space interface provided by an embodiment of the present disclosure;
FIG. 9 is an interaction diagram of devices in a real-time interaction system provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram illustrating yet another virtual space interface provided by an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram illustrating a real-time interaction apparatus provided in an embodiment of the present disclosure;
FIG. 12 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or components.
The data to which the present disclosure relates may be data that is authorized by a user or sufficiently authorized by parties.
In the related art, viewers in a webcast live room mainly express their support for the anchor by liking, commenting, transferring virtual resources, joining the fan club, and the like. However, these ways for viewers to interact with the anchor are limited. For example, when a viewer interacts with the anchor, only the corresponding text information (such as a bullet comment or an ordinary comment) is shown on the live interface, so viewer participation is low and the viewers' enthusiasm is not sparked, which leads to poor stickiness between viewers and the live room, is unfavorable for raising the popularity of the live room, and affects the viewers' experience.
Based on this, an embodiment of the present disclosure provides a real-time interaction method, including: displaying a virtual space interface of the anchor user account, where the virtual space interface includes at least one avatar area and different avatar areas correspond to different preset conditions; and when the target user account meets a first preset condition corresponding to a first avatar area, displaying a target avatar in the first avatar area, where the first avatar area is any one of the avatar areas and the target avatar is the avatar of the target user account. This interaction mode makes interaction between the audience and the anchor more engaging, stimulates active interaction between audience user accounts and the anchor user account in the live room, improves the stickiness of audience users in the live room, raises the content popularity of the live room, and improves the audience's live viewing experience.
An application scenario of the real-time interaction method provided by the embodiment of the present disclosure is exemplarily described as follows:
Fig. 1 is a schematic diagram of a real-time interactive system according to an embodiment of the present disclosure. As shown in fig. 1, the real-time interactive system may include: a live server 110, a first terminal 120, and a second terminal 130. The live server 110 is communicatively coupled to the first terminal 120 and the second terminal 130, respectively. A live streaming application is installed on both the first terminal 120 and the second terminal 130.
Based on this real-time interactive system, the anchor user can log in to the live application on the first terminal 120 using user account 1 and start a live broadcast, where user account 1 is the anchor user account. An audience user can log in to the live application on the second terminal 130 using user account 2, where user account 2 is the target user account, and enter the live room of user account 1 to watch the live content. Specifically, when the anchor user performs live streaming using user account 1, the first terminal 120 generates the push stream information of the live content and transmits it to the live server 110. After receiving the push stream information uploaded by the first terminal 120, the live server 110 sends it to the second terminal 130, and the second terminal 130 parses the push stream information and displays the parsed live content on the live room interface.
The server may be a single server, or a server cluster composed of a plurality of servers (or micro servers). The server cluster may also be a distributed cluster. The present disclosure does not limit the specific implementation of the server 110.
The first terminal 120 and the second terminal 130 may be any computer devices, including but not limited to a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a handheld terminal, an Augmented Reality (AR) device, a Virtual Reality (VR) device, and other devices on which a live streaming application can be installed and used; the specific form of the terminal device is not particularly limited in the embodiments of the present disclosure. The terminal can interact with the user through one or more of a keyboard, a touchpad, a touch screen, a remote control, voice interaction, a handwriting device, and the like.
The first terminal, the second terminal, and the live server may be electronic devices.
Alternatively, the server 110 in the real-time interactive system shown in fig. 1 may be connected to a plurality of electronic devices. The present disclosure does not limit the number or types of electronic devices.
For example, the real-time interaction method provided by the embodiment of the present disclosure may be applied to an electronic device where the audience user account is located, for example, the second terminal 130 shown in fig. 1. In the embodiment of the present disclosure, the aforementioned viewer user account is referred to as a target user account.
The following describes in detail a specific implementation of the real-time interaction method provided by the embodiments of the present disclosure with reference to the accompanying drawings.
Fig. 2 is a flowchart of a real-time interaction method provided in an embodiment of the present disclosure, and as shown in fig. 2, when the real-time interaction method is applied to an electronic device corresponding to a target user account, the real-time interaction method may include:
s201, displaying a virtual space interface of the anchor user account, wherein the virtual space interface comprises at least one virtual image area, and different virtual image areas correspond to different preset conditions respectively.
Specifically, during a webcast, the interface constructed from the electronic device on the anchor user account side, the electronic device on the audience user account side, and the software and hardware resources of the live server is the virtual space interface, also called the live room interface. The anchor user can log in to the live application through the electronic device, create or enter a virtual space (also called a live room), and send live content to the electronic devices on the audience user account side through the live server. Other user accounts, as the audience, can log in to the live application through their electronic devices and enter the virtual space of the anchor user account, and the virtual space interface is displayed on their electronic devices so that they can watch the live content published by the anchor user.
It should be noted that the virtual space interface may be a special type of live room interface. The virtual space interface includes at least one avatar area, each avatar area corresponds to a different preset condition, and each avatar area is used to display the avatars of user accounts meeting its preset condition, where an avatar may be a three-dimensional figure of the user account. The user can select the corresponding three-dimensional figure according to their own preferences. The number of avatars displayed in an avatar area can be set according to the actual situation and is not limited herein.
Illustratively, fig. 3 is a schematic diagram of a virtual space interface provided by an embodiment of the present disclosure, where the virtual space interface includes a first control 301, a second control 302, and an avatar area 303. A user account may trigger the transfer of a virtual resource to the anchor user account by operating the first control 301. It should be noted that there may be multiple types of virtual resources, different types correspond to different values, and each type corresponds to a different resource identifier. For example, a virtual resource may be a sports car, a yacht, a rocket, or the like. A user account may publish comments on the live content by operating the second control 302. The avatar area 303 is used to display the avatars of user accounts satisfying the corresponding preset condition.
In some embodiments, the preset condition is that the rank of the weight value corresponding to the target user account among all user accounts falls within a preset interval, where all user accounts are the user accounts that entered the virtual space interface of the anchor user account within a preset time period, and the weight value of a user account is determined according to the virtual resources the user account has transferred to the anchor user account within the preset time period and/or the interactions of the user account with the anchor user account.
In the present disclosure, one or more weight value algorithms may be preset. How the user account transfers virtual resources to the anchor user account within the preset time period, such as the number and value of the transferred virtual resources, and how the user account interacts with the anchor user account within the preset time period, such as the number of comments and the quality of the comment content, may be used as variables of a weight value algorithm, and the weight value of the user account is obtained by substituting these variables into the algorithm formula. For example, the weight value may be a weighted sum of the amount of virtual resources transferred by the target user account to the anchor user account and the number of comments made on the virtual space interface. Optionally, the amount of virtual resources transferred to the anchor user account has a first weighting factor, and the number of comments made on the virtual space interface has a second weighting factor.
The weight value may take any form, such as a score, and is not limited herein.
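As a minimal sketch of the weighted-sum form of the weight value described above, the computation could look like the following; the factor values and function name are illustrative assumptions, not details given in the disclosure.

```python
# Minimal sketch of a weighted-sum weight value; the factors are assumptions.
GIFT_FACTOR = 10.0     # hypothetical first weighting factor (virtual resources)
COMMENT_FACTOR = 1.0   # hypothetical second weighting factor (comments)

def weight_value(gift_amount: float, comment_count: int) -> float:
    """Weight value of a user account within the preset time period."""
    return GIFT_FACTOR * gift_amount + COMMENT_FACTOR * comment_count

# Example: a viewer who transferred virtual resources worth 30 and posted 5 comments.
print(weight_value(30, 5))  # 305.0
```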
Based on this, suppose a certain preset condition is: the rank of the weight value among all user accounts is within the top 10. If the rank of user account A is 10th, user account A meets the preset condition, and its avatar is displayed in the avatar area corresponding to that preset condition. When user account B transfers virtual resources to the anchor user account so that the rank of user account A drops to 11th, user account A no longer meets the preset condition, and the display of its avatar in the corresponding avatar area is canceled.
This provides a specific implementation of the preset condition: whether the avatar of a user account is displayed in the avatar area is determined by the rank of the weight value corresponding to each user account, which stimulates users' enthusiasm for interaction and raises the content popularity of the live room.
It should be noted that the number of the avatar regions in the virtual space interface may be plural, and is not limited to the one shown in fig. 3. Each avatar region corresponds to a different preset condition.
Referring to fig. 4 in conjunction with fig. 3, the virtual space interface includes a first avatar region 3031 and a second avatar region 3032. The preset conditions corresponding to the first avatar region 3031 and the second avatar region 3032 are different.
For example, the preset condition of the first avatar region 3031 may be: the user accounts whose amount of virtual resources transferred to the anchor user account ranks in the top 10 among all user accounts. The preset condition of the second avatar region 3032 may be: the user accounts whose amount of virtual resources transferred to the anchor user account ranks 11th to 50th among all user accounts.
S202, when the target user account meets a first preset condition corresponding to the first avatar area, displaying the target avatar in the first avatar area, where the first avatar area is any one of the avatar areas and the target avatar is the avatar of the target user account. For example, fig. 5 is a schematic diagram of a virtual space interface provided by an embodiment of the present disclosure; when user account A meets the first preset condition corresponding to the first avatar region 3031, the avatar corresponding to user account A is displayed in the first avatar region 3031.
Through S201 to S202, in the present disclosure, the virtual space interface of the anchor user account includes at least one avatar area, different avatar areas correspond to different preset conditions, and when the target user account meets a first preset condition corresponding to the first avatar area, the target avatar is displayed in the first avatar area, the first avatar area being any one of the avatar areas and the target avatar being the avatar of the target user account. Displaying the avatars of user accounts that meet the preset conditions in the avatar areas provides a way for audience user accounts to interact with the anchor user account, increases the sense of participation of audience user accounts, effectively raises the popularity of the live room, and improves the audience's live viewing experience.
In some embodiments, the virtual space interface further includes an anchor avatar presentation area in which an avatar of an anchor user account is presented.
Specifically, the avatar of the anchor user account may be a three-dimensional avatar of the anchor user object, and the anchor user object may select a corresponding avatar's clothing and hair style according to its own needs.
Exemplarily, referring to fig. 6 in conjunction with fig. 5, fig. 6 is a schematic diagram of a virtual space interface provided in the embodiment of the present disclosure, where the virtual space interface includes an anchor avatar display area 601, and in this embodiment, the anchor avatar display area 601 displays an avatar corresponding to an anchor user account B.
It should be noted that fig. 3-6 only show an exemplary avatar region. It is understood that the shape of the avatar region may be a sector, a circle, or any other shape, and the avatar region may be displayed as any region on the virtual space interface, which is not limited herein.
In some embodiments, referring to fig. 3, the live view of the anchor user account in the virtual space interface may be located below the avatar area, i.e., the displayed avatars partially cover the live view. In other implementations, the live view may also be displayed in a floating window in any area of the virtual space interface so that it is not covered by the displayed avatars, which is not limited herein.
As described above, displaying the avatar corresponding to the anchor user account in the virtual space interface can strengthen the interaction between the anchor user account and audience user accounts, stimulate the audience's enthusiasm, and help raise the popularity of the live room.
In some embodiments, after S202, the method further includes: in response to a preset operation input by the target user account, playing an animation resource corresponding to the target avatar.
Specifically, an animation resource may make the target avatar perform different actions, for example various dance moves such as a tap dance, the seaweed dance, or a skirt dance. An animation resource may also apply various appearance changes or special effects to the target avatar, such as changing its clothes or hair style, or making it flash several times in the first avatar area.
In some embodiments, there are multiple types of preset operations, the target avatar corresponds to multiple animation resources, and different types of preset operations are associated with different animation resources. Correspondingly, playing the animation resource corresponding to the target avatar in response to the preset operation input by the target user account includes: playing the animation resource that corresponds to the target avatar and is associated with the type of the preset operation.
In some embodiments, the preset operations include an operation to transfer a virtual resource to the anchor user account, an operation to interact with the anchor user account, and/or a control operation for the target avatar. These operations may each be defined as different types of preset operations.
The operation of interacting with the anchor user account includes commenting on the virtual space interface and following the anchor user account on the virtual space interface. The control operation for the target avatar is a control operation input by the user through a related control arranged on the virtual space interface or through a preset gesture. The terminal device responds to the control operation input by the user account and plays the animation resource corresponding to the target avatar. Providing multiple types of preset operations enriches the ways in which the target user account can interact with the anchor user account on the virtual space interface and improves the viewing experience of the target user account.
Correspondingly, the type of the preset operation may be transferring a virtual resource to the anchor user account, or commenting or liking on the virtual space interface. In one implementation of the present disclosure, each type of preset operation is associated with a different animation resource, and when the target user account inputs a preset operation of any type, the animation resource that corresponds to the target avatar and is associated with that type of preset operation is played.
Illustratively, when the type of the preset operation input by target user account A is commenting on the virtual space interface, and the animation resource associated with commenting on the virtual space interface is the seaweed dance posture, the target avatar corresponding to target user account A displays the seaweed dance posture in the first avatar area. When the type of the preset operation input by target user account A is liking on the virtual space interface, and the animation resource associated with liking on the virtual space interface is the ballet dance posture, the target avatar corresponding to target user account A displays the ballet dance posture in the first avatar area.
For another example, when the type of the preset operation is transferring a virtual resource to the anchor user account, each virtual resource is associated with a different animation resource. For example, when the virtual resource transferred from the target user account to the anchor user account is a "bouquet", the target avatar corresponding to the target user account displays a tap dance posture in the first avatar area. When the virtual resource transferred from the target user account to the anchor user account is a "sports car", the target avatar corresponding to the target user account displays a jazz dance posture in the first avatar area.
Therefore, the user can control the playing of different animation resources by inputting different types of preset operations, for example, when the user transfers a virtual resource to an anchor user account, a first animation resource is played, and when the user comments on the virtual space interface, a second animation resource is played.
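A minimal sketch of such an association, using the examples above; the operation-type strings, resource names, and animation identifiers are assumptions for illustration rather than details given in the disclosure.

```python
# Hypothetical association between preset operations and animation resources,
# following the examples in the text; all identifiers are illustrative.
ANIMATION_BY_OPERATION_TYPE = {
    "comment": "seaweed_dance",   # commenting on the virtual space interface
    "like": "ballet_dance",       # liking on the virtual space interface
}
ANIMATION_BY_VIRTUAL_RESOURCE = {
    "bouquet": "tap_dance",       # transferring the "bouquet" virtual resource
    "sports_car": "jazz_dance",   # transferring the "sports car" virtual resource
}

def pick_animation(operation_type: str, resource: str | None = None) -> str | None:
    """Return the animation resource associated with the preset operation, if any."""
    if operation_type == "transfer_virtual_resource" and resource is not None:
        return ANIMATION_BY_VIRTUAL_RESOURCE.get(resource)
    return ANIMATION_BY_OPERATION_TYPE.get(operation_type)
```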
In some embodiments, the target avatar corresponds to multiple animation resources, different animation resources are associated with different operation situations, and an operation situation is related to the frequency and the type of the preset operations input by the target user account. Correspondingly, playing the animation resource corresponding to the target avatar in response to the preset operation input by the target user account includes: playing the animation resource that corresponds to the target avatar and is associated with the operation situation.
Specifically, when the target user account is on the virtual space interface, there can be many operation situations. For example, an operation situation may be that the amount of virtual resources transferred to the anchor user account per unit time exceeds a preset threshold, that the number of times virtual resources are transferred to the anchor user account per unit time exceeds a preset threshold, or that the number of comments made to the anchor user account on the virtual space interface per unit time exceeds a preset threshold. Each operation situation is associated with a different animation resource. When the preset operations input by the user match an operation situation, the animation resource that corresponds to the target avatar and is associated with that operation situation is played.
Illustratively, when the operation situation of the preset operations input by target user account A is that the number of times virtual resources are transferred to the anchor user account per unit time exceeds the preset threshold, and the animation resource associated with that operation situation is enlarging the avatar for 3 seconds, the target avatar corresponding to target user account A is enlarged for 3 seconds in the first avatar area. When the operation situation of the preset operations input by target user account A is that the number of comments on the virtual space interface exceeds the preset threshold, and the animation resource associated with that operation situation is making the avatar flash for 3 seconds, the target avatar corresponding to target user account A flashes for 3 seconds in the first avatar area.
From the above, the user can trigger different operation situations by controlling the frequency and the type of the input preset operations and thereby control which animation resources are played; for example, when the user transfers gifts three times in a row, one animation resource is played, and when the amount of gifts the user transfers exceeds a preset amount, the corresponding avatar flashes continuously.
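A minimal sketch of checking such operation situations within a unit-time window; the window length, thresholds, and animation names are assumptions for illustration.

```python
# Hypothetical check of the operation situations described above; the window
# length, thresholds, and animation names are assumptions for illustration.
import time
from collections import deque

WINDOW_SECONDS = 60            # "unit time" assumed to be one minute
TRANSFER_COUNT_THRESHOLD = 3   # e.g. three virtual-resource transfers per window
COMMENT_COUNT_THRESHOLD = 5

recent_transfers: deque[float] = deque()
recent_comments: deque[float] = deque()

def record_operation(op_type: str, now: float | None = None) -> str | None:
    """Record one preset operation and return the animation to play, if any."""
    now = time.time() if now is None else now
    events = recent_transfers if op_type == "transfer" else recent_comments
    events.append(now)
    # Drop events that have fallen out of the unit-time window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    if op_type == "transfer" and len(recent_transfers) >= TRANSFER_COUNT_THRESHOLD:
        return "enlarge_avatar_3s"
    if op_type == "comment" and len(recent_comments) >= COMMENT_COUNT_THRESHOLD:
        return "flash_avatar_3s"
    return None
```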
In some embodiments, after S202, the method further includes: canceling the display of the target avatar in the first avatar area when the duration for which the target user account has not input the preset operation exceeds a preset duration.
Specifically, when the target user account has not performed any interactive operation on the virtual space interface for a long time, the target avatar corresponding to the target user account is hidden from the first avatar area. For example, when user account A has not interacted with the anchor user account on the virtual space interface for more than one hour, user account A is considered a less active user and its avatar is hidden from the first avatar area. In this embodiment, the number of avatars displayed in the first avatar area is preset, so when the avatar of user account A is hidden from the first avatar area, the avatars of other user accounts can be displayed there instead.
In this way, the avatar corresponding to a target user account that has not performed the preset operation toward the anchor user account on the virtual space interface for a long time is hidden. Since the number of avatars the first avatar area can display is limited, other more active users can be displayed instead, which effectively motivates target user accounts to keep interacting through preset operations on the virtual space interface, improves user stickiness in the live room, raises the content popularity of the live room, and improves the audience's live viewing experience.
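A minimal sketch of this inactivity check, assuming the one-hour figure from the example above; the names are hypothetical.

```python
# Hypothetical inactivity check for hiding an avatar; the one-hour limit follows
# the example above, and the names are assumptions.
import time

INACTIVITY_LIMIT_SECONDS = 60 * 60  # one hour, as in the example

def should_hide_avatar(last_operation_time: float, now: float | None = None) -> bool:
    """True if the account has not input a preset operation for longer than the limit."""
    now = time.time() if now is None else now
    return now - last_operation_time > INACTIVITY_LIMIT_SECONDS
```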
In some embodiments, after S202, the method further includes: canceling the display of the target avatar in the first avatar area when the target user account no longer meets the first preset condition.
In this way, when the target user account no longer meets the first preset condition because of preset operations input by other user accounts, the display of the avatar corresponding to the target user account is canceled. The target user account can then perform the preset operation again so that its target avatar is displayed in the first avatar area once more, which stimulates the target user account's initiative to interact through preset operations on the virtual space interface and raises the content popularity of the live room.
In some embodiments, after canceling the display of the target avatar in the first avatar area, the method further includes: displaying first information, where the first information prompts that the display of the target avatar of the target user account in the first avatar area has been canceled.
Specifically, the first information may be displayed at any position of the virtual space interface. For example, referring to fig. 7, fig. 7 is a schematic diagram of a virtual space interface provided by an embodiment of the present disclosure, where the virtual space interface includes first information 701, and the content of the first information 701 is: your avatar has been removed from the first avatar area. It should be noted that the display form of the first information 701 may be different special display effects such as a floating bullet comment, rich text in the comment area, or a strong reminder. By displaying the first information, the target user account can learn in time how its target avatar is being displayed; after the display of the target avatar in the first avatar area is canceled, this can spark the target user account's enthusiasm to input the preset operation again so that the corresponding target avatar is displayed in the first avatar area, which raises the content popularity of the live room.
In some embodiments, after canceling the display of the target avatar in the first avatar area, the method further includes: displaying second information, where the second information includes the weight value corresponding to the target user account and/or the difference between the weight value corresponding to the target user account and a target weight value, and the target weight value is the minimum weight value that meets the first preset condition.
Specifically, the second information may be displayed at any position of the virtual space interface, or may be displayed together with the first information. For example, on the basis of fig. 7, see fig. 8; fig. 8 is a schematic diagram of a virtual space interface provided by an embodiment of the present disclosure, where the virtual space interface includes first information 701 and second information 801. The content of the first information is: your avatar has been removed from the first avatar area. The content of the second information is: 30 more virtual resources are needed before your avatar is displayed again. It should be noted that the display form of the second information may be different special display effects such as a floating bullet comment, rich text in the comment area, or a strong reminder. By displaying the second information, the target user account can learn in time what preset operations are still needed for its avatar to be displayed again, which effectively sparks the target user account's enthusiasm to input the preset operation again so that the corresponding target avatar is displayed in the first avatar area, raising the content popularity of the live room.
In some embodiments, the real-time interaction method further comprises:
in response to a preset operation input by the target user account, sending a message indicating that the preset operation has been received to a server;
and receiving a processing result of the message returned by the server, where the processing result represents the preset condition met by the target user account and/or is an animation resource playing instruction, the animation resource playing instruction being used to cause the electronic device to play the animation resource corresponding to the target avatar.
Fig. 9 is an interaction diagram of the devices in a real-time interaction system according to an embodiment of the present disclosure. Referring to fig. 9, the terminal device 130 sends, in response to the preset operation input by the target user account, a message indicating that the preset operation has been received to the server. The server 110 updates the weight value of the target user account according to the preset operation and judges whether the latest weight value of the target user account meets the first preset condition. If the latest weight value of the target user account meets the first preset condition, the server sends a processing result to the terminal device 130, where the processing result includes information indicating that the target user account meets the first preset condition and an animation resource playing instruction, the animation resource playing instruction being used to cause the terminal device 130 to play the animation resource corresponding to the target avatar. The terminal device 130 displays the target avatar in the first avatar area according to the processing result and plays the animation resource corresponding to the target avatar. If the latest weight value of the target user account does not meet the first preset condition, the server sends a processing result to the terminal device 130 that includes information indicating that the target user account does not meet the first preset condition.
Specifically, the server determining a processing result according to the message indicating that the preset operation has been received includes: updating the weight value corresponding to the target user account according to the preset operation input by the target user account, and adding the updated weight value to a weight value list, where the weight value list is determined according to the weight values corresponding to all user accounts on the virtual space interface and all user accounts in the weight value list are sorted by weight value from largest to smallest; and if the rank of the target user account in the weight value list falls within the preset interval corresponding to the first preset condition, returning a processing result instructing the terminal device to display the target avatar of the target user account in the first avatar area.
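A minimal sketch of the server-side handling described above and in fig. 9; the message fields, the top-10 interval, and all names are assumptions for illustration rather than details given in the disclosure.

```python
# Hypothetical server-side handling of a "preset operation received" message,
# following the flow described above; all names and the top-10 interval are
# illustrative assumptions.
def handle_preset_operation(account_id: str, operation: dict,
                            weights: dict[str, float]) -> dict:
    # 1. Update the weight value of the target user account.
    weights[account_id] = weights.get(account_id, 0.0) + operation.get("weight_delta", 0.0)
    # 2. Sort all user accounts in the weight value list from largest to smallest.
    ranking = sorted(weights, key=weights.get, reverse=True)
    rank = ranking.index(account_id) + 1
    # 3. Check whether the rank falls within the preset interval of the first condition.
    meets_first_condition = 1 <= rank <= 10
    # 4. Return the processing result to the terminal device.
    return {
        "meets_first_condition": meets_first_condition,
        "play_animation": operation.get("animation") if meets_first_condition else None,
    }
```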
In some embodiments, the server includes a communication module, a data processing module, a data sorting module, and a caching module. The communication module is used to communicate between the server and each terminal device. When a user inputs a preset operation on the terminal device, the terminal device generates the corresponding preset operation information and sends it to the communication module of the server, and the communication module stores the preset operation information in the caching module. The data processing module obtains the preset operation information from the caching module and processes it to update the weight value of the target user account. The data sorting module is used to sort all user accounts and their corresponding weight values to obtain the corresponding weight value list. The caching module is used to temporarily store the preset operation information, the user accounts, and the corresponding weight values, which avoids system congestion and improves processing efficiency.
Specifically, the server updates the weight value corresponding to the target user account according to the preset operation input by the target user account and adds the updated weight value to the weight value list. This prevents display errors in the real-time interactive system, which could be caused by multiple users transferring virtual resources at the same time, from affecting the real-time interactive experience of user accounts.
Specifically, the data structure of the weight value list may be a Redis zset (sorted set) structure, which keeps the stored data sorted in a defined order. When multiple user accounts send preset operations to the anchor user account at the same time, the updated weight values of all user accounts are added to the weight value list, and the sorting property of the Redis zset structure makes it possible to accurately determine the current position of the weight value corresponding to each user account in the weight value list and thus the user accounts that meet the preset conditions. This effectively avoids the situation in which, when multiple user accounts input preset operations at the same time, the parallel transfer behaviors lead to incorrect avatar display on the virtual space interface and seriously affect the real-time interactive experience of the user accounts. The data stored in the Redis zset includes the name of the user account, the corresponding weight value, and the time at which the user account last input a preset operation. The rank of a user account in the weight value list can be determined from the name of the user account and the corresponding weight value, and whether the user account is still an active user can be determined from the time at which it last input a preset operation, so that the avatars of user accounts that have not performed the preset operation for a long time can be hidden from the avatar areas.
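A minimal sketch of the Redis sorted-set arrangement described above, using the redis-py client; here the zset score holds the weight value while a separate hash holds each account's last-operation time, and the key names and this split are assumptions rather than details given in the disclosure.

```python
# Hypothetical use of a Redis sorted set (zset) as the weight value list.
# Score = weight value; a separate hash stores the last preset-operation time.
# Key names and this data split are assumptions.
import time
import redis

r = redis.Redis()
WEIGHT_KEY = "live_room:42:weights"          # hypothetical key for one live room
LAST_OP_KEY = "live_room:42:last_operation"

def apply_preset_operation(account: str, weight_delta: float) -> int:
    """Update the account's weight value and return its 1-based rank (largest first)."""
    r.zincrby(WEIGHT_KEY, weight_delta, account)
    r.hset(LAST_OP_KEY, account, time.time())
    # ZREVRANK gives the 0-based position in descending score order.
    return r.zrevrank(WEIGHT_KEY, account) + 1

def top_accounts(n: int = 10) -> list:
    """Accounts whose rank is within the top n, e.g. for the first avatar area."""
    return r.zrevrange(WEIGHT_KEY, 0, n - 1)
```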
Therefore, by inputting the preset operation, the target user account changes its corresponding weight value and can thereby change which avatar area of the virtual space interface displays its avatar. For example, user A transfers virtual resources to the anchor user account so that the avatar corresponding to user A is displayed in the first avatar area; after seeing this, user B also wants to transfer virtual resources to the anchor user account, so that the avatar corresponding to user B is displayed in the first avatar area.
In some embodiments, the virtual space interface includes at least two avatar areas, which may be referred to as a first avatar area and a second avatar area, respectively; the preset condition corresponding to the first avatar area may be referred to as a first preset condition, and the preset condition corresponding to the second avatar area may be referred to as a second preset condition.
After the target avatar is displayed in the first avatar area, if the preset condition satisfied by the target user account changes from the first preset condition to the second preset condition, the target avatar is displayed in the second avatar area and its display in the first avatar area is canceled.
Illustratively, the preset interval corresponding to the first avatar area is (11, 20) and the preset interval corresponding to the second avatar area is (1, 10). Suppose the current rank of the weight value corresponding to target user account A among the weight values of all user accounts is 15; target user account A then satisfies the first preset condition corresponding to the first avatar area, and the target avatar corresponding to target user account A is displayed in the first avatar area. When target user account A inputs a preset operation in the virtual space interface and its rank changes to 8, target user account A now satisfies the second preset condition corresponding to the second avatar area; the target avatar corresponding to target user account A is displayed in the second avatar area and, at the same time, its display in the first avatar area is canceled.
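The movement of the target avatar between the two areas can be sketched as a pure mapping from rank to avatar area. The interval values below are the illustrative ones used in this example, and the function names are assumptions.

```python
# Illustrative preset intervals: (left end point, right end point), both inclusive.
AREA_INTERVALS = {
    "first_avatar_area": (11, 20),
    "second_avatar_area": (1, 10),
}

def area_for_rank(rank: int):
    """Return the avatar area whose preset interval contains the given rank, if any."""
    for area, (low, high) in AREA_INTERVALS.items():
        if low <= rank <= high:
            return area
    return None  # rank outside every interval: the avatar is not displayed

def on_rank_changed(old_rank: int, new_rank: int):
    """Return (area to cancel the display in, area to display in)."""
    old_area, new_area = area_for_rank(old_rank), area_for_rank(new_rank)
    if old_area == new_area:
        return (None, None)  # nothing to move
    return (old_area, new_area)
```

With these intervals, on_rank_changed(15, 8) returns ("first_avatar_area", "second_avatar_area"), i.e. cancel the display of the target avatar in the first avatar area and display it in the second avatar area.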
In this way, by providing a plurality of avatar areas that correspond to different preset conditions and displaying the target avatar of the target user account in different avatar areas according to the rank of its weight value among the weight values corresponding to all user accounts, the enthusiasm of the target user account can be effectively stimulated and the popularity of the virtual space interface can be increased.
In some embodiments, the virtual space interface includes two avatar areas, where the right end point of the preset interval corresponding to one avatar area is smaller than the left end point of the preset interval corresponding to the other avatar area.
Specifically, the two avatar areas correspond to different preset intervals; depending on the rank of the weight value corresponding to the target user account among the weight values corresponding to all user accounts, the target user account falls into one of the preset intervals and thus corresponds to one of the avatar areas. Each preset interval has a left end point and a right end point, and the right end point of the preset interval corresponding to the second avatar area is smaller than the left end point of the preset interval corresponding to the first avatar area, so the two intervals do not overlap.
Illustratively, the rank of the weight value corresponding to target user account A among the weight values corresponding to all user accounts is 15; the preset interval corresponding to the first avatar area is (11, 20), the preset interval corresponding to the second avatar area is (1, 10), and the left end point 11 of the interval corresponding to the first avatar area is greater than the right end point 10 of the interval corresponding to the second avatar area, so target user account A corresponds to the first avatar area.
In this way, by setting a different preset interval for each avatar area, user accounts are divided into multiple tiers according to the ranks of their weight values among the weight values corresponding to all user accounts and are displayed in different avatar areas, which effectively stimulates the enthusiasm of the target user account and increases the popularity of the virtual space interface.
In some embodiments, the virtual space interface further includes an information display area. When the target avatar is displayed in the first avatar area, third information is displayed in the information display area, the third information indicating that the target avatar is displayed in the first avatar area.
Exemplarily, referring to fig. 10 in conjunction with fig. 3, fig. 10 is a schematic diagram of a virtual space interface provided in an embodiment of the present disclosure, where the virtual space interface includes an information display area 1001. In this embodiment, when target user account A meets the first preset condition corresponding to the first avatar area, the avatar corresponding to target user account A is displayed in the first avatar area; at this time, the information display area 1001 displays third information, for example a message stating that target user account A has been added to the first avatar area.
As described above, by providing the information display area in the virtual space interface, when the target user account meets the first preset condition corresponding to the first avatar area, this fact can be shown through the information display area to the other user accounts in the virtual space interface, which stimulates the enthusiasm of the user accounts and increases the popularity of the virtual space interface.
It is understood that, in practical implementation, the terminal/server of the embodiments of the present disclosure may include one or more hardware structures and/or software modules for implementing the corresponding real-time interaction method, and these hardware structures and/or software modules may constitute an electronic device. Those of skill in the art will readily appreciate that the present disclosure can be implemented in hardware or in a combination of hardware and computer software for the exemplary algorithm steps described in connection with the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Based on such understanding, an embodiment of the present disclosure also provides a real-time interaction apparatus, which can be applied to an electronic device. Fig. 11 shows a schematic structural diagram of a real-time interaction apparatus provided in an embodiment of the present disclosure. As shown in fig. 11, the real-time interaction apparatus may include: an interface display unit 1101, configured to display a virtual space interface of the anchor user account, where the virtual space interface includes at least one avatar area and different avatar areas correspond to different preset conditions respectively, for example, to perform step S201 shown in fig. 2; and an avatar display unit 1102, configured to display a target avatar in a first avatar area when the target user account satisfies a first preset condition corresponding to the first avatar area, where the first avatar area is any one of the avatar areas and the target avatar is the avatar of the target user account, for example, to perform step S202 shown in fig. 2.
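As a rough sketch of how this unit division could look in code (the class and method names below, such as InterfaceDisplayUnit and AvatarDisplayUnit, are illustrative assumptions and not the actual module names of the apparatus):

```python
class InterfaceDisplayUnit:
    """Counterpart of unit 1101: draws the virtual space interface of the anchor."""
    def show_virtual_space(self, anchor_account: str, avatar_areas: dict) -> None:
        # avatar_areas maps an area name to its preset condition (e.g. a rank interval)
        print(f"showing virtual space of {anchor_account} with areas {list(avatar_areas)}")

class AvatarDisplayUnit:
    """Counterpart of unit 1102: shows or hides target avatars in avatar areas."""
    def show_avatar(self, area: str, user_account: str) -> None:
        print(f"displaying the avatar of {user_account} in {area}")

    def cancel_avatar(self, area: str, user_account: str) -> None:
        print(f"cancelling the avatar of {user_account} in {area}")
```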
Optionally, the virtual space interface further includes an anchor avatar area, and an avatar of the anchor user account is displayed in the anchor avatar area.
Optionally, after the target avatar is displayed in the first avatar area, the avatar display unit 1102 is further configured to play, in response to a preset operation input by the target user account, an animation resource corresponding to the target avatar.
Optionally, there are a plurality of different types of preset operations, the target avatar corresponds to a plurality of animation resources, and the different types of preset operations are respectively associated with different animation resources; the avatar display unit 1102 is further configured to play an animation resource corresponding to the target avatar and associated with the type of the preset operation.
Optionally, the target avatar corresponds to a plurality of animation resources, different animation resources are associated with different operation scenarios, and an operation scenario is related to the frequency with which the target user account inputs preset operations and to the type of the preset operations; the avatar display unit 1102 is further configured to play an animation resource corresponding to the target avatar and associated with the operation scenario.
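One way the association between animation resources, operation types, and operation scenarios could be realized is a simple lookup. The operation types, the frequency threshold, and the resource file names below are illustrative assumptions, not values given by the disclosure.

```python
# Assumed animation resources for one target avatar, keyed by preset-operation type.
ANIMATIONS_BY_TYPE = {
    "transfer_virtual_resource": "avatar_cheer.anim",
    "interact_with_anchor": "avatar_wave.anim",
    "control_avatar": "avatar_dance.anim",
}

# Assumed scenario-specific overrides: (operation type, operations are frequent) -> resource.
ANIMATIONS_BY_SCENARIO = {
    ("transfer_virtual_resource", True): "avatar_celebrate.anim",
}

def pick_animation(op_type: str, ops_in_last_minute: int):
    """Choose the animation resource from the operation type and a simple
    operation scenario (here: whether preset operations arrive frequently)."""
    frequent = ops_in_last_minute >= 5  # assumed frequency threshold
    return (ANIMATIONS_BY_SCENARIO.get((op_type, frequent))
            or ANIMATIONS_BY_TYPE.get(op_type))
```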
Optionally, after the target avatar is displayed in the first avatar area, the avatar display unit 1102 is further configured to cancel the display of the target avatar in the first avatar area when the duration for which the target user account has not input the preset operation exceeds a preset duration.
Optionally, after the target avatar is displayed in the first avatar area, the avatar display unit 1102 is further configured to cancel the display of the target avatar in the first avatar area when the target user account no longer meets the first preset condition.
Optionally, after the display of the target avatar in the first avatar area is canceled, the avatar display unit 1102 is further configured to display first information, the first information prompting that the display of the target avatar of the target user account in the first avatar area has been canceled.
Optionally, the avatar display unit 1102 is further configured to: in response to a preset operation input by the target user account, send a message indicating that the preset operation has been received to the server; and receive a processing result of the message returned by the server, where the processing result represents a preset condition met by the target user account, and/or the processing result is an animation resource playing instruction, the animation resource playing instruction being used to cause the electronic device to play an animation resource corresponding to the target avatar.
Optionally, the preset operation includes an operation of transferring a virtual resource to the anchor user account, an operation of interacting with the anchor user account, and/or a control operation for the target avatar.
Optionally, the preset condition is that the rank of the weight value corresponding to the target user account among the weight values of all user accounts belongs to a preset interval, where all user accounts are the user accounts that entered the virtual space of the anchor user account within a preset time period; the weight value of a user account is determined according to the virtual resources that the user account transferred to the anchor user account within the preset time period and/or the interactions of the user account with the anchor user account.
Optionally, after the display of the target avatar in the first avatar area is canceled, the avatar display unit 1102 is further configured to display second information, where the second information includes the weight value corresponding to the target user account and/or the difference between the weight value corresponding to the target user account and a target weight value, the target weight value being the minimum weight value that meets the first preset condition.
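The difference reported in the second information can be read off the weight value list directly. The sketch below reuses the redis-backed list (r and WEIGHT_LIST) from the earlier sketch and treats the interval bounds as assumed example values.

```python
def weight_gap_to_first_area(user_account: str, first_area_interval=(1, 10)):
    """Weight still needed to reach the minimum weight value that satisfies the
    first preset condition (the score held at the last rank of the interval)."""
    _, high = first_area_interval
    # Members at ranks 1..high by descending score; the last one holds the target weight.
    boundary = r.zrevrange(WEIGHT_LIST, high - 1, high - 1, withscores=True)
    if not boundary:
        return None  # fewer accounts than the interval allows: any weight qualifies
    target_weight = boundary[0][1]
    current = r.zscore(WEIGHT_LIST, user_account) or 0.0
    return max(target_weight - current, 0.0)
```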
Optionally, the virtual space interface includes at least two avatar regions, and after the target avatar is displayed in the first avatar region, the avatar display unit 1102 is further configured to: when the preset condition met by the target user account is changed from a first preset condition to a second preset condition, displaying the target avatar in the second avatar area, and canceling the display of the target avatar in the first avatar area; the second avatar area is any one of the avatar areas except the first avatar area, and the second preset condition is a preset condition corresponding to the second avatar area.
Optionally, the virtual space interface includes two avatar regions, wherein a right end point value of a preset interval corresponding to one avatar region is smaller than a left end point value of a preset interval corresponding to another avatar region.
Optionally, the virtual space interface further includes an information display area, and the avatar display unit 1102 is further configured to display third information in the information display area when the target avatar is displayed in the first avatar area, the third information indicating that the target avatar is displayed in the first avatar area.
As described above, in the embodiments of the present disclosure, the electronic device may be divided into functional modules according to the above method examples. An integrated module may be implemented in the form of hardware or in the form of a software functional module. It should also be noted that the division of modules in the embodiments of the present disclosure is schematic and is only a logical function division; there may be other division manners in actual implementation. For example, each functional block may correspond to a single function, or two or more functions may be integrated into one processing block.
Regarding the real-time interaction apparatus in the foregoing embodiments, the specific manner in which each module performs operations and the beneficial effects thereof have been described in detail in the foregoing method embodiments, and are not described herein again.
The embodiment of the disclosure also provides an electronic device. Fig. 12 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. The electronic device may be a real-time interactive apparatus and may comprise at least one processor 1201, a communication bus 1202, a memory 1203 and at least one communication interface 1204.
The processor 1201 may be a Central Processing Unit (CPU), a micro-processing unit, an ASIC, or one or more integrated circuits for controlling the execution of programs in the solutions of the present disclosure. As an example, in conjunction with fig. 11, the functions of the interface display unit 1101 and the avatar display unit 1102 in the electronic device are implemented by the processor 1201 in fig. 12.
The communication bus 1202 may include a path for communicating information between the aforementioned components.
Communication interface 1204, using any transceiver or the like, may be used to communicate with other devices or communication networks, such as servers, ethernet, Radio Access Networks (RAN), Wireless Local Area Networks (WLAN), etc.
As an example, the memory 1203 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, blu-ray discs, and the like), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor via the bus, or may be integrated with the processor.
The memory 1203 is used for storing application program codes for executing the disclosed solution, and the processor 1201 controls the execution. The processor 1201 is configured to execute application program code stored in the memory 1203 to implement the functions in the disclosed methods.
In a specific implementation, as an example, the processor 1201 may include one or more CPUs, such as CPU0 and CPU1 in fig. 12.
In a specific implementation, as an example, the electronic device may include multiple processors, such as the processor 1201 and the processor 1205 in fig. 12. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, as an example, the electronic device may also include an input device 1206 and an output device 1207. The input device 1206, in communication with the processor 1201, can accept input from a user in a variety of ways; for example, the input device 1206 may be a mouse, a keyboard, a touch screen device, or a sensing device. The output device 1207, in communication with the processor 1201, can display information in a variety of ways; for example, the output device 1207 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display device, or the like.
Those skilled in the art will appreciate that the configuration shown in fig. 12 does not constitute a limitation of the electronic device, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
The embodiment of the disclosure also provides an electronic device. The electronic device may be a real-time interaction device. The electronic devices may vary widely in configuration or performance and may include one or more processors and one or more memories. At least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the real-time interaction method provided by the above method embodiments. Of course, the electronic device may further have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the electronic device may further include other components for implementing the functions of the device, which is not described herein again.
The present disclosure also provides a computer-readable storage medium having instructions stored thereon which, when executed by a processor of a computer device, enable the computer device to perform the real-time interaction method provided by the above illustrative embodiments. For example, the computer-readable storage medium may be the memory 1203 comprising instructions executable by the processor 1201 of the terminal to perform the above method; as another example, it may be a memory comprising instructions executable by a processor of an electronic device to perform the above method. Alternatively, the computer-readable storage medium may be a non-transitory computer-readable storage medium, for example, a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present disclosure also provides a computer program product comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the real-time interaction method as described in any of fig. 1-10 above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A real-time interaction method, applied to an electronic device where a target user account is located, the method comprising:
displaying a virtual space interface of an anchor user account, wherein the virtual space interface comprises at least one avatar area, and different avatar areas correspond to different preset conditions respectively;
when the target user account meets a first preset condition corresponding to a first avatar area, displaying a target avatar in the first avatar area, wherein the first avatar area is any one of the avatar areas, and the target avatar is an avatar of the target user account.
2. The real-time interaction method of claim 1, wherein the virtual space interface further comprises an anchor avatar area in which an avatar of the anchor user account is displayed.
3. The real-time interaction method of claim 1, wherein after the target avatar is displayed in the first avatar area, the method further comprises:
in response to a preset operation input by the target user account, playing an animation resource corresponding to the target avatar.
4. The real-time interaction method of claim 3, wherein the preset operations have a plurality of different types, the target avatar corresponds to a plurality of animation resources, and the preset operations of different types are respectively associated with different animation resources;
the playing of the animation resources corresponding to the target avatar in response to the preset operation input by the target user account comprises the following steps:
playing an animation resource corresponding to the target avatar and associated with the type of the preset operation.
5. The real-time interaction method of claim 3, wherein the target avatar corresponds to a plurality of animation resources, different animation resources are associated with different operation scenarios, and the operation scenario is related to the frequency of the preset operations input by the target user account and to the type of the preset operations;
the playing of the animation resources corresponding to the target avatar in response to the preset operation input by the target user account includes:
playing an animation resource corresponding to the target avatar and associated with the operation scenario.
6. The real-time interaction method of claim 3, wherein after the target avatar is displayed in the first avatar area, the method further comprises:
canceling the display of the target avatar in the first avatar area when the duration for which the target user account has not input the preset operation exceeds a preset duration.
7. A real-time interaction device, applied to an electronic device where a target user account is located, the device comprising:
an interface display unit, configured to display a virtual space interface of an anchor user account, wherein the virtual space interface comprises at least one avatar area, and different avatar areas correspond to different preset conditions respectively;
an avatar display unit, configured to display a target avatar in a first avatar area when the target user account meets a first preset condition corresponding to the first avatar area, wherein the first avatar area is any one of the avatar areas, and the target avatar is an avatar of the target user account.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the real-time interaction method of any one of claims 1-6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the real-time interaction method of any of claims 1-6.
10. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the real-time interaction method of any of claims 1-6.
CN202210405864.6A 2022-04-18 2022-04-18 Real-time interaction method, device, electronic equipment, storage medium and system Active CN114885199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210405864.6A CN114885199B (en) 2022-04-18 2022-04-18 Real-time interaction method, device, electronic equipment, storage medium and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210405864.6A CN114885199B (en) 2022-04-18 2022-04-18 Real-time interaction method, device, electronic equipment, storage medium and system

Publications (2)

Publication Number Publication Date
CN114885199A true CN114885199A (en) 2022-08-09
CN114885199B CN114885199B (en) 2024-02-23

Family

ID=82669487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210405864.6A Active CN114885199B (en) 2022-04-18 2022-04-18 Real-time interaction method, device, electronic equipment, storage medium and system

Country Status (1)

Country Link
CN (1) CN114885199B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109874021A (en) * 2017-12-04 2019-06-11 腾讯科技(深圳)有限公司 Living broadcast interactive method, apparatus and system
CN108986192A (en) * 2018-07-26 2018-12-11 北京运多多网络科技有限公司 Data processing method and device for live streaming
US20210266631A1 (en) * 2020-02-25 2021-08-26 Beijing Dajia Internet Information Technology Co., Ltd. Live streaming interactive method, apparatus, electronic device, server and storage medium
CN111918078A (en) * 2020-07-24 2020-11-10 腾讯科技(深圳)有限公司 Live broadcast method and device
CN113613028A (en) * 2021-08-03 2021-11-05 北京达佳互联信息技术有限公司 Live broadcast data processing method, device, terminal, server and storage medium
CN113949892A (en) * 2021-10-14 2022-01-18 广州方硅信息技术有限公司 Live broadcast interaction method and system based on virtual resource consumption and computer equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116467020A (en) * 2023-03-08 2023-07-21 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium
CN116467020B (en) * 2023-03-08 2024-03-19 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114885199B (en) 2024-02-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant