CN115643445A - Interaction processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115643445A
Authority
CN
China
Prior art keywords: target, live broadcast, preset, avatar, interaction
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211097457.XA
Other languages
Chinese (zh)
Inventor
徐智伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202211097457.XA
Publication of CN115643445A
Legal status: Pending


Abstract

The present disclosure relates to an interaction processing method and apparatus, an electronic device, and a storage medium. The method includes: displaying a target live broadcast page corresponding to a virtual live broadcast room, where the target live broadcast page displays a preset avatar of at least one target live broadcast browsing object in the virtual live broadcast room; and in response to a target interaction instruction for a target avatar among the at least one preset avatar, performing a target interaction operation on the target avatar in the target live broadcast page. With the embodiments of the present disclosure, the sense of presence and of interactive participation of the target avatar in the virtual live broadcast room can be greatly improved, raising viewers' enthusiasm for participating in interaction while improving the interactivity and immersion of the live broadcast atmosphere in the virtual live broadcast room.

Description

Interaction processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to an interaction processing method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of internet technology, live streaming has become a prevailing trend.
In related-art live broadcasts, viewers can appear in a live broadcast room as avatars; however, in live broadcast rooms with many viewers, an individual viewer, even though present as an avatar, is often submerged in the crowd of avatars, so that user engagement is weak and the immersion and interactivity of the live broadcast atmosphere are poor.
Disclosure of Invention
The present disclosure provides an interaction processing method, an interaction processing apparatus, an electronic device, and a storage medium, so as to at least solve the problems in the related art that user engagement is weak and that the immersion and interactivity of the live broadcast atmosphere are poor. The technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an interaction processing method, including:
displaying a target live broadcast page corresponding to a virtual live broadcast room, wherein the target live broadcast page displays a preset virtual image of at least one target live broadcast browsing object in the virtual live broadcast room;
and responding to a target interaction instruction aiming at a target avatar in at least one preset avatar, and executing target interaction operation on the target avatar in the target live broadcast page.
In an optional embodiment, where the target interaction operation is an avatar update operation, performing the target interaction operation on a target avatar among the at least one preset avatar in the target live broadcast page includes the following steps:
acquiring target image information corresponding to the image updating operation;
and updating the target virtual image displayed in the target live broadcast page based on the target image information.
In an optional embodiment, where the target interaction operation is a focus operation, performing the target interaction operation on the target avatar among the at least one preset avatar in the target live broadcast page includes:
acquiring focus rendering information corresponding to the focusing operation;
and performing focus rendering on the target virtual image in the target live broadcast page based on the focus rendering information.
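The two interaction operations above (avatar update and focus rendering) could be dispatched onto a page model as in the following minimal sketch; all class and field names here are illustrative assumptions, not terms from the disclosure:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Avatar:
    user_id: str
    image_info: dict = field(default_factory=dict)  # displayed appearance data
    focus_effect: Optional[dict] = None             # current focus-rendering state


@dataclass
class LivePage:
    avatars: dict = field(default_factory=dict)     # user_id -> Avatar

    def apply_interaction(self, target_id: str, op: str, payload: dict) -> None:
        """Perform a target interaction operation on the target avatar."""
        avatar = self.avatars[target_id]
        if op == "avatar_update":
            # acquire the target image info, then update the displayed avatar
            avatar.image_info.update(payload)
        elif op == "focus":
            # acquire the focus rendering info, then focus-render the avatar
            avatar.focus_effect = payload
        else:
            raise ValueError("unknown interaction operation: " + op)
```

A caller would construct the payload from the instruction (the "target image information" or "focus rendering information") and invoke `apply_interaction` with the target avatar's identifier.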
In an optional embodiment, before performing a target interactive operation on a target avatar in at least one of the preset avatars in the target live page, the method further includes:
determining the target avatar from at least one of the preset avatars.
In an alternative embodiment, said determining said target avatar from at least one of said preset avatars comprises:
randomly selecting the target avatar from at least one of the preset avatars.
In an alternative embodiment, said randomly selecting said target avatar from at least one of said preset avatars comprises:
under the condition that preset time is reached, randomly selecting the target virtual image from at least one preset virtual image, and triggering the target interaction instruction;
or,
and under the condition that a live broadcast creation object of the virtual live broadcast room executes preset interactive operation, randomly selecting the target virtual image from at least one preset virtual image, and triggering the target interactive instruction.
In an alternative embodiment, said determining said target avatar from at least one of said preset avatars comprises:
acquiring target interaction information executed in the virtual live broadcast room by at least one target live broadcast browsing object;
and taking any preset virtual image of the target live broadcast browsing object with the target interaction information meeting preset conditions as the target virtual image.
In an optional embodiment, the method further comprises:
and triggering the target interaction instruction under the condition that the target interaction information meets the preset condition.
In an alternative embodiment, said determining said target avatar from at least one of said preset avatars comprises:
acquiring at least one target live broadcast browsing object and various interactive information executed in the virtual live broadcast room;
generating interaction index data corresponding to any target live broadcast browsing object based on the various interaction information, wherein the interaction index data represents the interaction degree of any target live broadcast browsing object in the virtual live broadcast room;
and taking any preset virtual image of the target live broadcast browsing object with the interaction index data reaching a preset threshold value as the target virtual image.
In an optional embodiment, the method further comprises:
and triggering the target interaction instruction under the condition that the interaction index data reaches the preset threshold value.
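The index-based selection described above could be sketched as follows; the interaction types, weights, and threshold are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical weights for each kind of interaction information.
WEIGHTS = {"comment": 1.0, "like": 0.5, "gift": 3.0}


def interaction_index(counts: dict) -> float:
    """Interaction index data: a weighted sum over the various interactions."""
    return sum(WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())


def eligible_targets(stats: dict, threshold: float) -> list:
    """Browsing objects whose interaction index reaches the preset threshold.

    Reaching the threshold is also the condition under which the target
    interaction instruction would be triggered.
    """
    return [uid for uid, counts in stats.items()
            if interaction_index(counts) >= threshold]
```

The preset avatar of any object returned by `eligible_targets` could then serve as the target avatar.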
In an optional embodiment, the method further comprises:
obtaining comment information of at least one live broadcast browsing object in the virtual live broadcast room;
under the condition that the comment information of any live broadcast browsing object is preset comment information, controlling a preset virtual image of the target live broadcast browsing object to enter the virtual live broadcast room;
any target live browsing object is an object of which the comment information is the preset comment information in at least one live browsing object.
According to a second aspect of the embodiments of the present disclosure, there is provided an interaction processing apparatus including:
the target live broadcast page display module is configured to execute display of a target live broadcast page corresponding to a virtual live broadcast room, and the target live broadcast page displays a preset virtual image of at least one target live broadcast browsing object in the virtual live broadcast room;
and the target interaction operation execution module is configured to execute target interaction operation on a target virtual image in the target live broadcast page in response to a target interaction instruction aiming at the target virtual image in at least one preset virtual image.
In an optional embodiment, where the target interaction operation is an avatar update operation, the target interaction operation execution module includes:
a target character information acquisition unit configured to perform acquisition of target character information corresponding to the character update operation;
a target avatar updating unit configured to perform updating of the target avatar presented in the target live page based on the target avatar information.
In an optional embodiment, where the target interaction operation is a focus operation, the target interaction operation execution module includes:
a focus rendering information acquisition unit configured to perform acquisition of focus rendering information corresponding to the focusing operation;
a focus processing unit configured to perform focus rendering of the target avatar in the target live page based on the focus rendering information.
In an optional embodiment, the apparatus further comprises:
and the target virtual image determining module is configured to determine the target virtual image from at least one preset virtual image before performing target interaction operation on the target virtual image in at least one preset virtual image in the target live broadcast page.
In an alternative embodiment, the target avatar determination module includes:
a first target avatar determination unit configured to perform a random selection of the target avatar from at least one of the preset avatars.
In an alternative embodiment, the first target avatar determination unit includes:
the first target interaction instruction triggering module is configured to randomly select a target avatar from at least one preset avatar under the condition that a preset time is reached, and trigger the target interaction instruction;
or,
and the second target interactive instruction triggering module is configured to randomly select the target avatar from at least one preset avatar under the condition that a live broadcast creation object of the virtual live broadcast room executes preset interactive operation, and trigger the target interactive instruction.
In an alternative embodiment, the target avatar determination module includes:
a target interaction information acquisition unit configured to acquire target interaction information performed by the at least one target live broadcast browsing object in the virtual live broadcast room;
a second target avatar determination unit configured to take, as the target avatar, the preset avatar of any target live browsing object whose target interaction information satisfies the preset condition.
In an optional embodiment, the apparatus further comprises:
and the third target interaction instruction triggering module is configured to execute triggering of the target interaction instruction under the condition that the target interaction information meets the preset condition.
In an alternative embodiment, the target avatar determination module includes:
an interaction information acquisition unit configured to acquire various kinds of interaction information performed by the at least one target live broadcast browsing object in the virtual live broadcast room;
an interaction index data generation unit configured to generate, based on the various kinds of interaction information, interaction index data corresponding to any target live broadcast browsing object, the interaction index data representing the degree of interaction of that target live broadcast browsing object in the virtual live broadcast room;
a third target avatar determination unit configured to take, as the target avatar, the preset avatar of any target live browsing object whose interaction index data reaches the preset threshold.
In an optional embodiment, the apparatus further comprises:
and the fourth target interaction instruction triggering module is configured to trigger the target interaction instruction under the condition that the interaction index data reaches the preset threshold.
In an optional embodiment, the apparatus further comprises:
the comment information acquisition module is configured to execute the acquisition of comment information of at least one live browsing object in the virtual live broadcasting room;
the virtual image control module is configured to control the preset virtual image of the target live browsing object to enter the virtual live broadcasting room under the condition that the comment information of any one live browsing object is preset comment information;
any target live browsing object is an object of which the comment information in at least one live browsing object is the preset comment information.
According to a third aspect of an embodiment of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of any of the first aspects above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method according to any one of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method according to any one of the first aspects described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
A preset avatar of at least one target live browsing object in the virtual live broadcast room is displayed on the target live broadcast page corresponding to the virtual live broadcast room, so that a live browsing object can appear in the virtual live broadcast room in the form of its preset avatar. When a target interaction instruction for a target avatar among the at least one preset avatar is triggered, a target interaction operation is performed on the target avatar in the target live broadcast page, so that the target avatar receives attention among a large number of preset avatars. This greatly improves the sense of presence and of interactive participation of the target avatar in the live broadcast room, raises the enthusiasm of viewers (the users corresponding to the live browsing objects) for participating in interaction, and improves the interactivity and immersion of the live broadcast atmosphere in the live broadcast room.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram of an application environment shown in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of interaction processing in accordance with an illustrative embodiment;
FIG. 3 is a schematic diagram of a target live page provided in accordance with an exemplary embodiment;
FIG. 4 is a schematic illustration of another target live page provided in accordance with an exemplary embodiment;
FIG. 5 is a schematic illustration of another target live page provided in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an interaction processing device, according to an example embodiment;
FIG. 7 is a block diagram illustrating an electronic device for interaction processing in accordance with an illustrative embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment according to an exemplary embodiment, which may include a first terminal 100, a second terminal 200, and a server 300.
In an alternative embodiment, the first terminal 100 may be a master terminal; server 300 may be a backend server of a live platform; the second terminal 200 may be a viewer side. Optionally, the server 300 may provide an interactive service in a live broadcast process for the first terminal and the second terminal.
In an alternative embodiment, the first terminal 100 and the second terminal 200 may include, but are not limited to, electronic devices such as smartphones, desktop computers, tablet computers, notebook computers, smart speakers, digital assistants, augmented reality (AR)/virtual reality (VR) devices, and smart wearable devices, and may also be software running on such an electronic device, such as an application program. Optionally, the operating system running on the electronic device may include, but is not limited to, Android, iOS, Linux, Windows, and the like.
In an alternative embodiment, the server 300 may be a stand-alone physical server, or may be a server cluster or distributed system formed by a plurality of physical servers.
In addition, it should be noted that fig. 1 shows only one application environment provided by the present disclosure, and in practical applications, other application environments may also be included, for example, more second terminals may also be included.
In this embodiment, the first terminal 100, the second terminal 200, and the server 300 may be directly or indirectly connected through wired or wireless communication, and the disclosure is not limited herein.
Fig. 2 is a flowchart illustrating an interaction processing method according to an exemplary embodiment. As shown in fig. 2, the method may be applied to a first terminal or a second terminal and may include the following steps:
in step S201, a target live broadcast page corresponding to the virtual live broadcast room is displayed.
In a specific embodiment, the virtual live broadcast room may be a live broadcast room in a virtual scene, and the virtual scene may be a three-dimensional virtual scene. The anchor may perform live broadcast configuration in the virtual scene before going live; such configuration may include configuration of the virtual scene itself, configuration of the interaction rules used during the live broadcast, and the like. The anchor image in the virtual live broadcast room may be an avatar (i.e., a virtual anchor); when the broadcast starts, the virtual anchor and the three-dimensional virtual scene are rendered together, so that the terminal presents a live broadcast page of the live broadcast room in the three-dimensional virtual scene including the virtual anchor. Optionally, the three-dimensional virtual scene may be set according to actual application requirements, for example, a virtual dance floor, a virtual subway, or a virtual square.
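Purely as an illustration (every key and value below is invented, not part of the disclosure), the anchor's pre-broadcast live configuration might be represented as a simple structure like this:

```python
# Hypothetical pre-broadcast configuration for a virtual live broadcast room.
live_config = {
    "scene": "virtual_dance_floor",        # e.g. "virtual_subway", "virtual_square"
    "anchor_avatar": "avatar_default_01",  # virtual anchor rendered with the scene
    "interaction_rules": {
        "selection_mode": "random",        # how the target avatar is chosen
        "trigger": {"type": "timer", "interval_s": 300},
    },
}
```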
In an optional embodiment, the target live page may be a live page on a live browsing object side. The live view object may be a user account (viewer account) for viewing a virtual live room.
In another optional embodiment, the target live page may also be a live page on the object side for live creation. Specifically, the live broadcast creation object may be a user account (anchor account) for creating a virtual live broadcast room.
In a specific embodiment, in the live broadcast room in the virtual scene, the anchor appears in the virtual live broadcast room as an avatar, and optionally, in an initial state (during play), the avatar of the live broadcast creation object may be an avatar preset for the live broadcast creation object or may be set by the system. In the live broadcast room in the virtual scene, the audience can also appear in the virtual live broadcast room in an avatar. Optionally, the target live broadcast page may display a preset avatar of at least one target live broadcast browsing object in the virtual live broadcast room. The at least one target live broadcast browsing object can be a live broadcast browsing object entering the virtual live broadcast room through a preset virtual image. Optionally, the avatar of the target live browsing object may be preset for the corresponding target live browsing object, or may be set by the system.
In an optional embodiment, after the broadcast starts, the anchor can directly appear in the virtual live broadcast room as an avatar, and a viewer can enter the virtual live broadcast room by sending preset comment information, which is then shown in the corresponding live broadcast page; correspondingly, the method may further include:
obtaining comment information of at least one live broadcast browsing object in a virtual live broadcast room;
under the condition that the comment information of any live broadcast browsing object is preset comment information, controlling a preset virtual image of a target live broadcast browsing object to enter a virtual live broadcast room;
in a specific embodiment, any target live browsing object is an object of which the comment information is preset comment information in at least one live browsing object.
In a specific embodiment, the first terminal and the second terminal may obtain comment information of at least one live browsing object during the live broadcast, and when the comment information of any live browsing object is the preset comment information, the preset avatar of that target live browsing object may be displayed on the corresponding live broadcast picture. The preset comment information may be preset trigger information that allows a viewer to enter the virtual live broadcast room as an avatar, and different virtual scenes may correspond to different preset comment information. For example, when the three-dimensional virtual scene is a virtual dance floor scene, the preset comment information may be "dance together"; when it is a virtual subway scene, the preset comment information may be "ride the subway together". Optionally, when the preset avatar of the target live browsing object enters the live broadcast room, the viewing angle of the corresponding interactive live broadcast page may be taken into account: when the picture corresponding to the current viewing angle includes the preset avatar of the target live browsing object, that preset avatar may be displayed on the interactive live broadcast page.
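The scene-specific trigger check described above could look like the following sketch; the scene keys and trigger phrases are hard-coded illustrations of the examples in the text, not values defined by the disclosure:

```python
# Preset trigger comments per virtual scene (illustrative).
PRESET_COMMENTS = {
    "dance_floor": "dance together",
    "subway": "ride the subway together",
}


def should_enter_room(scene: str, comment: str) -> bool:
    """True if the comment matches the scene's preset trigger comment,
    meaning the sender's preset avatar should enter the virtual room."""
    return PRESET_COMMENTS.get(scene) == comment
```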
In the above embodiment, by sending the preset comment information in the virtual live broadcast room, the live broadcast browsing object in the virtual live broadcast room enters the virtual live broadcast room through the preset virtual image, and the participation of the live broadcast browsing object and the immersion sense of the live broadcast atmosphere can be greatly improved.
In a specific embodiment, assume the three-dimensional virtual scene is a virtual dance floor scene, and take the live broadcast page on the live-browsing-object side as an example. As shown in fig. 3, fig. 3 is a schematic diagram of a target live broadcast page provided according to an exemplary embodiment. The avatars corresponding to 301, 302, 303 and 304 may be preset avatars of target live browsing objects; 305 may be the avatar of the live broadcast creation object.
In an optional embodiment, in the process that the preset avatar of any target live browsing object is displayed on the target live page, object identification information (e.g., a nickname) of the target live browsing object may be displayed so as to distinguish different preset avatars.
In step S203, in response to a target interaction instruction for a target avatar of the at least one preset avatar, a target interaction operation is performed on the target avatar in the target live broadcast page.
In a specific embodiment, the target interactive instruction is used to instruct the target interactive operation to be performed on the target avatar in the target live broadcast page. Specifically, each target live broadcast browsing object corresponds to a preset avatar, and the at least one preset avatar may include a preset avatar of the at least one target live broadcast browsing object. The target interactive operation may be an interactive operation for highlighting the target avatar.
In a specific embodiment, the target interactive instruction may be triggered when the target avatar is determined.
In an optional embodiment, before performing a target interactive operation on a target avatar in at least one preset avatar in a target live page, the method may further include:
from the at least one preset avatar, a target avatar is determined.
In an optional embodiment, when the target interaction instruction is triggered, the first terminal side may first determine the target avatar from the at least one preset avatar and perform the target interaction operation on the target avatar in the live broadcast page of the live broadcast creation object (a target live broadcast page). Further, the first terminal may transmit identification information of the target avatar to the server; correspondingly, when the target interaction instruction is triggered, the second terminal can acquire the identification information of the target avatar from the server, determine the target avatar accordingly, and perform the target interaction operation on the target avatar in the live broadcast page of the live browsing object (a target live broadcast page).
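One way to picture this terminal/server flow is the toy sketch below, with an in-memory stand-in for the backend; all names are hypothetical and the real platform's protocol is not specified by the disclosure:

```python
import random


class Server:
    """Stand-in for the live platform's backend server."""
    def __init__(self):
        self.target_avatar_id = None

    def publish_target(self, avatar_id: str) -> None:
        self.target_avatar_id = avatar_id

    def fetch_target(self) -> str:
        return self.target_avatar_id


def first_terminal_select(avatar_ids: list, server: Server,
                          rng: random.Random) -> str:
    # Anchor-side terminal: pick the target avatar and report its id.
    target = rng.choice(avatar_ids)
    server.publish_target(target)
    return target


def second_terminal_resolve(server: Server) -> str:
    # Viewer-side terminal: fetch the same target id to stay in sync.
    return server.fetch_target()
```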
In the above embodiment, one target avatar is selected from the preset avatars of the at least one target live browsing object in the virtual live broadcast room to perform the target interaction operation, so that each target live browsing object in the virtual live broadcast room has an opportunity to receive attention. This greatly improves the sense of participation and the enthusiasm of viewers (the users corresponding to the live browsing objects) during interaction, and can thus greatly improve the interactivity and immersion of the live broadcast atmosphere in the virtual live broadcast room.
In an alternative embodiment, a target avatar performing a target interactive operation may be randomly determined from at least one preset avatar, and accordingly, the determining the target avatar from the at least one preset avatar may include:
randomly selecting a target avatar from at least one preset avatar.
In a specific embodiment, identification information of the at least one preset avatar may be obtained, and one piece of identification information may be determined from it using a preset random sampling algorithm; the preset avatar corresponding to the determined identification information is then the target avatar.
In the above embodiment, the target avatar on which the target interaction operation is performed is randomly selected from the at least one preset avatar, so that every target live browsing object in the virtual live broadcast room has an opportunity to receive attention, greatly improving the sense of participation and the enthusiasm of viewers (the users corresponding to the live browsing objects) during interaction, and thus the interactivity and immersion of the live broadcast atmosphere in the virtual live broadcast room.
In an optional embodiment, the trigger time of the target interactive instruction may be preset under the condition that the target avatar is randomly determined from at least one preset avatar; accordingly, the randomly selecting the target avatar from the at least one preset avatar may include:
under the condition that the preset time is reached, randomly selecting a target virtual image from at least one preset virtual image, and triggering a target interaction instruction;
in a specific embodiment, the preset time may be a preset trigger time of the target interaction instruction; correspondingly, when the preset time is up, a target virtual image can be randomly selected from at least one preset virtual image, and the target interaction instruction aiming at the target virtual image is automatically triggered; optionally, the preset time may be automatically issued by the server, or may be set by the live broadcast creation object, and optionally, under the condition set by the live broadcast creation object, the first terminal may issue the set preset time to the second terminal through the server.
In an optional embodiment, in a case that the target avatar is randomly determined from the at least one preset avatar, the live broadcast creation object may trigger a target interaction instruction during a live broadcast process, and correspondingly, the randomly selecting the target avatar from the at least one preset avatar may include:
under the condition that a live broadcast creation object of a virtual live broadcast room executes preset interactive operation, a target virtual image is randomly selected from at least one preset virtual image, and a target interactive instruction is triggered.
In a specific embodiment, the preset interactive operation may be a preset operation that randomly selects a target avatar from the at least one preset avatar and triggers the target interactive operation. Specifically, the preset interactive operation may include, but is not limited to, clicking, double-clicking, or sliding a preset trigger control. Optionally, the anchor may also control execution of the target interactive operation by voice; correspondingly, the preset interactive operation may also be an input operation of preset voice information. The preset voice information may be preset speech that triggers execution of the target interactive operation, for example, "start drawing a lucky viewer".
In the above embodiment, the determination of the target avatar and the triggering of the target interaction instruction are driven either by the preset time or by a preset interaction operation executed by the live broadcast creation object during the live broadcast, which can greatly improve the triggering flexibility and operational convenience of executing the target interactive operation on the target avatar.
In an optional embodiment, the determining the target avatar from the at least one preset avatar may include:
acquiring target interaction information executed by at least one target live broadcast browsing object in a virtual live broadcast room;
and taking the preset virtual image of any target live broadcast browsing object with the target interaction information meeting the preset conditions as the target virtual image.
In a specific embodiment, the target interaction information is preset interaction information used for screening the target avatar, and may be set according to the actual application. Optionally, the target interaction information may be action information that the target live broadcast browsing object executes in the virtual live broadcast room to control a preset avatar (a preset avatar in the virtual live broadcast room); the preset avatar may be controlled to execute a corresponding action in combination with an external device, a preset virtual control, or comment information. Optionally, the target interaction information may also be a virtual resource amount given by the target live broadcast browsing object to the live broadcast creation object in the virtual live broadcast room.
in a specific embodiment, the preset condition may be a preset screening condition for the target avatar. Taking as an example the case where the target interaction information is action information executed by the target live broadcast browsing object to control a preset avatar in the virtual live broadcast room, the target interaction information satisfying the preset condition may be: the target interaction information is preset action information; or: the target interaction information is the preset action information executed first within a first preset time period. Specifically, the first preset time period may be set according to the actual application.
Optionally, taking as an example the case where the target interaction information is the virtual resource amount given by the target live broadcast browsing object to the live broadcast creation object in the virtual live broadcast room, the target interaction information satisfying the preset condition may be: the target interaction information reaches a preset threshold; or: the target interaction information is the first virtual resource amount to reach the preset threshold within a second preset time period. Specifically, the preset threshold and the second preset time period may be set according to the actual application.
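The two kinds of screening conditions above can be sketched together. The event schema (a chronological list of viewer interactions) is an assumption made for illustration:

```python
def screen_target_avatar(events, preset_action, gift_threshold):
    """Return the first viewer whose interaction satisfies a preset condition.

    `events` is an assumed chronological list of dicts such as
    {"viewer": id, "type": "action" | "gift", "value": ...}.
    A viewer qualifies by being the first to perform the preset action,
    or the first whose gift (virtual resource) amount reaches the threshold.
    """
    for event in events:
        if event["type"] == "action" and event["value"] == preset_action:
            return event["viewer"]
        if event["type"] == "gift" and event["value"] >= gift_threshold:
            return event["viewer"]
    return None  # no viewer satisfied the preset condition yet
```

In a real system the events would arrive as a stream and the check would run incrementally, triggering the target interaction instruction the moment a condition is first satisfied.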
In a specific embodiment, in a scene where a target avatar is screened in combination with target interaction information that is executed by a target live viewing object in a virtual live broadcast room, the method may further include:
and triggering a target interaction instruction under the condition that the target interaction information meets the preset condition.
In the above embodiment, the target avatar is screened in combination with the target interaction information executed by the target live broadcast browsing object in the virtual live broadcast room, which can better improve the interactivity in the virtual live broadcast room; and the target interaction instruction is triggered only when the target interaction information satisfies the preset condition, so that the interactive operation on the avatar is more targeted, better improving the participation enthusiasm of audiences (users corresponding to the live broadcast browsing objects) in the interaction process.
In an optional embodiment, the determining the target avatar from the at least one preset avatar includes:
acquiring various interactive information executed by at least one target live broadcast browsing object in a virtual live broadcast room;
generating interaction index data corresponding to any target live broadcast browsing object based on various interaction information, wherein the interaction index data represents the interaction degree of any target live broadcast browsing object in a virtual live broadcast room;
and taking the preset virtual image of any target live broadcast browsing object with the interaction index data reaching the preset threshold value as the target virtual image.
In a specific embodiment, the multiple kinds of interaction information are preset interaction information used for screening the target avatar, and may be set according to the actual application. Optionally, the multiple kinds of interaction information may include at least two of: action information executed by the target live broadcast browsing object in the virtual live broadcast room to control a preset avatar, virtual resource information given by the target live broadcast browsing object to the live broadcast creation object in the virtual live broadcast room, the number of comments made by the target live broadcast browsing object in the virtual live broadcast room, and the like;
in a specific embodiment, each kind of interaction information can be quantized, in combination with a preset quantization rule, into a numerical value representing the degree of interaction of the target live broadcast browsing object in the virtual live broadcast room; the sum of the values corresponding to the various kinds of interaction information of each target live broadcast browsing object is then taken as the interaction index data corresponding to that target live broadcast browsing object.
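One simple form of such a preset quantization rule is a weighted sum over the interaction kinds. The weights and kind names below are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical per-kind weights; the actual quantization rule is preset
# by the implementation and may differ.
WEIGHTS = {"action": 1.0, "gift": 0.1, "comment": 0.5}

def interaction_index(interactions):
    """Quantize a viewer's interactions of several kinds into one index.

    `interactions` maps an interaction kind to its raw count or amount,
    e.g. {"action": 3, "gift": 20, "comment": 4}. The returned index is
    the weighted sum over all kinds and represents the viewer's degree
    of interaction in the virtual live broadcast room.
    """
    return sum(WEIGHTS.get(kind, 0.0) * value
               for kind, value in interactions.items())
```

Any viewer whose index reaches the preset threshold would then have their preset avatar taken as the target avatar.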
In a specific embodiment, the preset threshold may be a preset screening threshold of the target avatar; optionally, when the interaction index data corresponding to any target live broadcast browsing object reaches a preset threshold, the preset avatar of the target live broadcast browsing object may be used as the target avatar.
In a specific embodiment, in the scene of filtering the target avatar in combination with a plurality of kinds of interaction information executed by the target live view object in the virtual live broadcast room, the method may further include:
and under the condition that the interaction index data reaches a preset threshold value, triggering a target interaction instruction.
In the above embodiment, interaction index data that can represent the degree of interaction of the target live broadcast browsing object in the virtual live broadcast room is determined in combination with the various kinds of interaction information executed by that object, and is used to screen the target avatar, which can better improve the interactivity in the virtual live broadcast room. Triggering the target interaction instruction only when the interaction index data reaches the preset threshold makes the interactive operation on the avatar more targeted, better improving the participation enthusiasm of audiences (users corresponding to the live broadcast browsing objects) in the interaction process.
In a specific embodiment, the target interactive operation may be an interactive operation for the target avatar itself, such as an avatar update operation (shape, size, etc.) for the target avatar itself, and a focus operation for the target avatar.
In an alternative embodiment, in the case that the target interaction operation is an avatar update operation; in the target live broadcast page, the performing a target interactive operation on a target avatar in at least one preset avatar may include:
acquiring target image information corresponding to image updating operation;
and updating the target virtual image displayed in the target live broadcast page based on the target image information.
In a specific embodiment, the character update operation may include a pose update operation for a pose update of the target avatar, and may also include a size update operation for a size update of the target avatar, and the like.
In a specific embodiment, the target avatar information may be the rendering information required for updating the avatar of the target avatar. Taking the avatar update operation being a shape update operation as an example, the target avatar information may include rendering information of the model to be updated; taking the avatar update operation being a size update operation as an example, the target avatar information may include a scaling ratio. Optionally, in either case, the target avatar information may further include special effect rendering information used during the update of the target avatar.
In a specific embodiment, as shown in fig. 3, it is assumed that the target live broadcast page is a live broadcast page on the live broadcast browsing object side, the target interactive operation is an image update operation (modeling update operation), and the target avatar is an avatar 301; as shown in fig. 4, fig. 4 is a schematic diagram of another target live page provided in accordance with an example embodiment. The avatar corresponding to 401 may be the avatar after the avatar update operation is performed on the avatar 301.
In an optional embodiment, the updated avatar (the avatar on which the avatar update operation has been executed) of the target live browsing object corresponding to the target interactive instruction may be restored to the original target avatar after being displayed for a preset duration, or the updated avatar may continue to be displayed in the target live broadcast page.
In the above embodiment, when the target interaction operation is an avatar update operation, the avatar of the target avatar among a large number of preset avatars is updated based on the acquired target avatar information corresponding to the avatar update operation. This effectively highlights the target avatar, greatly improves its sense of presence and interactive participation in the virtual live broadcast room, and can further greatly improve the interactivity in the virtual live broadcast room and the immersion of the live broadcast atmosphere.
In an alternative embodiment, in the case where the target interaction operation is a focus operation; in the target live broadcast page, the performing a target interactive operation on a target avatar in at least one preset avatar may include:
acquiring focusing rendering information corresponding to focusing operation;
and performing focusing rendering on the target virtual image in the target live broadcast page based on the focusing rendering information.
In a specific embodiment, the focusing rendering information may be the rendering information required for focusing on the target avatar. Specifically, the focusing rendering mode may be set according to actual requirements, such as spotlight focusing, light spot surround focusing, or enlarging the proportion of the target avatar in the target live broadcast page through camera adjustment (e.g., moving the live broadcast lens in front of the target avatar).
In a specific embodiment, taking light spot surround focusing as an example, the focusing rendering information may be light spot rendering information, which may characterize the preset light spots to be rendered (for example, their number, size, and color); accordingly, the position information of the target avatar in the target live broadcast page may be determined, and the preset light spots may be rendered around the target avatar based on the position information and the light spot rendering information.
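Deriving the placement of the surrounding light spots from the avatar's on-page position can be sketched as follows; the geometry (evenly spaced spots on a circle) is an illustrative assumption:

```python
import math

def spot_positions(center, radius, count):
    """Compute positions of `count` preset light spots evenly spaced on a
    circle of `radius` around the target avatar's on-page position
    `center` (x, y). Spot size and color would come from the light spot
    rendering information; this only derives placement."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * i / count),
             cy + radius * math.sin(2 * math.pi * i / count))
            for i in range(count)]
```

A renderer would then draw each preset light spot at these coordinates, optionally animating the angle over time so the spots appear to orbit the target avatar.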
In another specific embodiment, for enlarging the proportion of the target avatar in the target live broadcast page through camera adjustment, the focusing rendering information may be camera movement information, such as the movement direction and movement distance of the lens; accordingly, the position information of the target avatar in the target live broadcast page may be determined, and the proportion of the target avatar in the target live broadcast page may be enlarged based on the position information and the camera movement information.
In a specific embodiment, as shown in fig. 3, it is assumed that the target live broadcast page is a live broadcast page on the live broadcast browsing object side, the target interaction operation is a focusing operation, and the target avatar is an avatar 301; as shown in fig. 5, fig. 5 is a schematic illustration of another target live page provided in accordance with an example embodiment. The avatar corresponding to 501 may be the avatar after the focusing operation is performed on the avatar 301.
In an optional embodiment, the focused avatar (the avatar on which the focusing operation has been executed) of the target live browsing object corresponding to the target interactive instruction may be restored to the original target avatar after being displayed for a preset duration, or the focused avatar may continue to be displayed in the target live broadcast page.
In the above embodiment, when the target interaction operation is a focusing operation, the target avatar among a large number of preset avatars is focused based on the acquired focusing rendering information corresponding to the focusing operation. This effectively highlights the target avatar, greatly improves its sense of presence and interactive participation in the virtual live broadcast room, and can further greatly improve the interactivity and the immersion of the live broadcast atmosphere in the virtual live broadcast room.
As can be seen from the technical solutions provided by the embodiments of the present specification, a preset avatar of at least one target live broadcast browsing object in the virtual live broadcast room is displayed on the target live broadcast page corresponding to the virtual live broadcast room, so that a live broadcast browsing object can appear in the virtual live broadcast room as a preset avatar; and when a target interaction instruction for a target avatar among the at least one preset avatar is triggered, a target interactive operation is executed on the target avatar in the target live broadcast page, so that the target avatar can be noticed among a large number of preset avatars. This greatly improves the sense of presence and interactive participation of the target avatar in the virtual live broadcast room, further improves the enthusiasm of audiences (users corresponding to the live broadcast browsing objects) for participating in interaction in the virtual live broadcast room, and at the same time improves the interactivity in the virtual live broadcast room and the immersion of the live broadcast atmosphere.
FIG. 6 is a block diagram illustrating an interaction processing device, according to an example embodiment. Referring to fig. 6, the apparatus includes:
a target live broadcast page display module 610 configured to execute displaying a target live broadcast page corresponding to the virtual live broadcast room, where the target live broadcast page shows a preset avatar of at least one target live broadcast browsing object in the virtual live broadcast room;
and the target interactive operation execution module 620 is configured to execute a target interactive operation on the target avatar in the target live broadcast page in response to a target interactive instruction for the target avatar in the at least one preset avatar.
In an alternative embodiment, in the case that the target interaction operation is an avatar update operation; the target interoperation performing module 620 includes:
a target character information acquisition unit configured to perform acquisition of target character information corresponding to the character update operation;
and the target avatar updating unit is configured to update the target avatar displayed in the target live broadcast page based on the target avatar information.
In an alternative embodiment, in the case where the target interaction operation is a focus operation; the target interoperation performing module 620 includes:
a focus rendering information acquisition unit configured to perform acquisition of focus rendering information corresponding to a focus operation;
a focus processing unit configured to perform focus rendering of the target avatar in the target live page based on the focus rendering information.
In an optional embodiment, the apparatus further comprises:
and the target virtual image determining module is configured to determine the target virtual image from at least one preset virtual image before executing target interaction operation aiming at the target virtual image in the target live broadcast page.
In an alternative embodiment, the target avatar determination module includes:
a first target avatar determination unit configured to perform a random selection of a target avatar from among at least one preset avatar.
In an alternative embodiment, the first target avatar determination unit includes:
the first target interaction instruction triggering module is configured to randomly select a target virtual image from at least one preset virtual image and trigger a target interaction instruction when the preset time is reached;
or,
and the second target interactive instruction triggering module is configured to randomly select a target avatar from at least one preset avatar under the condition that a live broadcast creation object in the virtual live broadcast room executes preset interactive operation, and trigger a target interactive instruction.
In an alternative embodiment, the target avatar determination module includes:
a target interaction information acquisition unit configured to acquire target interaction information executed by the at least one target live broadcast browsing object in the virtual live broadcast room;
and a second target avatar determination unit configured to execute a preset avatar of any target live browsing object whose target interaction information satisfies a preset condition as the target avatar.
In an optional embodiment, the apparatus further comprises:
and the third target interaction instruction triggering module is configured to execute triggering of the target interaction instruction under the condition that the target interaction information meets the preset condition.
In an alternative embodiment, the target avatar determination module includes:
the interactive information acquisition unit is configured to execute acquisition of various interactive information of at least one target live broadcast browsing object executed in the virtual live broadcast room;
an interaction index data generation unit configured to generate, based on the various kinds of interaction information, interaction index data corresponding to any target live broadcast browsing object, where the interaction index data represents the degree of interaction of the target live broadcast browsing object in the virtual live broadcast room;
and a third target avatar determination unit configured to execute a preset avatar of any target live browsing object having interaction index data reaching a preset threshold as the target avatar.
In an optional embodiment, the apparatus further comprises:
and the fourth target interaction instruction triggering module is configured to execute triggering of the target interaction instruction under the condition that the interaction index data reaches the preset threshold value.
In an optional embodiment, the apparatus further comprises:
the comment information acquisition module is configured to execute the acquisition of comment information of at least one live browsing object in the virtual live broadcasting room;
the virtual image control module is configured to control a preset virtual image of a target live broadcast browsing object to enter a virtual live broadcast room under the condition that comment information of any live broadcast browsing object is preset comment information;
any target live browsing object is an object of which the comment information is preset comment information in at least one live browsing object.
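The comment-triggered avatar entry described by these modules can be sketched as a simple filter. The trigger phrase and data shapes are hypothetical; the actual preset comment information is implementation-defined:

```python
PRESET_COMMENT = "join"  # assumed trigger phrase, for illustration only

def viewers_to_admit(comments):
    """Given (viewer_id, comment_text) pairs collected from the virtual
    live broadcast room, return the viewers whose comment matches the
    preset comment information, i.e. whose preset avatars should be
    controlled to enter the virtual live broadcast room."""
    return [viewer for viewer, text in comments if text == PRESET_COMMENT]
```

The avatar control module would then instantiate and display a preset avatar for each returned viewer.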
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an electronic device for interactive processing, which may be a terminal, according to an exemplary embodiment, and an internal structure thereof may be as shown in fig. 7. The terminal may include RF (radio frequency) circuitry 710, memory 720 including one or more computer-readable storage media, input unit 730, display unit 740, sensor 750, audio circuitry 760, wiFi (wireless fidelity) module 770, processor 780 including one or more processing cores, and power supply 790. Those skilled in the art will appreciate that the terminal structure shown in fig. 7 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
The RF circuit 710 may be used for receiving and transmitting signals during message transmission or a call; in particular, it may receive downlink information from a base station and deliver it to the one or more processors 780 for processing, and transmit uplink data to the base station. In general, the RF circuitry 710 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, an LNA (low noise amplifier), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with a network and other terminals through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (global system for mobile communications), GPRS (general packet radio service), CDMA (code division multiple access), WCDMA (wideband code division multiple access), LTE (long term evolution), email, SMS (short messaging service), etc.
The memory 720 may be used to store software programs and modules, and the processor 780 performs various functional applications and data processing by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for functions, and the like; the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 720 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory 720 may also include a memory controller to provide the processor 780 and the input unit 730 with access to the memory 720.
The input unit 730 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 730 may include a touch-sensitive surface 731 as well as other input devices 732. Touch-sensitive surface 731, also referred to as a touch display screen or touch pad, can collect touch operations by a user on or near touch-sensitive surface 731 (e.g., operations by a user on or near touch-sensitive surface 731 using a finger, stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 731 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 780, and can receive and execute commands from the processor 780. In addition, the touch-sensitive surface 731 can be implemented in a variety of types, including resistive, capacitive, infrared, and surface acoustic wave. The input unit 730 may also include other input devices 732 in addition to the touch-sensitive surface 731. In particular, other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 740 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of an LCD (liquid crystal display), an OLED (organic light-emitting diode), or the like. Further, the touch-sensitive surface 731 may overlay the display panel 741, so that when the touch-sensitive surface 731 detects a touch operation on or near it, the processor 780 can determine the type of the touch event and then provide a corresponding visual output on the display panel 741 based on that type. Although in fig. 7 the touch-sensitive surface 731 and the display panel 741 are implemented as two separate components to realize the input and output functions, in some embodiments the touch-sensitive surface 731 and the display panel 741 may be integrated to implement the input and output functions.
The terminal may also include at least one sensor 750, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 741 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 741 and/or a backlight when the terminal moves to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when the mobile terminal is stationary, and can be used for applications of recognizing terminal gestures (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the terminal, detailed description is omitted here.
The audio circuitry 760, speaker 761, and microphone 762 may provide an audio interface between the user and the terminal. The audio circuit 760 may transmit the electrical signal converted from received audio data to the speaker 761, where it is converted into a sound signal and output; on the other hand, the microphone 762 converts a collected sound signal into an electrical signal, which is received by the audio circuit 760 and converted into audio data; the audio data is then output to the processor 780 for processing and transmitted, for example, to another terminal via the RF circuit 710, or output to the memory 720 for further processing. The audio circuitry 760 may also include an earphone jack to provide communication between peripheral headphones and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 770, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband internet access. Although fig. 7 shows the WiFi module 770, it is understood that it is not an essential component of the terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 780 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 720 and calling data stored in the memory 720, thereby monitoring the terminal as a whole. Optionally, processor 780 may include one or more processing cores; preferably, the processor 780 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 780.
The terminal also includes a power supply 790 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 780 via a power management system that may be used to manage charging, discharging, and power consumption. The power supply 790 may also include any component including one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like, which are not described herein. Specifically, in this embodiment, the display unit of the terminal is a touch-screen display, and the terminal further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors according to the instructions of the method embodiments of the present invention.
In an exemplary embodiment, there is also provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the interaction processing method as in the embodiments of the present disclosure.
In an exemplary embodiment, there is also provided a computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the interaction processing method in the embodiments of the present disclosure.
In an exemplary embodiment, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the interaction processing method in the embodiments of the present disclosure.
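As a purely illustrative, non-limiting sketch of how such a program might organize the interaction operations described above (avatar update and focus rendering), the following Python fragment is offered; all class, field, and operation names here are hypothetical and are not taken from the patent itself.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Avatar:
    owner: str                   # the target live broadcast browsing object (viewer)
    appearance: str = "default"  # current avatar appearance information
    focused: bool = False        # whether focus rendering is applied

@dataclass
class LivePage:
    avatars: list = field(default_factory=list)

    def handle_instruction(self, target: Avatar, op: str, payload=None):
        """Perform a target interaction operation on the target avatar."""
        if op == "update":       # avatar update operation
            target.appearance = payload
        elif op == "focus":      # focus (rendering) operation
            target.focused = True
        else:
            raise ValueError(f"unknown operation: {op}")

    def pick_random_target(self) -> Avatar:
        """Randomly select the target avatar from the preset avatars."""
        return random.choice(self.avatars)

page = LivePage()
page.avatars = [Avatar("viewer_a"), Avatar("viewer_b")]
page.handle_instruction(page.avatars[0], "update", "festive_outfit")
page.handle_instruction(page.avatars[1], "focus")
```

In this sketch, the target interaction instruction is modeled as a simple operation tag plus payload; an actual implementation would carry richer rendering information.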
Those skilled in the art will understand that all or part of the processes of the methods in the embodiments described above may be implemented by a computer program instructing related hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
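The embodiments also describe selecting the target avatar by aggregating various kinds of interaction information into interaction index data and comparing it against a preset threshold. A minimal, purely illustrative sketch of that selection strategy follows; the weights, function names, and data shapes are all hypothetical assumptions, not details from the patent.

```python
import random

# Hypothetical weights for different kinds of interaction information
WEIGHTS = {"comment": 1.0, "like": 0.5, "gift": 5.0}

def interaction_index(interactions: dict) -> float:
    """Aggregate various kinds of interaction information into one index."""
    return sum(WEIGHTS.get(kind, 0.0) * count
               for kind, count in interactions.items())

def select_target(viewers: dict, threshold: float):
    """Return one viewer whose interaction index reaches the preset threshold;
    that viewer's preset avatar would then serve as the target avatar."""
    eligible = [v for v, acts in viewers.items()
                if interaction_index(acts) >= threshold]
    return random.choice(eligible) if eligible else None

viewers = {
    "viewer_a": {"comment": 2, "like": 4},  # index = 2*1.0 + 4*0.5 = 4.0
    "viewer_b": {"gift": 1, "comment": 1},  # index = 1*5.0 + 1*1.0 = 6.0
}
target = select_target(viewers, threshold=5.0)  # only viewer_b qualifies
```

Reaching the threshold could also serve as the event that triggers the target interaction instruction, as the embodiments suggest.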
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. An interaction processing method, comprising:
displaying a target live broadcast page corresponding to a virtual live broadcast room, wherein the target live broadcast page displays a preset avatar of at least one target live broadcast browsing object in the virtual live broadcast room; and
in response to a target interaction instruction for a target avatar among the at least one preset avatar, performing a target interaction operation on the target avatar in the target live broadcast page.
2. The interaction processing method according to claim 1, wherein, in a case where the target interaction operation is an avatar update operation, the performing the target interaction operation on the target avatar among the at least one preset avatar in the target live broadcast page comprises:
acquiring target appearance information corresponding to the avatar update operation; and
updating the target avatar displayed in the target live broadcast page based on the target appearance information.
3. The interaction processing method according to claim 1, wherein, in a case where the target interaction operation is a focus operation, the performing the target interaction operation on the target avatar among the at least one preset avatar in the target live broadcast page comprises:
acquiring focus rendering information corresponding to the focus operation; and
performing focus rendering on the target avatar in the target live broadcast page based on the focus rendering information.
4. The interaction processing method according to any one of claims 1 to 3, wherein, before the performing the target interaction operation on the target avatar among the at least one preset avatar in the target live broadcast page, the method further comprises:
determining the target avatar from the at least one preset avatar.
5. The interaction processing method according to claim 4, wherein the determining the target avatar from the at least one preset avatar comprises:
randomly selecting the target avatar from the at least one preset avatar.
6. The interaction processing method according to claim 5, wherein the randomly selecting the target avatar from the at least one preset avatar comprises:
randomly selecting the target avatar from the at least one preset avatar and triggering the target interaction instruction when a preset time is reached;
or
randomly selecting the target avatar from the at least one preset avatar and triggering the target interaction instruction when a live broadcast creation object of the virtual live broadcast room performs a preset interaction operation.
7. The interaction processing method according to claim 4, wherein the determining the target avatar from the at least one preset avatar comprises:
acquiring target interaction information performed by the at least one target live broadcast browsing object in the virtual live broadcast room; and
taking the preset avatar of any target live broadcast browsing object whose target interaction information meets a preset condition as the target avatar.
8. The interaction processing method according to claim 7, further comprising:
triggering the target interaction instruction in a case where the target interaction information meets the preset condition.
9. The interaction processing method according to claim 4, wherein the determining the target avatar from the at least one preset avatar comprises:
acquiring various kinds of interaction information performed by the at least one target live broadcast browsing object in the virtual live broadcast room;
generating interaction index data corresponding to any target live broadcast browsing object based on the various kinds of interaction information, wherein the interaction index data represents the degree of interaction of the target live broadcast browsing object in the virtual live broadcast room; and
taking the preset avatar of any target live broadcast browsing object whose interaction index data reaches a preset threshold as the target avatar.
10. The interaction processing method according to claim 9, further comprising:
triggering the target interaction instruction in a case where the interaction index data reaches the preset threshold.
11. The interaction processing method according to any one of claims 1 to 3, further comprising:
acquiring comment information of at least one live broadcast browsing object in the virtual live broadcast room; and
in a case where the comment information of any live broadcast browsing object is preset comment information, controlling the preset avatar of that target live broadcast browsing object to enter the virtual live broadcast room,
wherein any target live broadcast browsing object is an object, among the at least one live broadcast browsing object, whose comment information is the preset comment information.
12. An interaction processing apparatus, comprising:
a target live broadcast page display module configured to display a target live broadcast page corresponding to a virtual live broadcast room, wherein the target live broadcast page displays a preset avatar of at least one target live broadcast browsing object in the virtual live broadcast room; and
a target interaction operation execution module configured to, in response to a target interaction instruction for a target avatar among the at least one preset avatar, perform a target interaction operation on the target avatar in the target live broadcast page.
13. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the interaction processing method of any one of claims 1 to 11.
14. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the interaction processing method of any one of claims 1 to 11.
CN202211097457.XA 2022-09-08 2022-09-08 Interaction processing method and device, electronic equipment and storage medium Pending CN115643445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211097457.XA CN115643445A (en) 2022-09-08 2022-09-08 Interaction processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115643445A true CN115643445A (en) 2023-01-24

Family

ID=84942491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211097457.XA Pending CN115643445A (en) 2022-09-08 2022-09-08 Interaction processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115643445A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116467020A (en) * 2023-03-08 2023-07-21 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium
CN116467020B (en) * 2023-03-08 2024-03-19 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109905754B (en) Virtual gift receiving method and device and storage equipment
CN113157906B (en) Recommendation information display method, device, equipment and storage medium
CN106791893A (en) Net cast method and device
CN113965807B (en) Message pushing method, device, terminal, server and storage medium
CN105979312B (en) Information sharing method and device
CN106973330B (en) Screen live broadcasting method, device and system
CN107333162B (en) Method and device for playing live video
CN107908765B (en) Game resource processing method, mobile terminal and server
CN108958587B (en) Split screen processing method and device, storage medium and electronic equipment
CN109426343B (en) Collaborative training method and system based on virtual reality
CN111491197A (en) Live content display method and device and storage medium
CN112969087B (en) Information display method, client, electronic equipment and storage medium
CN107272896B (en) Method and device for switching between VR mode and non-VR mode
CN109166164B (en) Expression picture generation method and terminal
CN115643445A (en) Interaction processing method and device, electronic equipment and storage medium
CN112256181B (en) Interaction processing method and device, computer equipment and storage medium
KR102263977B1 (en) Methods, devices, and systems for performing information provision
CN113485596B (en) Virtual model processing method and device, electronic equipment and storage medium
CN115022653A (en) Information display method and device, electronic equipment and storage medium
CN115017406A (en) Live broadcast picture display method and device, electronic equipment and storage medium
CN114547436A (en) Page display method and device, electronic equipment and storage medium
CN115017340A (en) Multimedia resource generation method and device, electronic equipment and storage medium
CN114100121A (en) Operation control method, device, equipment, storage medium and computer program product
CN116017086A (en) Interactive processing method and device, electronic equipment and storage medium
CN116248942A (en) Interactive processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination