CN114288654B - Live interactive method, device, equipment, storage medium and computer program product - Google Patents


Info

Publication number: CN114288654B
Application number: CN202111658272.7A
Other languages: Chinese (zh)
Other versions: CN114288654A
Authority: CN (China)
Legal status: Active
Prior art keywords: virtual object, viewing, virtual, account, area
Inventors: 钱杉杉, 林琳, 梁皓辉
Original and current assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Classification (Landscapes): User Interface Of Digital Computer (AREA)
Abstract


The present application discloses a live interaction method, device, equipment, storage medium and computer program product, relating to the field of live broadcast programs. The method includes: receiving a live broadcast room entry operation, where the operation instructs the currently logged-in target account to enter the live broadcast room as a spectator and watch the live broadcast; displaying the live broadcast picture of a virtual game, where the virtual scene of the virtual game includes a battle area and a viewing area; displaying, in the battle area, a first virtual object controlled by the anchor account corresponding to the live broadcast room; and displaying, in the viewing area, a second virtual object controlled by the target account. That is, after the target account enters the live broadcast room as a spectator, the virtual game scene in the current live broadcast picture provides a viewing area that displays the second virtual object corresponding to the target account during live viewing, thereby increasing the frequency of human-computer interaction.

Description

Live interaction method, device, equipment, storage medium and computer program product
The present application claims priority from Chinese patent application No. 202111476539.0, entitled "live interaction method, apparatus, device, storage medium and computer program product," filed on December 6, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of live broadcast programs, and in particular, to a live broadcast interaction method, apparatus, device, storage medium, and computer program product.
Background
Self-propelled chess (auto chess) is an emerging chess-like game: after the two players place the combat objects they already own (commonly called "chess pieces") on the chessboard, the game application automatically controls the combat objects to fight and outputs the battle result.
During a live broadcast of a self-propelled chess game, the audience usually watches the game from the anchor's viewing angle after entering the live broadcast room. While watching, a viewer can have a corresponding gift special effect or message special effect displayed on the live broadcast interface by sending gifts or leaving messages, thereby interacting with the anchor who is live-streaming the game.
However, with this approach the live audience relies only on consumption behavior or messages to enhance their sense of presence; the audience's sense of immersion in the game cannot be enhanced intuitively, which reduces human-computer interaction efficiency.
Disclosure of Invention
The embodiments of the present application provide a live interaction method, device, equipment, storage medium, and computer program product, which can improve human-computer interaction efficiency. The technical solution is as follows:
in one aspect, a live interaction method is provided, the method including:
receiving a live broadcasting room entering operation, wherein the live broadcasting room entering operation is used for indicating a currently logged-in target account to enter a live broadcasting room for live broadcast watching in the identity of a spectator;
based on the live broadcasting room entering operation, displaying a live broadcast picture of a virtual match, wherein the virtual scene of the virtual match comprises a match area and a sightseeing area (that is, a spectating area for the audience), and the match area is used for carrying out the match process of the virtual match;
displaying a first virtual object corresponding to an anchor account in the match area, wherein the first virtual object is controlled by the anchor account corresponding to the live broadcasting room;
and displaying a second virtual object corresponding to the target account in the sightseeing area, wherein the second virtual object is controlled by the target account.
In another aspect, a live interaction device is provided, the device including:
the receiving module is used for receiving a live broadcasting room entering operation, wherein the live broadcasting room entering operation is used for indicating a currently logged-in target account to enter a live broadcasting room for live broadcast watching in the identity of a spectator;
the display module is used for displaying a live broadcast picture of a virtual match based on the live broadcasting room entering operation, wherein the virtual scene of the virtual match comprises a match area and a sightseeing area, and the match area is used for carrying out the match process of the virtual match;
the display module is further used for displaying a first virtual object corresponding to the anchor account in the match area, wherein the first virtual object is controlled by the anchor account corresponding to the live broadcasting room;
the display module is further configured to display a second virtual object corresponding to the target account in the sightseeing area, where the second virtual object is controlled by the target account.
In another aspect, a computer device is provided, where the device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement a live interaction method according to any one of the embodiments of the present application.
In another aspect, a computer readable storage medium is provided, where at least one program code is stored in the computer readable storage medium, and the program code is loaded and executed by a processor of a terminal device to implement the live interaction method according to any one of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the live interaction method according to any of the above embodiments.
The technical scheme provided by the application at least comprises the following beneficial effects:
After the operation of entering the live broadcasting room is received, a live broadcast picture of the virtual game is displayed in the current interface. Besides the first virtual object controlled by the anchor account being displayed in the fight area contained in the virtual scene of the virtual game, a sightseeing area is also arranged in the virtual scene and is used for displaying the second virtual object corresponding to the target account that entered in the identity of an audience member. This enhances the audience's sense of immersion while watching the game live broadcast, thereby improving the audience's interest in the game and increasing the frequency of human-computer interaction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of a live interface provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a live interaction method provided by an exemplary embodiment of the present application;
FIG. 4 is a flowchart of a live interaction method provided by another exemplary embodiment of the present application;
FIG. 5 is a schematic view of a view area display provided in an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a live interaction method provided by another exemplary embodiment of the present application;
FIG. 7 is a diagram of text chat interactions provided in an exemplary embodiment of the application;
FIG. 8 is a diagram of a magic expression pack display provided by an exemplary embodiment of the present application;
FIG. 9 is a schematic illustration of the interaction of the expression packs provided by an exemplary embodiment of the present application;
FIG. 10 is a flow chart of a method for displaying a sightseeing area according to another exemplary embodiment of the present application;
FIG. 11 is a schematic view of a first sightseeing area display provided in accordance with an exemplary embodiment of the present application;
FIG. 12 is a flowchart of live interaction feedback provided by an exemplary embodiment of the present application;
FIG. 13 is a block diagram of a live interaction device provided by an exemplary embodiment of the present application;
FIG. 14 is a block diagram of a live interaction device provided in another exemplary embodiment of the present application;
fig. 15 is a block diagram of a terminal structure according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
First, the terms involved in the embodiments of the present application will be briefly described:
Live broadcasting refers to a technique of collecting the anchor's data through equipment and converting it, through a series of processes (for example, encoding and compressing the video), into a video stream that can be transmitted and output to a viewing terminal for playing. The live broadcast application provided by the embodiments of the present application is a self-media platform application: after a user registers an account in the live broadcast application, the user can open a live broadcast room in which the user acts as the anchor. Opening a live broadcast room may or may not be subject to conditions. In some embodiments, the user account opens a live broadcast room by applying for qualification; in other embodiments, the user account directly selects to start live broadcasting in the user interface of the live broadcast application and can open the live broadcast room after filling in the live broadcast room information. In some embodiments, the user account can also act as a viewer account to watch the live video of an anchor account.
Virtual environment: the virtual environment that an application displays (or provides) while running on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-imaginary environment, or a purely imaginary environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment and a three-dimensional virtual environment, which is not limited in the present application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
The self-propelled chess game is a novel multi-player combat strategy game. In a self-propelled chess game, the user can freely combine virtual objects (namely "chess pieces") to form a virtual object lineup and fight against a hostile virtual object lineup.
The chessboard refers to the area where preparation and combat take place in the battle interface of a self-propelled chess game, and can be any one of a two-dimensional virtual chessboard, a 2.5-dimensional virtual chessboard and a three-dimensional virtual chessboard, which is not limited in the present application.
The chessboard is divided into a combat area and a preparation area. The combat area comprises a plurality of combat grids of the same size, which are used for placing combat pieces during the combat process; the preparation area comprises a plurality of grids used for placing preparation pieces, which cannot take part in combat during the combat process but can be dragged into the combat area during the preparation stage.
Regarding the arrangement of the grids in the combat area, in one possible embodiment, the combat area includes n (rows) x m (columns) combat grids, where n is an integer multiple of 2, and two adjacent rows of grids are either aligned or staggered. In addition, the combat area is divided by row into two parts, namely a home-side combat area and an enemy combat area, and in the preparation stage the user can only place pieces in the home-side combat area.
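As an illustration of the grid arrangement just described, the following TypeScript sketch shows one possible in-memory representation of such a combat area (n rows x m columns, split by row into a home half and an enemy half). All type and function names are assumptions made for this example and are not defined by the present application.

```typescript
// Illustrative sketch only: one possible representation of the combat-area grid
// described above (n rows x m columns, n an even number, split by row into a
// home half and an enemy half). Names and layout are assumptions, not the patent's API.

type Cell = { row: number; col: number; pieceId: string | null };

interface CombatArea {
  rows: number;            // n, an integer multiple of 2
  cols: number;            // m
  cells: Cell[][];         // cells[row][col]
}

function createCombatArea(rows: number, cols: number): CombatArea {
  if (rows % 2 !== 0) throw new Error("row count must be a multiple of 2");
  const cells = Array.from({ length: rows }, (_, row) =>
    Array.from({ length: cols }, (_, col) => ({ row, col, pieceId: null }))
  );
  return { rows, cols, cells };
}

// During the preparation stage the user may only place pieces on the home half
// (here assumed, for illustration, to be the lower half of the board).
function canPlaceInPreparation(area: CombatArea, row: number): boolean {
  return row >= area.rows / 2;
}
```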
Virtual object: refers to a movable object in the virtual environment. The movable object may be a virtual chess piece, a virtual character, a virtual animal, a cartoon character and the like, such as a character, an animal, a plant, an oil drum, a wall or a stone displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional model created based on animation skeleton technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a part of the space in the three-dimensional virtual environment.
In the embodiment of the present application, the virtual objects include the different combat units in the self-propelled chess game, as well as a control object that can move freely during the game. The virtual object may be, for example, a chess piece or a virtual character. The user can purchase, sell and upgrade the virtual objects. The control object can be one of various virtual characters, and by controlling the main control object to move freely in the virtual game, the user can obtain corresponding rewards generated by the game, such as gold coin rewards, equipment rewards, game item rewards and the like.
Virtual game play refers to game play in which at least two virtual objects are in a virtual environment. In the embodiment of the present application, the virtual match is a match made up of at least two rounds of combat processes, that is, the virtual match includes multiple rounds of combat processes.
Fig. 1 shows a schematic diagram of a live broadcast interface provided by an exemplary embodiment of the present application. As shown in fig. 1, the current interface displays a live broadcast picture 100 after the target account enters the live broadcast room in the identity of a viewer. A virtual game is displayed on the live broadcast picture 100, and the virtual scene of the virtual game includes a fight area 101 and a sightseeing area 102 (both the left side and the right side of the fight area 101 shown in fig. 1 are the sightseeing area 102). The fight area 101 is used for carrying out the virtual game; a first virtual object 103 is displayed in the fight area 101 and is controlled by the anchor account corresponding to the live broadcast room, and a second virtual object 104 is displayed in the sightseeing area 102 and is controlled by the target account (viewer identity).
Fig. 2 is a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application, as shown in fig. 2, where the implementation environment includes a terminal 210 and a server 220, where the terminal 210 and the server 220 are connected through a communication network 230.
The live broadcast application provided in the embodiment of the present application is installed in the terminal 210, and the live broadcast application is associated with a target application. The user runs the live broadcast application with the terminal 210, and the interface of the terminal 210 displays the live broadcast picture corresponding to the running interface of the live broadcast application, where the live broadcast picture includes the running picture of the target application.
The target application is an application supporting a three-dimensional virtual environment. The target application may be any one of a virtual reality application, a three-dimensional map application, a self-propelled chess game, an educational game, a third-person shooting game (TPS), a first-person shooting game (FPS), a multiplayer online battle arena game (MOBA) and a multiplayer gunfight survival game. The target application may be a stand-alone application, such as a stand-alone three-dimensional game program, or a network online application.
The server 220 is configured to receive video streaming data of live video from the anchor terminal and transmit the video streaming data to the viewer terminal to play the live video.
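The relay role of the server 220 can be pictured with the following minimal TypeScript sketch, in which an in-memory registry forwards each encoded stream chunk received from the anchor terminal to every viewer terminal currently in the room. The data structures and the callback-based delivery are illustrative assumptions only; no specific streaming library or protocol is implied.

```typescript
// Minimal illustrative sketch of the relay role described above: the server
// accepts encoded stream chunks from the anchor terminal of a live room and
// forwards them to every viewer terminal currently in that room.
// The data structures and delivery callback are assumptions for illustration only.

type StreamChunk = Uint8Array;
type Deliver = (chunk: StreamChunk) => void;

class LiveRoomRelay {
  // roomId -> (viewerId -> delivery callback)
  private viewers = new Map<string, Map<string, Deliver>>();

  joinRoom(roomId: string, viewerId: string, deliver: Deliver): void {
    if (!this.viewers.has(roomId)) this.viewers.set(roomId, new Map());
    this.viewers.get(roomId)!.set(viewerId, deliver);
  }

  leaveRoom(roomId: string, viewerId: string): void {
    this.viewers.get(roomId)?.delete(viewerId);
  }

  // Called whenever a new chunk of the anchor's encoded video stream arrives.
  onAnchorChunk(roomId: string, chunk: StreamChunk): void {
    for (const deliver of this.viewers.get(roomId)?.values() ?? []) {
      deliver(chunk);
    }
  }
}
```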
In some embodiments, when the live video is a live broadcast of a game, taking a self-propelled chess game as an example, after the audience terminal receives a live room entry operation, the live broadcast picture of a virtual game is displayed on the interface of the audience terminal. The virtual game picture includes a sightseeing area and a fight area, where a first virtual object is displayed in the fight area and is controlled by the anchor account corresponding to the live broadcast room, and a second virtual object is displayed in the sightseeing area and is controlled by the target account corresponding to the audience terminal.
The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a vehicle-mounted terminal, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
It should be noted that the server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), and basic cloud computing services such as big data and an artificial intelligence platform.
Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software and networks in a wide area network or a local area network to realize computing, storage, processing and sharing of data. Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied on the basis of the cloud computing business model; it can form a resource pool, which is used on demand and is flexible and convenient. Cloud computing technology will become an important support. Background services of technical network systems require a large amount of computing and storage resources, such as video websites, picture websites and other portal websites. Along with the rapid development and application of the internet industry, each article may have its own identification mark in the future, which needs to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong system backing support, which can only be realized through cloud computing.
In some embodiments, the servers described above may also be implemented as nodes in a blockchain system. Blockchain (Blockchain) is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, encryption algorithms, and the like. The blockchain is essentially a decentralised database, and is a series of data blocks which are generated by association by using a cryptography method, and each data block contains information of a batch of network transactions and is used for verifying the validity (anti-counterfeiting) of the information and generating a next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
Referring to fig. 3, a flowchart of a live interaction method according to an embodiment of the present application is shown. In the embodiment of the present application, the method is described, by way of illustration, as applied to the terminal 210 shown in fig. 2, and the method includes:
Step 301, receiving a live room entry operation.
The live broadcasting room entering operation is used for indicating the currently logged-in target account to enter the live broadcasting room for live broadcasting watching in the identity of a spectator.
In some embodiments, the live broadcast room is an online virtual room correspondingly opened by the anchor account. The live broadcast room generally includes the anchor account, an administrator account and audience accounts, where the administrator account is used for performing live broadcast management on the live broadcast room, including comment management, live broadcast picture management, live broadcast flow management and the like, and an audience account refers to an account that watches the live broadcast picture in the live broadcast room in the identity of a viewer.
Optionally, the live room entry operation includes at least one of:
1. The terminal runs a live broadcast application with a live viewing function, and a live room list containing at least one live room is displayed on the running interface of the live broadcast application; the terminal receives a trigger operation on a designated live room and enters that live room, where the trigger operation includes a click operation, a long-press operation, a slide operation and the like;
2. an invitation code of a designated live room is obtained; the current terminal displays a promotional interface for the designated live room, where the promotional interface includes a trigger control, and by triggering the trigger control and inputting the invitation code corresponding to the designated live room, the terminal enters the designated live room;
3. the terminal runs a designated application that hosts an applet with a live viewing function, the applet being a program that runs with the designated application as its host program; when a live information link is displayed in the designated application, the live room information corresponding to the link is displayed through the live information link, and when a click operation on the live room information is received, the terminal jumps to the applet with the live viewing function and enters the live room;
4. a specific identification code corresponding to the designated live room (such as a bar code or a two-dimensional code, which is not limited here) is obtained; the current terminal runs a target application with a live viewing function, opens the code scanning function of the target application, scans the identification code of the designated live room, and the terminal interface jumps to the live room.
It should be noted that the above live room entry operations are only illustrative examples, and the embodiment of the present application does not limit the live room entry operation in any way; a minimal dispatch sketch of these entry operations is given after this note.
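As referenced above, the following is a minimal TypeScript sketch of dispatching the four kinds of entry operation to a single room-entry action. The union type and the helper functions are hypothetical and exist only to keep the example self-contained.

```typescript
// Illustrative sketch only: dispatching the four kinds of live-room entry
// operation listed above to a single "enter room" action. The union type and
// helper names are assumptions made for this example.

type LiveRoomEntry =
  | { kind: "list-tap"; roomId: string }      // tap a room in the live room list
  | { kind: "invite-code"; code: string }     // enter an invitation code
  | { kind: "shared-link"; url: string }      // open a live-info link in a host app / applet
  | { kind: "scan-code"; payload: string };   // scan a bar code / QR code

function resolveRoomId(entry: LiveRoomEntry): string {
  switch (entry.kind) {
    case "list-tap":
      return entry.roomId;
    case "invite-code":
      return lookupRoomByInviteCode(entry.code);
    case "shared-link":
      return parseRoomIdFromUrl(entry.url);
    case "scan-code":
      return parseRoomIdFromUrl(entry.payload);
  }
}

// Hypothetical helpers, shown only to keep the sketch self-contained.
function lookupRoomByInviteCode(code: string): string { return `room-for-${code}`; }
function parseRoomIdFromUrl(url: string): string { return url.split("/").pop() ?? ""; }
```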
Illustratively, the live viewing content includes, but is not limited to, live viewing of the anchor account's ongoing game content, or playback viewing of the anchor account's historical live content.
Step 302, based on the live room entering operation, displaying a live screen of the virtual game.
The virtual scene of the virtual match comprises a match area and a sightseeing area, wherein the match area is used for performing a match process of the virtual match.
In some embodiments, after the target account enters the live broadcast room in the identity of a viewer, the terminal interface displays the live broadcast picture of the live broadcast room, where the virtual game shown by the live broadcast picture is the virtual game in which the anchor account corresponding to the live broadcast room is currently participating, that is, the virtual game includes the anchor account. The live broadcast picture is displayed at the first-person viewing angle corresponding to the anchor account, or at a third-person viewing angle; or, when the virtual game further includes an opponent account, the live broadcast picture can be displayed at the viewing angle corresponding to the opponent account, which is not limited here. When at least two of these three display situations are implemented, the display can be switched between them at will.
Optionally, the fight area includes a preparation area and a combat area, where the preparation area is located on one side or both sides of the combat area, or in an area extending outward from the combat area, or at a corner of the combat area; the fight area includes at least one preparation area, which is not limited here.
Optionally, the sightseeing area is set up jointly by the designated application corresponding to the virtual game and the live broadcast application. That is, when the designated application is connected to and cooperates with the live broadcast application, the terminal interface displays the running interface of the designated application (that is, the virtual game picture), and the virtual game picture that already contains the sightseeing area is projected to the live broadcast picture corresponding to the anchor account for real-time display. Alternatively, the sightseeing area is additionally added to the virtual game by the live broadcast application. That is, when the designated application is connected to and cooperates with the live broadcast application, the live broadcast room corresponding to the anchor account is opened while the live broadcast application is running, and the picture of the designated application run by the anchor terminal (that is, the virtual game picture) is projected to the live broadcast room; the live broadcast application then adds the sightseeing area to the virtual scene displayed in the live broadcast room, that is, the virtual scene does not contain the sightseeing area before the virtual game picture is projected to the live broadcast room, which is not limited here.
Optionally, the sightseeing area is located on one side or both sides of the fight area, or in an area extending outward from the fight area, or at a corner of the fight area, and the virtual scene of the virtual game includes at least one sightseeing area, which is not limited here.
Schematically, the fight area in the virtual scene is used for displaying the ongoing virtual fight process of the anchor account, or displaying a preparation picture of the anchor account about to perform virtual fight, or displaying the playback record of the history virtual fight of the anchor account.
Schematically, the sightseeing area in the virtual scene is used for displaying the audience accounts that are currently watching the live broadcast in the live broadcast room, where the display mode includes at least one of the following:
1. displaying, in the sightseeing area, the account names of the audience accounts that are watching the live broadcast;
2. displaying, in the sightseeing area, the account name cards corresponding to the audience accounts that are watching the live broadcast, where an account name card includes the account name, account profile information, and the account's live consumption behavior with respect to the live broadcast room, such as a "contribution value", "ranking value", "gift-giving record" and the like;
3. each audience account corresponds to a designated virtual object, and the virtual object corresponding to each audience account currently watching the live broadcast is displayed in the sightseeing area. The designated virtual object is preset for each audience account by the server, that is, the designated virtual object corresponding to each audience account is fixed and unchangeable; or a list of virtual objects is displayed after the audience account enters the live broadcast room and the user can select any one of them as the designated virtual object corresponding to the current audience account; or the user can generate a designated virtual object through personalized creation, including face-pinching and the like, which is not limited here.
It should be noted that the above manner of displaying the sightseeing area is only an illustrative example, and the embodiment of the present application does not limit the manner of displaying the sightseeing area in any way.
The following steps 3031 to 3032 are two steps shown in parallel.
In step 3031, a first virtual object corresponding to the anchor account is displayed in the fight zone.
The first virtual object is controlled by a main broadcasting account corresponding to the live broadcasting room.
In some embodiments, the first virtual object is controlled by the main broadcast account corresponding to the current live broadcast room, and is used for indicating identity information corresponding to the main broadcast account, that is, the first virtual object represents the identity of the main broadcast player of the live broadcast room.
Illustratively, the first virtual object can be controlled by the current anchor account to move freely in the combat area and the preparation area within the fight area. Alternatively, the anchor account may control the first virtual object to pick up combat rewards displayed in the fight area after a round of the virtual game is completed, such as gold coin rewards and equipment material rewards; or the anchor account may control the first virtual object to perform interactive content in the current fight area, such as a chat session or an action performance, where the interactive content includes interaction with an opponent player's account, or interaction with an audience account in the sightseeing area, which is not limited here.
In some embodiments, the fight area further displays game virtual objects, where a game virtual object is configured to automatically perform combat actions according to its object attributes when participating in a virtual game, and the virtual game is a match made up of at least two rounds of combat. In the embodiment of the present application, the virtual game is a match in a self-propelled chess game, and the game virtual objects are combat units in the self-propelled chess game. Optionally, a game virtual object has an object level. Taking a self-propelled chess game as an example, the object level of a game virtual object can be promoted by synthesizing a specified number of game virtual objects of the same level; for example, two identical one-star game virtual objects can be synthesized into one two-star game virtual object, so that the corresponding combat attributes of the game virtual object (such as mana value, health value, attack power and the like) are promoted. Optionally, a game virtual object can be promoted automatically, for example, when the virtual game contains the specified number of identical game virtual objects of the same level and the level promotion requirement is met, they are synthesized automatically; or the level promotion of the game virtual object is realized manually, that is, the user autonomously chooses to synthesize the game virtual objects that meet the level promotion requirement.
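The automatic level promotion described above can be sketched as follows in TypeScript, assuming that a specified number of identical same-level pieces (two in the example from the text) merge into one piece of the next level. The data shapes and the merge routine are assumptions for illustration, not the game's actual implementation.

```typescript
// Illustrative sketch of the automatic level promotion described above:
// when the board holds a specified number of identical same-level pieces
// (two one-star pieces in the example from the text), they are merged into
// one piece of the next level. Data shapes are assumptions for this sketch.

interface Piece { unitId: string; level: number }

function autoMerge(pieces: Piece[], mergeCount = 2): Piece[] {
  // Group pieces by unit type and level.
  const groups = new Map<string, Piece[]>();
  for (const p of pieces) {
    const key = `${p.unitId}:${p.level}`;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key)!.push(p);
  }

  const result: Piece[] = [];
  let merged = false;
  for (const group of groups.values()) {
    while (group.length >= mergeCount) {
      const consumed = group.splice(0, mergeCount);
      result.push({ unitId: consumed[0].unitId, level: consumed[0].level + 1 });
      merged = true;
    }
    result.push(...group);
  }
  // A promoted piece may itself complete a higher-level merge, so repeat until stable.
  return merged ? autoMerge(result, mergeCount) : result;
}
```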
Illustratively, the game virtual objects include alternative objects located in the preparation area and combat objects located in the combat area. When alternative objects are included, the anchor user can autonomously select a preparation object and move it to the combat area as a combat object, or the server automatically selects a preparation object and moves it to the combat area as a combat object. When the virtual game starts, the master virtual object is located in the sightseeing area, and during the virtual game the combat objects in the combat area carry out the automatic virtual battle. Notably, the first virtual object can move freely in the combat area, the preparation area, or the sightseeing area.
Optionally, each game account (including the anchor account) participating in the virtual game has a personalized fight area, and when a single round of virtual game starts, the fight area displayed on the current interface is the fight area owned by the anchor account, or the fight area owned by the enemy player is displayed, which is not limited herein.
Step 3032, displaying the second virtual object corresponding to the target account in the sightseeing area.
The second virtual object is controlled by the target account.
In some embodiments, the second virtual object is a virtual object controlled by the target account for indicating the identity of the viewer of the target account, and the second virtual object may be freely movable in the viewing area or located at a designated viewing location in the viewing area, without limitation.
Optionally, the displaying mode of the second virtual object in the sightseeing area includes at least one of the following modes:
1. displaying a character image corresponding to the second virtual object in the sightseeing area in the form of a texture map;
2. Displaying skeleton animation corresponding to the second virtual object in the sightseeing area in an animation mode;
3. And displaying the identification corresponding to the target account in the peripheral range of the second virtual object, such as an account nickname and the like, while displaying the second virtual object.
It should be noted that, the above-mentioned display manner of the second virtual object is only an illustrative example, and the specific display manner of the second virtual object in the embodiment of the present application is not limited in any way.
Optionally, the sightseeing area further includes virtual objects corresponding to at least one other audience account, which are used to indicate the identity information of those audience accounts; such a virtual object is controlled by its audience account and can move freely in the sightseeing area or be located at a designated viewing position in the sightseeing area, which is not limited here.
Optionally, the target account may perform various interactions in the sightseeing area by controlling the second virtual object, including sending chat content to other accounts, generating personalized expression packs, and performing personalized action performances with the virtual objects corresponding to other accounts, which is not limited here. The other accounts include, but are not limited to, audience accounts (that is, the "virtual objects corresponding to other accounts" are virtual objects corresponding to audience accounts) or the anchor account (that is, the "virtual object corresponding to the other account" is the first virtual object corresponding to the anchor account).
Illustratively, the first virtual object and the second virtual object are the same type of virtual object, or are different types of virtual objects, which are not limited herein.
In some embodiments, the second virtual object corresponding to the target account may be personalized by the target account, and appearance switching is performed in the sightseeing area.
In summary, the embodiment of the present application provides a live broadcast interaction method. After the operation of entering the live broadcast room is received, the live broadcast picture of the virtual game is displayed in the current interface; besides the first virtual object controlled by the anchor account being displayed in the fight area contained in the virtual scene of the virtual game, a sightseeing area is also arranged in the virtual scene, and the second virtual object corresponding to the target account that entered in the identity of an audience member is displayed there, which enhances the audience's sense of immersion during live viewing and thus improves the audience's interest in the game and increases the frequency of human-computer interaction.
In an alternative embodiment, a second virtual object corresponding to the target account is displayed in the sightseeing area through a sightseeing trigger operation. Referring to fig. 4, a flowchart of a live interaction method provided by another exemplary embodiment of the present application is shown. In the embodiment of the present application, the method is described as applied to the terminal 210 shown in fig. 2, taking a self-propelled chess game as an example, and the method includes:
step 401, receiving a live room entry operation.
The live broadcasting room entering operation is used for indicating the currently logged-in target account to enter the live broadcasting room for live broadcasting watching in the identity of a spectator.
The description of the live room entry operation in step 401 is described in detail in step 301, and is not repeated here.
Step 402, based on the live room entering operation, displaying a live screen of the virtual game.
The virtual scene of the virtual match comprises a match area and a sightseeing area, wherein the match area is used for performing a match process of the virtual match.
The description of the live view of the virtual game in step 402 is already described in detail in step 302, and will not be repeated here.
Step 4031, a first virtual object corresponding to the anchor account is displayed in the fight zone.
The first virtual object is controlled by a main broadcasting account corresponding to the live broadcasting room.
The description of the first virtual object in step 4031 is already described in detail in step 3031, and is not repeated here.
Step 4041, a sightseeing trigger operation is received.
The sightseeing trigger operation is used for indicating the second virtual object corresponding to the target account to move to the sightseeing area.
Schematically, after the target account enters the current live broadcast room, the current interface displays the live broadcast picture corresponding to the live broadcast room; when the terminal receives a sightseeing trigger operation, the second virtual object corresponding to the target account is displayed in the sightseeing area of the virtual scene.
Optionally, before the terminal receives the sightseeing trigger operation, the target account is already in the live broadcast room watching the live broadcast in the identity of a viewer; or the target account has not yet entered in the identity of a viewer and the current interface only displays a browsing interface for the live broadcast picture of the current live broadcast room, which is not limited here.
In some embodiments, a sightseeing trigger control is displayed and used for triggering the sightseeing function; when the sightseeing trigger control is in the touchable state, a trigger operation on the sightseeing trigger control is received as the sightseeing trigger operation.
Illustratively, a sightseeing trigger control is displayed on the live interface of the live broadcast room, and the target user triggers the sightseeing trigger control, where the trigger operation includes a click operation, a long-press operation, a slide operation, a terminal motion control operation (such as shaking) and the like, so as to trigger the sightseeing function of the live broadcast room. The sightseeing function allows the target account to watch the live broadcast in the live broadcast room in the identity of a viewer while the second virtual object corresponding to the target account is displayed in the sightseeing area of the virtual scene.
Optionally, the sightseeing trigger control includes a touchable state and a non-touchable state. The touchable state means that after the target account triggers the sightseeing trigger control, the second virtual object corresponding to the target account is displayed in the sightseeing area; the non-touchable state means that the target account cannot have its corresponding second virtual object displayed in the sightseeing area and can only watch the live broadcast in the live broadcast room in the identity of a viewer.
The non-touchable state is displayed in at least one of the following ways (a minimal sketch of this state logic is given after the note below):
1. a specified number of virtual objects corresponding to audience accounts is preset for the live broadcast room, and when the number of virtual objects displayed in the sightseeing area reaches the specified number, the sightseeing trigger control is displayed in the non-touchable state;
2. a specified time threshold is preset for the live broadcast room, that is, within the specified time threshold the sightseeing trigger control is displayed in the touchable state, and when the specified time threshold is exceeded the sightseeing trigger control is displayed in the non-touchable state;
3. a sightseeing condition is set for the live broadcast room, and when the target account does not meet the sightseeing condition, the sightseeing trigger control is displayed in the non-touchable state, where the sightseeing condition includes the consumption strength, follow duration and the like of the target account with respect to the live broadcast room, which is not limited here.
It should be noted that the above-mentioned manner of displaying the non-touchable state is merely an illustrative example, and the embodiment of the present application is not limited thereto.
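A minimal TypeScript sketch of the touchable / non-touchable state logic listed above (capacity limit, time threshold, and sightseeing condition) might look as follows; all field names and thresholds are assumptions used only for illustration.

```typescript
// Minimal sketch of the touchable / non-touchable state logic listed above.
// All thresholds and field names are assumptions used only for illustration.

interface ViewingGateConfig {
  maxSpectatorObjects: number;      // way 1: capacity of the sightseeing area
  openUntil?: number;               // way 2: epoch ms after which the control is disabled
  meetsCondition?: (accountId: string) => boolean; // way 3: e.g. consumption or follow duration
}

type ControlState = "touchable" | "non-touchable";

function sightseeingControlState(
  cfg: ViewingGateConfig,
  shownObjects: number,
  accountId: string,
  now: number = Date.now()
): ControlState {
  if (shownObjects >= cfg.maxSpectatorObjects) return "non-touchable";
  if (cfg.openUntil !== undefined && now > cfg.openUntil) return "non-touchable";
  if (cfg.meetsCondition && !cfg.meetsCondition(accountId)) return "non-touchable";
  return "touchable";
}
```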
Optionally, the preset requirement in the live broadcast room is preset by the live broadcast application program or set by the host account, which is not limited herein.
In some embodiments, the sightseeing area includes a first sightseeing area and a second sightseeing area; the first sightseeing area is provided with a corresponding first sightseeing condition, and the second sightseeing area is provided with a corresponding second sightseeing condition. Referring to fig. 5, a schematic view of a sightseeing area display mode provided by an exemplary embodiment of the present application is shown. As shown in fig. 5, the terminal displays the live broadcast picture 500 of a virtual game, where the live broadcast picture includes a fight area and a sightseeing area, and the sightseeing area is divided into a first sightseeing area 501 and a second sightseeing area 502. The live broadcast picture 500 displays a first sightseeing trigger control 503 corresponding to the first sightseeing area 501 and a second sightseeing trigger control 504 corresponding to the second sightseeing area 502, where the first sightseeing trigger control 503 is preconfigured with the first sightseeing condition and the second sightseeing trigger control 504 is preconfigured with the second sightseeing condition; the first sightseeing trigger control 503 is displayed in the non-touchable state when the target account does not meet the first sightseeing condition, while the second sightseeing trigger control 504 is displayed in the touchable state even when the target account does not meet the first sightseeing condition.
Illustratively, the first sightseeing condition of the first sightseeing area is set by the anchor account or preset by the live broadcast application. The first sightseeing condition is used for indicating the account authority of the target account; when the account authority of the target account meets the first sightseeing condition, the first sightseeing trigger control is displayed in the touchable state.
Schematically, the target account authority preset by the first sightseeing condition includes at least one of the following (a condition-matching sketch follows after this list and its note):
1. the first sightseeing condition includes an identity attribute of the target account, such as fan identity or specially invited audience; that is, when the target account has the fan identity of the anchor account, the target account meets the first sightseeing condition, or a special invitation code is set for the live broadcast room, and when the target account inputs the special invitation code after entering the live broadcast room, the target account meets the first sightseeing condition;
2. the first sightseeing condition includes a membership system, where the membership system includes a member-level attribute of the target account; that is, the anchor account presets a member-level threshold, and when the target account reaches the member-level threshold after purchasing membership or completing the corresponding member tasks, the member level of the target account meets the first sightseeing condition;
3. the first sightseeing condition includes the live consumption record of the target account, and the anchor account presets a consumption threshold; that is, when the consumption value corresponding to the target account's live consumption record in the live broadcast room, or with respect to the anchor account, reaches the consumption threshold, the target account meets the first sightseeing condition;
4. the first sightseeing condition includes the historical interaction record between the target account and the anchor account, and the anchor account presets an interaction-amount threshold; that is, when the historical interaction amount between the target account and the anchor account reaches the interaction-amount threshold, the target account meets the first sightseeing condition, where the interaction record includes "ranking support", "contribution value" and the like.
It should be noted that the above-mentioned target account authority preset with respect to the first sightseeing condition is only an illustrative example, and the embodiment of the present application is not limited thereto.
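As referenced above, the following TypeScript sketch checks the first sightseeing condition against the account attributes listed in the four modes. The field names and the "any one mode suffices" combination rule are assumptions made for this example, not a definition given by the present application.

```typescript
// Illustrative sketch of checking the first sightseeing condition against the
// account attributes listed above. Field names and the "any one mode suffices"
// combination rule are assumptions for this example.

interface TargetAccountProfile {
  isFanOfAnchor: boolean;
  enteredInviteCode?: string;
  memberLevel: number;
  consumptionValue: number;       // live consumption record in this room / for this anchor
  interactionAmount: number;      // historical interaction with the anchor account
}

interface FirstSightseeingCondition {
  inviteCode?: string;
  memberLevelThreshold?: number;
  consumptionThreshold?: number;
  interactionThreshold?: number;
}

function matchesFirstCondition(p: TargetAccountProfile, c: FirstSightseeingCondition): boolean {
  if (p.isFanOfAnchor) return true;
  if (c.inviteCode !== undefined && p.enteredInviteCode === c.inviteCode) return true;
  if (c.memberLevelThreshold !== undefined && p.memberLevel >= c.memberLevelThreshold) return true;
  if (c.consumptionThreshold !== undefined && p.consumptionValue >= c.consumptionThreshold) return true;
  if (c.interactionThreshold !== undefined && p.interactionAmount >= c.interactionThreshold) return true;
  return false;
}
```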
Illustratively, in use, an audience account that meets the first sightseeing condition may be referred to as a "member account".
The second sightseeing condition is set by the anchor account or by the live broadcast application, which is not limited here. The second sightseeing condition is used for indicating the sightseeing round of the target account: when the second sightseeing trigger control is in the touchable state, the audience accounts trigger the second sightseeing trigger control in a competitive manner; when the number of audience accounts that have triggered the second sightseeing trigger control reaches a preset trigger threshold, the second sightseeing trigger control is displayed in the non-touchable state. Therefore, the audience accounts that complete the trigger operation on the second sightseeing trigger control each correspond to a designated trigger order, namely the sightseeing round. In use, an audience account that matches only the second sightseeing condition (and does not match the first sightseeing condition) may be referred to as a "normal account".
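The competitive triggering just described can be sketched as a simple first-come-first-served gate in TypeScript: viewer accounts race to trigger the second sightseeing control, the control becomes non-touchable once the preset trigger threshold is reached, and each successful account receives its trigger order (the sightseeing round). Class and method names are illustrative assumptions.

```typescript
// Minimal sketch of the first-come-first-served triggering described above.
// Once the number of successful triggers reaches the preset threshold the
// control is disabled, and each successful account receives its trigger order
// (the "sightseeing round"). Names are illustrative assumptions.

class SecondSightseeingGate {
  private order: string[] = [];

  constructor(private readonly triggerThreshold: number) {}

  get isTouchable(): boolean {
    return this.order.length < this.triggerThreshold;
  }

  // Returns the 1-based sightseeing round on success, or null if the gate is
  // full or the account has already triggered it.
  tryTrigger(accountId: string): number | null {
    if (!this.isTouchable || this.order.includes(accountId)) return null;
    this.order.push(accountId);
    return this.order.length;
  }
}

// Usage sketch:
// const gate = new SecondSightseeingGate(3);
// gate.tryTrigger("viewer-A"); // => 1
// gate.tryTrigger("viewer-B"); // => 2
```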
Step 4042, based on the sightseeing trigger operation, displaying the second virtual object corresponding to the target account in the sightseeing area.
In some embodiments, candidate sightseeing positions are displayed in the sightseeing area, where a candidate sightseeing position is used for accommodating virtual objects corresponding to audience accounts. When a candidate sightseeing position is in the empty-seat state, the second virtual object corresponding to the target account is displayed at that candidate sightseeing position based on the sightseeing trigger operation, where the empty-seat state is used for indicating that the candidate sightseeing position can still accommodate a virtual object.
Optionally, the sightseeing area includes at least one candidate sightseeing position, and each candidate sightseeing position is used for displaying a virtual object corresponding to the audience account, where each candidate sightseeing position correspondingly displays one or more virtual objects corresponding to the audience account, and the method is not limited herein.
Optionally, the displaying mode of the candidate sightseeing location in the sightseeing area includes at least one of the following modes:
1. The candidate sightseeing positions are fixedly arranged in the sightseeing area, namely, each candidate sightseeing position is positioned at a fixed position in the sightseeing area and cannot be changed or transformed;
2. The candidate sightseeing positions are randomly distributed in the sightseeing area, namely, the positions of the candidate sightseeing positions in the sightseeing area are randomly arranged, and after each round of the game is finished, the candidate sightseeing positions are reset and randomly displayed again;
3. the position of the candidate sightseeing position in the sightseeing area is set by the main broadcasting account, namely, the main broadcasting account can carry out personalized setting on the candidate sightseeing position in the living broadcast room;
4. The position of the candidate sightseeing position in the sightseeing area is set by the audience account, that is, when the candidate sightseeing position displays the virtual object corresponding to the audience account, the audience account can set the designated position to display the candidate sightseeing position in the sightseeing area.
It should be noted that, the above display manner for the candidate sightseeing positions is only an illustrative example, and the specific display manner for the candidate sightseeing positions in the embodiment of the present application is not limited in any way.
Optionally, after receiving the sightseeing trigger operation, the terminal displays the second virtual object of the target account at a candidate sightseeing position according to the trigger order of the audience accounts, or the target account can itself select a candidate sightseeing position as the designated position at which its second virtual object is displayed.
Illustratively, the accommodation state of a candidate sightseeing position indicates the display situation of the virtual objects at that candidate sightseeing position, and includes an empty-seat state and an occupied-seat state. The empty-seat state means that no virtual object corresponding to any audience account is currently displayed at the candidate sightseeing position. The occupied-seat state means that the candidate sightseeing position is set to display a fixed number of virtual objects and the number of virtual objects corresponding to audience accounts currently displayed at the candidate sightseeing position has reached that fixed number, so the candidate sightseeing position is in the occupied-seat state.
Optionally, the capacity and number of the candidate sightseeing positions are fixed, or may be set by the anchor account, which is not limited here.
Schematically, each candidate sightseeing position in the sightseeing area corresponds to a sightseeing trigger control: when a designated candidate sightseeing position is in the empty-seat state, the sightseeing trigger control corresponding to that position is displayed in the touchable state, and when the designated candidate sightseeing position is in the occupied-seat state, the corresponding sightseeing trigger control is displayed in the non-touchable state. Alternatively, when any candidate sightseeing position in the sightseeing area is in the empty-seat state, the sightseeing trigger control is displayed in the touchable state, and when all candidate sightseeing positions in the sightseeing area are in the occupied-seat state, the sightseeing trigger control is displayed in the non-touchable state; this is not limited here. That is, when the candidate sightseeing position is in the occupied-seat state, the sightseeing trigger control is displayed in the non-touchable state, the occupied-seat state being used to indicate that the candidate sightseeing position has reached its accommodation limit for virtual objects.
Illustratively, the empty seat state and the occupied seat state of the candidate sightseeing position can be configured by the anchor account, that is, the anchor account has the right of "kicking out the audience", and when the virtual object corresponding to the audience account is displayed in the sightseeing area, the anchor account can limit the virtual object corresponding to the audience account, so that the virtual object corresponding to the audience account cannot be displayed in the sightseeing area.
In some embodiments, the second virtual object is displayed in the first sightseeing area in response to the target account matching the first sightseeing condition, and the second virtual object is displayed in the second sightseeing area in response to the target account mismatching the first sightseeing condition.
Referring schematically to fig. 5, the first sightseeing area 501 includes a first candidate sightseeing position 505. When the candidate sightseeing position 505 is in the empty seat state and the account authority of the current target account meets the first sightseeing condition, the first sightseeing trigger control 503 corresponding to the first sightseeing area 501 is displayed in a touchable state; when the terminal receives a trigger operation on the first sightseeing trigger control 503, the second virtual object 506 corresponding to the target account is displayed at the candidate sightseeing position. The second sightseeing area 502 includes second candidate sightseeing positions 507. When a second candidate sightseeing position 507 in the empty seat state exists, the second sightseeing trigger control 504 is displayed in a touchable state; when all second candidate sightseeing positions 507 are in the occupied seat state, the second sightseeing trigger control 504 is displayed in a non-touchable state (not shown in fig. 5). Therefore, when the target account does not match the first sightseeing condition but the second sightseeing trigger control 504 is in the touchable state, the second sightseeing trigger control 504 is triggered in a competitive manner, and the second virtual object 506 is displayed at a second candidate sightseeing position 507 according to the sightseeing round generated from the triggering sequence. That is, in response to the target account mismatching the first sightseeing condition and matching the second sightseeing condition, the second virtual object is displayed in the second sightseeing area: a target account that does not match the first sightseeing condition displays its corresponding second virtual object by performing the trigger operation on the second sightseeing trigger control 504 in the competitive manner.
It is noted that, when the target account meets the first sightseeing condition and both the first sightseeing trigger control and the second sightseeing trigger control are displayed in a touchable state, the target account may trigger either the first sightseeing trigger control or the second sightseeing trigger control, so as to display the second virtual object in the first sightseeing area or the second sightseeing area, without limitation. When the target account does not meet the first sightseeing condition and a second candidate sightseeing position is in the empty seat state, only the second sightseeing trigger control is displayed in the touchable state; the target account may trigger the second sightseeing trigger control, and a successful trigger indicates that the target account meets the second sightseeing condition, so that the second virtual object is displayed in the second sightseeing area.
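The decision just described can be summarized by a small routing sketch. The TypeScript fragment below is illustrative only: it assumes the account simply prefers the first area when both controls are touchable, and names such as ViewerContext and chooseSightseeingArea are hypothetical.

```typescript
// Hypothetical routing of a viewer to the first or second sightseeing area.
type AreaChoice = "first" | "second" | "none";

interface ViewerContext {
  hasFirstAreaPermission: boolean;  // first sightseeing condition: account authority
  firstAreaHasEmptySeat: boolean;
  secondAreaHasEmptySeat: boolean;
  wonCompetitiveTrigger: boolean;   // second sightseeing condition: trigger order / round
}

function chooseSightseeingArea(ctx: ViewerContext): AreaChoice {
  if (ctx.hasFirstAreaPermission && ctx.firstAreaHasEmptySeat) {
    // First condition matched: the first sightseeing trigger control is touchable.
    return "first";
  }
  if (ctx.secondAreaHasEmptySeat && ctx.wonCompetitiveTrigger) {
    // First condition not matched, but the competitive trigger succeeded.
    return "second";
  }
  return "none"; // no second virtual object is displayed
}
```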
In summary, the embodiment of the application provides a live broadcast interaction method. After the operation of entering a live broadcast room is received, a live broadcast picture of a virtual game is displayed in the current interface; in addition to the fight area in which the first virtual object controlled by the anchor account is displayed, the virtual scene of the virtual game also includes a sightseeing area, which displays the second virtual object corresponding to the target account that entered with the current audience identity.
In this embodiment, the sightseeing area is divided into a first sightseeing area provided with a first sightseeing condition and a second sightseeing area provided with a second sightseeing condition. This improves the live broadcast presence of audiences who meet the first sightseeing condition; meanwhile, because the second candidate sightseeing positions in the second sightseeing area are acquired in a competitive manner, it also stimulates audiences to participate in the live broadcast, improves their interest in the game live broadcast, and increases the human-computer interaction frequency.
In an alternative embodiment, after the second virtual object of the target account is displayed in the sightseeing area, the target account may perform multiple kinds of interaction in the live broadcast room through the second virtual object. Referring to fig. 6, which shows a flowchart of a live broadcast interaction method provided by an exemplary embodiment of the present application, the following description takes application after step 3032 as an example:
In step 601, an interactive content triggering operation is received.
The interactive content triggering operation is used for triggering live broadcast interaction of the target account and the main broadcast account in a live broadcast room.
In some embodiments, the live broadcast interaction between the target account and the anchor account in the live broadcast room includes chat interaction in the live broadcast room or virtual object action performance. The chat interaction includes text chat, voice chat, or sending an expression package. The virtual object action performance includes: the second virtual object corresponding to the target account performing an action alone; the second virtual object performing an action together with the first virtual object corresponding to the anchor account; the second virtual object performing an action together with virtual objects corresponding to other audience accounts in the live broadcast room; or, when the virtual game further includes an enemy virtual object corresponding to an enemy player account (a player account playing the virtual game against the anchor account), the second virtual object performing an action together with the enemy virtual object; or the second virtual object arbitrarily designating at least one of the virtual objects displayed in the live broadcast room to jointly complete an action performance. The following describes four interaction modes in detail: text chat interaction, sending an expression package, performing an action, and sending a magic expression package (that is, an action performance requiring at least two virtual objects), which are not limited herein.
First, live interactions include text chat interactions.
In some embodiments, chat content input operations are received as interactive content trigger operations.
Optionally, the target account may input text as the chat content, or input voice, in which case the terminal converts the input voice into corresponding text content as the chat content, which is not limited herein.
Referring to fig. 7, which shows a schematic diagram of text chat interaction provided by an exemplary embodiment of the present application, a live broadcast picture 700 is currently displayed. A chat content display frame 701 is displayed in the live broadcast picture 700, and the chat content display frame 701 includes a chat content input frame 702 and a chat content sending control 703. The target account may input the custom text "support the anchor" in the chat content input frame 702 and click the chat content sending control 703 as the chat content input operation.
Second, the live interaction includes performing a first target action.
In some embodiments, a first candidate action list is displayed, the first candidate action list including a first target action therein, and a first action selection operation is received in the first candidate action list, the first action selection operation being for selecting the first target action in the first candidate action list.
Illustratively, a first candidate action list is displayed on the current live broadcast picture, where the first candidate action list includes at least one preset first candidate action. The terminal receives a triggering operation on a designated first candidate action as the first action selection operation, and the designated first candidate action is determined to be the first target action.
Third, live interaction includes sending a magic expression package.
In some embodiments, a second candidate action list is displayed, the second candidate action list including a second target action therein, and a second action selection operation is received in the second candidate action list, the second action selection operation being for selecting the second target action in the second candidate action list.
Referring to fig. 8, which shows a schematic illustration of a magic expression package display provided by an exemplary embodiment, the interface of the live broadcast picture 800 includes an expression package trigger control 801. After the expression package trigger control 801 is triggered, a trigger operation is performed on the magic expression package trigger control 802 included in the expression package interface, and a magic expression package list 803 is displayed in the current interface, where the magic expression package list 803 includes at least one target magic expression package 804 (one magic expression package corresponds to a designated action). By selecting the target magic expression package 804 and clicking it, the enemy virtual object 805 corresponding to the enemy player account is determined as the sending object of the target magic expression package 804.
Fourth, live interaction includes sending an expression package.
Referring to fig. 9, which shows a schematic diagram of expression package interaction provided by an exemplary embodiment of the present application, the interface of the live broadcast picture 900 includes an expression package trigger control 901. After the expression package trigger control 901 is triggered, a trigger operation is performed on the candidate expression package trigger control 902 included in the expression package interface, and a candidate expression package list 903 is displayed in the current interface, where the candidate expression package list 903 includes at least one target expression package 904. Selecting and clicking a target expression package 904 serves as the interactive content triggering operation.
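Taken together, the four trigger operations above can be modeled as a single tagged union. The following TypeScript sketch is a simplified assumption for illustration; the field names (emotePackageId, partnerId, and so on) are not part of this application.

```typescript
// Hypothetical union of the four interactive content triggers described above.
type InteractionTrigger =
  | { kind: "chat"; text: string }                              // text chat interaction
  | { kind: "emote"; emotePackageId: string }                   // ordinary expression package
  | { kind: "soloAction"; actionId: string }                    // first target action, performed alone
  | { kind: "magicEmote"; actionId: string; partnerId: string } // second target action, needs a partner object
  ;

function describeTrigger(t: InteractionTrigger): string {
  switch (t.kind) {
    case "chat":
      return `chat bubble: ${t.text}`;
    case "emote":
      return `show expression package ${t.emotePackageId}`;
    case "soloAction":
      return `second virtual object performs action ${t.actionId}`;
    case "magicEmote":
      return `joint action ${t.actionId} with virtual object ${t.partnerId}`;
  }
}
```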
In step 602, in response to the interactive content triggering operation, interactive content between the first virtual object and the second virtual object is displayed.
Four different interactive contents are displayed corresponding to the four interactive modes in step 601.
First, the interactive contents include chat interactive contents.
In some embodiments, the chat interactive content corresponding to the chat content input operation is determined, and the chat interactive content is displayed in a peripheral range of the second virtual object.
Optionally, after the chat interactive content is determined, displaying the chat interactive content in a text scrolling manner in a peripheral range of the second virtual object, or fixedly displaying the chat interactive content, and after the next chat interactive content is determined, switching to display the next chat interactive content, or after a period of time, canceling the display of the chat interactive content, which is not limited herein.
Illustratively, as shown in fig. 7, the text "support the anchor" input in the chat content input box 702 is determined as the chat interactive content 704. Based on the click on the chat content sending control 703, the chat interactive content 704 "support the anchor" is displayed in the chat content display box 701, and at the same time an interactive box 706 corresponding to "support the anchor" is displayed in the peripheral range of the second virtual object 705 corresponding to the target account.
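A minimal sketch of such a chat bubble, assuming the newest content replaces the previously displayed one and disappears after a fixed duration; the 5-second value and the class name ViewerChatDisplay are illustrative assumptions:

```typescript
// Hypothetical chat-bubble display near the second virtual object.
interface ChatBubble {
  text: string;
  shownAt: number; // ms timestamp
}

class ViewerChatDisplay {
  private bubble: ChatBubble | null = null;

  constructor(private readonly displayMs = 5000) {} // assumed display duration

  // Called when new chat interactive content is determined; the newest
  // content replaces whatever is currently shown around the object.
  show(text: string, now: number): void {
    this.bubble = { text, shownAt: now };
  }

  // Called every frame; returns the text to render, or null once expired.
  current(now: number): string | null {
    if (this.bubble && now - this.bubble.shownAt > this.displayMs) {
      this.bubble = null; // cancel the display after a period of time
    }
    return this.bubble?.text ?? null;
  }
}
```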
Second, the interactive content includes a first target action.
In some embodiments, in response to the first action selection operation, the second virtual object is displayed to perform the first target action.
Schematically, after the trigger operation on the first target action is completed, an animation effect of the second virtual object performing the first target action is displayed in the live broadcast picture. For example, when the first target action is "circle", once the trigger operation on the first target action is completed, the second virtual object corresponding to the target account starts to complete the "circle" action and the animation effect is displayed. Optionally, the process of the second virtual object performing the first target action is displayed continuously in the live broadcast picture, that is, the animation effect is played repeatedly; or the process of the second virtual object performing the first target action is displayed only once in the live broadcast picture.
And thirdly, the interactive content comprises a second target action corresponding to the magic expression package.
In some embodiments, in response to the second action selection operation, the second virtual object and the first virtual object are displayed to collectively perform the second target action.
Optionally, the process that the second virtual object and the first virtual object execute the second target action together is continuously displayed in the live broadcast picture, or only the process that the second virtual object and the first virtual object execute the second target action together is displayed once.
Optionally, the second virtual object may also perform the second target action together with the enemy virtual object corresponding to the enemy player account in the virtual game, or perform the second target action together with virtual objects corresponding to other audience accounts in the sightseeing area, which is not limited herein.
Schematically, referring to fig. 8, after the click operation on the target magic expression package 804 is completed and the enemy virtual object 805 corresponding to the enemy player account is determined as the sending object, the second virtual object 806 and the enemy virtual object 805 jointly perform the second target action corresponding to the target magic expression package. For example, if the target magic expression package 804 is "hug", the second virtual object 806 and the enemy virtual object 805 together complete the "hug" action, which is displayed in the live broadcast picture (the display process of the "hug" action is not shown).
It should be noted that the second target action is preset with a trigger requirement, and the trigger requirement includes a buffering time or consumption of a virtual resource. The buffering time means that, after the second target action is triggered, it cannot be triggered again within the buffering time. Consumption of a virtual resource means that the second target action must first be purchased, through a purchase channel in the live broadcast application, with the virtual resource corresponding to the live broadcast application before it can be triggered.
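A small sketch of such a requirement check, with both constraints treated as optional fields; the names TriggerRequirement and canTrigger are illustrative assumptions rather than part of this application.

```typescript
// Hypothetical trigger-requirement check for a second target action.
interface TriggerRequirement {
  bufferMs?: number;     // buffering time: cannot re-trigger within this window
  resourceCost?: number; // virtual resources consumed (purchased) per trigger
}

interface ViewerState {
  lastTriggeredAt: Map<string, number>; // actionId -> timestamp of last trigger
  resourceBalance: number;
}

function canTrigger(
  actionId: string,
  req: TriggerRequirement,
  state: ViewerState,
  now: number
): boolean {
  const last = state.lastTriggeredAt.get(actionId);
  if (req.bufferMs !== undefined && last !== undefined && now - last < req.bufferMs) {
    return false; // still inside the buffering time
  }
  if (req.resourceCost !== undefined && state.resourceBalance < req.resourceCost) {
    return false; // not enough virtual resources to purchase the action
  }
  return true;
}
```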
Fourth, the interactive contents include a target expression package.
Schematically, as shown in fig. 9, based on the interactive content triggering operation on the selected target expression package 904, an image corresponding to the target expression package 904 is displayed in the peripheral range of the second virtual object 905.
Schematically, as shown in fig. 9, the target expression package 904 is preset with a trigger requirement, that is, when the target account selects the target expression package 904 for display, the target expression package 904 cannot be clicked again within the buffer time, and the buffer icon 906 is displayed at the position corresponding to the target expression package 904, and when the buffer time is reached, the buffer icon 906 disappears, and the target expression package 904 can be triggered again (not shown in fig. 9).
In summary, the embodiment of the application provides a live broadcast interaction method. After the operation of entering a live broadcast room is received, a live broadcast picture of a virtual game is displayed in the current interface; in addition to the fight area in which the first virtual object controlled by the anchor account is displayed, the virtual scene of the virtual game also includes a sightseeing area, which displays the second virtual object corresponding to the target account that entered with the current audience identity.
In this embodiment, the audience in the live broadcast room can interact in multiple ways during the live broadcast, including interactive chat and action performances, which enhances the audience's sense of immersion while watching the live broadcast, strengthens the interactivity between the audience and the anchor, and increases the human-computer interaction frequency.
In an alternative embodiment, please refer to fig. 10, which illustrates a flowchart of a method for displaying a sightseeing area according to an exemplary embodiment of the present application, as shown in fig. 10, the method includes the following steps:
1001, the target account enters the live room.
The terminal receives the live broadcast room entry operation, so that the target account enters the live broadcast room with the identity of an audience member, and the current terminal displays the live broadcast picture of the live broadcast room, where the live broadcast picture is a picture of the virtual game.
1002, A second virtual object corresponding to the target account is displayed in the first sightseeing area.
After the target account enters the live broadcast room, whether the account authority of the target account can trigger the first sightseeing area is judged according to the first sightseeing condition preset by the anchor account. If the target account meets the first sightseeing condition and a first candidate sightseeing position in the first sightseeing area is in the empty seat state, the first sightseeing trigger control corresponding to the first sightseeing area is displayed in a touchable state; when the user clicks the first sightseeing trigger control, the second virtual object corresponding to the target account is displayed at the first candidate sightseeing position in the first sightseeing area. Otherwise, the first sightseeing trigger control is displayed in a non-touchable state.
And 1003, displaying a second virtual object corresponding to the target account in a second sightseeing area.
When the target account does not match the first sightseeing condition, it is necessary to judge whether a second candidate sightseeing position in the second sightseeing area is in the empty seat state. When a second candidate sightseeing position in the empty seat state exists, the terminal displays the second sightseeing trigger control corresponding to the second sightseeing area in a touchable state, and the user performs the click operation on the second sightseeing trigger control in a competitive manner within the time range in which the touchable state is displayed. If the click operation succeeds, the current target account meets the second sightseeing condition, and the second virtual object corresponding to the target account is displayed at the second candidate sightseeing position in the second sightseeing area.
And 1004, the second virtual object corresponding to the target account cannot be displayed.
When the target account does not match the first sightseeing condition and all second candidate sightseeing positions in the second sightseeing area are in the occupied seat state, it indicates that the target account currently does not match the second sightseeing condition either, and the second sightseeing trigger control is displayed in a non-touchable state, so that the second virtual object corresponding to the target account cannot be displayed in the second sightseeing area.
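Steps 1002 to 1004 can be summarized as a simple seat-assignment routine. The following TypeScript sketch is an assumption about how a server might implement it (the names Area and assignViewer are hypothetical), with the competitive trigger reduced to "the first successful click wins the seat".

```typescript
// Hypothetical server-side assignment mirroring steps 1002-1004.
interface Area {
  seats: { id: string; occupant: string | null }[];
}

function firstEmptySeat(area: Area): { id: string; occupant: string | null } | undefined {
  return area.seats.find((s) => s.occupant === null);
}

function assignViewer(
  accountId: string,
  hasPermission: boolean, // first sightseeing condition: account authority
  firstArea: Area,
  secondArea: Area
): string | null {
  if (hasPermission) {
    const seat = firstEmptySeat(firstArea); // step 1002: display in the first area
    if (seat) {
      seat.occupant = accountId;
      return seat.id;
    }
  }
  const seat = firstEmptySeat(secondArea); // step 1003: competitive second area;
  if (seat) {                              // the first successful click wins the seat
    seat.occupant = accountId;
    return seat.id;
  }
  return null; // step 1004: the second virtual object cannot be displayed
}
```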
In some embodiments, besides the anchor account corresponding to the current live broadcast picture, the enemy player account in the current virtual game is also an anchor account, that is, the current virtual game includes two anchor accounts. Referring to fig. 11, which shows a schematic diagram of displaying the first sightseeing area provided by an exemplary embodiment of the present application, the current live broadcast picture 1100 includes the first virtual object 1101 corresponding to the first anchor account and the enemy virtual object 1102 corresponding to the second anchor account, where the first anchor account is the anchor account corresponding to the current live broadcast room, and the second anchor account is the enemy player account of the current live broadcast room in the virtual game. In this case, the first sightseeing area 1103 corresponding to the first anchor account is displayed in the current virtual scene, and an enemy sightseeing area 1104 is also displayed, where the enemy sightseeing area 1104 is configured with its own specified sightseeing conditions corresponding to those of the first sightseeing area.
In summary, the embodiment of the application provides a live broadcast interaction method. After the operation of entering a live broadcast room is received, a live broadcast picture of a virtual game is displayed in the current interface; in addition to the fight area in which the first virtual object controlled by the anchor account is displayed, the virtual scene of the virtual game also includes a sightseeing area, which displays the second virtual object corresponding to the target account that entered with the current audience identity.
In an alternative embodiment, please refer to fig. 12, which illustrates a live interaction feedback flowchart provided by an exemplary embodiment of the present application, as shown in fig. 12, the method includes the following steps:
1201, The interactive content is sent to the virtual game.
When the second virtual object corresponding to the target account is displayed in the sightseeing area (the first sightseeing area or the second sightseeing area), the terminal receives the interactive content triggering operation corresponding to the target account, including triggering interactive chat, sending an expression package, sending a magic expression package, or an action performance. Whether the interactive content triggering operation succeeds is then judged; if the triggering operation succeeds, the interactive content is sent to the virtual game, that is, the interactive content between the target account and the anchor account is displayed during the virtual game.
Schematically, as shown in fig. 11, when a first anchor account and a second anchor account exist in the virtual game, the second virtual object 1105 corresponding to the target account may choose the interactive chat mode to display interactive chat content 1106, or select the first virtual object 1101 or the enemy virtual object 1102 to send a magic expression package (indicated by the dotted arrow in fig. 11). Meanwhile, the virtual object 1107 corresponding to an audience account in the enemy sightseeing area may also select the first virtual object 1101 or the enemy virtual object 1102 to send a magic expression package, which is not limited herein.
1202, The picture containing the interactive content enters the live broadcast picture.
While the interactive content between the target account and the anchor account is displayed in the virtual game, the picture of the virtual game containing the interactive content is transmitted to the live broadcast picture corresponding to the current live broadcast room. If the transmission succeeds, the current live broadcast picture synchronously plays the current virtual game picture in real time.
1203, Prompt for failure and failure cause.
If the transmission of the interactive content fails, the terminal displays an interactive content transmission failure prompt, which includes the failure cause, such as poor video quality or a poor network signal.
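A hedged sketch of this send-and-feedback flow follows; sendToMatch, SendResult, and the reason strings are assumed placeholders, not an API defined by this application.

```typescript
// Hypothetical client flow mirroring steps 1201-1203.
interface SendResult {
  ok: boolean;
  reason?: "poor_video_quality" | "poor_network_signal";
}

// sendToMatch is an assumed transport function supplied by the caller.
async function pushInteraction(
  sendToMatch: (payload: unknown) => Promise<SendResult>,
  payload: unknown,
  showPrompt: (msg: string) => void
): Promise<void> {
  const result = await sendToMatch(payload); // step 1201: send to the virtual game
  if (result.ok) {
    // step 1202: the match picture containing the interaction is relayed
    // to the live broadcast picture and played in real time.
    return;
  }
  // step 1203: prompt the failure and its cause on the terminal.
  showPrompt(`Interaction failed: ${result.reason ?? "unknown reason"}`);
}
```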
In summary, the embodiment of the application provides a live broadcast interaction method. After the operation of entering a live broadcast room is received, a live broadcast picture of a virtual game is displayed in the current interface; in addition to the fight area in which the first virtual object controlled by the anchor account is displayed, the virtual scene of the virtual game also includes a sightseeing area, which displays the second virtual object corresponding to the target account that entered with the current audience identity.
According to this scheme, by adding live broadcast room function points combined with the characteristics of the auto chess game, the willingness of audiences to log in to the game and participate in the in-game live broadcast functions is improved, which helps address the problem that most auto chess game players drift to the live broadcast platform and only watch live broadcasts without participating in the game. In addition, actions such as competing for a sightseeing position and sending expressions give the live audience a stronger sense of participation and improve the live viewing experience.
Fig. 13 is a block diagram of a live interaction device according to an embodiment of the present application. The device has the function of realizing the method example, and the function can be realized by hardware or can be realized by executing corresponding software by hardware. The apparatus may include:
A receiving module 1310, configured to receive a live room entry operation, where the live room entry operation is configured to instruct a currently logged-in target account to enter a live room for live viewing with a viewer identity;
The display module 1320 is configured to display a live broadcast picture of a virtual match based on the live broadcast room entering operation, where the virtual scene of the virtual match includes a combat area and a sightseeing area, and the combat area is used for performing a combat process of the virtual match;
the display module 1320 is further configured to display a first virtual object corresponding to a main account in the fight area, where the first virtual object is controlled by the main account corresponding to the live room;
The display module 1320 is further configured to display a second virtual object corresponding to the target account in the sightseeing area, where the second virtual object is controlled by the target account.
In an alternative embodiment, as shown in fig. 14, the display module 1320 includes:
A receiving unit 1321, configured to receive a sightseeing trigger operation, where the sightseeing trigger operation is used to instruct to move a second virtual object corresponding to the target account to the sightseeing area;
And a display unit 1322, configured to display, in the sightseeing area, the second virtual object corresponding to the target account based on the sightseeing trigger operation.
In an optional embodiment, the display unit 1322 is further configured to display a candidate sightseeing location in the sightseeing area, where the candidate sightseeing location is configured to accommodate a virtual object corresponding to a viewer, and display, in a case where the candidate sightseeing location is in an empty seat state, the second virtual object corresponding to the target account in the candidate sightseeing location based on the sightseeing trigger operation, where the empty seat state is configured to indicate that the candidate sightseeing location is in an accommodating state for the virtual object.
In an optional embodiment, the receiving unit 1321 is further configured to display a sightseeing trigger control, where the sightseeing trigger control is used to trigger a sightseeing function, and receive, as the sightseeing trigger operation, a trigger operation on the sightseeing trigger control when the sightseeing trigger control is in a touchable state.
In an optional embodiment, the display unit 1322 is further configured to display the sightseeing trigger control in a non-touchable state when the candidate sightseeing location is in a stand-by state, where the stand-by state is used to indicate that the candidate sightseeing location is in a limited-accommodation state for the virtual object.
In an optional embodiment, the receiving module 1310 is further configured to receive an interactive content triggering operation, where the interactive content triggering operation is used to trigger live interaction between the target account and the main account in the live broadcast room;
The display module 1320 is further configured to display interactive content between the first virtual object and the second virtual object in response to the interactive content triggering operation.
In an alternative embodiment, the receiving module 1310 is further configured to receive a chat content input operation as the interactive content triggering operation;
The display module 1320 is further configured to determine chat interaction content corresponding to the chat content input operation, and display the chat interaction content in a peripheral range of the second virtual object.
In an optional embodiment, the receiving module 1310 is further configured to display a first candidate action list, where the first candidate action list includes a first target action; receiving a first action selection operation in the first candidate action list, wherein the first action selection operation is used for selecting the first target action in the first candidate action list;
The display module 1320 is further configured to display, in response to the first action selection operation, the second virtual object to perform the first target action.
In an optional embodiment, the receiving module 1310 is further configured to display a second candidate action list, where the second candidate action list includes a second target action; receiving a second action selection operation in the second candidate action list, the second action selection operation being used for selecting the second target action in the second candidate action list;
The display module 1320 is further configured to display, in response to the second action selection operation, that the second virtual object and the first virtual object together perform the second target action.
In an alternative embodiment, the sightseeing area comprises a first sightseeing area and a second sightseeing area;
the display module 1320 is further configured to display the second virtual object in the first sightseeing area in response to the target account matching a first sightseeing condition, where the first sightseeing condition is used to indicate an account authority of the target account, and display the second virtual object in the second sightseeing area in response to a mismatch between the target account and the first sightseeing condition.
In an alternative embodiment, the display module 1320 is further configured to display the second virtual object in the second sightseeing area in response to the target account mismatching the first sightseeing condition and matching a second sightseeing condition, where the second sightseeing condition is used to indicate a sightseeing round of the target account.
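As an informal summary of the module split above, a TypeScript interface sketch is given below; the method names (onSightseeingTrigger, showSecondVirtualObject, and so on) are assumptions for illustration and are not part of the claimed device.

```typescript
// Hypothetical module boundaries for the device of fig. 13 and fig. 14.
interface ReceivingModule {
  onRoomEnter(accountId: string): void;                          // live room entry operation
  onSightseeingTrigger(accountId: string): void;                 // sightseeing trigger operation
  onInteractionTrigger(accountId: string, payload: unknown): void; // interactive content trigger
}

interface DisplayModule {
  showLivePicture(matchId: string): void;                        // battle area + sightseeing area
  showFirstVirtualObject(anchorAccountId: string): void;         // anchor-controlled object
  showSecondVirtualObject(accountId: string, seatId: string): void; // viewer-controlled object
  showInteraction(content: unknown): void;                       // chat, emote, or joint action
}
```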
In summary, the embodiment of the application provides a live broadcast interaction device. After the operation of entering a live broadcast room is received, a live broadcast picture of a virtual game is displayed in the current interface; in addition to the fight area in which the first virtual object controlled by the anchor account is displayed, the virtual scene of the virtual game also includes a sightseeing area, which is used to display the second virtual object corresponding to the target account that entered with the current audience identity.
It should be noted that, in the live broadcast interactive display device provided in the foregoing embodiment, only the division of the foregoing functional modules is used as an example, in practical application, the foregoing functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the live interaction device provided in the above embodiment and the live interaction method embodiment belong to the same concept, and detailed implementation processes of the live interaction device and the live interaction method embodiment are detailed in the method embodiment and are not repeated herein.
Fig. 15 shows a block diagram of a terminal 1500 according to an exemplary embodiment of the present application. The terminal 1500 may be a smart phone, tablet computer, MP3 player (Moving Picture Experts Group Audio Layer III, MPEG audio layer 3), MP4 (Moving Picture Experts Group Audio Layer IV, MPEG audio layer 4) player, notebook computer, or desktop computer. Terminal 1500 can also be referred to as a user device, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 1500 includes a processor 1501 and memory 1502.
The processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor; the main processor is a processor for processing data in a wake-up state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1501 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1502 may include one or more computer-readable storage media, which may be non-transitory. Memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is configured to store at least one instruction for execution by processor 1501 to implement the live interaction method provided by the method embodiments of the present application.
In some embodiments, terminal 1500 can optionally further include a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502 and peripheral interface 1503 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1503 via a bus, signal lines, or circuit board. Specifically, the peripheral devices include at least one of radio frequency circuitry 1504, a display screen 1505, a camera assembly 1506, audio circuitry 1507, and a power supply 1508.
A peripheral interface 1503 may be used to connect I/O (Input/Output) related at least one peripheral device to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, the memory 1502 and the peripheral interface 1503 are integrated on the same chip or circuit board, and in some other embodiments, either or both of the processor 1501, the memory 1502 and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1504 includes an antenna system, an RF transceiver, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuit 1504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to, the world wide web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may further include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 1501 as a control signal for processing. At this point, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1505, provided on the front panel of the terminal 1500; in other embodiments, there may be at least two display screens 1505, each provided on a different surface of the terminal 1500 or in a folded design; in still other embodiments, the display screen 1505 may be a flexible display provided on a curved or folded surface of the terminal 1500. The display screen 1505 may even be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The display screen 1505 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, the at least two rear cameras are any one of a main camera, a depth camera, a wide-angle camera and a tele camera, so as to realize that the main camera and the depth camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize a panoramic shooting and Virtual Reality (VR) shooting function or other fusion shooting functions. In some embodiments, the camera assembly 1506 may also include a flash. The flash lamp can be a single-color temperature flash lamp or a double-color temperature flash lamp. The dual-color temperature flash lamp refers to a combination of a warm light flash lamp and a cold light flash lamp, and can be used for light compensation under different color temperatures.
The audio circuitry 1507 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 1501 for processing, or inputting the electric signals to the radio frequency circuit 1504 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 1500. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1507 may also include a headphone jack.
The power supply 1508 is used to power the various components in the terminal 1500. The power source 1508 may be alternating current, direct current, disposable battery, or rechargeable battery. When the power source 1508 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to, an acceleration sensor 1511, a gyroscope sensor 1512, a pressure sensor 1513, an optical sensor 1514, and a proximity sensor 1515.
The acceleration sensor 1511 may detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1501 may control the touch display screen 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1512 may detect the body direction and rotation angle of the terminal 1500, and the gyro sensor 1512 may cooperate with the acceleration sensor 1511 to collect the 3D motion of the user on the terminal 1500. The processor 1501 can implement functions such as motion sensing (e.g., changing the UI according to a tilting operation of the user), image stabilization during photographing, game control, and inertial navigation based on data collected by the gyro sensor 1512.
Pressure sensor 1513 may be disposed on a side frame of terminal 1500 and/or below touch display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, a grip signal of the user on the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at the lower layer of the touch display screen 1505, the processor 1501 realizes control of the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1514 is used to collect the ambient light intensity. In one embodiment, processor 1501 may control the display brightness of touch display screen 1505 based on the intensity of ambient light collected by optical sensor 1514. Specifically, the display brightness of the touch display screen 1505 is turned up when the ambient light intensity is high, and the display brightness of the touch display screen 1505 is turned down when the ambient light intensity is low. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1514.
A proximity sensor 1515, also referred to as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1515 is used to collect the distance between the user and the front of the terminal 1500. In one embodiment, the processor 1501 controls the touch display 1505 to switch from the on-screen state to the off-screen state when the proximity sensor 1515 detects that the distance between the user and the front of the terminal 1500 is gradually decreasing, and the processor 1501 controls the touch display 1505 to switch from the off-screen state to the on-screen state when the proximity sensor 1515 detects that the distance between the user and the front of the terminal 1500 is gradually increasing.
Those skilled in the art will appreciate that the structure shown in fig. 15 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing related hardware, and the program may be stored in a computer readable storage medium, which may be a computer readable storage medium included in the memory of the above embodiments, or may be a computer readable storage medium alone, which is not incorporated in the terminal. The computer readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the live interaction method according to any of the above embodiments.
Optionally, the computer readable storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), an optical disk, or the like. The random access memory may include a Resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The foregoing serial numbers of the embodiments of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description is only of preferred embodiments of the present application and is not intended to limit the application; the protection scope of the present application is subject to the appended claims.

Claims (13)

1.一种直播互动方法,其特征在于,所述方法包括:1. A live interactive method, characterized in that the method comprises: 接收直播间进入操作,所述直播间进入操作用于指示当前登录的目标帐号以观众身份进入直播间进行直播观看;Receive a live broadcast room entry operation, where the live broadcast room entry operation is used to instruct the currently logged-in target account to enter the live broadcast room as a viewer to watch the live broadcast; 基于所述直播间进入操作,显示虚拟对局的直播画面,所述虚拟对局的虚拟场景中包括对战区域和观战区域,其中,所述对战区域用于进行所述虚拟对局的对战过程,所述观战区域包括第一观战区域和第二观战区域,所述第一观战区域设有第一观战条件,所述第一观战条件用于指示所述目标帐号的帐号权限,所述第二观战区域设有第二观战条件,所述第二观战条件用于指示所述目标帐号的观战轮次;Based on the live room entry operation, a live broadcast screen of a virtual game is displayed, wherein the virtual scene of the virtual game includes a battle area and a viewing area, wherein the battle area is used to conduct the battle process of the virtual game, and the viewing area includes a first viewing area and a second viewing area, wherein the first viewing area is provided with a first viewing condition, and the first viewing condition is used to indicate the account authority of the target account, and the second viewing area is provided with a second viewing condition, and the second viewing condition is used to indicate the viewing round of the target account; 在所述对战区域中显示与主播帐号对应的第一虚拟对象,所述第一虚拟对象由所述直播间对应的所述主播帐号进行控制;Displaying a first virtual object corresponding to the anchor account in the battle area, wherein the first virtual object is controlled by the anchor account corresponding to the live broadcast room; 当所述目标帐号符合所述第一观战条件,显示可触控状态的第一观战触发控件;响应于对所述第一观战触发控件的触发操作,在所述第一观战区域显示所述目标帐号对应的第二虚拟对象,所述第二虚拟对象由所述目标帐号进行控制;When the target account meets the first spectating condition, a first spectating trigger control in a touchable state is displayed; in response to a triggering operation on the first spectating trigger control, a second virtual object corresponding to the target account is displayed in the first spectating area, and the second virtual object is controlled by the target account; 当所述目标帐号符合所述第二观战条件,显示可触控状态的第二观战触发控件,其中,对所述第二观战触发控件完成触发操作的观众帐号对应的触发次序作为所述观战轮次;响应于以竞争形式实现的对所述第二观战触发控件的触发操作,在所述第二观战区域显示所述目标帐号对应的所述第二虚拟对象。When the target account meets the second viewing condition, a second viewing trigger control in a touchable state is displayed, wherein the triggering order corresponding to the audience accounts that complete the triggering operation on the second viewing trigger control is used as the viewing round; in response to the triggering operation of the second viewing trigger control implemented in a competitive form, the second virtual object corresponding to the target account is displayed in the second viewing area. 2.根据权利要求1所述的方法,其特征在于,所述在所述第一观战区域显示所述目标帐号对应的第二虚拟对象,包括:2. 
The method according to claim 1, wherein displaying the second virtual object corresponding to the target account in the first viewing area comprises:
displaying candidate viewing positions located in the first viewing area, the candidate viewing positions being used to accommodate virtual objects corresponding to viewers;
in a case where the candidate viewing position is in an empty-seat state, displaying the second virtual object corresponding to the target account at the candidate viewing position based on a trigger operation on the first viewing trigger control, the empty-seat state indicating that the candidate viewing position is able to accommodate a virtual object;
and displaying the second virtual object corresponding to the target account in the second viewing area comprises:
displaying candidate viewing positions located in the second viewing area, the candidate viewing positions being used to accommodate virtual objects corresponding to viewers;
in a case where the candidate viewing position is in an empty-seat state, displaying the second virtual object corresponding to the target account at the candidate viewing position based on a trigger operation on the second viewing trigger control, the empty-seat state indicating that the candidate viewing position is able to accommodate a virtual object.
3. The method according to claim 2, further comprising:
in a case where the candidate viewing position in the first viewing area is in an occupied state, displaying the first viewing trigger control in a non-touchable state, the occupied state indicating that the candidate viewing position is in a state in which accommodation of a virtual object is restricted;
in a case where the candidate viewing position in the second viewing area is in an occupied state, displaying the second viewing trigger control in a non-touchable state.
4. The method according to any one of claims 1 to 3, further comprising:
receiving an interactive content trigger operation, the interactive content trigger operation being used to trigger live interaction between the target account and the anchor account in the live broadcast room;
in response to the interactive content trigger operation, displaying interactive content between the first virtual object and the second virtual object.
5. The method according to claim 4, wherein receiving the interactive content trigger operation comprises:
receiving a chat content input operation as the interactive content trigger operation;
and displaying, in response to the interactive content trigger operation, the interactive content between the first virtual object and the second virtual object comprises:
determining chat interaction content corresponding to the chat content input operation, and displaying the chat interaction content in an area surrounding the second virtual object.
6. The method according to claim 4, wherein receiving the interactive content trigger operation comprises:
displaying a first candidate action list, the first candidate action list including a first target action;
receiving a first action selection operation in the first candidate action list, the first action selection operation being used to select the first target action in the first candidate action list;
and displaying, in response to the interactive content trigger operation, the interactive content between the first virtual object and the second virtual object comprises:
in response to the first action selection operation, displaying the second virtual object performing the first target action.
7. The method according to claim 4, wherein receiving the interactive content trigger operation comprises:
displaying a second candidate action list, the second candidate action list including a second target action;
receiving a second action selection operation in the second candidate action list, the second action selection operation being used to select the second target action in the second candidate action list;
and displaying, in response to the interactive content trigger operation, the interactive content between the first virtual object and the second virtual object comprises:
in response to the second action selection operation, displaying the second virtual object and the first virtual object jointly performing the second target action.
8. The method according to any one of claims 1 to 3, wherein displaying a second viewing trigger control in a touchable state when the target account meets the second viewing condition comprises:
in response to the target account not matching the first viewing condition, displaying the second viewing trigger control in a touchable state.
9. The method according to claim 8, wherein displaying the second viewing trigger control in a touchable state in response to the target account not matching the first viewing condition comprises:
in response to the target account not matching the first viewing condition and matching the second viewing condition, displaying the second viewing trigger control in a touchable state.
10. A live interactive device, comprising:
a receiving module, configured to receive a live broadcast room entry operation, the live broadcast room entry operation being used to instruct a currently logged-in target account to enter a live broadcast room as a viewer to watch the live broadcast;
a display module, configured to display a live broadcast screen of a virtual game based on the live broadcast room entry operation, a virtual scene of the virtual game including a battle area and a viewing area, wherein the battle area is used to conduct a battle process of the virtual game, the viewing area includes a first viewing area and a second viewing area, the first viewing area is provided with a corresponding first viewing condition used to indicate account permissions of the target account, and the second viewing area is provided with a corresponding second viewing condition used to indicate a viewing round of the target account;
the display module being further configured to display, in the battle area, a first virtual object corresponding to an anchor account, the first virtual object being controlled by the anchor account corresponding to the live broadcast room;
the display module being further configured to display a first viewing trigger control in a touchable state when the target account meets the first viewing condition, and to display, in response to a trigger operation on the first viewing trigger control, a second virtual object corresponding to the target account in the first viewing area;
the display module being further configured to display a second viewing trigger control in a touchable state when the target account meets the second viewing condition, and to display, in response to a trigger operation on the second viewing trigger control performed in a competitive manner, the second virtual object corresponding to the target account in the second viewing area.
11. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, the at least one instruction, the at least one program, the code set or the instruction set being loaded and executed by the processor to implement the live interactive method according to any one of claims 1 to 9.
12. A computer-readable storage medium, storing at least one instruction, at least one program, a code set or an instruction set, the at least one instruction, the at least one program, the code set or the instruction set being loaded and executed by a processor to implement the live interactive method according to any one of claims 1 to 9.
13. A computer program product, comprising a computer program or instructions which, when executed by a processor, implement the live interactive method according to any one of claims 1 to 9.
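For readers who want a concrete picture of the admission logic in claims 2, 3 and 8 to 10 — viewing conditions gating the trigger controls, and empty-seat versus occupied-seat states gating where the second virtual object may be placed — the following Python sketch illustrates one possible reading. It is not code from the patent: every name (ViewingArea, SeatState, Viewer, admit, the privilege and round fields) is hypothetical, and the checks are simplified assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class SeatState(Enum):
    EMPTY = "empty"        # "empty-seat" state: the position can accommodate a virtual object
    OCCUPIED = "occupied"  # "occupied" state: accommodation of further objects is restricted


@dataclass
class ViewingArea:
    seats: List[SeatState]        # candidate viewing positions in this area
    required_privilege: int = 0   # first viewing area: gated by account permission level
    required_round: int = 0       # second viewing area: gated by the viewer's viewing round

    def has_empty_seat(self) -> bool:
        return SeatState.EMPTY in self.seats


@dataclass
class Viewer:
    account_id: str
    privilege: int = 0
    viewing_round: int = 0


def control_is_touchable(area: ViewingArea, viewer: Viewer) -> bool:
    """The viewing trigger control is shown touchable only when the viewer
    meets the area's viewing condition and at least one seat is still empty."""
    meets_condition = (viewer.privilege >= area.required_privilege
                       and viewer.viewing_round >= area.required_round)
    return meets_condition and area.has_empty_seat()


def admit(area: ViewingArea, viewer: Viewer) -> Optional[int]:
    """Place the viewer's second virtual object on the first empty seat.
    Returns the seat index, or None when the control should stay non-touchable."""
    if not control_is_touchable(area, viewer):
        return None
    idx = area.seats.index(SeatState.EMPTY)
    area.seats[idx] = SeatState.OCCUPIED
    return idx


# Example: an area that requires privilege level 2 and has one free seat left.
vip_area = ViewingArea(seats=[SeatState.OCCUPIED, SeatState.EMPTY], required_privilege=2)
print(admit(vip_area, Viewer("viewer_01", privilege=3)))  # -> 1
print(admit(vip_area, Viewer("viewer_02", privilege=3)))  # -> None (now fully occupied)
```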
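Claims 4 to 7 describe three flavours of interactive content once the second virtual object is seated: chat content rendered in the area surrounding the spectator's object, a solo action chosen from a first candidate action list, and a joint action performed together with the anchor's first virtual object. One possible dispatch, again purely illustrative with hypothetical names (InteractionEvent, render_interaction), might look like this:

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class InteractionEvent:
    kind: str     # "chat", "solo_action" (first list) or "joint_action" (second list)
    payload: str  # chat text, or the name of the selected target action


def render_interaction(event: InteractionEvent,
                       first_object: str,
                       second_object: str) -> Iterator[str]:
    """Yield display instructions for one interactive content trigger operation."""
    if event.kind == "chat":
        # Claim 5: chat content is shown around the second virtual object.
        yield f"show bubble '{event.payload}' beside {second_object}"
    elif event.kind == "solo_action":
        # Claim 6: the second virtual object performs the first target action alone.
        yield f"{second_object} performs action '{event.payload}'"
    elif event.kind == "joint_action":
        # Claim 7: the first and second virtual objects perform the second target action together.
        yield f"{first_object} and {second_object} perform '{event.payload}' together"
    else:
        raise ValueError(f"unknown interaction kind: {event.kind}")


for line in render_interaction(InteractionEvent("joint_action", "high_five"),
                               "anchor_avatar", "viewer_avatar"):
    print(line)  # -> anchor_avatar and viewer_avatar perform 'high_five' together
```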
CN202111658272.7A 2021-12-06 2021-12-31 Live interactive method, device, equipment, storage medium and computer program product Active CN114288654B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111476539 2021-12-06
CN2021114765390 2021-12-06

Publications (2)

Publication Number Publication Date
CN114288654A (en) 2022-04-08
CN114288654B (en) 2025-04-22

Family

ID=80974161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111658272.7A Active CN114288654B (en) 2021-12-06 2021-12-31 Live interactive method, device, equipment, storage medium and computer program product

Country Status (1)

Country Link
CN (1) CN114288654B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116726487A * 2022-03-01 2023-09-12 Shenzhen Tencent Network Information Technology Co., Ltd. Method, device, equipment, storage medium and program product for spectating interaction
CN115150634B * 2022-07-06 2024-11-19 Guangzhou Boguan Information Technology Co., Ltd. Live broadcasting room information processing method and device, storage medium and electronic equipment
CN115086704A * 2022-08-19 2022-09-20 Guangzhou Qianjun Network Technology Co., Ltd. Method, device and storage medium for video image synthesis in live broadcast
CN116055760A * 2023-01-31 2023-05-02 NetEase (Hangzhou) Network Co., Ltd. Live broadcast interaction method and device in game scene, storage medium and electronic equipment
CN117215683A * 2023-08-09 2023-12-12 Tencent Technology (Shenzhen) Co., Ltd. User emotion feedback method and device, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113457171A * 2021-06-24 2021-10-01 NetEase (Hangzhou) Network Co., Ltd. Live broadcast information processing method, electronic equipment and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3788594B2 * 2001-10-17 2006-06-21 Bandai Namco Games Inc. Program, information storage medium and server
JP6809830B2 * 2016-07-13 2021-01-06 Bandai Namco Entertainment Inc. Programs and electronics
JP6639540B2 * 2018-02-16 2020-02-05 Capcom Co., Ltd. Game system
CN109045709A * 2018-07-24 2018-12-21 Hefei Aiwan Animation Co., Ltd. Real-time spectating method for fighting games
KR20200080978A * 2018-12-27 2020-07-07 NCSoft Corporation Apparatus and method for providing game screen information
CN111282274B * 2020-02-14 2021-05-28 Tencent Technology (Shenzhen) Co., Ltd. Virtual object layout method, device, terminal and storage medium
JP2021151326A * 2020-03-24 2021-09-30 JVC Kenwood Corporation Game system, method, and program
CN112891944B * 2021-03-26 2022-10-25 Tencent Technology (Shenzhen) Co., Ltd. Interaction method and device based on virtual scene, computer equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113457171A * 2021-06-24 2021-10-01 NetEase (Hangzhou) Network Co., Ltd. Live broadcast information processing method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114288654A (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN114288654B (en) Live interactive method, device, equipment, storage medium and computer program product
CN111228811B (en) Virtual object control method, device, equipment and medium
CN110755850B (en) Team forming method, device, equipment and storage medium for competitive game
CN111589167A (en) Event fighting method, device, terminal, server and storage medium
CN113230655B (en) Virtual object control method, device, equipment, system and readable storage medium
CN112995687B (en) Interaction method, device, equipment and medium based on Internet
CN113244616B (en) Interaction method, device and equipment based on virtual scene and readable storage medium
CN114125483B (en) Event popup display method, device, equipment and medium
CN110833695B (en) Service processing method, device, equipment and storage medium based on virtual scene
CN113058264A (en) Display method of virtual scene, processing method, device and equipment of virtual scene
CN112973116B (en) Virtual scene picture display method and device, computer equipment and storage medium
CN113599810B (en) Virtual object-based display control method, device, equipment and medium
CN112774185B (en) Virtual card control method, device and equipment in card virtual scene
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CA3164842A1 (en) Method and apparatus for generating special effect in virtual environment, device, and storage medium
CN112156454A (en) Virtual object generation method and device, terminal and readable storage medium
CN114053707B (en) Virtual trace display method, device, equipment, medium and computer program product
HK40047824B (en) Method and apparatus for displaying virtual scene pictures, computer device and storage medium
HK40055287A (en) Display control method based on virtual object, device, equipment and medium
HK40051671B (en) Controlling method, device, equipment, system and readable storage medium of virtual object
WO2025118791A1 (en) Method and apparatus for disabling virtual object, and device and computer-readable storage medium
HK40047824A (en) Method and apparatus for displaying virtual scene pictures, computer device and storage medium
CN119925932A (en) Virtual character capturing method, device, equipment and computer readable storage medium
CN119015694A (en) Information processing method, device, equipment and medium based on virtual environment
HK40043864A (en) Method and apparatus for controlling virtual cards in card virtual scene, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant