The present application claims priority from Chinese patent application No. 202111476539.0, entitled "Live interaction method, apparatus, device, storage medium and computer program product," filed 12/06/2021, the entire contents of which are incorporated herein by reference.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
First, the terms involved in the embodiments of the present application will be briefly described:
Live broadcasting refers to a technique of collecting data of an anchor party through equipment and converting the data, through a series of processes such as video encoding and compression, into a video stream that can be transmitted and output to a viewing terminal for playing. The live broadcast application program provided by the embodiments of the present application refers to a self-media platform application program, that is, after a user registers an account in the live broadcast application program, the user can initiate a live broadcast room in which the user acts as the anchor. The initiation of the live broadcast room may or may not be subject to condition limitations: in some embodiments, the user account opens the live broadcast room for live broadcasting by applying for qualification; in other embodiments, the user account directly selects to start live broadcasting in a user interface of the live broadcast application program, and after the live broadcast room information is filled in, the live broadcast room can be opened for live broadcasting. In some embodiments, the user account may also be used as a viewer account to view the live video of an anchor account.
Virtual environment: the virtual environment that an application displays (or provides) while running on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-imaginary environment, or a purely imaginary environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in the present application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
The self-propelled chess game (also known as an auto chess game) is a novel multi-player battle strategy game. In a self-propelled chess game, a user can freely combine virtual objects (namely, "chess pieces") to form a virtual object lineup and fight against a hostile virtual object lineup.
The chessboard refers to an area for preparation and combat in the battle interface of a self-propelled chess game, and may be any one of a two-dimensional virtual chessboard, a 2.5-dimensional virtual chessboard and a three-dimensional virtual chessboard, which is not limited in the present application.
Wherein the chessboard is divided into a combat area and a spare (preparation) area. The combat area comprises a plurality of combat grids of the same size, and the combat grids are used for placing combat pieces that fight during the combat process; the spare area comprises a plurality of preparation grids used for placing preparation pieces that cannot participate in combat during the combat process but can be dragged into the combat area during the preparation stage.
Regarding the arrangement of the grids in the combat area, in one possible embodiment, the combat area includes n (rows) × m (columns) combat grids, where n is an integer multiple of 2, and two adjacent rows of grids are either aligned or staggered. In addition, the combat area is divided into two parts by row, namely a host combat area and an enemy combat area, and during the preparation stage, the user can only place pieces in the host combat area.
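The grid arrangement described above can be sketched in code. The following is a minimal illustrative sketch only, not part of the claimed method; the function name `build_combat_zone` and the per-grid record fields are assumptions introduced for illustration:

```python
def build_combat_zone(n: int, m: int, staggered: bool = False):
    """Build per-grid records for an n (rows) x m (columns) combat area.

    n must be an integer multiple of 2 so that the area splits evenly by
    row into a host half (lower rows) and an enemy half (upper rows).
    When staggered is True, every other row is offset by half a grid
    width; otherwise adjacent rows are aligned.
    """
    if n % 2 != 0:
        raise ValueError("row count n must be an integer multiple of 2")
    grids = []
    for row in range(n):
        # Adjacent rows are aligned, or staggered by half a column width.
        x_offset = 0.5 if (staggered and row % 2 == 1) else 0.0
        # Split the area by row into a host half and an enemy half.
        side = "host" if row < n // 2 else "enemy"
        for col in range(m):
            grids.append({"row": row, "col": col,
                          "x": col + x_offset, "side": side})
    return grids
```

During the preparation stage, a placement check would then simply reject any grid whose `side` is not `"host"`.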
Virtual object: refers to a movable object in a virtual environment. The movable object may be a virtual chess piece, a virtual character, a virtual animal, a cartoon character, or the like, such as a character, animal, plant, oil drum, wall or stone displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional stereoscopic model created based on skeletal animation techniques. Each virtual object has its own shape and volume in the three-dimensional virtual environment, occupying a portion of the space therein.
In the embodiments of the present application, the virtual object includes different combat units in the self-propelled chess game, or a main control object that is freely movable during the game play. For example, the virtual object may be a different chess piece or a different virtual character. The user can purchase, sell and upgrade the virtual object. The main control objects can be different virtual characters, and by controlling the main control object to move freely in the virtual game, the user can obtain corresponding rewards generated by the game, such as gold coin rewards, equipment rewards, game object rewards and the like.
Virtual game play refers to game play performed by at least two virtual objects in a virtual environment. In the embodiments of the present application, the virtual match is a match made up of at least two rounds of combat, that is, the virtual match includes multiple rounds of combat processes.
Fig. 1 shows a schematic diagram of a live broadcast interface provided by an exemplary embodiment of the present application. As shown in fig. 1, after a target account enters a live broadcast room with a viewer identity, the live broadcast screen 100 displays a virtual match, and a virtual scene of the virtual match includes a match area 101 and a sightseeing area 102 (both the left side and the right side of the match area 101 shown in fig. 1 are the sightseeing area 102). The match area 101 is used for performing the virtual match; a first virtual object 103 is displayed in the match area 101, the first virtual object 103 being controlled by an anchor account corresponding to the live broadcast room; a second virtual object 104 is displayed in the sightseeing area 102, the second virtual object 104 being controlled by the target account (viewer identity).
Fig. 2 is a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application, as shown in fig. 2, where the implementation environment includes a terminal 210 and a server 220, where the terminal 210 and the server 220 are connected through a communication network 230.
The live broadcast application provided in the embodiments of the present application is installed in the terminal 210, and the live broadcast application is associated with a target application. The user uses the terminal 210 to run the live broadcast application, and the interface of the terminal 210 displays a live broadcast picture corresponding to the running interface of the live broadcast application, where the live broadcast picture includes a running picture of the target application.
Wherein the target application is an application supporting a three-dimensional virtual environment. The application 222 may be any one of a virtual reality application, a three-dimensional map application, a self-propelled chess game, an educational game, a third-person shooting game (TPS), a first-person shooting game (FPS), a multiplayer online battle arena (MOBA) game, and a multiplayer gunfight survival game. The target application program can be a stand-alone application program, such as a stand-alone three-dimensional game program, or a network online application program.
The server 220 is configured to receive video streaming data of live video from the anchor terminal and transmit the video streaming data to the viewer terminal to play the live video.
In some embodiments, when the live video is implemented as a live video of a game, taking a self-propelled chess game as an example, when the audience terminal receives an operation of entering a live broadcast room, a live broadcast picture of a virtual game is displayed on the interface of the audience terminal, and the virtual game picture includes a sightseeing area and a game area, wherein a first virtual object is displayed in the game area, the first virtual object being controlled by an anchor account corresponding to the live broadcast room, and a second virtual object is displayed in the sightseeing area, the second virtual object being controlled by a target account corresponding to the audience terminal.
The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a vehicle-mounted terminal, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
It should be noted that the server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN) services, and basic cloud computing services such as big data and artificial intelligence platforms.
Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software and networks in a wide area network or a local area network to realize computing, storage, processing and sharing of data. Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied based on the cloud computing business model; it can form a resource pool, which is used on demand and is flexible and convenient. Cloud computing technology will become an important support. Background services of technical network systems require a large amount of computing and storage resources, such as video websites, picture websites and other portal websites. Along with the rapid development and application of the Internet industry, each article may have its own identification mark in the future, which needs to be transmitted to a background system for logic processing; data of different levels will be processed separately, and all kinds of industry data require strong system back-end support, which can only be realized through cloud computing.
In some embodiments, the servers described above may also be implemented as nodes in a blockchain system. Blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. The blockchain is essentially a decentralized database, and is a series of data blocks generated in association by using a cryptographic method; each data block contains information of a batch of network transactions, which is used for verifying the validity (anti-counterfeiting) of the information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
Referring to fig. 3, a flowchart of a live interaction method according to an embodiment of the present application is shown, in the embodiment of the present application, the method is applied to the terminal 210 shown in fig. 2 for illustration, and the method includes:
Step 301, receiving a live room entry operation.
The live broadcasting room entering operation is used for indicating the currently logged-in target account to enter the live broadcasting room for live broadcasting watching in the identity of a spectator.
In some embodiments, the live broadcast room is an online virtual room correspondingly opened by the anchor account, and the live broadcast room generally includes the anchor account, an administrator account and an audience account, wherein the administrator account is used for performing live broadcast management on the live broadcast room, including comment management, live broadcast picture management, live broadcast flow management and the like, and the audience account refers to an account for viewing live broadcast pictures in the live broadcast room with the identity of an audience.
Optionally, the live room entry operation includes at least one of:
1. The terminal runs a live broadcast application program with a live broadcast watching function, and a live broadcast room list is displayed on a running interface of the live broadcast application program, the live broadcast room list comprising at least one live broadcast room; the terminal receives a triggering operation on a designated live broadcast room and enters the designated live broadcast room, wherein the triggering operation comprises a clicking operation, a long-press operation, a sliding operation and the like;
2. An invitation code of a designated live broadcast room is acquired, and the terminal displays a promotion interface of the designated live broadcast room, wherein the promotion interface comprises a trigger control; by triggering the trigger control, the invitation code corresponding to the designated live broadcast room is input, and the designated live broadcast room is entered;
3. The terminal runs a designated application program, wherein the designated application program is provided with an applet having a live broadcast watching function, the applet running with the designated application program as its host program; when a live broadcast information link is displayed in the designated application program, live broadcast room information corresponding to the link is displayed through the live broadcast information link, and when a clicking operation on the live broadcast room information is received, the terminal jumps to the applet with the live broadcast watching function and enters the live broadcast room;
4. A specific identification code (such as a bar code or a two-dimensional code, which is not limited herein) corresponding to a designated live broadcast room is obtained; the terminal runs a target application program with a live broadcast watching function, opens a code scanning function of the target application program, and scans the identification code of the designated live broadcast room, and the terminal interface jumps to the live broadcast room.
It should be noted that the above-mentioned live room entry operation is only an illustrative example, and the live room entry operation is not limited in any way in the embodiment of the present application.
Illustratively, the live viewing content includes, but is not limited to, live viewing of the game content of the anchor account, or playback viewing of the historical live content of the anchor account.
Step 302, based on the live room entering operation, displaying a live screen of the virtual game.
The virtual scene of the virtual match comprises a match area and a sightseeing area, wherein the match area is used for performing a match process of the virtual match.
In some embodiments, after the target account enters the live broadcast room with the identity of a viewer, the terminal interface displays a live broadcast picture of the live broadcast room, where the virtual match shown in the live broadcast picture is the virtual match in which the anchor account corresponding to the live broadcast room currently participates, that is, the virtual match includes the anchor account. The live broadcast picture is displayed at a first-person viewing angle corresponding to the anchor account, or is displayed at a third-person viewing angle; or, the virtual match further includes an adversary account, and the live broadcast picture can be displayed at the viewing angle corresponding to the adversary account, which is not limited herein. When at least two of the above three display situations are implemented, the display situations can be switched at will.
Optionally, the fight area includes a preparation area and a combat area, where the preparation area is located on one side or two sides of the combat area, or is located in an area extending outwards from the combat area, or is located at a vertex angle position of the combat area; the virtual scene includes at least one sightseeing area, which is not limited herein.
Optionally, the sightseeing area is set by the designated application program corresponding to the virtual match in combination with the live broadcast application program. That is, when the designated application program is connected with and combined with the live broadcast application program, the terminal interface displays the running interface of the designated application program, namely the virtual match picture; the designated application program is connected with the live broadcast application program, and the virtual match picture that currently includes the sightseeing area is projected to the live broadcast picture corresponding to the anchor account for real-time display. Alternatively, the sightseeing area is additionally added to the virtual match by the live broadcast application program. That is, when the designated application program is connected with and combined with the live broadcast application program, the live broadcast room corresponding to the anchor account is opened while the live broadcast application program runs, and the designated application program picture (namely, the virtual match picture) run by the anchor terminal is projected to the live broadcast room, so that the virtual match picture including the sightseeing area is displayed in the live broadcast room; the sightseeing area does not exist in the virtual scene when the virtual match is not projected to the live broadcast room, which is not limited herein.
Optionally, the sightseeing area is located at one side or two sides of the fight area, or is located in an area extending outwards from the fight area, or is located at a vertex angle position of the fight area, and the virtual scene includes at least one sightseeing area, which is not limited herein.
Schematically, the fight area in the virtual scene is used for displaying the virtual match process in which the anchor account is participating, or displaying a preparation picture of a virtual match about to be performed by the anchor account, or displaying a playback record of a historical virtual match of the anchor account.
Schematically, the sightseeing area in the virtual scene is used for displaying the audience accounts that are currently watching the live broadcast in the live broadcast room, wherein the display mode includes at least one of the following modes:
1. displaying the account name of the audience account currently watching the live broadcast in the sightseeing area;
2. displaying account name cards corresponding to the audience accounts currently watching the live broadcast in the sightseeing area, wherein the account name cards include account names, account profile information and live consumption behaviors of the accounts in the live broadcast room, such as "contribution value", "ranking value", "gift giving record" and the like;
3. each audience account corresponds to a designated virtual object, and the virtual object corresponding to the audience account currently watching the live broadcast is displayed in the sightseeing area. The designated virtual object is preset for each audience account by the server, that is, the designated virtual object corresponding to each audience account is fixed and unchangeable; or, a virtual object list is displayed after the audience account enters the live broadcast room, and the user can select any virtual object from the list as the designated virtual object corresponding to the current audience account; or, the user can generate a designated virtual object through personalized creation, including through a face-pinching mode and the like, which is not limited herein.
It should be noted that the above manner of displaying the sightseeing area is only an illustrative example, and the embodiment of the present application does not limit the manner of displaying the sightseeing area in any way.
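The display modes above can be sketched as a single dispatch function. This is a hedged sketch under assumed field and function names (`sightseeing_entry`, `name`, `profile`, `contribution`, and so on); none of these identifiers come from the application itself:

```python
def sightseeing_entry(account: dict, mode: str):
    """Return what the sightseeing area would show for one audience account.

    mode is one of "name", "name_card", "virtual_object"; the field names
    used below are illustrative assumptions, not the application's data model.
    """
    if mode == "name":            # mode 1: account name only
        return account["name"]
    if mode == "name_card":       # mode 2: name card with profile and consumption behavior
        return {"name": account["name"],
                "profile": account.get("profile", ""),
                "contribution": account.get("contribution", 0),
                "gift_records": account.get("gift_records", [])}
    if mode == "virtual_object":  # mode 3: the designated virtual object
        # Prefer a viewer-selected or personalized object; fall back to
        # the server-preset object when the viewer picked none.
        return account.get("custom_object") or account.get("preset_object")
    raise ValueError(f"unknown display mode: {mode}")
```

A terminal could call this once per audience account when refreshing the sightseeing area.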
The following steps 3031 to 3032 are two steps shown in parallel.
In step 3031, a first virtual object corresponding to the anchor account is displayed in the fight zone.
The first virtual object is controlled by the anchor account corresponding to the live broadcast room.
In some embodiments, the first virtual object is controlled by the anchor account corresponding to the current live broadcast room, and is used for indicating identity information corresponding to the anchor account, that is, the first virtual object represents the anchor player identity of the live broadcast room.
Illustratively, the first virtual object may be controlled by the current anchor account to move freely in the combat area and the preparation area of the fight area; or, after a virtual match ends, the anchor account may pick up match rewards displayed in the fight area, such as gold coin rewards and equipment material rewards, by controlling the first virtual object; or, the anchor account may perform interactive content, such as chat sessions and action performances, in the current fight area by controlling the first virtual object, where the interactive content includes interaction with an adversary player account, or interaction with an audience account in the sightseeing area, which is not limited herein.
In some embodiments, a match virtual object is further displayed in the fight area, where the match virtual object is configured to automatically perform combat actions according to its object attributes during a virtual match, and the virtual match is a match consisting of at least two rounds of combat. In the embodiments of the present application, the virtual match is a match in a self-propelled chess game, and the match virtual object is a combat unit in the self-propelled chess game. Optionally, the match virtual object corresponds to an object level. Taking the self-propelled chess game as an example, level promotion can be achieved by synthesizing a specified number of match virtual objects of the same level; for example, two one-star match virtual objects can be synthesized into one two-star match virtual object, thereby promoting the corresponding combat attributes (such as mana value, health value, attack power and the like) of the match virtual object. Optionally, the level of the match virtual object can be promoted automatically; for example, when the fight area contains a specified number of identical virtual objects of the same level that meet the level promotion requirement, the match virtual objects are automatically synthesized. Alternatively, the level promotion of the match virtual object is realized by means of manual promotion, that is, the user autonomously selects and synthesizes match virtual objects that meet the level promotion requirement.
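The level-promotion rule described above (a specified number of identical same-level pieces merge into one piece of the next level with boosted attributes) can be sketched as follows. The class and function names, the default merge count of 2 (matching the two-one-star example), and the attribute boost factor are illustrative assumptions, not the application's actual values:

```python
from dataclasses import dataclass


@dataclass
class Piece:
    name: str    # which combat unit this is
    star: int    # object level (star count)
    attack: int
    health: int


def auto_synthesize(pieces, merge_count: int = 2, boost: int = 2):
    """Repeatedly merge groups of `merge_count` identical same-star pieces
    into one piece of the next star level with attributes scaled by `boost`.
    Merging repeats, so newly created pieces can merge again in turn."""
    pieces = list(pieces)
    merged = True
    while merged:
        merged = False
        for p in pieces:
            group = [q for q in pieces
                     if q.name == p.name and q.star == p.star]
            if len(group) >= merge_count:
                for q in group[:merge_count]:
                    pieces.remove(q)
                pieces.append(Piece(p.name, p.star + 1,
                                    p.attack * boost, p.health * boost))
                merged = True
                break  # restart the scan after mutating the list
    return pieces
```

The same routine covers the manual-promotion case if it is invoked only when the user confirms the synthesis.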
Illustratively, the match virtual objects include alternative objects located in the preparation area and battle objects located in the combat area. When alternative objects are included, the anchor user can autonomously select a preparation object and move it to the combat area as a battle object, or the server automatically selects a preparation object and moves it to the combat area as a battle object. When the virtual match starts, the master virtual object is located in the sightseeing area, and during the virtual match, the battle objects in the combat area automatically perform the virtual match. Notably, the first virtual object is free to move in the combat area, the preparation area, or the sightseeing area.
Optionally, each game account (including the anchor account) participating in the virtual match has its own fight area; when a single round of the virtual match starts, the fight area displayed on the current interface is the fight area owned by the anchor account, or the fight area owned by the adversary player is displayed, which is not limited herein.
Step 3032, displaying the second virtual object corresponding to the target account in the sightseeing area.
The second virtual object is controlled by the target account.
In some embodiments, the second virtual object is a virtual object controlled by the target account and is used for indicating the viewer identity of the target account; the second virtual object may move freely in the sightseeing area, or be located at a designated viewing position in the sightseeing area, which is not limited herein.
Optionally, the displaying mode of the second virtual object in the sightseeing area includes at least one of the following modes:
1. displaying a character image corresponding to the second virtual object in the sightseeing area in the form of a texture map;
2. displaying a skeletal animation corresponding to the second virtual object in the sightseeing area in the form of an animation;
3. while displaying the second virtual object, displaying an identifier corresponding to the target account, such as an account nickname, in the peripheral range of the second virtual object.
It should be noted that, the above-mentioned display manner of the second virtual object is only an illustrative example, and the specific display manner of the second virtual object in the embodiment of the present application is not limited in any way.
The sightseeing area further includes a virtual object corresponding to at least one audience account, which is used for indicating the identity information corresponding to the audience account; the virtual object is controlled by the audience account, and can move freely in the sightseeing area or be located at a designated viewing position in the sightseeing area, which is not limited herein.
Optionally, the target account may perform various interactions in the sightseeing area by controlling the second virtual object, including sending chat content to other accounts, generating personalized emoticon packages, performing personalized action performances with virtual objects corresponding to other accounts, and the like, which are not limited herein. The other accounts include, but are not limited to, an audience account (that is, the "virtual object corresponding to another account" is a virtual object corresponding to the audience account) or the anchor account (that is, the "virtual object corresponding to another account" is the first virtual object corresponding to the anchor account).
Illustratively, the first virtual object and the second virtual object are the same type of virtual object, or are different types of virtual objects, which are not limited herein.
In some embodiments, the second virtual object corresponding to the target account may be personalized by the target account, and appearance switching is performed in the sightseeing area.
In summary, the embodiments of the present application provide a live broadcast interaction method. After an operation of entering a live broadcast room is received, a live broadcast picture of a virtual match is displayed on the current interface, wherein, in addition to a first virtual object controlled by the anchor account being displayed in the fight area included in the virtual scene of the virtual match, a sightseeing area is further provided in the virtual scene of the virtual match, in which a second virtual object corresponding to the target account entering with the current viewer identity is displayed.
In an alternative embodiment, through a sightseeing trigger operation, a second virtual object corresponding to a target account is displayed in the sightseeing area. Referring to fig. 4, a flowchart of a live interaction method provided by an exemplary embodiment of the present application is schematically shown. In the embodiment of the present application, the method is applied to the terminal 210 shown in fig. 2, and a self-propelled chess game is taken as an example. The method includes:
step 401, receiving a live room entry operation.
The live broadcasting room entering operation is used for indicating the currently logged-in target account to enter the live broadcasting room for live broadcasting watching in the identity of a spectator.
The description of the live room entry operation in step 401 is described in detail in step 301, and is not repeated here.
Step 402, based on the live room entering operation, displaying a live screen of the virtual game.
The virtual scene of the virtual match comprises a match area and a sightseeing area, wherein the match area is used for performing a match process of the virtual match.
The description of the live view of the virtual game in step 402 is already described in detail in step 302, and will not be repeated here.
Step 4031, a first virtual object corresponding to the anchor account is displayed in the fight zone.
The first virtual object is controlled by the anchor account corresponding to the live broadcast room.
The description of the first virtual object in step 4031 is already described in detail in step 3031, and is not repeated here.
Step 4041, a sightseeing trigger operation is received.
The sightseeing trigger operation is used for indicating the second virtual object corresponding to the target account to move to the sightseeing area.
Schematically, after the target account enters the current live broadcasting room, the current interface displays a live broadcasting picture corresponding to the live broadcasting room, the terminal receives a sightseeing trigger operation, and a second virtual object corresponding to the target account is displayed in a sightseeing area corresponding to the virtual scene.
Optionally, before the terminal receives the sightseeing trigger operation, the target account is already watching the live broadcast in the live broadcast room with the identity of a viewer; or, the target account is not yet watching the live broadcast in the live broadcast room with the identity of a viewer, and the current interface only displays a live broadcast picture browsing interface of the current live broadcast room, which is not limited herein.
In some embodiments, a sightseeing trigger control is displayed, and the sightseeing trigger control is used for triggering the sightseeing function; when the sightseeing trigger control is in a touchable state, a trigger operation on the sightseeing trigger control is received as the sightseeing trigger operation.
Schematically, a sightseeing trigger control is displayed on the live broadcast interface of the live broadcast room, and the target user triggers the sightseeing trigger control through a clicking operation, a long-press operation, a sliding operation, a terminal motion control operation (such as shaking) or the like, so as to trigger the sightseeing function of the live broadcast room. The sightseeing function is used for enabling the target account to watch the live broadcast in the live broadcast room with the identity of a viewer while displaying a second virtual object corresponding to the target account in the sightseeing area of the virtual scene.
Optionally, the sightseeing trigger control has a touchable state and a non-touchable state. The touchable state means that after the target account triggers the sightseeing trigger control, the second virtual object corresponding to the target account is displayed in the sightseeing area; the non-touchable state means that the second virtual object corresponding to the target account cannot be displayed in the sightseeing area, and the target account can only watch the live broadcast in the live broadcast room as a viewer.
The non-touchable state is displayed in at least one of the following cases:
1. A specified number of virtual objects corresponding to viewer accounts is preset in the live broadcast room, and when the number of virtual objects displayed in the sightseeing area reaches the specified number, the sightseeing trigger control displays the non-touchable state;
2. A specified time threshold is preset in the live broadcast room, that is, the sightseeing trigger control displays the touchable state within the specified time threshold and displays the non-touchable state when the specified time threshold is exceeded;
3. A sightseeing condition is set in the live broadcast room, and the sightseeing trigger control displays the non-touchable state when the target account does not meet the sightseeing condition, where the sightseeing condition includes the consumption amount, attention duration and the like of the target account with respect to the live broadcast room, which is not limited herein.
It should be noted that the above-mentioned manner of displaying the non-touchable state is merely an illustrative example, and the embodiment of the present application is not limited thereto.
Optionally, the preset requirement in the live broadcast room is preset by the live broadcast application program or set by the anchor account, which is not limited herein.
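The three non-touchable-state cases above can be sketched as a single predicate. This is an illustrative Python sketch only; the names (`RoomState`, `control_is_touchable`) and the field layout are assumptions, not part of the described system:

```python
from dataclasses import dataclass

@dataclass
class RoomState:
    seated_viewers: int   # virtual objects currently shown in the sightseeing area
    seat_capacity: int    # specified number preset for the live broadcast room
    elapsed_s: float      # time elapsed since the sightseeing window opened
    window_s: float       # specified time threshold

def control_is_touchable(room: RoomState, meets_condition: bool) -> bool:
    """Mirror the three non-touchable cases: seats full, time window
    exceeded, or the target account failing the room's sightseeing condition."""
    if room.seated_viewers >= room.seat_capacity:
        return False
    if room.elapsed_s > room.window_s:
        return False
    if not meets_condition:
        return False
    return True
```

Any one failing check is sufficient to display the non-touchable state, matching the "at least one of the following cases" wording above.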
In some embodiments, the sightseeing area includes a first sightseeing area and a second sightseeing area, where the first sightseeing area is provided with a corresponding first sightseeing condition and the second sightseeing area is provided with a corresponding second sightseeing condition. Referring to fig. 5, a schematic view of a sightseeing area display mode provided by an exemplary embodiment of the present application is shown. As shown in fig. 5, the terminal displays a live picture 500 of a virtual match, where the live picture includes a match area and a sightseeing area, and the sightseeing area is divided into a first sightseeing area 501 and a second sightseeing area 502. The live picture 500 displays a first sightseeing trigger control 503 corresponding to the first sightseeing area 501 and a second sightseeing trigger control 504 corresponding to the second sightseeing area 502, where the first sightseeing trigger control 503 is preconfigured with the first sightseeing condition and the second sightseeing trigger control 504 is preconfigured with the second sightseeing condition. When the target account does not meet the first sightseeing condition, the first sightseeing trigger control 503 is displayed in the non-touchable state, and the target account may instead trigger the second sightseeing trigger control 504 when the second sightseeing trigger control 504 is displayed in the touchable state.
Illustratively, the first sightseeing condition of the first sightseeing area is set by the anchor account or preset by the live broadcast application program. The first sightseeing condition is used for indicating the account authority of the target account; when the account authority of the target account meets the first sightseeing condition, the first sightseeing trigger control is displayed in the touchable state.
Illustratively, the target account authority preset by the first sightseeing condition includes at least one of the following:
1. The first sightseeing condition includes an identity attribute of the target account, such as a fan identity or a specially invited viewer. That is, when the target account has the fan identity of the anchor account, the target account meets the first sightseeing condition; or a special invitation code is set in the live broadcast room, and after entering the live broadcast room the target account inputs the special invitation code, whereby the target account meets the first sightseeing condition;
2. The first sightseeing condition includes a membership system, where the membership system includes a member level attribute of the target account. That is, the anchor account presets a member level threshold, and when the target account reaches the member level threshold after purchasing membership or completing corresponding member tasks, the member level of the target account meets the first sightseeing condition;
3. The first sightseeing condition includes a live consumption record of the target account, and the anchor account presets a consumption threshold. That is, when the consumption value of the live consumption record of the target account in the live broadcast room, or with respect to the anchor account, reaches the consumption threshold, the target account meets the first sightseeing condition;
4. The first sightseeing condition includes a historical interaction record between the target account and the anchor account, where the anchor account presets an interaction amount threshold. That is, when the historical interaction amount between the target account and the anchor account reaches the interaction amount threshold, the target account meets the first sightseeing condition, where the interaction record includes "ranking support", "contribution value" and the like.
It should be noted that the above-mentioned target account authority preset with respect to the first sightseeing condition is only an illustrative example, and the embodiment of the present application is not limited thereto.
Illustratively, during use, a viewer account that meets the first sightseeing condition may be referred to as a "member account".
The second sightseeing condition is set by the anchor account or by the live broadcast application program, which is not limited herein. The second sightseeing condition is used for indicating the sightseeing round of the target account. That is, when the second sightseeing trigger control is currently in the touchable state, viewers trigger the second sightseeing trigger control competitively; when the number of viewer accounts that have triggered the second sightseeing trigger control reaches a preset trigger threshold, the second sightseeing trigger control displays the non-touchable state. A specified trigger sequence, that is, the sightseeing round, therefore corresponds to the viewer accounts that completed the trigger operation on the second sightseeing trigger control. In use, a viewer account that matches only the second sightseeing condition (and does not match the first sightseeing condition) may be referred to as a "normal account".
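The four first-condition authority checks enumerated above amount to a disjunction: any one grants "member account" access. A minimal sketch, assuming simple dict-based inputs (all keys and the function name are hypothetical):

```python
def meets_first_condition(viewer: dict, room: dict) -> bool:
    """Return True if any of the four authority checks grants first-area access."""
    # 1. identity attribute: fan of the anchor, or a correct special invitation code
    code = room.get("invite_code")
    if viewer.get("is_fan") or (code is not None and viewer.get("invite_code") == code):
        return True
    # 2. member level reaches the anchor's preset member level threshold
    if viewer.get("member_level", 0) >= room.get("member_level_threshold", float("inf")):
        return True
    # 3. live consumption record reaches the preset consumption threshold
    if viewer.get("spend", 0) >= room.get("spend_threshold", float("inf")):
        return True
    # 4. historical interaction amount reaches the preset interaction threshold
    if viewer.get("interactions", 0) >= room.get("interaction_threshold", float("inf")):
        return True
    return False
```

A viewer failing every check would fall back to competing for the second sightseeing area, per the "normal account" path described above.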
Step 4042, based on the sightseeing trigger operation, displaying the second virtual object corresponding to the target account in the sightseeing area.
In some embodiments, a candidate sightseeing position is displayed in the sightseeing area, where the candidate sightseeing position is used for accommodating a virtual object corresponding to a viewer account. When the candidate sightseeing position is in an empty state, the second virtual object corresponding to the target account is displayed at the candidate sightseeing position based on the sightseeing trigger operation, where the empty state indicates that the candidate sightseeing position can still accommodate a virtual object.
Optionally, the sightseeing area includes at least one candidate sightseeing position, and each candidate sightseeing position is used for displaying a virtual object corresponding to a viewer account, where each candidate sightseeing position correspondingly displays one or more such virtual objects, which is not limited herein.
Optionally, the displaying mode of the candidate sightseeing location in the sightseeing area includes at least one of the following modes:
1. The candidate sightseeing positions are fixedly arranged in the sightseeing area, that is, each candidate sightseeing position is located at a fixed position in the sightseeing area and does not change;
2. The candidate sightseeing positions are randomly distributed in the sightseeing area, that is, the positions of the candidate sightseeing positions are arranged randomly, and after each round of the game ends, the candidate sightseeing positions are reset and randomly displayed again;
3. The positions of the candidate sightseeing positions in the sightseeing area are set by the anchor account, that is, the anchor account can personalize the candidate sightseeing positions in the live broadcast room;
4. The positions of the candidate sightseeing positions in the sightseeing area are set by the viewer account, that is, when a candidate sightseeing position displays the virtual object corresponding to a viewer account, the viewer account can set a designated position in the sightseeing area at which the candidate sightseeing position is displayed.
It should be noted that, the above display manner for the candidate sightseeing positions is only an illustrative example, and the specific display manner for the candidate sightseeing positions in the embodiment of the present application is not limited in any way.
Optionally, after receiving the sightseeing trigger operation, the terminal displays the second virtual object of the target account at a candidate sightseeing position according to the trigger sequence of the viewer accounts, or the target account may itself select a candidate sightseeing position as the designated position for displaying its second virtual object.
Illustratively, the accommodation state of a candidate sightseeing position indicates the display condition of the virtual object corresponding to that position, and includes an empty seat state and an occupied seat state. The empty seat state means that no virtual object corresponding to any viewer account is currently displayed at the candidate sightseeing position. The occupied seat state means that the candidate sightseeing position is set to display a fixed number of virtual objects and the number of virtual objects corresponding to viewer accounts currently displayed at the position has reached that fixed number.
Optionally, the capacity and number of the candidate sightseeing positions are fixed, or may be set by the anchor account, which is not limited herein.
Illustratively, each candidate sightseeing position in the sightseeing area corresponds to a sightseeing trigger control. When a designated candidate sightseeing position is in the empty seat state, the sightseeing trigger control corresponding to that position displays the touchable state, and when the designated candidate sightseeing position is in the occupied seat state, the corresponding sightseeing trigger control displays the non-touchable state. Alternatively, when any candidate sightseeing position in the empty seat state exists in the sightseeing area, the sightseeing trigger control displays the touchable state, and when all candidate sightseeing positions in the sightseeing area are in the occupied seat state, the sightseeing trigger control displays the non-touchable state, which is not limited herein. That is, when the candidate sightseeing position is in the occupied seat state, the sightseeing trigger control is displayed in the non-touchable state, where the occupied seat state indicates that the candidate sightseeing position has reached its accommodation limit for virtual objects.
Illustratively, the empty seat state and the occupied seat state of a candidate sightseeing position can be configured by the anchor account. That is, the anchor account has the right to "kick out a viewer": when the virtual object corresponding to a viewer account is displayed in the sightseeing area, the anchor account can restrict that viewer account so that its virtual object can no longer be displayed in the sightseeing area.
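The empty/occupied seat mechanics and the anchor's "kick out" right above can be modeled as a small seat object. This is an illustrative sketch; the class name `Seat` and its methods are assumptions introduced here:

```python
class Seat:
    """A candidate sightseeing position holding up to `capacity` virtual objects."""

    def __init__(self, capacity: int = 1):
        self.capacity = capacity
        self.occupants: list = []   # viewer account ids whose objects are shown here

    @property
    def empty(self) -> bool:
        # "Empty seat state": fewer occupants than the fixed number.
        return len(self.occupants) < self.capacity

    def seat(self, account: str) -> bool:
        # In the occupied seat state the trigger control is non-touchable,
        # so seating fails.
        if not self.empty:
            return False
        self.occupants.append(account)
        return True

    def kick(self, account: str) -> None:
        # The anchor's "kick out a viewer" right: free the position again.
        if account in self.occupants:
            self.occupants.remove(account)
```

Kicking an occupant returns the position to the empty seat state, which would flip the corresponding trigger control back to touchable.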
In some embodiments, the second virtual object is displayed in the first sightseeing area in response to the target account matching the first sightseeing condition, and the second virtual object is displayed in the second sightseeing area in response to the target account not matching the first sightseeing condition.
Referring schematically to fig. 5, the first sightseeing area 501 includes a first candidate sightseeing position 505. When the first candidate sightseeing position 505 is in the empty seat state and the account authority of the current target account meets the first sightseeing condition, the first sightseeing trigger control 503 corresponding to the first sightseeing area 501 displays the touchable state, and when the terminal receives a trigger operation on the first sightseeing trigger control 503, the second virtual object 506 corresponding to the target account is displayed at the first candidate sightseeing position 505. The second sightseeing area 502 includes second candidate sightseeing positions 507. When a second candidate sightseeing position 507 in the empty seat state exists, the second sightseeing trigger control 504 displays the touchable state, and when all the second candidate sightseeing positions 507 are in the occupied seat state, the second sightseeing trigger control 504 displays the non-touchable state (not shown in fig. 5). Therefore, when the target account does not match the first sightseeing condition but the second sightseeing trigger control 504 is in the touchable state, the target account needs to trigger the second sightseeing trigger control 504 competitively, so that its second virtual object 506 is displayed at a second candidate sightseeing position 507 according to the sightseeing round corresponding to the generated trigger sequence. That is, in response to the target account not matching the first sightseeing condition but matching the second sightseeing condition, the second virtual object is displayed in the second sightseeing area: a target account that does not match the first sightseeing condition has its second virtual object displayed by performing the trigger operation on the second sightseeing trigger control 504 in the competitive mode.
It is noted that when the target account meets the first sightseeing condition and both the first sightseeing trigger control and the second sightseeing trigger control display the touchable state, the target account can trigger either control to display the second virtual object in the first sightseeing area or the second sightseeing area, without limitation. When the target account does not meet the first sightseeing condition and a second candidate sightseeing position is in the empty seat state, only the second sightseeing trigger control displays the touchable state; the target account can trigger the second sightseeing trigger control, and a successful trigger indicates that the target account meets the second sightseeing condition, so that the second virtual object is displayed in the second sightseeing area.
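The routing between the two areas described above reduces to a short decision chain. A minimal sketch, assuming a hypothetical `assign_area` helper (not part of the described system):

```python
from typing import Optional

def assign_area(meets_first: bool, second_seats_free: int) -> Optional[str]:
    """Route a viewer per the rules above: a member account uses the first
    area (and may also choose the second); a normal account can only win a
    free second-area seat competitively; otherwise nothing is displayed."""
    if meets_first:
        return "first_area"
    if second_seats_free > 0:
        return "second_area"   # obtained by competing for the trigger control
    return None                # both paths closed: no second virtual object shown
```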
In summary, the embodiment of the application provides a live broadcast interaction method. After an operation of entering a live broadcast room is received, a live picture of a virtual game is displayed in the current interface, where in addition to the first virtual object controlled by the anchor account being displayed in the match area included in the virtual scene of the virtual game, a sightseeing area is further provided in the virtual scene, in which the second virtual object corresponding to the target account that entered with the current viewer identity is displayed.
In this embodiment, the sightseeing area is divided into a first sightseeing area provided with the first sightseeing condition and a second sightseeing area provided with the second sightseeing condition, which improves the live presence of viewers meeting the first sightseeing condition. Meanwhile, since second candidate sightseeing positions in the second sightseeing area are acquired competitively, viewer participation can be stimulated, the viewers' interest in the game live broadcast is improved, and the human-computer interaction frequency is increased.
In an alternative embodiment, after the second virtual object of the target account is displayed in the sightseeing area, the target account may perform multiple interactions in the live broadcast room through the second virtual object. Referring to fig. 6, a flowchart of a live broadcast interaction method provided by an exemplary embodiment of the present application is shown; in the embodiment of the present application, the method is described as being applied after step 3032:
In step 601, an interactive content trigger operation is received.
The interactive content trigger operation is used for triggering live interaction between the target account and the anchor account in the live broadcast room.
In some embodiments, the live interaction between the target account and the anchor account in the live broadcast room includes live-room chat interaction or virtual object action performance. The chat interaction includes text chat, voice chat, or sending an expression package. The virtual object action performance includes: the second virtual object corresponding to the target account performing an action alone; the second virtual object performing an action together with the first virtual object corresponding to the anchor account; the second virtual object performing an action together with virtual objects corresponding to other viewer accounts in the live broadcast room; or, when the virtual match also includes an enemy virtual object corresponding to an enemy player account (the enemy player account being a player account in the virtual match against the anchor account), the second virtual object performing an action together with the enemy virtual object; or the second virtual object arbitrarily designating at least one of the virtual objects displayed in the live broadcast room to complete an action performance jointly. The following describes four interaction modes in detail: text chat interaction, sending an expression package, performing an action, and sending a magic expression package (that is, an action performance requiring at least two virtual objects), which are not limited herein.
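The four interaction modes enumerated above could be dispatched through a single entry point. An illustrative sketch only; the mode names and `handle_interaction` are assumptions introduced here, not terms from the application:

```python
def handle_interaction(kind: str, payload: str) -> str:
    """Dispatch one of the four interaction modes to a display action."""
    handlers = {
        "chat": lambda p: f"bubble near second virtual object: {p}",
        "emote": lambda p: f"show expression package image: {p}",
        "action": lambda p: f"second virtual object performs: {p}",
        "magic_emote": lambda p: f"joint performance with sending object: {p}",
    }
    if kind not in handlers:
        raise ValueError(f"unknown interaction kind: {kind}")
    return handlers[kind](payload)
```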
First, live interactions include text chat interactions.
In some embodiments, chat content input operations are received as interactive content trigger operations.
Optionally, the target account may input text as the chat content, or input voice that the terminal converts into corresponding text content as the chat content, which is not limited herein.
Referring to fig. 7, a schematic diagram of text chat interaction provided by an exemplary embodiment of the present application is shown. As shown in fig. 7, a live picture 700 is currently displayed, a chat content display frame 701 is displayed in the live picture 700, and the chat content display frame 701 includes a chat content input frame 702 and a chat content sending control 703. The target account may input the custom text "support the anchor" in the chat content input frame 702 and click the chat content sending control 703 as the chat content input operation.
Second, the live interaction includes performing a first target action.
In some embodiments, a first candidate action list is displayed, the first candidate action list including a first target action therein, and a first action selection operation is received in the first candidate action list, the first action selection operation being for selecting the first target action in the first candidate action list.
Illustratively, a first candidate action list is displayed on the current live picture, where the first candidate action list includes at least one preset first candidate action. The terminal receives a trigger operation on a designated first candidate action as the first action selection operation, and determines the designated first candidate action as the first target action.
Third, live interaction includes sending a magic expression package.
In some embodiments, a second candidate action list is displayed, the second candidate action list including a second target action therein, and a second action selection operation is received in the second candidate action list, the second action selection operation being for selecting the second target action in the second candidate action list.
Referring to fig. 8, a schematic diagram of magic expression package display provided by an exemplary embodiment is shown. As shown in fig. 8, the interface of a live picture 800 includes an expression package trigger control 801. After the trigger operation is performed on the expression package trigger control 801, a trigger operation is performed on a magic expression package trigger control 802 included in the expression package interface, and a magic expression package list 803 is displayed on the current interface, where the magic expression package list 803 includes at least one target magic expression package 804 (one magic expression package corresponds to a designated action). By selecting the target magic expression package 804 and performing a click operation on it, the sending object corresponding to the target magic expression package 804 is determined to be the enemy virtual object 805 corresponding to the enemy player account.
Fourth, live interaction includes sending an expression package.
Referring to fig. 9, a schematic diagram of expression package interaction provided by an exemplary embodiment of the present application is shown. As shown in fig. 9, the interface of a live picture 900 includes an expression package trigger control 901. After the expression package trigger control 901 is triggered, a trigger operation is performed on a candidate expression package trigger control 902 included in the expression package interface, and a candidate expression package list 903 is displayed on the current interface, where the candidate expression package list 903 includes at least one target expression package 904. Selecting and clicking a target expression package 904 serves as the interactive content trigger operation.
In step 602, in response to the interactive content triggering operation, interactive content between the first virtual object and the second virtual object is displayed.
Four kinds of interactive content are displayed, corresponding to the four interaction modes in step 601.
First, the interactive contents include chat interactive contents.
In some embodiments, the chat interactive content corresponding to the chat content input operation is determined, and the chat interactive content is displayed in the peripheral range of the second virtual object.
Optionally, after the chat interactive content is determined, the chat interactive content is displayed in a text scrolling manner in the peripheral range of the second virtual object; or the chat interactive content is fixedly displayed and, after the next chat interactive content is determined, the display is switched to the next chat interactive content; or the display of the chat interactive content is canceled after a period of time, which is not limited herein.
Illustratively, as shown in fig. 7, the text "support the anchor" input in the chat content input box 702 is determined as the input chat interactive content 704. Based on clicking the chat content sending control 703, the chat interactive content 704 "support the anchor" is displayed in the chat content display box 701, and at the same time an interactive box 706 corresponding to "support the anchor" is displayed in the peripheral range of the second virtual object 705 corresponding to the target account.
Second, the interactive content includes a first target action.
In some embodiments, in response to the first action selection operation, the second virtual object is displayed to perform the first target action.
Illustratively, after the trigger operation is performed on the first target action, an animation effect of the second virtual object performing the first target action is displayed in the live picture. For example, when the first target action is "circle", after the trigger operation on the first target action is completed, the second virtual object corresponding to the target account completes the "circle" action and the animation effect is displayed. Optionally, the process of the second virtual object performing the first target action is displayed continuously in the live picture, that is, the animation effect is played repeatedly, or the process of the second virtual object performing the first target action is displayed only once.
Third, the interactive content includes a second target action corresponding to the magic expression package.
In some embodiments, in response to the second action selection operation, the second virtual object and the first virtual object are displayed to collectively perform the second target action.
Optionally, the process of the second virtual object and the first virtual object performing the second target action together is displayed continuously in the live picture, or is displayed only once.
Optionally, the second virtual object may also perform the second target action together with the enemy virtual object corresponding to the enemy player account in the virtual match, or together with virtual objects corresponding to other viewer accounts in the sightseeing area, which is not limited herein.
Illustratively, referring to fig. 8, after the click operation on the target magic expression package 804 is completed and the enemy virtual object 805 corresponding to the enemy player account is determined as the sending object, the second target action corresponding to the target magic expression package is performed. For example, if the target magic expression package 804 is "hug", the second virtual object 806 and the enemy virtual object 805 together complete the "hug" action, which is displayed in the live picture (the "hug" action display process is not shown).
It should be noted that the second target action is preset with a trigger requirement, where the trigger requirement includes a buffer time or consumption of virtual resources. The buffer time means that after the second target action is triggered, it cannot be triggered again within the buffer time; the consumption of virtual resources means that the second target action needs to be purchased through a purchase channel in the live application program using the virtual resources corresponding to the live application program before it can be triggered.
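The two trigger requirements above (buffer time and virtual-resource cost) combine into a simple gate. An illustrative sketch; the class `MagicAction`, its timestamp-based API, and the numeric parameters are all assumptions introduced here:

```python
class MagicAction:
    """Gate a second target action behind a buffer (cooldown) time and a
    virtual-resource cost, mirroring the trigger requirement described above."""

    def __init__(self, cooldown_s: float, cost: int = 0):
        self.cooldown_s = cooldown_s
        self.cost = cost
        self._last_trigger = float("-inf")   # no trigger has happened yet

    def try_trigger(self, balance: int, now: float) -> bool:
        # Still within the buffer time: the action cannot be triggered again.
        if now - self._last_trigger < self.cooldown_s:
            return False
        # The action must be affordable with the live app's virtual resources.
        if balance < self.cost:
            return False
        self._last_trigger = now
        return True
```

Passing `now` explicitly keeps the sketch testable; a real client would use a monotonic clock for the cooldown check.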
Fourth, the interactive contents include a target expression package.
Illustratively, as shown in fig. 9, based on the interactive content trigger operation on the selected target expression package 904, the image corresponding to the target expression package 904 is displayed in the peripheral range of the second virtual object 905.
Illustratively, as shown in fig. 9, the target expression package 904 is preset with a trigger requirement: after the target account selects the target expression package 904 for display, the target expression package 904 cannot be clicked again within the buffer time, and a buffer icon 906 is displayed at the position corresponding to the target expression package 904; when the buffer time elapses, the buffer icon 906 disappears and the target expression package 904 can be triggered again (not shown in fig. 9).
In summary, the embodiment of the application provides a live broadcast interaction method. After an operation of entering a live broadcast room is received, a live picture of a virtual game is displayed in the current interface, where in addition to the first virtual object controlled by the anchor account being displayed in the match area included in the virtual scene of the virtual game, a sightseeing area is further provided in the virtual scene, in which the second virtual object corresponding to the target account that entered with the current viewer identity is displayed.
In this embodiment, viewers in the live broadcast room can engage in various interaction modes during the live broadcast, including interactive chat, action performance and the like, which can enhance the viewers' sense of immersion while watching the live broadcast, strengthen the interactivity between the viewers and the anchor, and increase the human-computer interaction frequency.
In an alternative embodiment, please refer to fig. 10, which illustrates a flowchart of a method for displaying a sightseeing area according to an exemplary embodiment of the present application, as shown in fig. 10, the method includes the following steps:
1001, the target account enters the live room.
The terminal receives a live broadcast room entering operation, so that the target account enters the live broadcast room as a viewer, and the current terminal displays the live picture of the live broadcast room, where the live picture is a virtual match picture.
1002, A second virtual object corresponding to the target account is displayed in the first sightseeing area.
After the target account enters the live broadcast room, whether the account authority of the target account can trigger the first sightseeing area is determined according to the first sightseeing condition preset by the anchor account. If the target account meets the first sightseeing condition and a first candidate sightseeing position in the first sightseeing area is in the empty seat state, the first sightseeing trigger control corresponding to the first sightseeing area displays the touchable state; the user clicks the first sightseeing trigger control, and the second virtual object corresponding to the target account is displayed at the first candidate sightseeing position in the first sightseeing area. Otherwise, the first sightseeing trigger control displays the non-touchable state.
And 1003, displaying a second virtual object corresponding to the target account in a second sightseeing area.
When the target account does not match the first sightseeing condition, whether any second candidate sightseeing position in the second sightseeing area is in an empty seat state is determined. When such a position exists, the terminal displays the second sightseeing trigger control corresponding to the second sightseeing area in a touchable state, and users compete by clicking the second sightseeing trigger control within the time range in which the touchable state is displayed. If the clicking operation succeeds, the current target account matches the second sightseeing condition, and the second virtual object corresponding to the target account is displayed at a second candidate sightseeing position in the second sightseeing area.
And 1004, the second virtual object corresponding to the target account cannot be displayed.
When the target account does not match the first sightseeing condition and all second candidate sightseeing positions in the second sightseeing area are in an occupied state, the target account does not match the second sightseeing condition either; the second sightseeing trigger control is displayed in a non-touchable state, so the second virtual object corresponding to the target account cannot be displayed in the second sightseeing area.
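The seat-assignment flow of steps 1001 to 1004 can be sketched as follows. This is a minimal illustration only: names such as `SightseeingArea` and `assign_seat` are hypothetical and do not appear in the embodiments, and the competitive clicking of step 1003 is reduced to a simple first-come seat grab.

```python
from dataclasses import dataclass, field

@dataclass
class SightseeingArea:
    capacity: int
    seated: list = field(default_factory=list)

    def has_empty_seat(self):
        # "Empty seat state": at least one candidate position can still
        # accommodate a virtual object.
        return len(self.seated) < self.capacity

def assign_seat(account, meets_first_condition, first_area, second_area):
    """Return the area in which the account's virtual object is displayed,
    or None when neither sightseeing condition is matched (step 1004)."""
    if meets_first_condition and first_area.has_empty_seat():
        first_area.seated.append(account)      # step 1002
        return "first"
    if second_area.has_empty_seat():
        second_area.seated.append(account)     # step 1003: seat won competitively
        return "second"
    return None                                # step 1004: nothing displayed

first = SightseeingArea(capacity=1)
second = SightseeingArea(capacity=1)
print(assign_seat("viewer_a", True, first, second))   # first
print(assign_seat("viewer_b", False, first, second))  # second
print(assign_seat("viewer_c", False, first, second))  # None (all seats taken)
```

The second viewer falls through to the second area because the first sightseeing condition is not met; the third finds both areas full.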
In some embodiments, in addition to the anchor account corresponding to the current live broadcast picture, the current virtual match also includes a hostile player account that is itself an anchor account; that is, the current virtual match includes two anchor accounts. Referring to fig. 11, which shows a schematic diagram of displaying the first sightseeing area provided by an exemplary embodiment of the present application, the current live broadcast picture 1100 includes a first virtual object 1101 corresponding to the first anchor account and a hostile virtual object 1102 corresponding to the second anchor account, where the first anchor account is the anchor account corresponding to the current live broadcasting room, and the second anchor account is the hostile player account of the current live broadcasting room in the virtual match. The first sightseeing area 1103 corresponding to the first anchor account is displayed in the current virtual scene, and a hostile sightseeing area 1104 is also displayed, where the hostile sightseeing area 1104 is configured with its own specified sightseeing conditions.
In summary, an embodiment of the present application provides a live interaction method. After an operation of entering a live broadcasting room is received, a live broadcast picture of a virtual match is displayed in the current interface. In addition to the combat area in which the first virtual object controlled by the anchor account is displayed, the virtual scene of the virtual match further includes a sightseeing area, which displays the second virtual object corresponding to the target account that enters as a viewer.
In an alternative embodiment, please refer to fig. 12, which illustrates a live interaction feedback flowchart provided by an exemplary embodiment of the present application, as shown in fig. 12, the method includes the following steps:
1201, the interactive content is sent to the virtual match.
When the second virtual object corresponding to the target account is displayed in a sightseeing area (the first sightseeing area or the second sightseeing area), the terminal receives an interactive content triggering operation corresponding to the target account, such as triggering an interactive chat, sending an expression package, sending a magic expression package, or performing an action. Whether the interactive content triggering operation succeeds is then determined; if it succeeds, the interactive content is sent to the virtual match, that is, the interactive content between the target account and the anchor account is displayed during the virtual match.
Schematically, as shown in fig. 11, when the first anchor account and the second anchor account exist in the virtual match, the second virtual object 1105 corresponding to the target account may select an interactive chat mode to display interactive chat content 1106, or may select the first virtual object 1101 or the hostile virtual object 1102 as the target to which a magic expression package is sent (indicated by the dotted arrows in fig. 11). Meanwhile, the virtual object 1107 corresponding to a viewer account in the hostile sightseeing area may likewise select the first virtual object 1101 or the hostile virtual object 1102 and send it a magic expression package, which is not limited herein.
1202, the match picture containing the interactive content is sent to the live broadcast picture.
While the interactive content between the target account and the anchor account is displayed in the virtual match, the picture of the virtual match carrying the interactive content is sent to the live broadcast picture corresponding to the current live broadcasting room. If the sending succeeds, the current live broadcast picture plays the picture of the virtual match synchronously in real time.
1203, a failure prompt and the failure cause are displayed.
If the sending of the interactive content fails, the terminal displays a sending failure prompt for the interactive content, where the prompt includes the failure cause, such as poor video quality or a poor network signal.
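Steps 1201 to 1203 amount to a send-then-prompt pattern. The sketch below is a hedged illustration: `send_interactive_content` and the `ConnectionError`-based failure path are assumptions made for the example, not part of the embodiment.

```python
def send_interactive_content(content, transport):
    """Try to deliver interactive content into the virtual match (step 1201)
    and on into the live broadcast picture (step 1202). Returns (ok, message)
    so the terminal can display a failure prompt with the cause (step 1203)."""
    try:
        transport(content)
    except ConnectionError as exc:
        return False, f"send failed: {exc}"   # e.g. poor network signal
    return True, "interactive content shown in live picture"

def good_transport(content):
    pass  # delivery succeeds

def bad_transport(content):
    raise ConnectionError("poor network signal")

ok, msg = send_interactive_content("hello", good_transport)
print(ok, msg)   # True interactive content shown in live picture
ok, msg = send_interactive_content("hello", bad_transport)
print(ok, msg)   # False send failed: poor network signal
```

Returning the cause to the caller mirrors the embodiment's requirement that the failure prompt include the failure reason.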
In summary, an embodiment of the present application provides a live interaction method. After an operation of entering a live broadcasting room is received, a live broadcast picture of a virtual match is displayed in the current interface. In addition to the combat area in which the first virtual object controlled by the anchor account is displayed, the virtual scene of the virtual match further includes a sightseeing area, which displays the second virtual object corresponding to the target account that enters as a viewer.
According to the scheme, by adding live broadcasting room function points combined with the characteristics of an auto chess game, viewers' willingness to log in to the game and participate in in-game live broadcast functions is increased, alleviating the problem that most auto chess players stay on the live platform only to watch live broadcasts without participating in the game. In addition, actions such as competing for a sightseeing position and sending expressions give live viewers a stronger sense of participation and improve their viewing experience.
Fig. 13 is a block diagram of a live interaction device according to an embodiment of the present application. The device has the function of realizing the method example, and the function can be realized by hardware or can be realized by executing corresponding software by hardware. The apparatus may include:
A receiving module 1310, configured to receive a live room entry operation, where the live room entry operation is configured to instruct a currently logged-in target account to enter a live room for live viewing with a viewer identity;
The display module 1320 is configured to display a live broadcast picture of a virtual match based on the live broadcast room entering operation, where the virtual scene of the virtual match includes a combat area and a sightseeing area, and the combat area is used for performing a combat process of the virtual match;
the display module 1320 is further configured to display a first virtual object corresponding to a main account in the fight area, where the first virtual object is controlled by the main account corresponding to the live room;
The display module 1320 is further configured to display a second virtual object corresponding to the target account in the sightseeing area, where the second virtual object is controlled by the target account.
In an alternative embodiment, as shown in fig. 14, the display module 1320 includes:
A receiving unit 1321, configured to receive a sightseeing trigger operation, where the sightseeing trigger operation is used to instruct to move a second virtual object corresponding to the target account to the sightseeing area;
And a display unit 1322, configured to display, in the sightseeing area, the second virtual object corresponding to the target account based on the sightseeing trigger operation.
In an optional embodiment, the display unit 1322 is further configured to display a candidate sightseeing location in the sightseeing area, where the candidate sightseeing location is configured to accommodate a virtual object corresponding to a viewer, and display, in a case where the candidate sightseeing location is in an empty seat state, the second virtual object corresponding to the target account in the candidate sightseeing location based on the sightseeing trigger operation, where the empty seat state is configured to indicate that the candidate sightseeing location is in an accommodating state for the virtual object.
In an optional embodiment, the receiving unit 1321 is further configured to display a sightseeing trigger control, where the sightseeing trigger control is used to trigger a sightseeing function, and receive, as the sightseeing trigger operation, a trigger operation on the sightseeing trigger control when the sightseeing trigger control is in a touchable state.
In an optional embodiment, the display unit 1322 is further configured to display the sightseeing trigger control in a non-touchable state when the candidate sightseeing position is in a full-seat state, where the full-seat state is used to indicate that the candidate sightseeing position is limited in accommodating virtual objects.
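The touchable / non-touchable logic of the trigger control described above can be reduced to a check over the candidate positions. The one-function sketch below uses hypothetical names and is only an illustration of the state rule, not the device's implementation.

```python
def control_state(candidate_positions):
    """candidate_positions: list of booleans, True = occupied, False = empty.
    The control is touchable while any candidate position is in the empty
    seat state, and non-touchable once every position is occupied
    (the full-seat state)."""
    return "touchable" if not all(candidate_positions) else "non-touchable"

print(control_state([True, False]))  # touchable
print(control_state([True, True]))   # non-touchable
```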
In an optional embodiment, the receiving module 1310 is further configured to receive an interactive content triggering operation, where the interactive content triggering operation is used to trigger live interaction between the target account and the main account in the live broadcast room;
The display module 1320 is further configured to display interactive content between the first virtual object and the second virtual object in response to the interactive content triggering operation.
In an alternative embodiment, the receiving module 1310 is further configured to receive a chat content input operation as the interactive content triggering operation;
The display module 1320 is further configured to determine chat interaction content corresponding to the chat content input operation, and display the chat interaction content in a peripheral range of the second virtual object.
In an optional embodiment, the receiving module 1310 is further configured to display a first candidate action list, where the first candidate action list includes a first target action; receiving a first action selection operation in the first candidate action list, wherein the first action selection operation is used for selecting the first target action in the first candidate action list;
The display module 1320 is further configured to display, in response to the first action selection operation, the second virtual object to perform the first target action.
In an optional embodiment, the receiving module 1310 is further configured to display a second candidate action list, where the second candidate action list includes a second target action; receiving a second action selection operation in the second candidate action list, the second action selection operation being used for selecting the second target action in the second candidate action list;
The display module 1320 is further configured to display, in response to the second action selection operation, that the second virtual object and the first virtual object together perform the second target action.
In an alternative embodiment, the sightseeing area comprises a first sightseeing area and a second sightseeing area;
the display module 1320 is further configured to display the second virtual object in the first sightseeing area in response to the target account matching a first sightseeing condition, where the first sightseeing condition is used to indicate an account authority of the target account, and display the second virtual object in the second sightseeing area in response to a mismatch between the target account and the first sightseeing condition.
In an alternative embodiment, the display module 1320 is further configured to display, in response to the target account being mismatched with the first viewing condition and matched with a second viewing condition, the second virtual object in the second viewing area, where the second viewing condition is used to indicate a viewing round of the target account.
In summary, an embodiment of the present application provides a live interaction device. After an operation of entering a live broadcasting room is received, a live broadcast picture of a virtual match is displayed in the current interface. In addition to the combat area in which the first virtual object controlled by the anchor account is displayed, the virtual scene of the virtual match further includes a sightseeing area, which is used to display the second virtual object corresponding to the target account that enters as a viewer.
It should be noted that the live interaction device provided in the foregoing embodiment is illustrated only with the division of the functional modules above. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the live interaction device provided in the foregoing embodiment belongs to the same concept as the live interaction method embodiments; its detailed implementation process is described in the method embodiments and is not repeated here.
Fig. 15 shows a block diagram of a terminal 1500 according to an exemplary embodiment of the present application. The terminal 1500 may be a smart phone, tablet computer, MP3 player (Moving Picture Experts Group Audio Layer III, MPEG audio layer 3), MP4 (Moving Picture Experts Group Audio Layer IV, MPEG audio layer 4) player, notebook computer, or desktop computer. Terminal 1500 can also be referred to as a user device, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 1500 includes a processor 1501 and memory 1502.
The processor 1501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor. The main processor is a processor for processing data in a wake-up state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit) for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1502 may include one or more computer-readable storage media, which may be non-transitory. Memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is configured to store at least one instruction for execution by processor 1501 to implement the live interaction method provided by the method embodiments of the present application.
In some embodiments, terminal 1500 can optionally further include a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502 and peripheral interface 1503 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1503 via a bus, signal lines, or circuit board. Specifically, the peripheral devices include at least one of radio frequency circuitry 1504, a display screen 1505, a camera assembly 1506, audio circuitry 1507, and a power supply 1508.
The peripheral interface 1503 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, the memory 1502, and the peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with a communication network and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to, the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may further include NFC (Near Field Communication) related circuits, which is not limited in the present application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 1501 as a control signal for processing. At this time, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1505, disposed on the front panel of the terminal 1500; in other embodiments, there may be at least two display screens 1505, each disposed on a different surface of the terminal 1500 or in a folded design; in still other embodiments, the display screen 1505 may be a flexible display disposed on a curved or folded surface of the terminal 1500. The display screen 1505 may even be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The display screen 1505 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement a background blurring function by fusing the main camera and the depth-of-field camera, and to implement panoramic shooting and Virtual Reality (VR) shooting functions or other fused shooting functions by fusing the main camera and the wide-angle camera. In some embodiments, the camera assembly 1506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuitry 1507 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 1501 for processing, or inputting the electric signals to the radio frequency circuit 1504 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 1500. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1507 may also include a headphone jack.
The power supply 1508 is used to power the various components in the terminal 1500. The power source 1508 may be alternating current, direct current, disposable battery, or rechargeable battery. When the power source 1508 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to, an acceleration sensor 1511, a gyroscope sensor 1512, a pressure sensor 1513, an optical sensor 1514, and a proximity sensor 1515.
The acceleration sensor 1511 may detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1501 may control the touch display screen 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1512 may detect the body direction and rotation angle of the terminal 1500, and may cooperate with the acceleration sensor 1511 to collect the 3D motion of the user on the terminal 1500. Based on the data collected by the gyro sensor 1512, the processor 1501 can implement functions such as motion sensing (for example, changing the UI according to a tilting operation of the user), image stabilization during shooting, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side frame of terminal 1500 and/or below touch display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, a grip signal of the user on the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at the lower layer of the touch display screen 1505, the processor 1501 realizes control of the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1514 is used to collect the ambient light intensity. In one embodiment, processor 1501 may control the display brightness of touch display screen 1505 based on the intensity of ambient light collected by optical sensor 1514. Specifically, the display brightness of the touch display screen 1505 is turned up when the ambient light intensity is high, and the display brightness of the touch display screen 1505 is turned down when the ambient light intensity is low. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1514.
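The ambient-light-driven brightness adjustment described above can be illustrated with a small sketch. The lux thresholds and step size below are assumptions made for the example, not values from the embodiment.

```python
def adjust_brightness(ambient_lux, current):
    """Turn display brightness up in strong ambient light and down in weak
    ambient light, clamped to a 0-100 range."""
    if ambient_lux > 500:              # bright environment: turn brightness up
        return min(current + 10, 100)
    if ambient_lux < 50:               # dim environment: turn brightness down
        return max(current - 10, 0)
    return current                     # moderate light: leave unchanged

print(adjust_brightness(800, 50))  # 60
print(adjust_brightness(10, 50))   # 40
```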
A proximity sensor 1515, also referred to as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1515 is used to collect the distance between the user and the front of the terminal 1500. In one embodiment, the processor 1501 controls the touch display 1505 to switch from the on-screen state to the off-screen state when the proximity sensor 1515 detects that the distance between the user and the front of the terminal 1500 is gradually decreasing, and the processor 1501 controls the touch display 1505 to switch from the off-screen state to the on-screen state when the proximity sensor 1515 detects that the distance between the user and the front of the terminal 1500 is gradually increasing.
Those skilled in the art will appreciate that the structure shown in fig. 15 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing related hardware, and the program may be stored in a computer readable storage medium, which may be a computer readable storage medium included in the memory of the above embodiments, or may be a computer readable storage medium alone, which is not incorporated in the terminal. The computer readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the live interaction method according to any of the above embodiments.
Alternatively, the computer readable storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), an optical disc, or the like. The random access memory may include a Resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM), among others. The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the present application is not intended to limit the application; the scope of the application is defined by the appended claims.