CN116962835A - Virtual object interaction method and device, computer equipment and storage medium - Google Patents

Virtual object interaction method and device, computer equipment and storage medium

Info

Publication number
CN116962835A
CN116962835A (application CN202310312175.5A)
Authority
CN
China
Prior art keywords
virtual
capturing
virtual object
interaction
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310312175.5A
Other languages
Chinese (zh)
Inventor
肖志婕
陈俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310312175.5A priority Critical patent/CN116962835A/en
Publication of CN116962835A publication Critical patent/CN116962835A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/56Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/65Methods for processing data by generating or executing the game program for computing the condition of a game character

Abstract

The application provides a virtual object interaction method and device, computer equipment, and a storage medium, and belongs to the field of computer technology. The method comprises the following steps: displaying, in a live room, at least one first virtual object to be captured of a virtual task; for any first virtual object, displaying a capturing prompt identifier of the first virtual object based on the interaction behavior of the audience objects in the live room, wherein the probability corresponding to the capturing prompt identifier indicates the success rate of capturing the first virtual object when an aiming point is located inside the capturing prompt identifier, and the probability corresponding to the capturing prompt identifier is positively related to the interaction behavior; and in response to a capturing operation on the first virtual object, displaying a capture result for the first virtual object based on the probability corresponding to the capturing prompt identifier. The method enriches the gameplay of the virtual task and can improve the user experience; it also promotes interaction within the live room, improving the live room's activity and program effect.

Description

Virtual object interaction method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a virtual object interaction method, device, computer device, and storage medium.
Background
With the development of computer technology, the variety of games keeps increasing. Among them, games based on Virtual Reality (VR) technology have emerged. Virtual reality technology aims to generate a realistic virtual space offering multiple sensory experiences such as three-dimensional vision, touch, and smell, thereby giving the user an immersive sensation. By controlling the terminal, the user can interact with virtual objects in the game, for example by fighting against them. Because the difficulty of defeating a virtual object is fixed, the interaction process by which the user defeats the virtual object is also fixed each time; for example, the user must hit the virtual object about five times to defeat it, which reduces the user's game experience.
Disclosure of Invention
The embodiments of the present application provide a virtual object interaction method and device, computer equipment, and a storage medium, which increase the randomness and entertainment of capturing a first virtual object, enrich the gameplay of a virtual task, and improve the user's experience of participating in the task. They also promote interaction between the audience and the live room and between the audience and the anchor, attracting more users to enter the live room and participate in the interaction, thereby greatly improving the live room's activity and program effect. The technical scheme is as follows:
In one aspect, a method for interaction of virtual objects is provided, the method comprising:
displaying at least one first virtual object to be captured of a virtual task in a live room;
for any first virtual object, based on the interaction behavior of the audience object in the live broadcasting room, displaying a capturing prompt identifier of the first virtual object, wherein the probability corresponding to the capturing prompt identifier is used for indicating the success rate of capturing the first virtual object when an aiming point is positioned in the capturing prompt identifier, and the probability corresponding to the capturing prompt identifier is positively related to the interaction behavior;
and responding to the capturing operation of the first virtual object, and displaying a capturing result aiming at the first virtual object based on the probability corresponding to the capturing prompt identifier.
In another aspect, an interactive device for a virtual object is provided, the device including:
the display module is used for displaying at least one first virtual object to be captured of the virtual task in the live broadcasting room;
the processing module is used for displaying capturing prompt identifiers of the first virtual objects based on interaction behaviors of audience objects in the live broadcasting room, wherein the probability corresponding to the capturing prompt identifiers is used for representing the success rate of capturing the first virtual objects when aiming points are located in the capturing prompt identifiers, and the probability corresponding to the capturing prompt identifiers is positively related to the interaction behaviors;
And the capturing module is used for responding to the capturing operation of the first virtual object and displaying a capturing result aiming at the first virtual object based on the probability corresponding to the capturing prompt identifier.
In some embodiments, the processing module comprises:
the acquisition unit is used for acquiring the basic probability of the capturing prompt identifier of any first virtual object;
the determining unit is used for determining the value-added probability of the capturing prompt identifier based on the interaction behavior of the audience objects in the live broadcasting room;
the first display unit is used for displaying the capturing prompt identifier of the first virtual object based on the basic probability and the value-added probability.
In some embodiments, the determining unit is configured to obtain the interaction behavior of the audience objects in the live room within the period from a historical moment to the current moment, where the historical moment is the last time the value-added probability was determined in the live room based on the interaction behavior of the audience objects; and, when the interaction behavior of the audience objects satisfies a first interaction condition, to determine the probability corresponding to the first interaction condition as the value-added probability of the capturing prompt identifier.
In some embodiments, the determining unit is configured to determine, when the number of interaction behaviors of the audience objects reaches a target number of interactions, the probability corresponding to the target number of interactions as the value-added probability of the capturing prompt identifier;
the determining unit is further configured to determine, when the virtual resources consumed by the interaction behavior of the audience objects reach a target condition, the probability corresponding to the target condition as the value-added probability of the capturing prompt identifier.
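For illustration only, the embodiments above can be sketched as a mapping from accumulated interaction behavior to a value-added probability. All thresholds, probability values, and names in this sketch are hypothetical assumptions, not taken from the patent; the patent only requires that behavior satisfying a condition yields the probability corresponding to that condition:

```python
def value_added_probability(interaction_count: int, resources_spent: int) -> float:
    """Map the audience's interaction behavior to a value-added probability.

    The tiers and probability values below are illustrative assumptions:
    one condition on the number of interactions, one on the virtual
    resources consumed, each contributing its corresponding probability.
    """
    prob = 0.0
    # Condition on the number of interactions (e.g., likes or comments).
    if interaction_count >= 100:
        prob += 0.10
    elif interaction_count >= 50:
        prob += 0.05
    # Condition on the virtual resources consumed (e.g., gifts sent).
    if resources_spent >= 500:
        prob += 0.15
    elif resources_spent >= 100:
        prob += 0.05
    return prob

print(value_added_probability(60, 120))  # 0.05 + 0.05
```

In this sketch the function would be re-evaluated over the window from the historical moment to the current moment, so the counters reset after each determination.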
In some embodiments, the display module is further configured to display a prompt message in the live room, the prompt message prompting the value-added probability obtained at the current moment.
In some embodiments, the display module includes:
the second display unit is used for displaying a task panel in the live broadcasting room, and at least one virtual task is displayed in the task panel;
the second display unit is also used for responding to the triggering operation of any virtual task and displaying the virtual scene of the virtual task in the live broadcasting room;
and the third display unit is used for displaying at least one first virtual object to be captured in the virtual scene.
In some embodiments, the virtual scene of the virtual task is generated based on a virtual reality technique;
the second display unit is used for responding to the triggering operation on the virtual task and, when a virtual reality device is connected, displaying a scene authorization prompt in the live room, wherein the scene authorization prompt is used for prompting the anchor object of the live room to share the virtual scene generated by the virtual reality device into the live room; and displaying the virtual scene of the virtual task in the live room when the scene authorization is confirmed.
In some embodiments, the display module is further configured to close the display function of the penalty element in the live broadcast room if the virtual task is not completed within the target duration and the interaction behavior of the audience object satisfies a second interaction condition.
In some embodiments, a second virtual object is also displayed in the live room, the second virtual object being synthesized based on the target prop selected by the audience object and the first virtual object;
the apparatus further comprises:
the acquisition module is used for acquiring a basic virtual reward and an additional virtual reward when the second virtual object is captured, wherein the basic virtual reward equals the reward obtained by capturing the first virtual object, and the additional virtual reward is positively related to the value of the target prop.
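The reward rule of this embodiment can be sketched as follows. The linear bonus model and all names are illustrative assumptions; the patent only specifies that the basic virtual reward equals the reward for capturing the first virtual object and that the additional virtual reward is positively related to the target prop's value:

```python
def capture_reward(base_reward: int, prop_value: int, bonus_rate: float = 0.1) -> int:
    """Total reward for capturing a second (synthesized) virtual object.

    base_reward: the same reward as capturing the plain first virtual object.
    The additional reward grows with the value of the target prop; the
    linear bonus_rate is an illustrative assumption, not from the patent.
    """
    additional_reward = int(prop_value * bonus_rate)
    return base_reward + additional_reward

print(capture_reward(100, 250))  # base 100 + additional 25
```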
In some embodiments, the display module is further configured to display a third virtual object in the live room when the interaction behavior of the audience objects in the live room satisfies a third interaction condition, the third virtual object being used to assist the anchor object in capturing the first virtual object in the live room; and to display that the first virtual object is captured based on the third virtual object when the aiming point during capture of the first virtual object is located inside the capturing prompt identifier.
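As a hedged sketch of this assisted-capture embodiment: when the assisting (third) virtual object is present and the aiming point lies inside the capturing prompt identifier, the capture is shown as successful; otherwise the ordinary probability roll applies. Function and parameter names are illustrative, not from the patent:

```python
def resolve_assisted_capture(aim_in_marker: bool, assisted: bool,
                             probability: float, roll: float) -> bool:
    """Capture resolution when a third (assisting) virtual object may be present.

    aim_in_marker: whether the aiming point is inside the capturing
    prompt identifier; roll: a uniform random draw in [0, 1).
    """
    if not aim_in_marker:
        return False           # outside the identifier, capture always fails
    if assisted:
        return True            # assistance guarantees the capture
    return roll < probability  # otherwise the normal probability roll applies

print(resolve_assisted_capture(True, True, 0.1, 0.9))  # True despite low probability
```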
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory is configured to store at least one segment of a computer program, where the at least one segment of the computer program is loaded and executed by the processor to implement a method for interaction of virtual objects in an embodiment of the present application.
In another aspect, a computer readable storage medium is provided, where at least one segment of a computer program is stored, where the at least one segment of the computer program is loaded and executed by a processor to implement a method for interaction of virtual objects as in an embodiment of the present application.
In another aspect, a computer program product is provided, comprising a computer program stored in a computer readable storage medium, the computer program being read from the computer readable storage medium by a processor of a computer device, the computer program being executed by the processor to cause the computer device to perform the method of interaction of virtual objects provided in the various alternative implementations of the aspects or aspects described above.
The embodiment of the application provides a virtual object interaction method in which the capturing prompt identifier of a first virtual object is displayed according to the interaction behavior of the audience objects in a live room. Because the interaction behavior of the audience objects in the live room is associated with the difficulty of capturing the first virtual object, when the aiming point is located inside the capturing prompt identifier during capture, the success rate of capturing the first virtual object is related to that interaction behavior; that is, the interaction behavior of the audience objects directly influences the capture result. This increases the randomness and entertainment of capturing the first virtual object, enriches the gameplay of the virtual task, and can improve the user's experience of participating in the task. It also promotes interaction between the audience and the live room and between the audience and the anchor, attracting more users to enter the live room and participate in the interaction, thereby greatly improving the live room's activity and program effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and a person skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of an implementation environment of a virtual object interaction method according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for interaction of virtual objects according to an embodiment of the present application;
FIG. 3 is a flowchart of another method for interaction of virtual objects according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a task panel provided in accordance with an embodiment of the present application;
FIG. 5 is a schematic diagram of a scene authorization hint provided in accordance with an embodiment of the present application;
fig. 6 is a schematic diagram of a live room provided according to an embodiment of the present application;
FIG. 7 is a schematic illustration of a virtual capture prop provided in accordance with an embodiment of the present application;
FIG. 8 is a schematic diagram of a capture cue marker provided according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a value-added probability provided in accordance with an embodiment of the present application;
FIG. 10 is a schematic diagram of capturing special effects provided in accordance with an embodiment of the present application;
FIG. 11 is a schematic diagram of a prompt for task completion provided according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a task failure notification message provided according to an embodiment of the present application;
FIG. 13 is an interaction flow chart of a virtual object interaction method according to an embodiment of the present application;
FIG. 14 is a block diagram of an interactive device for virtual objects according to an embodiment of the present application;
FIG. 15 is a block diagram of another virtual object interaction apparatus provided in accordance with an embodiment of the present application;
fig. 16 is a block diagram of a terminal according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used to distinguish between identical or similar items whose functions and effects are substantially the same. It should be understood that there is no logical or chronological dependency among "first," "second," and "nth," and no limitation on their number or order of execution.
The term "at least one" in the present application means one or more, and the meaning of "a plurality of" means two or more.
It should be noted that the information (including but not limited to user equipment information and user personal information), data (including but not limited to data for analysis, stored data, and presented data), and signals involved in this application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the interaction behavior of audience objects involved in this application is obtained with full authorization.
In order to facilitate understanding, terms related to the present application are explained below.
Artificial intelligence (Artificial Intelligence, AI): a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, enabling the machines to perceive, reason, and make decisions.
Artificial intelligence technology is a comprehensive discipline that involves a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
With the research and progress of artificial intelligence technology, it has been studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and virtual reality. It is believed that, with the development of technology, artificial intelligence will be applied in more fields and become increasingly important. The present application provides a virtual object interaction method that involves the virtual reality technology of artificial intelligence.
Virtual scene: refers to the virtual scene that an application displays (or provides) while running on a terminal. The virtual scene may be a simulation environment of the real world, a semi-simulated and semi-fictional virtual scene, or a purely fictional virtual scene. The virtual scene may be a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene. For example, a simulated scene may include sky, land, and sea; the land may include environmental elements such as deserts and cities; and a user may control a virtual object to move in the virtual scene.
Virtual object: refers to movable objects in a virtual world. The movable object may be at least one of a virtual character, a virtual animal, and a cartoon character. In some embodiments, when the virtual world is a three-dimensional virtual world, the virtual objects are three-dimensional stereoscopic models, each having its own shape and volume in the three-dimensional virtual world, occupying a portion of space in the three-dimensional virtual world. In some embodiments, the virtual object is a three-dimensional character built based on three-dimensional human skeletal technology, which implements different external figures by wearing different skins. In some embodiments, the virtual object can be implemented using a 2.5-dimensional or 2-dimensional model, which embodiments of the application are not limited in this regard.
The virtual object interaction method provided by the embodiment of the application can be executed by a computer device. In some embodiments, the computer device is a terminal or a server. In the following, taking the case where the computer device is a terminal as an example, the implementation environment of the virtual object interaction method provided by the embodiment of the application is introduced. Fig. 1 is a schematic diagram of an implementation environment of the virtual object interaction method provided by the embodiment of the application. Referring to fig. 1, the implementation environment includes a terminal 101 and a server 102. The terminal 101 and the server 102 can be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the application.
In some embodiments, the virtual object interaction method provided by the embodiment of the application can be applied to the field of XR (Extended Reality). The terminal 101 is, but is not limited to, a smartphone, tablet computer, notebook computer, desktop computer, VR device, AR (Augmented Reality) device, MR (Mixed Reality) device, intelligent voice interaction device, smart home appliance, or vehicle-mounted terminal. The terminal 101 installs and runs an application supporting a virtual space. The virtual space may be a live room or a virtual scene generated based on virtual reality technology, which is not limited by the embodiment of the application. The application may be a game application or a live-streaming application, among others. Illustratively, the terminal 101 is the terminal used by a user. The terminal 101 can display, in a live room, a virtual scene generated based on virtual reality technology, and the user uses the terminal 101 to interact with the virtual objects in the virtual scene. The interaction may be capturing a virtual object or following the movement of a virtual object, and the embodiment of the application is not limited in this regard.
Those skilled in the art will recognize that the number of terminals may be greater or lesser. For example, the number of the terminals may be only one, or the number of the terminals may be tens or hundreds, or more, and the number and the device type of the terminal are not limited in the embodiment of the present application.
In some embodiments, the server 102 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Network), big data, and artificial intelligence platforms. The server 102 is used to provide background services for applications that support the virtual space. In some embodiments, the server 102 takes on the primary computing work and the terminal 101 the secondary computing work; alternatively, the server 102 takes on the secondary computing work and the terminal 101 the primary computing work; alternatively, the server 102 and the terminal 101 perform collaborative computing using a distributed computing architecture.
Fig. 2 is a flowchart of an interaction method of a virtual object according to an embodiment of the present application, referring to fig. 2, in an embodiment of the present application, a terminal implementation is used as an example. The interaction method of the virtual object comprises the following steps:
201. the terminal displays at least one first virtual object to be captured of the virtual task in the live broadcast room.
In the embodiment of the application, the terminal can display the virtual scene of the virtual task in the live room. The virtual scene may be a purely virtual scene, a simulation environment generated based on virtual reality technology, or a semi-simulated, semi-virtual scene generated based on mixed reality technology, which is not limited in the embodiment of the application. The virtual scene comprises at least one first virtual object to be captured, and the virtual task is to capture the first virtual objects in the virtual scene. Both the anchor object and the audience objects of the live room can see the at least one first virtual object to be captured. The anchor object of the live room can aim at and capture an observed first virtual object, and the audience objects of the live room can watch the process of the anchor object capturing the first virtual object.
202. For any first virtual object, the terminal displays capturing prompt identifiers of the first virtual object based on interaction behaviors of audience objects in the live broadcasting room, and the probability corresponding to the capturing prompt identifiers is used for indicating the success rate of capturing the first virtual object when the aiming point is located in the capturing prompt identifiers, and the probability corresponding to the capturing prompt identifiers is positively related to the interaction behaviors.
In the embodiment of the application, for any first virtual object, the terminal displays the capturing prompt identifier of the first virtual object while the anchor object in the live room is capturing that object. The position of the capturing prompt identifier moves with the position of the first virtual object, and the identifier provides an aiming aid for capturing the first virtual object. That is, when the aiming point corresponding to the capturing operation is located outside the capturing prompt identifier, the first virtual object is not captured. When the aiming point is located inside the capturing prompt identifier, the first virtual object may be successfully captured; whether it is depends on the probability corresponding to the capturing prompt identifier. The capturing prompt identifier of each first virtual object has a corresponding probability, namely the success rate of capturing the first virtual object when the aiming point is inside the identifier during capture. In other words, when the anchor object's aiming point is inside the capturing prompt identifier, the greater the probability corresponding to the identifier, the easier it is for the anchor object to capture the first virtual object. The probability corresponding to the capturing prompt identifier is positively correlated with the interaction behavior of the audience objects in the live room.
That is, the interaction behavior of the audience objects in the live room can raise the probability corresponding to the capturing prompt identifier, thereby reducing the difficulty for the anchor object to capture the first virtual object.
203. In response to a capturing operation on the first virtual object, the terminal displays a capturing result for the first virtual object based on the probability corresponding to the capturing prompt identifier.
In the embodiment of the application, the capturing operation refers to an operation of capturing a first virtual object in the virtual scene. The capturing operation may be triggered by a capturing control on the terminal, by a virtual reality device connected to the terminal, or by twisting the terminal. When the anchor object captures the first virtual object and the aiming point during capture is located within the capturing prompt identifier of the first virtual object, the terminal acquires the probability corresponding to the capturing prompt identifier of the first virtual object, and then displays a capturing result for the first virtual object in the live broadcast room according to that probability. The higher the probability corresponding to the capturing prompt identifier, the greater the success rate of capturing the first virtual object.
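The gating-plus-probability behavior described in steps 202 and 203 can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the circular cue shape, the coordinate representation, and the function names are assumptions.

```python
import math
import random

def aim_inside(aim, center, radius):
    """Return True when the aiming point lies within a circular capturing prompt identifier."""
    return math.dist(aim, center) <= radius

def attempt_capture(aim, center, radius, probability, rng=random.random):
    """Sketch of steps 202-203: always a miss outside the cue; inside the cue,
    a Bernoulli trial with the cue's corresponding probability."""
    if not aim_inside(aim, center, radius):
        return False              # aiming point outside the identifier: never captured
    return rng() < probability    # inside the identifier: success rate = probability
```

Injecting a deterministic `rng` makes the probabilistic branch testable; in a real deployment the draw would come from `random.random` or a server-side source.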
The embodiment of the application provides an interaction method for virtual objects, in which a capturing prompt identifier of a first virtual object is displayed according to the interactive behavior of audience objects in a live broadcast room. Because the interactive behavior of audience accounts in the live broadcast room is associated with the difficulty of capturing the first virtual object, the success rate of capture when the aiming point is located within the capturing prompt identifier is related to the interactive behavior of the audience objects. In other words, the interactive behavior of the audience objects directly influences the result of capturing the first virtual object. This increases the randomness and entertainment of capturing the first virtual object, enriches the gameplay of virtual tasks, and improves the experience of users participating in the task. It also promotes interaction between the audience objects and the live broadcast room and between the audience objects and the anchor object, attracting more users to enter the live broadcast room to participate in the interaction, thereby greatly improving the liveness and program effect of the live broadcast room.
Fig. 3 is a flowchart of another interaction method for virtual objects according to an embodiment of the present application. Referring to fig. 3, this embodiment is described with the terminal as the executing entity. The interaction method of the virtual object comprises the following steps:
301. The terminal displays at least one first virtual object to be captured of the virtual task in the live broadcast room.
In an embodiment of the application, the live broadcast room includes an anchor object and audience objects. The anchor object can participate in virtual tasks in the live broadcast room. The virtual scene of a virtual task includes at least one first virtual object to be captured. The anchor object may share the virtual scene of the virtual task to the live broadcast room, so that the terminal displays the virtual scene, with the at least one first virtual object to be captured, in the live broadcast room. While participating in the virtual task, the anchor object can control the terminal to capture the first virtual object. The live broadcast room and the virtual task may be independent of each other, that is, entered through different applications; they may also be associated, for example with the anchor object participating in the virtual task through the live broadcast room, which is not limited in the embodiments of the present application.
In some embodiments, the live broadcast room and the virtual task are associated. Accordingly, the process of the terminal displaying at least one first virtual object to be captured of the virtual task in the live broadcast room includes: the terminal displays a task panel in the live broadcast room, in which at least one virtual task is displayed; then, for any virtual task, in response to a trigger operation on the virtual task, the terminal displays the virtual scene of the virtual task in the live broadcast room; then, the terminal displays at least one first virtual object to be captured in the virtual scene. According to the scheme provided by the embodiment of the application, associating the live broadcast room with the virtual task allows the anchor object to enter the virtual task directly from the live broadcast room; the operation is simple and the human-computer interaction efficiency is improved. Moreover, because the anchor object enters the virtual task through the live broadcast room, the anchor object is guided to attend to both the live broadcast and the virtual task, avoiding the situation where the anchor object becomes absorbed in the virtual task and neglects interaction with audience accounts in the live broadcast room, thereby further improving the liveness of the live broadcast room.
For example, fig. 4 is a schematic diagram of a task panel according to an embodiment of the present application. Referring to fig. 4, fig. 4 (a) exemplarily shows a function panel of a live broadcast room. A task control is displayed in the function panel and is used to provide virtual tasks to the anchor object. A chat control, a challenge control, and the like can also be displayed in the function panel, which is not limited in the embodiment of the application. The chat control provides the anchor object with a function of joining a line with other anchor objects; the challenge control provides the anchor object with a function of competing against other anchor objects. In response to a trigger operation on the task control, the terminal displays the task panel in the live broadcast room. Referring to fig. 4 (b), a task option, an in-progress option, and a virtual store option are displayed in the task panel. In response to a selection operation on the task option, the terminal displays a plurality of virtual tasks in the task panel. The plurality of virtual tasks may include uncompleted virtual tasks, in-progress virtual tasks, completed virtual tasks, and the like, which is not limited in the embodiments of the present application. Any virtual task includes a task goal, a time-limited duration, and a virtual reward. Taking virtual task 1 as an example, its task objective is to capture 200 first virtual objects; its time-limited duration is 10 minutes; the virtual reward available after completing the task is 2000 points. Points earned by the anchor object can be exchanged in the virtual store in the task panel, for example into props, gifts, or virtual currency of the live broadcast room, which is not limited in the embodiments of the present application.
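The per-task fields shown in the panel (task goal, time-limited duration, virtual reward) could be modeled as a simple record. All field names below are hypothetical, chosen only to mirror the virtual task 1 example; they are not from the application.

```python
from dataclasses import dataclass

@dataclass
class VirtualTask:
    goal_captures: int   # task objective, e.g. capture 200 first virtual objects
    time_limit_s: int    # time-limited duration in seconds
    reward_points: int   # virtual reward granted on completion
    captured: int = 0    # progress toward the goal

    def completed(self) -> bool:
        """The task goal is met once enough first virtual objects are captured."""
        return self.captured >= self.goal_captures

# Virtual task 1 from the fig. 4 example: 200 captures, 10 minutes, 2000 points.
task1 = VirtualTask(goal_captures=200, time_limit_s=600, reward_points=2000)
```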
The virtual scene of the virtual task can be a simulation environment generated based on a virtual reality technology, or can be a virtual scene generated based on an augmented reality technology; the virtual scene may also be a semi-simulated semi-fictional virtual scene generated based on a mixed reality technology, and the embodiment of the application is not limited to this. The virtual scene of the virtual task may be generated by the terminal itself based on any of the above technologies, or may be generated by another device corresponding to any of the above technologies, which is not limited in the embodiment of the present application.
Optionally, the virtual scene of the virtual task is generated based on a virtual reality technology. Accordingly, the process of the terminal displaying the virtual scene of the virtual task in the live broadcast room includes the following steps: in response to a trigger operation on the virtual task, and when a virtual reality device is connected, the terminal displays a scene authorization prompt in the live broadcast room; then, once the scene authorization is confirmed, the terminal displays the virtual scene of the virtual task in the live broadcast room. The scene authorization prompt is used to prompt the anchor object of the live broadcast room to share the virtual scene generated by the virtual reality device to the live broadcast room. The virtual reality device may be VR glasses, which is not limited in the embodiment of the application. After the anchor object selects the virtual task, the anchor object may connect the virtual reality device to the terminal. The terminal then displays the scene authorization prompt once it is connected to the virtual reality device. When the scene authorization is confirmed, the terminal switches the scene captured by its camera in the live broadcast room to the virtual scene generated by the virtual reality device, so that the audience objects in the live broadcast room can see the virtual scene seen by the anchor object.
According to the scheme provided by the embodiment of the application, the virtual scene is generated by virtual reality technology and shared into the live broadcast room, so that both the audience objects and the anchor object can feel immersed in the scene. This improves the experience of the anchor object in the virtual task as well as the viewing experience of the audience objects, facilitates interaction within the live broadcast room, and improves its liveness. Moreover, by displaying the scene authorization prompt, the anchor object can decide whether to authorize sharing the virtual scene to the live broadcast room, which protects the security of scene information and respects the intention of the anchor object.
For example, fig. 5 is a schematic diagram of a scene authorization prompt provided according to an embodiment of the present application. Referring to fig. 5, prompt information is displayed in the scene authorization prompt and is used to ask the anchor object whether to share the virtual scene to the live broadcast room. The scene authorization prompt also displays an authorization control and a cancellation control. In response to a trigger operation on the authorization control, the terminal displays the virtual scene of the virtual task in the live broadcast room. In response to a trigger operation on the cancellation control, the terminal closes the scene authorization prompt. The scene authorization prompt may be displayed at any position, which is not limited by the embodiment of the present application.
When the scene authorization is confirmed, the terminal displays at least one first virtual object to be captured of the virtual task in the live broadcast room. For example, fig. 6 is a schematic diagram of a live broadcast room provided according to an embodiment of the present application. Referring to fig. 6, the terminal displays the virtual scene of the virtual task in the live broadcast room. A first virtual object 601 is displayed in the virtual scene. The terminal also displays a progress bar 602 and the remaining duration of the virtual task in the live broadcast room. As can be seen from fig. 6, the task goal of the virtual task is to capture 200 first virtual objects, of which 120 have been captured by the current moment, and the remaining duration of the virtual task is 1 minute and 33 seconds. A radar 603 for the virtual task is also displayed in the live broadcast room. The radar 603 provides the anchor object with the position of each first virtual object, e.g. as a black spot in the radar 603. The anchor object can move to the location of a first virtual object according to the position indicated by the radar 603.
When the terminal displays the first virtual object in the live broadcast room, that is, when the anchor object finds the first virtual object, the terminal may display at least one virtual capturing prop to be selected in the live broadcast room. Optionally, the terminal may display the at least one virtual capturing prop when a prop selection control displayed in the live broadcast room is triggered; or the terminal may display the at least one virtual capturing prop in response to the virtual reality device being triggered, which is not limited in the embodiment of the present application.
For example, fig. 7 is a schematic diagram of a virtual capturing prop according to an embodiment of the present application. Referring to fig. 7, the terminal displays a virtual capturing prop 701 in the live broadcast room. The virtual capturing prop 701 may be displayed on a layer above the virtual object; the display position of the virtual capturing prop is not limited in the embodiment of the present application. A prop switch control 702 is also displayed in the live broadcast room. The anchor object may trigger the prop switch control 702 on the terminal to switch the currently displayed virtual capturing prop. Alternatively, because of the connection between the virtual reality device and the terminal, the anchor object can trigger the virtual reality device to switch the currently displayed virtual capturing prop. For example, the anchor object may click a VR handle to make the terminal display at least one virtual capturing prop in the live broadcast room; when the VR handle is clicked again, the terminal switches to display a different virtual capturing prop. Once a virtual capturing prop has been selected, the anchor object is able to capture the first virtual object. Accordingly, the terminal continues to perform steps 302 to 305.
302. For any first virtual object, the terminal acquires the basic probability of the capturing prompt identifier of the first virtual object. The basic probability represents the basic success rate of capturing the first virtual object when the aiming point is located within the capturing prompt identifier.
In the embodiment of the application, any first virtual object in the virtual task has a corresponding capturing prompt identifier, whose position moves along with the position of the first virtual object. During the capturing process, the terminal can display the capturing prompt identifier of the first virtual object. The first virtual object can only be captured when the aiming point of the virtual capturing prop, or of the capturing operation, is within the area indicated by the capturing prompt identifier; the capturing result then depends on the probability corresponding to the capturing prompt identifier. For any first virtual object, its capturing prompt identifier corresponds to a basic probability: the lowest success rate of capturing the first virtual object when the aiming point is located within the area indicated by the capturing prompt identifier. The basic probability can be preset by a developer of the virtual task, and the embodiment of the application does not limit its size.
Any first virtual object may have a plurality of capturing prompt identifiers; the number of capturing prompt identifiers is not limited in the embodiment of the present application. The capturing prompt identifiers of a first virtual object may share the same position with different sizes, have different positions with the same size, or have different positions and different sizes, which is not limited in the embodiment of the application. The basic probabilities of the plurality of capturing prompt identifiers can be the same or different, which is likewise not limited. The capturing prompt identifier can be a target ring, a prompt box, or the like; the embodiment of the application does not limit its style.
For example, fig. 8 is a schematic diagram of a capturing prompt identifier according to an embodiment of the present application. Referring to fig. 8, the capturing prompt identifier is a target ring. The first virtual object corresponds to three target rings: a first target ring 801, a second target ring 802, and a third target ring 803. The target rings have the same center and different sizes. The base probabilities of the target rings may be inversely related to the size of the target ring, which is not limited in the embodiments of the application. For example, the base probability of the first target ring 801 is 25%. That is, the minimum success rate for capturing the first virtual object is 25% when the aiming point of virtual capturing prop 804 is located between the first target ring 801 and the second target ring 802. The base probability of the second target ring 802 is 50%. That is, the lowest success rate for capturing the first virtual object is 50% when the aiming point of the virtual capturing prop 804 is located between the second target ring 802 and the third target ring 803. The base probability of the third target ring 803 is 75%. That is, the lowest success rate of capturing the first virtual object is 75% when the aiming point of the virtual capturing prop 804 is located within the area surrounded by the third target ring 803.
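The fig. 8 layout, concentric rings whose base probabilities are inversely related to ring size, amounts to a smallest-enclosing-ring lookup. A sketch under the assumption of circular rings; the radii themselves are illustrative, only the probabilities come from the example:

```python
import math

# (radius, base probability), innermost ring first; the radii are assumptions.
TARGET_RINGS = [
    (1.0, 0.75),  # third target ring 803 (smallest, highest probability)
    (2.0, 0.50),  # second target ring 802
    (3.0, 0.25),  # first target ring 801 (largest, lowest probability)
]

def base_probability(aim, center):
    """Base success rate for the smallest ring enclosing the aiming point,
    or 0.0 when the aim falls outside every ring (capture impossible)."""
    d = math.dist(aim, center)
    for radius, prob in TARGET_RINGS:
        if d <= radius:
            return prob
    return 0.0
```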
303. The terminal determines the increment probability of the capturing prompt identifier based on the interactive behavior of the audience objects in the live broadcast room. The increment probability indicates the additional success rate of capturing the first virtual object when the aiming point is located within the capturing prompt identifier, and is related to the interactive behavior.
In the embodiment of the application, the terminal can count the interactive behavior of the audience objects in the live broadcast room and then determine the increment probability of the first virtual object based on that behavior. Alternatively, the server counts the interactive behavior of the audience objects in the live broadcast room, determines the increment probability of the first virtual object, and sends it to the terminal, which is not limited in the embodiment of the application. The increment probability of the capturing prompt identifier is the success rate added, on top of the basic probability, to capturing the first virtual object when the aiming point is located within the area indicated by the capturing prompt identifier.
The increment probability of the capturing prompt identifier can be a positive value or a negative value, which is not limited in the embodiment of the application. That is, the interactive behavior of the audience objects in the live broadcast room can either raise or lower the success rate of capturing the first virtual object. Optionally, positive interactive behavior raises the success rate of capturing the first virtual object, and negative interactive behavior lowers it. Positive interactive behavior may be giving a gift to the anchor, posting a positive comment, or liking in the live broadcast room, etc.; negative interactive behavior may be posting negative comments in the live broadcast room or purchasing a prank item, etc. Neither is limited in the embodiments of the present application.
In the embodiment of the application, the terminal can determine the increment probability of the capturing prompt identifier of the first virtual object according to the interactive behavior of the audience objects in the live broadcast room from the current moment; or according to the interactive behavior of the audience objects during the anchor object's participation in the virtual task; or according to the interactive behavior of the audience objects from the moment the increment probability was last determined based on interactive behavior, up to the current moment.
In some embodiments, the terminal determines the increment probability of the capturing prompt identifier of the first virtual object according to the interactive behavior of the audience objects in the live broadcast room from the moment the increment probability was last determined up to the current moment. In other words, each interactive behavior can be redeemed for an increment probability only once; after redemption it no longer participates in the calculation of the increment probability. Correspondingly, the process by which the terminal determines the increment probability based on the interactive behavior of the audience objects in the live broadcast room is as follows: the terminal acquires the interactive behavior of the audience objects in the live broadcast room from the historical moment to the current moment; then, when the interactive behavior of the audience objects meets a first interaction condition, the terminal determines the probability corresponding to the first interaction condition as the increment probability of the capturing prompt identifier. The historical moment is the moment the increment probability was last determined based on the interactive behavior of the audience objects in the live broadcast room.
According to the scheme provided by the embodiment of the application, the increment probability of the capturing prompt identifier of the first virtual object is determined from the interactive behavior that occurred between the last determination and the current moment, so each interactive behavior can be converted into an increment probability only once and cannot be reused. Consequently, to keep raising the success rate of capturing the first virtual object, new interactive behavior must be generated between the audience objects and the live broadcast room and between the audience objects and the anchor object, which improves the liveness and program effect of the live broadcast room.
The first interaction condition may be that the number of interactive behaviors reaches a target interaction number, or that the virtual resources consumed by the interactive behaviors reach a target condition, which is not limited in the embodiment of the present application. Correspondingly, the terminal determines the probability corresponding to the first interaction condition as the increment probability of the capturing prompt identifier as follows: when the number of interactive behaviors of the audience objects reaches the target interaction number, the terminal determines the probability corresponding to the target interaction number as the increment probability of the capturing prompt identifier; or, when the virtual resources consumed by the interactive behaviors of the audience objects reach the target condition, the terminal determines the probability corresponding to the target condition as the increment probability of the capturing prompt identifier. According to the scheme provided by the embodiment of the application, the interactive behavior of the audience objects is converted into the increment probability only when it meets the first interaction condition; that is, the success rate of capturing the first virtual object can be improved only when the interactive behavior meets the condition, which promotes interaction between the audience objects and the live broadcast room and between the audience objects and the anchor object, and improves the liveness and program effect of the live broadcast room. Compared with calculating an increment probability for every interactive behavior, this also reduces computational overhead.
In addition to determining the increment probability when the interactive behavior satisfies the condition as described above, the terminal may also count the interactive behavior of the audience objects at the moment the first virtual object appears and determine the corresponding increment probability, which is not limited in the embodiment of the present application.
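The redeem-once variant described above, where interactions accumulate from the historical moment and are consumed when the first interaction condition is met, can be sketched as follows. The target interaction number of 10 and the +5% increment are illustrative assumptions, not values from the application.

```python
class IncrementTracker:
    """Interactions accumulate between redemptions; once converted into an
    increment probability they no longer participate in the calculation."""
    TARGET_COUNT = 10   # hypothetical first interaction condition
    INCREMENT = 0.05    # hypothetical increment probability per redemption

    def __init__(self):
        self.pending = 0  # interactions since the last redemption (historical moment)

    def record_interaction(self):
        """Count one interactive behavior of an audience object."""
        self.pending += 1

    def redeem(self):
        """Return the increment probability if the first interaction condition
        is met, else 0.0; redeemed interactions are cleared and never reused."""
        if self.pending >= self.TARGET_COUNT:
            self.pending = 0
            return self.INCREMENT
        return 0.0
```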
In some embodiments, the terminal can also display the increment probability in the live broadcast room. Correspondingly, the terminal displays prompt information in the live broadcast room, which is used to prompt the increment probability acquired at the current moment. The embodiment of the application does not limit the display position of the increment probability.
For example, fig. 9 is a schematic diagram of an increment probability provided according to an embodiment of the present application. Referring to fig. 9, the terminal displays the prompt message "the probability of capturing the first virtual object increases by 5%" in the comment area of the live broadcast room; that is, the increment probability is 5%. The prompt message can be regarded as a comment that slides upward along with the other comments in the comment area until it disappears.
It should be noted that, in the embodiment of the present application, the timing for acquiring the basic probability and the timing for acquiring the increment probability may be the same or different. That is, the execution timing of step 302 and step 303 may be the same or different, which is not limited in the embodiment of the present application.
304. The terminal displays the capturing prompt identifier of the first virtual object based on the basic probability and the increment probability.
In the embodiment of the application, the terminal determines the target probability of the capturing prompt identifier according to its basic probability and increment probability, and then displays the capturing prompt identifier of the first virtual object according to the target probability. The target probability corresponding to the capturing prompt identifier indicates the final success rate of capturing the first virtual object when the aiming point is located within the capturing prompt identifier. The terminal can directly sum the basic probability and the increment probability to obtain the target probability; alternatively, the terminal may compute a weighted sum of the basic probability and the increment probability, which is not limited in the embodiment of the present application. The embodiment of the application does not limit the number of increment probabilities used in calculating the target probability; that is, the probability of capturing the first virtual object can be increased multiple times before the capture.
For example, with continued reference to fig. 8, the capturing prompt identifier is a target ring. If the probability of capturing the first virtual object is increased by 5% before the capture, then at capture time the target probability of the first target ring 801 of the first virtual object is 30%, the target probability of the second target ring 802 is 55%, and the target probability of the third target ring 803 is 80%.
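The direct-sum variant of step 304 can be sketched in whole percentage points, which reproduces the fig. 8 numbers exactly. The clamp to [0, 100] is an added safeguard and is not stated in the application.

```python
def target_probability(base_pct, increment_pcts):
    """Direct-sum variant of step 304: base probability plus every increment
    earned before the capture, clamped to a valid percentage."""
    return min(100, max(0, base_pct + sum(increment_pcts)))

# Fig. 8 with one +5% increment before the capture:
# first ring 25% -> 30%, second ring 50% -> 55%, third ring 75% -> 80%.
```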
The terminal can display the capturing prompt identifier of the first virtual object while displaying the first virtual object; or when an aiming operation on the first virtual object is triggered; or when a capturing operation on the first virtual object is triggered.
For example, the terminal first displays the first virtual object in the live broadcast room, whereupon the anchor object discovers it. Then, in response to the anchor object long-pressing a VR handle button, the virtual task enters a virtual-object capturing mode, and the terminal displays the capturing prompt identifier of the first virtual object.
305. In response to a capturing operation on the first virtual object, the terminal displays a capturing result for the first virtual object based on the probability corresponding to the capturing prompt identifier.
In the embodiment of the application, in response to a capturing operation on the first virtual object, when the aiming point of the capturing operation is located within the capturing prompt identifier, the terminal displays a capturing result for the first virtual object based on the probability corresponding to the capturing prompt identifier. The capturing result is either that the first virtual object is captured or that it is not. Before capturing the first virtual object, the anchor object may select a virtual capturing prop for the capture; correspondingly, when the aiming point of the virtual capturing prop is located within the capturing prompt identifier, the terminal displays the capturing result based on the probability corresponding to the capturing prompt identifier.
For example, with continued reference to fig. 8, the capture prompt identifier is a set of target rings. The aiming point of the virtual capture prop is located between the second target ring 802 and the third target ring 803. In response to the capturing operation on the first virtual object, the terminal displays the capturing result based on the probability of the third target ring 803, namely 30%. That is, with a 30% chance the terminal displays that the first virtual object is captured, and with a 70% chance it displays that the first virtual object is not captured.
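The ring-based capture resolution described above can be sketched as follows. This is an illustrative model, not the patent's implementation: only the 30% figure for the third target ring comes from the example, while the other ring probabilities and the `resolve_capture` helper are assumptions.

```python
import random

# Hypothetical per-ring success probabilities; only the 30% value for
# the third target ring is taken from the example in the text.
RING_PROBABILITIES = {1: 0.8, 2: 0.5, 3: 0.3}

def resolve_capture(ring_index, rng=random.random):
    """Resolve a capture attempt given the innermost target ring that
    contains the aiming point. A ring_index of 0 means the aiming
    point lies outside every ring, so the capture always fails."""
    probability = RING_PROBABILITIES.get(ring_index, 0.0)
    return rng() < probability
```

Passing a deterministic `rng` makes the outcome reproducible, which is how the unit could be tested; in production the default `random.random` yields the 30/70 split described above.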
When the first virtual object is successfully captured, the terminal displays the capturing special effect of the first virtual object; the embodiment of the application does not limit this special effect. The terminal may also obtain a virtual reward corresponding to the first virtual object.
For example, FIG. 10 is a schematic diagram of capturing special effects provided in accordance with an embodiment of the present application. Referring to fig. 10, in case of successfully capturing the first virtual object, the terminal displays a lightning special effect and displays that the first virtual object is absorbed by the virtual capture prop.
In some embodiments, a second virtual object is also displayed in the live broadcast room, and the anchor object can likewise perform a capture on it. The manner of capturing the second virtual object is the same as that of capturing the first virtual object, and the principle is not repeated here. Unlike the first virtual object, the second virtual object may be generated based on the interactive behavior of audience objects in the live broadcast room. Optionally, the second virtual object is synthesized from a target prop selected by an audience object and the first virtual object. Accordingly, when the second virtual object is captured, the terminal acquires a base virtual reward and an additional virtual reward, where the base virtual reward equals the reward obtained by capturing the first virtual object, and the additional virtual reward is positively correlated with the value of the target prop. According to the scheme provided by the embodiment of the application, the target prop is provided in the live broadcast room so that audience objects can synthesize the second virtual object from the target prop and the first virtual object; that is, audience objects can participate in the virtual task, which increases the randomness and entertainment of capturing virtual objects, enriches the gameplay of the virtual task, and improves the experience of users participating in the task. Moreover, since capturing the second virtual object yields a greater reward than capturing the first virtual object, the enthusiasm of the anchor object for participating in virtual tasks can be raised, interaction between audience objects and the live broadcast room and between audience objects and the anchor object can be promoted, and more users are attracted to enter the live broadcast room to participate in the interaction, greatly improving the liveness and program effect of the live broadcast room.
For example, the first virtual object is a monster, the target prop is a fairy hat, and the second virtual object is a fairy. While the anchor object participates in the virtual task, an audience object in the live broadcast room can purchase the fairy hat and drag it onto a monster displayed in the live broadcast room. The terminal then displays the monster turning into a fairy. When the anchor object captures the fairy, the anchor object acquires an additional virtual reward on top of the virtual reward corresponding to the monster; the higher the value of the fairy hat, the greater the additional virtual reward.
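The base-plus-additional reward rule can be sketched as below. The text only states that the additional reward is positively correlated with the prop's value, so the `bonus_rate` coefficient and the linear form are assumptions made for illustration.

```python
def capture_reward(base_reward, prop_value=0, bonus_rate=0.1):
    """Reward for a successful capture. A first virtual object yields
    only the base reward (prop_value 0); a second virtual object,
    synthesized with a target prop, yields the same base reward plus
    an additional reward that grows with the prop's value.
    bonus_rate is a hypothetical coefficient, not from the text."""
    additional = int(prop_value * bonus_rate)
    return base_reward + additional
```

Any monotonically increasing mapping from prop value to additional reward would satisfy the stated positive correlation; a linear rate is simply the most direct choice.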
In some embodiments, the terminal may also display a third virtual object in the live broadcast room based on the interactive behavior of audience objects. The third virtual object is used to assist the anchor object in the live broadcast room in capturing the first virtual object. Accordingly, when the interactive behavior of audience objects in the live broadcast room satisfies a third interaction condition, the terminal displays the third virtual object in the live broadcast room. Then, when the aiming point during capture of the first virtual object is located within the capture prompt identifier, the terminal displays that the first virtual object is captured with the assistance of the third virtual object. The third virtual object may follow the position of the anchor object; when the anchor object encounters the first virtual object, the third virtual object can assist in capturing it. According to the scheme provided by the embodiment of the application, displaying the third virtual object in the live broadcast room according to the interactive behavior of audience objects, so as to assist the anchor object in capturing the first virtual object, means that the interactive behavior of audience objects directly influences the capture result; this increases the randomness and entertainment of capturing the first virtual object, enriches the gameplay of the virtual task, and improves the experience of users participating in the virtual task. Moreover, to complete the task objective of the virtual task, the anchor object and audience objects keep interacting, which promotes interaction between audience objects and the live broadcast room and between audience objects and the anchor object, attracts more users to enter the live broadcast room to participate in the interaction, and greatly improves the liveness and program effect of the live broadcast room.
For example, the third virtual object is a witch. Audience objects can accumulate the capture fortune value of the current live broadcast room through interactive actions such as liking and gift giving. The higher the capture fortune value, the greater the probability that the anchor object encounters the witch. The witch may follow the position of the anchor object; when the anchor object captures the first virtual object, the witch can cast magic to assist the capture, raising the capture success rate to 100%. The embodiment of the application does not limit the number of first virtual objects whose capture is assisted by the third virtual object.
When the interactive behavior of audience objects in the live broadcast room satisfies the third interaction condition, the terminal may generate the third virtual object in the virtual scene displayed in the live broadcast room. Alternatively, the third virtual object exists in the virtual scene in advance, and when the condition is satisfied the terminal directly displays it in the live broadcast room; that is, when the interactive behavior of audience objects satisfies the condition, the probability that the anchor object encounters the third virtual object increases. The embodiment of the application does not limit the display manner of the third virtual object.
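One way to model the third interaction condition is an accumulating fortune value, as in the witch example above. The interaction weights, threshold, and `FortuneTracker` class below are invented for illustration; the patent does not specify how the fortune value is computed.

```python
class FortuneTracker:
    """Accumulates a capture fortune value from audience interactions
    and decides whether the assisting third virtual object appears.
    WEIGHTS and THRESHOLD are hypothetical placeholders for the
    third interaction condition."""
    WEIGHTS = {"like": 1, "gift": 10}
    THRESHOLD = 100

    def __init__(self):
        self.fortune = 0

    def record(self, kind, count=1):
        # Unknown interaction kinds contribute nothing.
        self.fortune += self.WEIGHTS.get(kind, 0) * count

    def third_object_visible(self):
        # The condition is met once the fortune value reaches the threshold.
        return self.fortune >= self.THRESHOLD
```

In the alternative design described above (the witch pre-exists in the scene), the same fortune value could instead scale the encounter probability rather than gate visibility outright.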
It should be noted that, when the aiming point of the virtual capture prop is located outside the capture prompt identifier, the terminal displays that the first virtual object is not captured.
In the embodiment of the application, whether or not the first virtual object is captured, the anchor object can control the terminal to actively exit the virtual task. Alternatively, the virtual task may be exited automatically when it is completed, in which case the terminal performs step 306; or exited automatically when it fails, in which case the terminal performs step 307. That is, steps 306 and 307 are optional steps.
306. If the virtual task is completed within the target duration, the terminal acquires the virtual reward of the virtual task, where the target duration represents the time limit for executing the virtual task.
In the embodiment of the application, each virtual task has its own task objective and target duration. The target duration is the time limit for executing the virtual task. The task objective is to capture a target number of first virtual objects within the target duration. If the anchor object successfully captures the target number of first virtual objects within the target duration, the terminal acquires the virtual reward of the virtual task. When the virtual task is completed, the terminal can switch the virtual scene provided by the virtual reality device in the live broadcast room to the scene shot by the terminal's camera, and may also display prompt information of task completion in the live broadcast room.
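The settlement rule at the end of steps 306/307 can be sketched as a single check; `settle_task` and its parameters are illustrative names, not from the patent.

```python
def settle_task(captured, target_count, elapsed_seconds, target_duration):
    """Return 'success' if the target number of first virtual objects
    was captured within the time limit, otherwise 'failure'.
    Success triggers the virtual reward (step 306); failure leads to
    the penalty flow (step 307)."""
    if captured >= target_count and elapsed_seconds <= target_duration:
        return "success"
    return "failure"
```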
For example, fig. 11 is a schematic diagram of prompt information for task completion according to an embodiment of the present application. Referring to fig. 11, when the virtual task is completed, the terminal may display the scene photographed by its own camera in the live broadcast room; that is, the terminal displays the anchor object. The terminal also displays prompt information of task completion in the live broadcast room. As can be seen from fig. 11, the prompt reads "Your virtual task is completed; the virtual reward has been issued to your backpack!". The virtual reward issued is 2000 points. The anchor object may click the "know" control to close the prompt information, or click the "view rewards" control to make the terminal display the anchor object's backpack, which contains the acquired virtual reward.
307. If the virtual task is not completed within the target duration, the terminal displays penalty elements in the live broadcast room. The penalty elements are used to punish the anchor object that failed to complete the virtual task, and the number of penalty elements is positively correlated with the difficulty of the virtual task.
In the embodiment of the application, the terminal can punish the anchor object when the anchor object does not complete the virtual task; accordingly, the terminal may display penalty elements in the live broadcast room. The number of penalty elements is positively correlated with the difficulty of the virtual task: the greater the number of first virtual objects to be captured as indicated by the task objective, the more penalty elements are displayed. The display duration of the penalty elements may also be positively correlated with the difficulty of the virtual task, which is not limited by the embodiments of the application. A penalty element may be a horror element, such as a chainsaw or a mortuary, intended to frighten the anchor object; the embodiment of the application is likewise not limited in this respect. The degree of horror of a horror element may be positively correlated with the difficulty of the virtual task. According to the scheme provided by the embodiment of the application, horror elements are displayed in the live broadcast room when the anchor object does not complete the virtual task, punishing the anchor object, enriching the gameplay of the virtual task, and improving the liveness and program effect of the live broadcast room.
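The two stated positive correlations (element count and display duration versus difficulty) can be sketched as below. The text does not give the exact mapping, so difficulty is approximated by the target capture count, and both coefficients are assumptions.

```python
def penalty_elements(target_count, per_object=1, seconds_per_object=10):
    """Number of penalty elements and their display duration, both
    positively correlated with task difficulty, which is approximated
    here by the number of first virtual objects the task requires.
    per_object and seconds_per_object are illustrative coefficients."""
    count = target_count * per_object
    duration = target_count * seconds_per_object
    return count, duration
```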
Before displaying the penalty elements, the terminal can display prompt information of task failure in the live broadcast room to remind the anchor object that the virtual task failed and that the penalty stage is about to begin. For example, fig. 12 is a schematic diagram of a task-failure prompt message according to an embodiment of the present application. Referring to fig. 12, when the virtual task is not completed, the terminal displays the prompt message of task failure in the live broadcast room. As can be seen from fig. 12, the prompt reads "Task failed! You will be penalized in 30 seconds. Please prepare.". The anchor object may click the "know" control to close the prompt information and wait for the penalty elements to be displayed, or click the "help" control to seek help from audience objects in the live broadcast room so as to avoid the penalty.
In some embodiments, the interactive behavior of audience objects may affect the execution of the above penalty. Accordingly, if the virtual task is not completed within the target duration but the interactive behavior of audience objects satisfies a second interaction condition, the terminal disables the display of the penalty elements in the live broadcast room. According to the scheme provided by the embodiment of the application, the penalty elements are not displayed when the interactive behavior of audience objects satisfies the condition; that is, audience objects can win a penalty exemption for the anchor object, letting the anchor object avoid one penalty. This increases the randomness and entertainment of penalty execution and enriches the gameplay of the virtual task. In addition, to avoid the penalty, audience objects in the live broadcast room must keep interacting, which promotes interaction between audience objects and the live broadcast room and between audience objects and the anchor object, attracts more users to enter the live broadcast room to participate in the interaction, and greatly improves the liveness and program effect of the live broadcast room.
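The exemption logic can be expressed as a single predicate. The second interaction condition is modeled here as a hypothetical interaction-count threshold, since the text does not define it concretely.

```python
def should_display_penalty(task_completed, exemption_interactions,
                           required_interactions=50):
    """The penalty elements are shown only when the task failed AND
    the audience interactions did not satisfy the second interaction
    condition (modeled as an assumed count threshold)."""
    if task_completed:
        return False  # no penalty at all on success
    return exemption_interactions < required_interactions
```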
In order to describe the interaction method for virtual objects provided by the embodiment of the application more clearly, the method is further described below with reference to the accompanying drawings. Fig. 13 is an interaction flow chart of an interaction method for virtual objects according to an embodiment of the present application. Referring to fig. 13, the anchor object can select any virtual task in the task panel displayed by the terminal and then connect the virtual reality device to the terminal. Once the virtual reality device is connected, the terminal displays a scene authorization prompt in the live broadcast room, and after the scene authorization is confirmed, the terminal sends the authorization to the server. The server can then send information about the virtual task to the terminal, including the task objective, the target duration and the model of the first virtual object. The terminal can then display the virtual scene of the virtual task in the live broadcast room so that the anchor object can locate the first virtual object. When a capture of the first virtual object is initiated, the server may count the interactive behavior of audience objects in the live broadcast room and determine the value-added probability of the first virtual object based on that behavior. The server sends the value-added probability to the terminal, which can display prompt information about the value-added probability in the live broadcast room and display the capturing result for the first virtual object based on that probability.
If the anchor object completes the task objective of the virtual task within the target duration, the server can return prompt information of task success, the virtual reward and the identifier of the anchor object to the terminal, so that the terminal issues the virtual reward to the anchor object's backpack. If the anchor object does not complete the task objective within the target duration, the server can return prompt information of task failure, the penalty elements and the identifier of the anchor object to the terminal, and the terminal then displays the task-failure prompt information, which includes a penalty countdown, in the live broadcast room. The anchor object receives the penalty after the countdown ends, or may seek help from audience objects in the live broadcast room before it ends so as to avoid the penalty.
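The server-side step of combining the base probability with the interaction-derived value-added probabilities can be sketched as follows; `effective_probability` is a hypothetical helper, and clamping at 1.0 is an assumption to keep the result a valid probability.

```python
def effective_probability(base, increments, cap=1.0):
    """Combine the base probability of the capture prompt identifier
    with value-added probabilities earned through audience
    interactions, clamped so the result never exceeds certainty."""
    return min(base + sum(increments), cap)
```

The terminal would then resolve the capture against this effective probability and display the prompt information for each newly earned increment.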
The embodiment of the application provides an interaction method for virtual objects in which the capture prompt identifier of a first virtual object is displayed according to the interactive behavior of audience objects in the live broadcast room. Because the interactive behavior of audience accounts in the live broadcast room is associated with the capture difficulty of the first virtual object, when the aiming point is located within the capture prompt identifier during capture, the success rate of capturing the first virtual object depends on the interactive behavior of audience objects; that is, this behavior directly influences the capture result, which increases the randomness and entertainment of capturing the first virtual object, enriches the gameplay of the virtual task, and improves the experience of users participating in the task. It also promotes interaction between the audience and the live broadcast room and between the audience and the anchor, attracting more users to enter the live broadcast room to participate in the interaction, thereby greatly improving the liveness and program effect of the live broadcast room.
Fig. 14 is a block diagram of an interaction device for virtual objects according to an embodiment of the present application. The device is configured to execute the steps of the above interaction method for virtual objects. Referring to fig. 14, the device includes: a display module 1401, a processing module 1402 and a capturing module 1403.
a display module 1401, configured to display at least one to-be-captured first virtual object of a virtual task in the live broadcast room;
the processing module 1402 is configured to display, for any first virtual object, capturing prompt identifiers of the first virtual object based on interaction behaviors of audience objects in the live broadcasting room, where a probability corresponding to the capturing prompt identifiers is used to represent a success rate of capturing the first virtual object when the aiming point is located in the capturing prompt identifiers, and the probability corresponding to the capturing prompt identifiers is positively related to the interaction behaviors;
the capturing module 1403 is configured to display a capturing result for the first virtual object based on a probability corresponding to the capturing hint identifier in response to a capturing operation for the first virtual object.
The embodiment of the application provides an interaction device for virtual objects that displays the capture prompt identifier of a first virtual object according to the interactive behavior of audience objects in the live broadcast room. Because the interactive behavior of audience accounts in the live broadcast room is associated with the capture difficulty of the first virtual object, when the aiming point is located within the capture prompt identifier during capture, the success rate of capturing the first virtual object depends on the interactive behavior of audience objects; that is, this behavior directly influences the capture result, which increases the randomness and entertainment of capturing the first virtual object, enriches the gameplay of virtual tasks, and improves the experience of users participating in tasks. It also promotes interaction between audience objects and the live broadcast room and between audience objects and the anchor object, attracting more users to enter the live broadcast room to participate in the interaction, thereby greatly improving the liveness and program effect of the live broadcast room.
In some embodiments, fig. 15 is a block diagram of another virtual object interaction device according to an embodiment of the present application. Referring to fig. 15, a processing module 1402 includes:
an obtaining unit 14021, configured to obtain, for any first virtual object, a basic probability of a capturing prompt identifier of the first virtual object;
a determining unit 14022, configured to determine a value-added probability of the capturing prompt identifier based on an interaction behavior of the audience object in the live broadcasting room;
a first display unit 14023, configured to display the capture prompt identifier of the first virtual object based on the basic probability and the value-added probability.
In some embodiments, with continued reference to fig. 15, the determining unit 14022 is configured to obtain the interactive behavior of audience objects in the live broadcast room between a history time and the current time, where the history time is the time at which the value-added probability was last determined based on the interactive behavior of audience objects in the live broadcast room; and, when the interactive behavior of audience objects satisfies the first interaction condition, determine the probability corresponding to the first interaction condition as the value-added probability of the capture prompt identifier.
In some embodiments, with continued reference to fig. 15, the determining unit 14022 is configured to determine, when the number of times of interaction of the audience object reaches the target number of interactions, a probability corresponding to the target number of interactions as a value-added probability of the capturing prompt identifier;
The determining unit 14022 is further configured to determine, when the virtual resource consumed by the interactive behavior of the audience object reaches the target condition, a probability corresponding to the target condition as a value-added probability of the capturing hint identifier.
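The two branches of the determining unit described above can be sketched together. The thresholds and bonus values below are illustrative placeholders for the first interaction condition, which the text leaves unspecified; `value_added_probability` is a hypothetical function name.

```python
def value_added_probability(interaction_count, resources_consumed,
                            count_threshold=100, count_bonus=0.05,
                            resource_threshold=500, resource_bonus=0.1):
    """Value-added probability determined from audience interactions
    recorded since the last history time. One bonus is granted when
    the interaction count reaches the target number of interactions,
    and another when the virtual resources consumed reach the target
    condition; all thresholds and bonuses are assumed values."""
    bonus = 0.0
    if interaction_count >= count_threshold:
        bonus += count_bonus
    if resources_consumed >= resource_threshold:
        bonus += resource_bonus
    return bonus
```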
In some embodiments, with continued reference to fig. 15, the display module 1401 is further configured to display a prompt message in the live broadcast room, where the prompt message is used to prompt the value-added probability acquired at the current time.
In some embodiments, with continued reference to fig. 15, a display module 1401 includes:
a second display unit 14011, configured to display a task panel in the live broadcast room, where at least one virtual task is displayed;
the second display unit 14011 is further configured to display, for any virtual task, a virtual scene of the virtual task in the live broadcast room in response to a trigger operation on the virtual task;
a third display unit 14012, configured to display at least one first virtual object to be captured in the virtual scene.
In some embodiments, with continued reference to fig. 15, a virtual scene of a virtual task is generated based on a virtual reality technique;
a second display unit 14011, configured to, in response to a trigger operation on a virtual task, display a scene authorization prompt in the live broadcast room when the virtual reality device is connected, the scene authorization prompt being used to prompt the anchor object in the live broadcast room to share the virtual scene generated by the virtual reality device to the live broadcast room; and to display the virtual scene of the virtual task in the live broadcast room after the scene authorization is confirmed.
In some embodiments, with continued reference to fig. 15, the display module 1401 is further configured to disable the display of the penalty elements in the live broadcast room if the virtual task is not completed within the target duration and the interactive behavior of audience objects satisfies the second interaction condition.
In some embodiments, a second virtual object is also displayed in the live broadcast room, the second virtual object being synthesized from a target prop selected by an audience object and the first virtual object;
with continued reference to fig. 15, the apparatus further includes:
an obtaining module 1404 is configured to obtain a basic virtual reward and an additional virtual reward in a case where the second virtual object is captured, where the basic virtual reward is equal to a reward obtained by capturing the first virtual object, and the additional virtual reward is positively related to the value of the target prop.
In some embodiments, with continued reference to fig. 15, the display module 1401 is further configured to display a third virtual object in the live broadcast room when the interactive behavior of audience objects in the live broadcast room satisfies the third interaction condition, where the third virtual object is used to assist the anchor object in the live broadcast room in capturing the first virtual object; and, when the aiming point during capture of the first virtual object is located within the capture prompt identifier, display that the first virtual object is captured based on the third virtual object.
It should be noted that the interaction device for virtual objects provided in the above embodiment is illustrated, when an application program runs, only by the division of the above functional modules. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the interaction device for virtual objects provided in the above embodiment belongs to the same concept as the embodiments of the interaction method for virtual objects; its specific implementation process is detailed in the method embodiments and is not repeated here.
In the embodiment of the present application, the computer device can be configured as a terminal or a server. When the computer device is configured as a terminal, the technical solution provided by the embodiment of the present application may be implemented with the terminal as the execution body; when it is configured as a server, the solution may be implemented with the server as the execution body. The solution may also be implemented through interaction between the terminal and the server, which is not limited by the embodiment of the present application.
Fig. 16 is a block diagram of a terminal 1600 according to an embodiment of the present application. The terminal 1600 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1600 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1600 includes: a processor 1601, and a memory 1602.
Processor 1601 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 1601 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 1601 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1601 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 1602 may include one or more computer-readable storage media, which may be non-transitory. Memory 1602 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1602 is used to store at least one computer program for execution by processor 1601 to implement the method of interaction of virtual objects provided by the method embodiments of the present application.
In some embodiments, terminal 1600 may also optionally include: a peripheral interface 1603, and at least one peripheral. The processor 1601, memory 1602, and peripheral interface 1603 may be connected by bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1603 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1604, a display screen 1605, a camera assembly 1606, audio circuitry 1607, and a power supply 1608.
Peripheral interface 1603 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 1601 and the memory 1602. In some embodiments, the processor 1601, memory 1602 and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of them may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also known as electromagnetic signals; it communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. In some embodiments, the radio frequency circuit 1604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1604 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1604 may also include NFC (Near Field Communication)-related circuits, which the present application does not limit.
The display screen 1605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1605 is a touch display, the display screen 1605 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 1601 as a control signal for processing. In this case, the display screen 1605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1605, disposed on the front panel of the terminal 1600; in other embodiments, there may be at least two display screens 1605, each disposed on a different surface of the terminal 1600 or in a folded design; in still other embodiments, the display screen 1605 may be a flexible display disposed on a curved or folded surface of the terminal 1600. The display screen 1605 may even be arranged in an irregular, non-rectangular pattern, i.e., a shaped screen. The display screen 1605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1606 is used to capture images or video. In some embodiments, the camera assembly 1606 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and Virtual Reality (VR) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1607 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1601 for processing, or to the radio frequency circuit 1604 for voice communication. For stereo acquisition or noise reduction purposes, multiple microphones may be provided at different locations of the terminal 1600. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1607 may also include a headphone jack.
The power supply 1608 is used to power the various components in the terminal 1600. The power supply 1608 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1608 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, the terminal 1600 further includes one or more sensors 1609, including but not limited to: an acceleration sensor 1610, a gyroscope sensor 1611, a pressure sensor 1612, an optical sensor 1613, and a proximity sensor 1614.
The acceleration sensor 1610 may detect the magnitudes of acceleration on the three coordinate axes of a coordinate system established with the terminal 1600. For example, the acceleration sensor 1610 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1601 may control the display screen 1605 to display the user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 1610. The acceleration sensor 1610 may also be used to collect game or user motion data.
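The landscape/portrait decision described above can be sketched from the gravity components alone. The following is an illustrative approximation only, not part of the disclosed embodiments; the function name and sample readings are assumptions:

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Pick a UI orientation from the gravity components (m/s^2) reported
    on the device x and y axes by an acceleration sensor.

    When gravity lies mostly along the y axis the device is upright
    (portrait); when it lies mostly along the x axis it is on its side
    (landscape)."""
    return "portrait" if abs(gy) >= abs(gx) else "landscape"

# Device held upright: gravity is almost entirely on the y axis.
print(choose_orientation(0.5, 9.6))   # portrait
# Device rotated onto its side: gravity moves to the x axis.
print(choose_orientation(9.7, 0.3))   # landscape
```

A real implementation would low-pass filter the accelerometer signal and add a dead zone around the diagonal so the UI does not flip on small tilts.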
The gyroscope sensor 1611 may detect the body orientation and rotation angle of the terminal 1600, and may cooperate with the acceleration sensor 1610 to collect the user's 3D actions on the terminal 1600. Based on the data collected by the gyroscope sensor 1611, the processor 1601 may implement the following functions: motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1612 may be disposed on a side frame of the terminal 1600 and/or in an underlying layer of the display screen 1605. When the pressure sensor 1612 is disposed on a side frame of the terminal 1600, it may detect the user's grip signal on the terminal 1600, and the processor 1601 performs left/right-hand recognition or quick operations according to the grip signal collected by the pressure sensor 1612. When the pressure sensor 1612 is disposed in the underlying layer of the display screen 1605, the processor 1601 controls operability controls on the UI according to the user's pressure operation on the display screen 1605. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1613 is used to collect the ambient light intensity. In one embodiment, the processor 1601 may control the display brightness of the display screen 1605 based on the ambient light intensity collected by the optical sensor 1613. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the display screen 1605 is decreased. In another embodiment, the processor 1601 may also dynamically adjust the capture parameters of the camera assembly 1606 based on the ambient light intensity collected by the optical sensor 1613.
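The brightness control described in this embodiment amounts to a monotone mapping from ambient light intensity to display brightness. A minimal sketch under assumed units and thresholds (a lux reading mapped to a brightness level in [0, 1]) follows; none of the names or constants come from the disclosure:

```python
def display_brightness(lux: float,
                       min_level: float = 0.1,
                       max_level: float = 1.0,
                       full_bright_lux: float = 1000.0) -> float:
    """Map an ambient light reading (lux) to a display brightness level
    clamped to [min_level, max_level]: brighter surroundings raise the
    display brightness, dimmer surroundings lower it."""
    level = lux / full_bright_lux
    return max(min_level, min(max_level, level))

print(display_brightness(50.0))    # dim room -> floor brightness (0.1)
print(display_brightness(500.0))   # indoor light -> mid brightness (0.5)
print(display_brightness(2000.0))  # direct sunlight -> full brightness (1.0)
```

The linear ramp is only illustrative; production auto-brightness curves are typically non-linear and smoothed over time to avoid visible flicker.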
The proximity sensor 1614, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1600. The proximity sensor 1614 is used to collect the distance between the user and the front surface of the terminal 1600. In one embodiment, when the proximity sensor 1614 detects that the distance between the user and the front face of the terminal 1600 is gradually decreasing, the processor 1601 controls the display screen 1605 to switch from the screen-on state to the screen-off state; when the proximity sensor 1614 detects that the distance is gradually increasing, the processor 1601 controls the display screen 1605 to switch from the screen-off state to the screen-on state.
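The screen-state switching described above can be sketched as a small state transition on the reported distance. The separate near/far thresholds add hysteresis so the screen does not flicker around a single cutoff; both threshold values are illustrative assumptions, not values from the disclosure:

```python
def next_screen_state(distance_cm: float, current: str,
                      near_cm: float = 3.0, far_cm: float = 5.0) -> str:
    """Decide the display state ('on' or 'off') from the distance the
    proximity sensor reports between the user and the front panel."""
    if current == "on" and distance_cm <= near_cm:
        return "off"   # user approaching the front face: blank the screen
    if current == "off" and distance_cm >= far_cm:
        return "on"    # user moving away: light the screen again
    return current     # inside the hysteresis band: keep the current state

print(next_screen_state(2.0, "on"))   # off
print(next_screen_state(8.0, "off"))  # on
print(next_screen_state(4.0, "on"))   # on (within the hysteresis band)
```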
Those skilled in the art will appreciate that the structure shown in Fig. 16 is not limiting; more or fewer components than shown may be included, certain components may be combined, or a different arrangement of components may be employed.
Fig. 17 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1700 may vary considerably depending on configuration or performance, and may include one or more processors (Central Processing Units, CPU) 1701 and one or more memories 1702, where at least one computer program is stored in the memories 1702 and is loaded and executed by the processors 1701 to implement the virtual object interaction method provided by the above method embodiments. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
The embodiments of the present application also provide a computer-readable storage medium in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor of a computer device to implement the operations performed by the computer device in the virtual object interaction method of the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Embodiments of the present application also provide a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device performs the interaction method of the virtual object provided in the above-mentioned various alternative implementations.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing is merely a description of preferred embodiments of the present application and is not intended to limit the application; any modification, equivalent substitution, or improvement made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (15)

1. A method of interaction of virtual objects, the method comprising:
displaying at least one first virtual object to be captured of a virtual task in a live broadcasting room;
for any first virtual object, displaying a capturing prompt identifier of the first virtual object based on the interaction behavior of audience objects in the live broadcasting room, wherein the probability corresponding to the capturing prompt identifier is used to indicate the success rate of capturing the first virtual object when an aiming point is located within the capturing prompt identifier, and the probability corresponding to the capturing prompt identifier is positively correlated with the interaction behavior; and
in response to a capturing operation on the first virtual object, displaying a capturing result for the first virtual object based on the probability corresponding to the capturing prompt identifier.
2. The method of claim 1, wherein, for any first virtual object, displaying the capturing prompt identifier of the first virtual object based on the interaction behavior of the audience object in the live broadcasting room comprises:
For any first virtual object, acquiring the basic probability of a capturing prompt identifier of the first virtual object;
based on the interactive behavior of the audience objects in the live broadcasting room, determining the increment probability of the capturing prompt identifier;
and displaying the capturing prompt identifier of the first virtual object based on the basic probability and the increment probability.
3. The method of claim 2, wherein the determining the increment probability of the capturing prompt identifier based on the interaction behavior of the audience object in the live broadcasting room comprises:
acquiring the interaction behavior of audience objects in the live broadcasting room from a historical moment to the current moment, wherein the historical moment is the moment when the increment probability was last determined based on the interaction behavior of the audience objects in the live broadcasting room;
and under the condition that the interaction behavior of the audience object meets a first interaction condition, determining the probability corresponding to the first interaction condition as the increment probability of the capturing prompt identifier.
4. The method according to claim 3, wherein the determining, in the case where the interaction behavior of the audience object satisfies a first interaction condition, the probability corresponding to the first interaction condition as the increment probability of the capturing prompt identifier comprises:
under the condition that the number of interaction behaviors of the audience objects reaches a target interaction number, determining the probability corresponding to the target interaction number as the increment probability of the capturing prompt identifier; or
and under the condition that the virtual resources consumed by the interactive behaviors of the audience objects reach the target conditions, determining the probability corresponding to the target conditions as the increment probability of the capturing prompt identifier.
5. The method according to claim 2, wherein the method further comprises:
and displaying prompt information in the live broadcasting room, wherein the prompt information is used for prompting the increment probability acquired at the current moment.
6. The method of claim 1, wherein displaying at least one first virtual object to be captured of a virtual task in a live broadcasting room comprises:
displaying a task panel in the live broadcasting room, wherein at least one virtual task is displayed in the task panel;
for any virtual task, responding to triggering operation of the virtual task, and displaying a virtual scene of the virtual task in the live broadcasting room;
at least one first virtual object to be captured is displayed in the virtual scene.
7. The method of claim 6, wherein the virtual scene of the virtual task is generated based on a virtual reality technique;
the responding to the triggering operation of the virtual task displays the virtual scene of the virtual task in the live broadcast room, and the method comprises the following steps:
in response to a triggering operation on the virtual task, displaying a scene authorization prompt in the live broadcasting room under the condition that a virtual reality device is connected, wherein the scene authorization prompt is used to prompt a host broadcasting object in the live broadcasting room to share the virtual scene generated by the virtual reality device to the live broadcasting room;
and displaying the virtual scene of the virtual task in the live broadcasting room under the condition that scene authorization is confirmed.
8. The method of claim 6, wherein the virtual task includes a target duration, the target duration being indicative of a time limit for executing the virtual task;
the method further comprises the steps of:
and if the virtual task is not completed within the target duration, displaying punishment elements in the live broadcasting room, wherein the punishment elements are used to punish the host broadcasting object that does not complete the virtual task, and the number of punishment elements is positively related to the difficulty of the virtual task.
9. The method of claim 8, wherein the method further comprises:
and if the virtual task is not completed within the target duration and the interactive behavior of the audience object meets a second interactive condition, closing the display function of the punishment element in the live broadcasting room.
10. The method of claim 1, wherein a second virtual object is also displayed in the live broadcasting room, the second virtual object being synthesized based on the target prop selected by the audience object and the first virtual object;
the method further comprises the steps of:
in the event that the second virtual object is captured, obtaining a basic virtual reward equal to the reward obtained by capturing the first virtual object, and a bonus virtual reward positively correlated with the value of the target prop.
11. The method according to claim 1, wherein the method further comprises:
displaying a third virtual object in the live broadcasting room under the condition that the interaction behavior of the audience object in the live broadcasting room meets a third interaction condition, wherein the third virtual object is used for assisting a host broadcasting object in the live broadcasting room to capture the first virtual object;
And displaying that the first virtual object is captured based on the third virtual object under the condition that the aiming point when capturing the first virtual object is positioned in the capturing prompt identifier.
12. An interactive apparatus for virtual objects, the apparatus comprising:
the display module is used for displaying at least one first virtual object to be captured of the virtual task in the live broadcasting room;
the processing module is used for displaying capturing prompt identifiers of the first virtual objects based on interaction behaviors of audience objects in the live broadcasting room, wherein the probability corresponding to the capturing prompt identifiers is used for representing the success rate of capturing the first virtual objects when aiming points are located in the capturing prompt identifiers, and the probability corresponding to the capturing prompt identifiers is positively related to the interaction behaviors;
and the capturing module is used for responding to the capturing operation of the first virtual object and displaying a capturing result aiming at the first virtual object based on the probability corresponding to the capturing prompt identifier.
13. A computer device, comprising a processor and a memory, wherein the memory is used to store at least one computer program, and the at least one computer program is loaded and executed by the processor to perform the interaction method of virtual objects according to any one of claims 1 to 11.
14. A computer-readable storage medium, wherein the computer-readable storage medium is used to store at least one computer program, and the at least one computer program is loaded and executed by a processor to perform the interaction method of virtual objects according to any one of claims 1 to 11.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method of interaction of virtual objects according to any one of claims 1 to 11.
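The probability mechanism recited in claims 1 through 4 (a base probability combined with increment probabilities earned through audience interaction, used to decide the capture result when the aiming point is within the capturing prompt identifier) can be sketched as follows. All function names and numeric values are illustrative assumptions, not part of the claims:

```python
import random

def capture_probability(base: float, increments: list[float]) -> float:
    """Combine the base probability of the capturing prompt identifier
    with the increment probabilities earned by audience interaction,
    capped at 1.0 (cf. claims 2-3)."""
    return min(1.0, base + sum(increments))

def increment_for_interactions(count: int, target_count: int,
                               probability_for_target: float) -> float:
    """Cf. claim 4: when the number of audience interaction behaviors
    reaches the target interaction number, grant the corresponding
    increment probability; otherwise grant none."""
    return probability_for_target if count >= target_count else 0.0

def capture_result(probability: float, rng: random.Random) -> bool:
    """Cf. claim 1: with the aiming point inside the capturing prompt
    identifier, the capture succeeds with the indicated probability."""
    return rng.random() < probability

# 0.30 base probability plus a 0.15 increment earned by 120 interactions
# against a target of 100.
p = capture_probability(0.30, [increment_for_interactions(120, 100, 0.15)])
print(round(p, 2))  # 0.45
```

Per claim 1 the combined probability would then drive the displayed capture result, e.g. `capture_result(p, random.Random())` on each capturing operation.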
CN202310312175.5A 2023-03-20 2023-03-20 Virtual object interaction method and device, computer equipment and storage medium Pending CN116962835A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310312175.5A CN116962835A (en) 2023-03-20 2023-03-20 Virtual object interaction method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116962835A true CN116962835A (en) 2023-10-27

Family

ID=88453687



Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40098969
Country of ref document: HK