CN110478904B - Virtual object control method, device, equipment and storage medium in virtual environment - Google Patents

Info

Publication number
CN110478904B
Authority
CN
China
Prior art keywords
virtual
virtual object
prop
virtual environment
escape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910760234.9A
Other languages
Chinese (zh)
Other versions
CN110478904A (en)
Inventor
古星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910760234.9A
Publication of CN110478904A
Application granted
Publication of CN110478904B
Legal status: Active
Anticipated expiration

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • A63F13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/55 - Controlling game characters or game objects based on the game progress
    • A63F13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/70 - Game security or game management aspects
    • A63F13/75 - Enforcing rules, e.g. detecting foul play or generating lists of cheating players
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F13/822 - Strategy games; Role-playing games
    • A63F13/837 - Shooting of targets

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a virtual object control method in a virtual environment, executed by a terminal and comprising the following steps: displaying a virtual environment interface containing a virtual environment picture; when an escape prop is within the visual range of a first virtual object, displaying the escape prop in the virtual environment interface, the escape prop being a virtual prop refreshed in the virtual environment after a second virtual object with a target identity performs a calling operation; and when the first virtual object completes a designated operation on the escape prop, triggering the first virtual object to leave the virtual environment without being eliminated. By allowing a virtual object to leave the virtual environment without being eliminated, the scheme shortens the time required to do so and reduces the power and data traffic the terminal consumes when the virtual object escapes successfully.

Description

Virtual object control method, device, equipment and storage medium in virtual environment
Technical Field
The present disclosure relates to the field of virtual environments, and in particular, to a method, an apparatus, a device, and a storage medium for controlling a virtual object in a virtual environment.
Background
A tactical competitive game is a game that places virtual objects in a virtual environment and provides a series of escape rules; after formulating an escape strategy according to those rules, a user controls a virtual object in the game to escape.
Generally, the result of a tactical competitive game is determined by the escape result of the virtual object: when the virtual object escapes successfully, the game is won. In the related art, while controlling a virtual object to escape in a virtual environment, the user fights against virtual objects controlled by other users until all users of the other teams are eliminated; only then do the users of the surviving team escape successfully, and a settlement interface is displayed.
However, in the related art, the user must eliminate all users of the other teams before escaping successfully, so a successful escape inevitably takes a long time and consumes a large amount of the terminal's power and data traffic.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a storage medium for controlling a virtual object in a virtual environment, which can reduce the power and data traffic consumed by a terminal when the virtual object escapes successfully. The technical scheme is as follows:
in one aspect, a method for controlling a virtual object in a virtual environment is provided, where the method is performed by a terminal, and the method includes:
displaying a virtual environment interface comprising a virtual environment picture, wherein the virtual environment picture is a picture when the virtual environment is observed from the view angle of a first virtual object controlled by the terminal;
when the escape prop is within the visual range of the first virtual object, displaying the escape prop in the virtual environment interface; the escape prop is a virtual prop refreshed in the virtual environment after a second virtual object of the target identity executes a calling operation; the calling operation is an operation which can be executed by the virtual object with the target identity after the collection of the target prop is finished;
when the first virtual object finishes the designated operation of the escape prop, triggering the first virtual object to be separated from the virtual environment in the state of not being eliminated.
In another aspect, there is provided a virtual object control apparatus in a virtual environment, the apparatus being used in a terminal, the apparatus comprising:
the environment interface display module is used for displaying a virtual environment interface containing a virtual environment picture, wherein the virtual environment picture is a picture obtained when the virtual environment is observed from the visual angle of a first virtual object controlled by the terminal;
the property display module is used for displaying the escape property in the virtual environment interface when the escape property is within the visual range of the first virtual object; the escape prop is a virtual prop refreshed in the virtual environment after a second virtual object of the target identity executes a calling operation; the calling operation is an operation which can be executed by the virtual object with the target identity after the collection of the target prop is finished;
and the separation triggering module is used for triggering the first virtual object to separate from the virtual environment in the state of not being eliminated when the first virtual object completes the designated operation on the escape prop.
Optionally, the apparatus further comprises:
the first control display module is used for displaying a first trigger control on the virtual environment interface after the target prop is collected by the first virtual object when the first virtual object and the second virtual object are the same virtual object;
a calling request sending module, configured to send a calling request to a server after receiving a triggering operation on the first trigger control, where the calling request is used to indicate that the first virtual object performs the calling operation;
and the refreshing module is used for refreshing the virtual prop in the virtual environment according to the refreshing instruction sent by the server.
Optionally, the prop display module includes:
the quantity obtaining unit is used for obtaining the quantity of the target props collected by the first virtual object when the triggering operation of the first triggering control is received;
and the request sending unit is used for sending the call request to the server when the number of the target props collected by the first virtual object is not less than a number threshold.
Optionally, the request sending unit is configured to display a second trigger control on the virtual environment interface when the number of the target props collected by the first virtual object is not less than a number threshold; and when receiving the triggering operation of the second triggering control, sending the calling request to a server.
Optionally, the target prop is a virtual prop that is visible to and collectable by a virtual object having the target identity.
Optionally, the escape prop is a virtual prop usable by a virtual object having the target identity.
Optionally, the apparatus further comprises:
and the animation display module is used for displaying the escape animation, and the escape animation comprises the animation of the first virtual object which is separated from the virtual environment in the state of not being eliminated.
Optionally, the escape animation further includes an animation in which a second virtual object is separated from the virtual environment in a state of not being eliminated, where the second virtual object is a virtual object on the same team as the first virtual object.
Optionally, the apparatus further comprises:
and the settlement interface display module is used for displaying a ranking settlement interface, where the ranking settlement interface includes ranking information of the first virtual object among the virtual objects with the target identity in the virtual environment.
Optionally, the ranking settlement interface further includes ranking information of the first virtual object among all virtual objects in the virtual environment.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement a virtual object control method in a virtual environment as provided in the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the virtual object control method in a virtual environment as provided in the embodiments of the present application.
In another aspect, a computer program product is provided, which when run on a computer, causes the computer to execute the virtual object control method in a virtual environment as provided in the embodiments of the present application described above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
The escape prop is refreshed in the virtual environment after the target prop is collected and used by the second virtual object with the target identity, and when the first virtual object controlled by the terminal successfully performs the designated operation on the escape prop, the first virtual object is triggered to leave the virtual environment without being eliminated. This provides a scheme for leaving the virtual environment without being eliminated, shortens the time required to do so, and reduces the power and data traffic the terminal consumes when the virtual object escapes successfully.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of a birth point selection process provided by an exemplary embodiment of the present application;
FIG. 2 is a skill demonstration of a third viewing skill provided by an exemplary embodiment of the present application;
fig. 3 is a process diagram of a viewing perspective conversion method provided in an exemplary embodiment of the present application;
FIG. 4 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 5 is a flowchart of a method for controlling virtual objects in a virtual environment provided by an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a method for controlling virtual objects in a virtual environment provided by an exemplary embodiment of the present application;
FIG. 7 is a comparative schematic diagram of a display of a target prop according to the embodiment shown in FIG. 6;
fig. 8 to 10 are schematic views of escape screens according to the embodiment shown in fig. 6;
fig. 11 is a schematic view of an escape procedure according to the embodiment shown in fig. 6;
FIG. 12 is a block diagram illustrating an exemplary embodiment of a virtual object control apparatus in a virtual environment;
fig. 13 is a block diagram of a terminal according to another exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
Tactical competitive game: a game that places virtual characters in a virtual environment and provides a series of escape rules; after formulating an escape strategy according to those rules, a user controls a virtual character in the game to escape.
Optionally, in this embodiment of the application, the escape rule provided by the tactical competition game at least includes: birth rules, safety zone rules, additional identity rules, observation rules and escape rules.
These five rules are explained separately below:
first, birth rules
The birth rule describes how the position of a virtual character is determined after the virtual character enters the game. Optionally, the virtual environment corresponds to a map, n preset positions in the map correspond to n birth points, the user may select any one of the n birth points when the game match starts, and after the match starts the initial position of the virtual character controlled by the user is the position corresponding to the selected birth point. Optionally, the candidate birth points provided in each match may be all n birth points or some of the n birth points, for example: m birth points are determined from the n birth points as the selectable birth points of the current match, and the user selects any one of the m birth points, where 0 < m < n. Optionally, the area formed by connecting the m birth points is displayed as a bar-shaped area in the map corresponding to the virtual environment; optionally, the birth point at the starting position of the bar-shaped area is closest to a first side of the map, the birth point at the ending position of the bar-shaped area is closest to a second side of the map, and the first side and the second side are two opposite sides.
Illustratively, 31 positions in the map correspond to 31 birth points, and 8 birth points are determined in a single session, as shown in fig. 1, fig. 1 is a schematic diagram of a birth point selection process provided in an exemplary embodiment of the present application, as shown in fig. 1, a map corresponding to a virtual environment shows 31 birth points 110, and according to a current session, 8 birth points 120 in a bar-shaped area are randomly determined as birth points of virtual roles selectable by a user in the current session.
It is noted that the bar-shaped area serves to gather the virtual objects participating in the game at the beginning of the match, so that as the safety zone refreshes they move toward the safety zone along relatively consistent paths. The bar-shaped area may also be implemented as a region of another shape, which is not limited in the embodiments of the present application.
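By way of an illustrative sketch only, and not as the patent's own implementation, the birth-point selection above can be modeled roughly as follows; the Point class, the function name select_match_birth_points, and the use of a horizontal band to stand in for the bar-shaped area are all assumptions introduced here.

```python
import random
from dataclasses import dataclass

@dataclass
class Point:
    x: float  # assumed map coordinates in the range 0..map size
    y: float

def select_match_birth_points(all_points, m, band_height, map_height):
    """Pick m of the n preset birth points for one match.

    A horizontal band (standing in for the bar-shaped area) is chosen at
    random; the points inside the band are sorted from the first map side
    toward the opposite side, and the first m of them become the selectable
    birth points. A real implementation would retry or widen the band when
    fewer than m points fall inside it.
    """
    band_top = random.uniform(0, map_height - band_height)
    in_band = [p for p in all_points if band_top <= p.y <= band_top + band_height]
    in_band.sort(key=lambda p: p.x)  # start of the bar nearest the first side, end nearest the second
    return in_band[:m]

# Usage sketch: 31 preset birth points, 8 selectable per match (as in the Fig. 1 example).
preset = [Point(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(31)]
match_points = select_match_birth_points(preset, m=8, band_height=300, map_height=1000)
print(len(match_points))
```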
Second, safety zone rules
In the escape process, the virtual character is influenced by environmental factors in the virtual environment and needs to be moved to a safe area so as to prevent the environmental factors in the dangerous area from gradually reducing the life value of the virtual character until the virtual character is eliminated. Optionally, the security zone may be refreshed according to the game progress, may also be refreshed according to the use of the game props by the user, and may also be refreshed according to the use of the skill by the user, wherein the refresh process corresponds to the characteristics of the refresh rate, the refresh range, the refresh interval duration, and the like. Optionally, during the refresh process, the secure area is gradually reduced from a first area with a larger range to a second area with a smaller range, and the second area is a sub-area in the first area, that is, the refresh process is a process of gradually reducing the secure area from an edge of the first area to a second area determined inside the first area. The refreshing speed is used for indicating the time length for the first area to be reduced to the second area, and the refreshing time length can be fixed or can be correspondingly prolonged or shortened according to the use of props or skills by a user; the refreshing range is used for representing the range of the refreshed second region in the virtual environment, and the refreshing range can be preset and can also be increased or decreased according to the use of props or skills by a user; the refresh interval duration is used for representing the interval duration between two adjacent safety zone refresh events, and the refresh interval duration can be fixed duration, and can also be prolonged or shortened according to the use of props or skills by a user.
The first region may be a regular-shaped region or an irregular-shaped region; the second region may be a region of a regular shape within the first region, or may be a region of an irregular shape within the first region. Optionally, the virtual environment is divided by squares with a preset size in a map corresponding to the virtual environment, for example: each square corresponds to an area of 100 × 100 size in the virtual environment, and the refreshing of the secure area may be performed in units of square refreshing, such as: the first area occupies 6400 interconnected squares with irregular outlines, and the second area occupies 3800 interconnected squares in the 6400 squares.
Optionally, in the process of determining the safety zone, an area with a preset size and shape in the virtual environment is first determined as the first safety zone, that is, the zone obtained by the final refresh; a second safety zone surrounding the first safety zone is generated on the basis of the first safety zone, a third safety zone surrounding the second safety zone is generated on the basis of the second safety zone, and so on, with the number of nested zones determined by the number of safety zone refreshes. For example, if the safety zone needs to be refreshed 4 times, the first refresh shrinks the maximum virtual environment range to the fourth safety zone, the second refresh shrinks the fourth safety zone to the third safety zone, the third refresh shrinks the third safety zone to the second safety zone, and the fourth refresh shrinks the second safety zone to the first safety zone.
Taking the first refresh, performed within the maximum virtual environment range, as an example: the maximum virtual environment range is a square range, at least one refresh point is randomly determined on each of the four sides of the square range, and during the refresh the boundary contracts inward from the refresh points on each side until the fourth safety zone is obtained.
Optionally, the virtual character can also create a danger zone with a preset size at any position in the current safety zone through props or skills. Illustratively, after the virtual character a acquires the prop drilling rig, the drilling rig is used at a first position of a current safety zone in the virtual environment, and then a danger zone with a preset shape and a preset size is generated in the safety zone by taking the first position as an initial position.
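Purely as a sketch, and not the implementation disclosed by the patent, the nested grid-based refresh just described might be planned as follows; the axis-aligned square zones, the doubling shrink factor, and all names are assumptions made for illustration.

```python
import random

def plan_safety_zones(grid_size, refresh_count, final_side):
    """Plan nested safety zones on a square grid of map squares.

    Following the description above, the final (innermost) safety zone is
    placed first and each enclosing zone is generated around it, so the
    refresh sequence (whole map -> ... -> final zone) is simply the reverse
    of the generation order.
    """
    cx = random.randint(final_side, grid_size - final_side)
    cy = random.randint(final_side, grid_size - final_side)
    zones, side = [], final_side
    for _ in range(refresh_count):
        half = side // 2
        zones.append((max(0, cx - half), max(0, cy - half),
                      min(grid_size, cx + half), min(grid_size, cy + half)))
        side *= 2  # each enclosing zone is larger than the zone it surrounds
    zones.append((0, 0, grid_size, grid_size))  # the match starts with the whole map as the safe area
    return list(reversed(zones))  # index 0 = whole map, last index = final safety zone

# Usage: 4 refreshes shrink the whole map down to the final safety zone step by step.
for step, (x0, y0, x1, y1) in enumerate(plan_safety_zones(grid_size=80, refresh_count=4, final_side=8)):
    print(f"step {step}: squares ({x0},{y0}) .. ({x1},{y1})")
```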
Third, add the identity rule
Optionally, in this embodiment of the application, the escape rules provided in the tactical competitive game further include rules corresponding to additional identities of the virtual characters; when escaping in the virtual environment, virtual characters with different additional identities have different skills and see different visual content. Optionally, the additional identity may be randomly assigned by the server to the virtual characters in the match before the match of the tactical competitive game starts, or the user may select from the additional identities offered by the server after matching succeeds. Optionally, when the additional identity is randomly assigned by the server, the server assigns additional identities to the virtual characters in the match according to a preset ratio, for example: the preset ratio of the first additional identity, the second additional identity and the third additional identity is 7 : 2 : 1, so when 100 virtual characters are matched into one match, 70 virtual characters are assigned the first additional identity, 20 virtual characters are assigned the second additional identity, and 10 virtual characters are assigned the third additional identity. Optionally, when the additional identity is selected by the user, the server keeps the number of virtual characters with each additional identity in the match at the preset ratio, for example: with the same 7 : 2 : 1 ratio and 100 virtual characters in the match, once 70 virtual characters have selected the first additional identity, the server marks the first additional identity as no longer selectable.
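A minimal sketch of the random ratio-based assignment described above, using the 7 : 2 : 1 example; the function name and the handling of rounding remainders are assumptions introduced for illustration only.

```python
import random

def assign_additional_identities(player_ids, ratio=(7, 2, 1)):
    """Randomly assign first/second/third additional identities by a preset ratio.

    With 100 players and a 7:2:1 ratio this yields 70 / 20 / 10 assignments,
    matching the example above. Rounding remainders are given to the first identity.
    """
    total = sum(ratio)
    n = len(player_ids)
    counts = [n * r // total for r in ratio]
    counts[0] += n - sum(counts)  # absorb rounding so every player gets an identity
    pool = [identity for identity, c in enumerate(counts, start=1) for _ in range(c)]
    random.shuffle(pool)
    return dict(zip(player_ids, pool))

assignments = assign_additional_identities(list(range(100)))
print(sum(1 for v in assignments.values() if v == 1))  # -> 70
```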
Optionally, in the tactical competitive game provided in the embodiment of the present application, at least three additional identities are provided, and each additional identity corresponds to a corresponding skill set. Illustratively, a first additional identity, a second additional identity and a third additional identity are provided in the tactical competitive game, wherein the first additional identity corresponds to a first skill set, the second additional identity corresponds to a second skill set, and the third additional identity corresponds to a third skill set, wherein there may be an intersection between the first skill set, the second skill set and the third skill set, that is, there may be a target skill, belonging to at least two of the first skill set, the second skill set and the third skill set. Optionally, each skill set further includes a respective corresponding independent skill, that is, the first skill set includes a first skill, and the first skill does not belong to the second skill set nor the third skill set; the second skill set comprises a second skill, the second skill neither belonging to the first skill set nor to the third skill set; a third skill is included in the third skill set, the third skill belonging to neither the first skill set nor the second skill set.
Illustratively, the three additional identities are taken as an example for explanation, and in the tactical competitive game of the embodiment of the present application, each additional identity corresponds to at least one specific skill. Schematically, the independent skills for each additional identity are separately illustrated:
a first additional identity (destroyer additional identity) whose corresponding first skill comprises: visible to a destroyer treasure box in the virtual environment, there are provided three props available to virtual characters of only the additional identity of the destroyer, including: 1. a annunciator; 2. a seismograph; 3. a drilling rig, wherein an annunciator is used to obtain an additional equipment reward, illustratively, the annunciator user summons a higher-rated prop (e.g., helmet, armor, backpack, etc.) and/or the annunciator is used to summon a more comprehensive lethally-effective weapon; the seismograph is used for changing the refreshing progress of the safety zone, illustratively, when the virtual object is used for the seismograph between two safety zone refreshing events, the time interval between the two refreshing events is correspondingly reduced by preset time length, such as: the current safety zone starts to be refreshed to the next safety zone after 20 seconds, and when the virtual object is used for the seismograph, the current safety zone starts to be refreshed to the next safety zone after 10 seconds; the drilling rig is used to create a safety zone of a preset size. Illustratively, after the virtual character a acquires the prop drilling rig, the drilling rig is used at a first position of a current safety zone in the virtual environment, and then a danger zone with a preset shape and a preset size is generated in the safety zone by taking the first position as an initial position.
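As a hedged sketch of the seismograph's effect on the refresh countdown described above (20 seconds becoming 10 seconds in the example); the class name, method name, and default reduction value are assumptions, not taken from the patent.

```python
class SafeZoneTimer:
    """Countdown to the next safety-zone refresh.

    Using the seismograph shortens the remaining interval by a preset
    duration, mirroring the example above. Values and names are illustrative.
    """
    def __init__(self, seconds_to_next_refresh):
        self.remaining = seconds_to_next_refresh

    def use_seismograph(self, reduction_seconds=10):
        # The interval never drops below zero; the refresh then starts immediately.
        self.remaining = max(0, self.remaining - reduction_seconds)

timer = SafeZoneTimer(20)
timer.use_seismograph()
print(timer.remaining)  # -> 10
```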
A second additional identity (the hunter additional identity), whose corresponding second skill includes: marking, in the map, the positions of other virtual characters around the position of the virtual character by triggering a prop. Optionally, the map divides the virtual environment into squares of a preset size; when the virtual character triggers the prop, taking the square in which the virtual character is located as the center square, the positions of virtual characters within the 9 surrounding squares (including the center square) are marked in the map.
Schematically, fig. 2 is a schematic diagram of a skill demonstration manner of a third observation skill provided by an exemplary embodiment of the present application, and as shown in fig. 2, a map 200 of a virtual environment divides the virtual environment by squares of a preset size, a current target virtual character is located in the squares as shown in fig. 2, when the target virtual character triggers the third observation skill, the distribution of virtual characters in 9 squares (shown as 9 squares in a dashed line frame in fig. 2) on the peripheral side of the squares 210 (including the squares 210) is determined by taking the squares 210 as a center, and corresponding coordinates in the map are marked according to the location of each virtual character, as shown as marks 220 in fig. 2.
Optionally, after the hunter marks the positions of the virtual characters in the 9 surrounding squares (including the center square) in the map, when a kill event occurs between the hunter and a marked virtual character (one kills the other), the killer obtains a corresponding gain, such as an increased health-recovery speed or an increased movement speed.
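A minimal sketch of the 3 x 3 square marking shown in Fig. 2, assuming that map squares are addressed by (column, row) grid coordinates; the function and variable names are invented for illustration.

```python
def mark_nearby_characters(hunter_cell, character_cells):
    """Return map marks for every character inside the 3x3 block of squares
    centred on the hunter's square (the 9 squares described above).

    `hunter_cell` and the values of `character_cells` are (col, row) grid
    coordinates of map squares.
    """
    hx, hy = hunter_cell
    marks = {}
    for name, (cx, cy) in character_cells.items():
        if abs(cx - hx) <= 1 and abs(cy - hy) <= 1:  # the centre square or one of its 8 neighbours
            marks[name] = (cx, cy)
    return marks

others = {"A": (4, 5), "B": (9, 9), "C": (5, 6)}
print(mark_nearby_characters((5, 5), others))  # -> {'A': (4, 5), 'C': (5, 6)}
```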
A third additional identity (the seeker additional identity), whose corresponding third skill includes: when a virtual character stays alive in the virtual environment and collects the preset number of target props, the escape prop can be called, and the virtual character that obtains the escape prop, or that virtual character together with its teammates, is determined to have escaped successfully.
Fourth, observe the rule
Optionally, in the tactical competitive game according to the embodiment of the present application, at least three special observation skills are provided, and before starting a game match, the user selects any one of the at least three special observation skills as a special observation skill for the master virtual character controlled by the user to observe the virtual environment in the game match. Schematically, three observation skills are taken as examples, and the three special observation skills are respectively explained:
first observation skill (hawk overlook): the virtual environment is observed through the first prop at a view angle, namely after the first prop (such as a virtual bird) is triggered, the first prop rises to a preset height in the air of the virtual environment, and the virtual character observes the virtual environment through the first prop at the height. Referring to fig. 3, fig. 3 is a process schematic diagram of an observation perspective conversion method according to an exemplary embodiment of the present application, as shown in fig. 3, a virtual environment is observed at a first perspective of a virtual character 300, an observation screen 310 includes an object 311 and a hill 312 in the virtual environment, when a user triggers a first prop 320 through an external input device (e.g., presses an R key on a keyboard), the virtual environment is observed by the first prop 320 rising to a preset height of the virtual environment, and the observation screen 330 includes the object 311, the hill 312 and an object 313 on the other side of the hill 312.
Second observation skill (footprint tracking): after the virtual character triggers this skill, footprints left by virtual characters in the surrounding area of the virtual environment are displayed in the game interface; a footprint indicates the traveling direction of the virtual character that passed through the area within a preset time period (note that when a virtual character moves backwards through the area, the direction indicated by the footprint is opposite to that character's actual traveling direction).
Third observation skill (spar probe): weapon spars within a preset range centered on the position of the virtual character are observed by triggering a detection prop. Optionally, the weapon spar is a mark attached to a weapon (it may alternatively be implemented as a weapon texture, a weapon accessory, or the like); optionally, the weapon may be a weapon held by a virtual character, or a weapon lying on the ground of the virtual environment waiting to be picked up. Optionally, when the virtual character triggers this observation skill, other virtual articles in the virtual environment observed by the virtual character are displayed with a certain gray scale and transparency, and the weapon spars are displayed with a certain brightness through those other virtual articles (such as walls, hillsides, floors, and the like). Optionally, when the detection prop is triggered to observe weapon spars, the observation range is a spherical range with a preset radius centered on the position of the virtual character, and the weapon spars within that range are highlighted.
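A hedged sketch of the spherical range check behind the spar probe; the radius, position format, and names are assumptions for illustration, and the rendering (highlighting through walls) is not modeled.

```python
import math

def spars_in_probe_range(character_pos, spar_positions, radius):
    """Return the weapon spars inside a sphere of the given radius around the
    character, i.e. the ones the spar probe would highlight as described above.
    Positions are (x, y, z) tuples.
    """
    visible = []
    for spar_id, pos in spar_positions.items():
        if math.dist(character_pos, pos) <= radius:
            visible.append(spar_id)
    return visible

spars = {"rifle_spar": (10, 0, 3), "pistol_spar": (200, 5, 0)}
print(spars_in_probe_range((0, 0, 0), spars, radius=50))  # -> ['rifle_spar']
```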
Fifth, rule of escape
Optionally, the escape rules for different additional identities may be the same or different: all additional identities may share a unified set of escape rules, while a specific additional identity may additionally have its own escape rules. Schematically: 1. for all virtual characters, when a virtual character stays alive until the final safety zone and obtains the escape prop, the virtual character that obtains the escape prop, or that virtual character together with its teammates, is determined to have escaped successfully; 2. for virtual characters with the target additional identity (the seeker additional identity in the additional identity rules), when a virtual character stays alive in the virtual environment and collects the preset number of target props, the escape prop is called, and the virtual character that obtains the escape prop, or that virtual character together with its teammates, is determined to have escaped successfully. It should be noted that, when escaping in the above mode 2, the total number of target props in the virtual environment is a preset number, which is used to control how many virtual characters with the target additional identity can escape through mode 2. Optionally, during the tactical competitive game, a virtual character with the target additional identity can be prompted in real time with the remaining number and/or positions of the target props not yet acquired by virtual characters with the target additional identity, and when the sum of the number of target props held by the virtual character and the remaining number does not reach the number required to call the escape prop, the virtual character switches to a fighting strategy and escapes through mode 1.
The above mode 1 and mode 2 are two parallel schemes: a virtual character with the target additional identity can escape successfully through either mode 1 or mode 2, and after escaping successfully through one mode it does not need to continue to meet the escape requirements of the other mode. Optionally, the target prop in mode 2 is a prop that is visible in the virtual environment to virtual characters with the target additional identity, that is, the target prop is invisible in the virtual environment to virtual characters without the target additional identity.
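A minimal sketch of the decision described above for a character with the target additional identity: summon the escape prop when enough target props are held, keep collecting while enough remain, otherwise fall back to mode 1. The function name, return values, and thresholds are assumptions made for illustration.

```python
def escape_plan_for_seeker(held_props, remaining_props, required_props):
    """Decide how a character with the target additional identity should try to escape.

    Mirrors the two parallel modes described above: mode 2 when enough target
    props are held (or still reachable), otherwise mode 1.
    """
    if held_props >= required_props:
        return "summon_escape_prop"       # mode 2: call the escape prop now
    if held_props + remaining_props >= required_props:
        return "keep_collecting"          # mode 2 is still reachable
    return "fight_to_final_zone"          # mode 1: escape via the final safety zone

print(escape_plan_for_seeker(held_props=3, remaining_props=0, required_props=3))  # -> summon_escape_prop
print(escape_plan_for_seeker(held_props=1, remaining_props=1, required_props=3))  # -> fight_to_final_zone
```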
Optionally, different game results are displayed for virtual characters that escape in different ways; optionally, when a virtual character with the target additional identity has its own additional escape rule and escapes successfully according to that rule, an additional result is displayed for it according to the additional escape rule. Illustratively, after a virtual character escapes through the first escape rule, the result interface of the match shows "Congratulations, you escaped successfully and took first place", and for virtual characters still alive in the virtual environment the result interface shows "Congratulations, you took second place"; when a virtual character with the target additional identity escapes through the second escape rule and is the first virtual character in the virtual environment to escape through the second escape rule, the result interface of the match shows "Congratulations, you are the first [target additional identity] to escape successfully"; for the second virtual character in the virtual environment to escape through the second escape rule, the result interface shows "Congratulations, you are the second [target additional identity] to escape successfully", and so on.
The schemes shown in the following embodiments of the present application mainly introduce the technical implementation of successful escape of the virtual object with the target identity (e.g., the seeker identity in the additional identity rules).
The terminal in the present application may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3(Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3) player, an MP4(Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4) player, and so on. The terminal is installed and operated with an application program supporting a virtual environment, such as an application program supporting a three-dimensional virtual environment. The application program may be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, a TPS game, an FPS game, and an MOBA game. Alternatively, the application may be a stand-alone application, such as a stand-alone 3D game program, or may be a network online application.
The terminal in the present application may include: an operating system and an application program.
The operating system is the base software that provides applications with secure access to the computer hardware.
An application is an application that supports a virtual environment. Optionally, the application is an application that supports a three-dimensional virtual environment. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, a Third-person Shooting Game (TPS), a First-person Shooting Game (FPS), an MOBA Game and a multi-player gunfight survival Game. The application may be a stand-alone application, such as a stand-alone 3D game program.
Fig. 4 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 400 includes: a first device 420, a server 440, and a second device 460. The first device 420 and the second device 460 may be implemented as terminals in the present application.
The first device 420 is installed and operated with an application program supporting a virtual environment. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, a TPS game, an FPS game, an MOBA game and a multi-player gunfight living game. The first device 420 is a device used by a first user who uses the first device 420 to control a first virtual object located in a virtual environment for activities including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated persona or an animated persona.
The first device 420 is connected to the server 440 through a wireless network or a wired network.
The server 440 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 440 is used to provide background services for applications that support a three-dimensional virtual environment. Optionally, server 440 undertakes primary computing work and the first device 420 and second device 460 undertake secondary computing work; alternatively, server 440 undertakes secondary computing work and the first device 420 and second device 460 undertake primary computing work; alternatively, the server 440, the first device 420, and the second device 460 perform cooperative computing by using a distributed computing architecture.
The second device 460 is installed and operated with an application program supporting a virtual environment. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, an FPS game, an MOBA game and a multi-player gun battle type survival game. The second device 460 is a device used by a second user who uses the second device 460 to control a second virtual object located in the virtual environment to perform activities including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing. Illustratively, the second virtual object is a second virtual character, such as a simulated persona or an animated persona.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Optionally, the first virtual character and the second virtual character may belong to the same team or the same organization, have a friend relationship, or have temporary communication rights. Alternatively, the first virtual character and the second virtual character may belong to different teams, different organizations, or two hostile groups.
Alternatively, the applications installed on the first device 420 and the second device 460 are the same, or the applications installed on the two devices are the same type of application for different control system platforms. The first device 420 may generally refer to one of a plurality of devices, and the second device 460 may generally refer to one of a plurality of devices, and this embodiment is illustrated by the first device 420 and the second device 460. The device types of the first device 420 and the second device 460 are the same or different, and include: at least one of a game console, a desktop computer, a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The following embodiments are illustrated where the device is a desktop computer.
Those skilled in the art will appreciate that the number of devices described above may be greater or fewer. For example, the number of the devices may be only one, or several tens or hundreds, or more. The number and the type of the devices are not limited in the embodiments of the present application.
With reference to the above terminology and implementation environment, refer to fig. 5, which shows a flowchart of a method for controlling a virtual object in a virtual environment according to an exemplary embodiment of the present application. The method is described by taking its application to a terminal as an example. As shown in fig. 5, the method includes:
step 501, displaying a virtual environment interface including a virtual environment picture, where the virtual environment picture is a picture when the virtual environment is observed from the view angle of the first virtual object controlled by the terminal.
Step 502, when the escape prop is in the visual field range of the first virtual object, displaying the escape prop in the virtual environment interface; the escape prop is a virtual prop refreshed in the virtual environment after a second virtual object of the target identity executes a calling operation; the calling operation is an operation which can be executed by the virtual object with the target identity after the collection of the target prop is completed.
The target identity may be, for example, the seeker identity in the additional identity rules.
In the embodiment of the application, in the process that a user controls a first virtual object to move in a virtual environment through a terminal, after a second virtual object with a target identity collects a target prop, the second virtual object can execute a calling operation, at this time, the escape prop in the virtual environment is refreshed, and when the escape prop is in the visual field range of the first virtual object, the escape prop is displayed in a virtual environment interface displayed by the terminal corresponding to the first virtual object.
Step 503, when the first virtual object completes the designated operation on the escape prop, triggering the first virtual object to leave the virtual environment in the state of not being eliminated.
In the embodiment of the application, a user can control the first virtual object to execute the designated operation on the escape prop through the terminal, and after the designated operation is executed, the terminal triggers the first virtual object to be separated from the virtual environment in a state that the first virtual object is not eliminated.
For example, in an escape game environment, after the user himself or another user calls the escape prop (such as a box prop), the user controls the virtual object to open the box prop, and after the virtual object successfully opens the box prop, the terminal triggers the user-controlled virtual object to escape successfully (i.e. triggers the first virtual object to leave the virtual environment in a state of not being eliminated).
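A hedged terminal-side sketch of steps 501 to 503 above; all class and method names are assumptions made for illustration, and positions are one-dimensional purely for brevity.

```python
class EscapeController:
    """Terminal-side sketch of steps 501-503: show the escape prop when it
    enters the first virtual object's field of view, and detach the object
    from the virtual environment (without it being eliminated) once the
    designated operation on the prop completes.
    """
    def __init__(self, view_range):
        self.view_range = view_range
        self.detached = False

    def escape_prop_visible(self, object_pos, prop_pos):
        # Step 502: the prop is drawn only while it is within the visual range.
        return abs(object_pos - prop_pos) <= self.view_range

    def on_designated_operation_done(self):
        # Step 503: leave the environment in a not-eliminated state.
        self.detached = True
        return "play_escape_animation"

ctrl = EscapeController(view_range=100)
if ctrl.escape_prop_visible(object_pos=40, prop_pos=90):
    print(ctrl.on_designated_operation_done())  # -> play_escape_animation
```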
In summary, according to the method for triggering a virtual object to leave a virtual environment provided by this embodiment, after the target prop is collected and used by the second virtual object with the target identity, the escape prop is refreshed in the virtual environment, and when the first virtual object controlled by the terminal successfully performs the designated operation on the escape prop, the virtual object is triggered to leave the virtual environment without being eliminated. This provides a scheme in which a virtual object can leave the virtual environment without being eliminated even when the users of all other teams have not been eliminated, shortens the time required to leave the virtual environment without being eliminated, and reduces the power and data traffic consumed by the terminal when the virtual object escapes successfully.
In the embodiment shown in fig. 5, the first virtual object and the second virtual object may be the same virtual object or different virtual objects.
Fig. 6 is a flowchart of a method for controlling a virtual object in a virtual environment according to an exemplary embodiment of the present application, which is described by taking as an example that the method is applied to a terminal, and a first virtual object and a second virtual object are the same virtual object, as shown in fig. 6, the method includes:
step 601, displaying a virtual environment interface including a virtual environment picture, where the virtual environment picture is a picture when the virtual environment is observed from a view angle of the first virtual object controlled by the terminal.
In one possible implementation, the picture obtained when the virtual environment is observed from the perspective of the first virtual object controlled by the terminal may be a picture obtained when the virtual environment is observed by rotating at a first rotation axis distance.
Optionally, the first rotation axis distance is a distance between a viewpoint of the virtual object and the rotation center. Optionally, the viewpoint is used for observing the virtual environment by acquiring a picture of the virtual environment through the camera model, that is, the first rotation axis distance is a distance between the camera model and the rotation center.
Optionally, in the process of observing the virtual environment by rotating the camera model, the virtual environment may be observed in a manner that the camera model and the virtual object rotate synchronously, that is, the camera model is bound to the virtual object, and the virtual object rotates synchronously with the rotation of the camera model while the camera model rotates; the camera model can also observe the virtual environment in a mode of independent rotation, namely the virtual object keeps the position and the facing direction unchanged, and the camera model observes the virtual environment through rotation.
Optionally, the first rotation axis distance is a default axis distance used when observing the virtual environment.
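As a rough geometric sketch only, a camera model placed at the first rotation axis distance from the rotation center could be positioned as follows; the simple yaw-only geometry, the fixed height, and all names are assumptions, not the patent's camera implementation.

```python
import math

def camera_position(rotation_center, axis_distance, yaw_degrees, height):
    """Place the camera model on a circle of radius `axis_distance` around the
    rotation center (the first rotation axis distance described above).
    Only horizontal rotation (yaw) is modelled.
    """
    cx, cy, cz = rotation_center
    yaw = math.radians(yaw_degrees)
    return (cx + axis_distance * math.cos(yaw),
            cy + height,
            cz + axis_distance * math.sin(yaw))

# Rotating the camera while the virtual object keeps its position and facing (independent rotation):
for yaw in (0, 90, 180):
    print(camera_position((0.0, 0.0, 0.0), axis_distance=3.0, yaw_degrees=yaw, height=1.6))
```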
Step 602, after the target prop is collected by the first virtual object, a first trigger control is displayed on the virtual environment interface.
The target prop is a virtual prop that is visible to and collectable by a virtual object having the target identity.
In one possible implementation, each virtual object in the virtual environment has one or more of at least two identities, wherein the virtual object with the target identity may find and collect the target prop in the virtual environment.
For example, a plurality of target props may be distributed in the virtual environment. For a first virtual object with the target identity, when a target prop is within the visual field of the first virtual object, the target prop is displayed in a collectable state in the virtual environment interface displayed by the terminal corresponding to the first virtual object, for example, emitting light of a special color or a particle effect; and/or, when the first virtual object comes within a preset range of the target prop, prompt information for collecting the target prop appears. When the user performs a collection operation, for example, the target prop is collected automatically once the first virtual object comes close to it, or the user performs a click operation on the target prop, the terminal plays an animation of the first virtual object performing the collection action.
For a third virtual object without the target identity, when the target prop is within the visual field of the third virtual object, the target prop is displayed in a non-collectable state, for example as a plain texture of the target prop, in the virtual environment interface displayed by the terminal corresponding to the third virtual object, and no prompt information for collecting the target prop appears when the third virtual object comes within the preset range of the target prop. Alternatively, in another exemplary scheme, the target prop is invisible to the third virtual object, that is, the target prop is not displayed in the virtual environment interface displayed by the terminal corresponding to the third virtual object.
For example, please refer to fig. 7, which shows a schematic comparison diagram of a display of a target prop according to an embodiment of the present application. As shown in part (a) of fig. 7, in the virtual environment interface 710 displayed at the terminal corresponding to the virtual object 711, a target item 712 exists within the visual field of the virtual object 711 having a target identity, the target item 712 is displayed in the virtual environment interface 710 with a luminous special effect, and when the virtual object 711 is close to the target item 712, the virtual object 711 automatically picks up the target item 712.
As shown in part (b) of fig. 7, in the virtual environment interface 720 displayed at the terminal corresponding to the virtual object 721, the target item 712 also exists in the visual field of the virtual object 721 without the target identity, the target item 712 is displayed in the virtual environment interface 720 in the form of a common map (without lighting effect), and when the virtual object 721 is close to the target item 712, the target item 712 is not picked up by the virtual object 721.
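As an illustration only (not part of the patent; the class, function and state names below are invented), the behaviour described for fig. 7 can be summarised in a short Python sketch that decides how a target prop is displayed depending on whether the viewer has the target identity:

from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    has_target_identity: bool

def prop_display_state(viewer: VirtualObject, within_view: bool) -> str:
    """Decide how a target prop is shown to a given viewer."""
    if not within_view:
        return "hidden"                      # outside the field of view
    if viewer.has_target_identity:
        return "collectable_with_glow"       # glowing / particle effect, can be picked up
    return "plain_map_only"                  # ordinary texture, cannot be picked up

print(prop_display_state(VirtualObject("object_711", True), within_view=True))
print(prop_display_state(VirtualObject("object_721", False), within_view=True))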
When the first virtual object has a target identity and the target prop is collected, the terminal may display a first trigger control in the virtual environment interface, where the first trigger control may be displayed in various possible forms.
For example, in an exemplary scheme, the terminal may directly display a first trigger control in the form of a button, such as a button displayed with a target prop icon, in a superimposed manner at a designated position in the virtual environment interface.
Or, in another exemplary scheme, the terminal may show a backpack interface in the virtual environment interface, where the backpack interface includes the virtual props owned by the first virtual object, and the terminal may show a first trigger control in the form of a usable virtual item in the backpack interface. That is, the target prop displayed in the backpack interface may be regarded as the first trigger control.
Step 603, after receiving the trigger operation on the first trigger control, sending a call request to a server, where the call request is used to indicate that the first virtual object executes the call operation.
In a possible implementation manner, after the first trigger control is triggered, the terminal may send a call request to the server.
Or, in other possible implementations, after the first trigger control is triggered, the terminal may further detect other conditions, and send the call request to the server only when the other conditions are also met.
For example, in an exemplary scheme, when receiving a trigger operation on the first trigger control, the terminal may obtain the number of the target items collected by the first virtual object; and when the number of the target props collected by the first virtual object is not less than a number threshold value, sending the call request to the server.
In the embodiment of the application, the virtual object with the target identity can collect a plurality of target props, and has the capability of calling the escape prop when the target props collected by the virtual object with the target identity reach a preset number threshold; correspondingly, after the user executes the triggering operation on the first triggering control, the terminal firstly detects whether the first virtual object collects enough target props, if so, the terminal sends the calling request to the server, and otherwise, the terminal does not respond to the triggering operation.
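To make the above check concrete (this sketch is not part of the patent; the function name, message format and threshold value are illustrative assumptions), a terminal might gate the call request on the number of collected target props roughly as follows:

def on_first_trigger_control(collected_props: int, threshold: int, send_request) -> bool:
    """Forward the call request to the server only when the first virtual
    object has collected at least `threshold` target props; otherwise the
    trigger operation is not responded to."""
    if collected_props < threshold:
        return False            # not enough target props, do not respond
    send_request({"type": "call_escape_prop"})
    return True

# Example: three props collected, threshold of five -> request is not sent.
print(on_first_trigger_control(3, 5, send_request=lambda msg: print("sent", msg)))
print(on_first_trigger_control(5, 5, send_request=lambda msg: print("sent", msg)))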
Optionally, in this embodiment of the application, the terminal may show the information about the number of the collected target items of the first virtual object based on the first trigger control, for example, when the first trigger control is a button shown by a target item icon, the terminal may display the number of the collected target items near the button shown by the target item icon. Optionally, when the number of the collected target props reaches the number threshold, the first trigger control may be displayed according to a preset display effect, for example, the first trigger control is displayed with a light-emitting effect, a blinking effect, a special color effect, or a special particle effect.
Optionally, when the triggering operation on the first triggering control is received, if the number of the target props collected by the first virtual object is less than the number threshold, the terminal may further display a prompt message in the virtual environment interface to prompt that the number of the target props is insufficient.
Optionally, under the condition that the number of the target props collected by the first virtual object is not less than the number threshold, if the triggering operation on the first triggering control is received, the terminal may further display a second triggering control on the virtual environment interface; and when the triggering operation of the second triggering control is received, sending the calling request to a server.
Optionally, when the number of the target props collected by the first virtual object is not less than the number threshold, if the trigger operation on the first trigger control is received, the terminal may further remove the threshold number of target props from the props owned by the first virtual object.
In another exemplary scheme, in order to avoid mistaken calling under the condition that the user does not want to call the escape prop, after the user corresponding to the first virtual object triggers the first trigger control, the terminal may not directly send a calling request to the server, but show another trigger control on the virtual environment interface, and when the other trigger control is triggered, the terminal sends the calling request to the server.
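Combining the two-step confirmation with the deduction of props, a possible terminal-side flow (for illustration only; the class name, return strings and threshold are assumptions, not the patent's API) could be sketched as:

class CallFlow:
    """Two-step confirmation sketch: the first control only reveals a
    confirm control; the call request is sent (and the props consumed)
    when the confirm control is triggered."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.confirm_visible = False

    def on_first_control(self, collected: int) -> str:
        if collected < self.threshold:
            return "show_insufficient_props_prompt"
        self.confirm_visible = True
        return "show_second_trigger_control"

    def on_second_control(self, inventory: list) -> str:
        if not self.confirm_visible:
            return "ignored"
        # Consume the threshold number of target props, then call the server.
        del inventory[:self.threshold]
        return "send_call_request"

flow = CallFlow(threshold=5)
bag = ["target_prop"] * 6
print(flow.on_first_control(collected=len(bag)))
print(flow.on_second_control(bag), "remaining:", len(bag))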
Optionally, in this embodiment of the application, when the terminal sends the call request to the server, a summoning animation may be played, for example, an animation in which the first virtual object emits a beam of light into the sky through a certain prop; further, the terminal may trigger the refreshing and display of the beam of light in the virtual environment, and the beam of light may be visible to all virtual objects in the virtual environment, or only to virtual objects with the target identity in the virtual environment.
Step 604, refreshing the escape prop in the virtual environment according to the refreshing instruction sent by the server.
In an exemplary scheme, the refreshing instruction is sent by the server and carries relevant data of the escape prop, such as the position of the escape prop in the virtual environment and its movement information. After receiving the refreshing instruction, the terminal refreshes the escape prop in the virtual environment according to the position and the movement information of the escape prop.
For example, the terminal refreshes a transport vehicle in the virtual environment according to the refreshing instruction, and when the transport vehicle moves to the refreshing place of the escape prop, the escape prop is airdropped at the refreshing place.
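A terminal-side handler for such a refreshing instruction might look like the following sketch (not part of the patent; the field names, callbacks and stub vehicle are invented for illustration), which spawns the transport vehicle on the path carried by the instruction and airdrops the escape prop at the refresh place:

class _StubVehicle:
    def move_to(self, waypoint):
        print("transport vehicle at", waypoint)

def on_refresh_instruction(instruction, spawn, airdrop):
    """Refresh the transport vehicle along the given path and airdrop the
    escape prop once the vehicle reaches the refresh place."""
    vehicle = spawn("transport_vehicle", instruction["vehicle_path"][0])
    for waypoint in instruction["vehicle_path"]:
        vehicle.move_to(waypoint)
        if waypoint == instruction["refresh_place"]:
            airdrop("escape_prop", waypoint)
            break

on_refresh_instruction(
    {"vehicle_path": [(0, 0), (5, 5), (9, 9)], "refresh_place": (5, 5)},
    spawn=lambda kind, pos: _StubVehicle(),
    airdrop=lambda kind, pos: print("airdropped", kind, "at", pos),
)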
The refresh point may be indicated by a call request sent from the terminal to the server.
In a possible implementation manner, the call request includes position information of the first virtual object in the virtual environment when the terminal sends the call request; and the server determines the refreshing position information of the escape prop in the virtual environment according to the position information of the first virtual object in the virtual environment. For example, taking an example of refreshing a transport vehicle in a virtual environment and airdropping an escape prop at a refreshing location, the refreshing location information may include a moving path of the transport vehicle and location information of the refreshing location in the virtual environment.
In this embodiment, the server may use the position information of the first virtual object in the virtual environment at the time the terminal sends the call request as the position information of the refresh location of the escape prop in the virtual environment, and randomly generate, for the transport vehicle, a moving path passing through the refresh location.
Or, the server may also determine a random location within a preset range around the first virtual object as a refresh location according to the location information of the first virtual object in the virtual environment when the terminal sends the call request, acquire the location information of the refresh location, and randomly generate a moving path for the transport vehicle to pass through the refresh location.
In another possible implementation manner, the call request may also include area information of a refresh area designated by the user and corresponding to the first virtual object, and the server determines, according to the area information of the refresh area, refresh position information of the escape prop in the virtual environment.
When the terminal detects the triggering operation on the first trigger control or the second trigger control, a refresh area selection interface may be displayed, where the refresh area selection interface includes a thumbnail map of the virtual environment; for example, the refresh area selection interface may be a map display interface of the virtual environment, and the thumbnail map in the refresh area selection interface is divided into a plurality of selectable map areas. When the terminal detects a selection operation of the user on a target map area in the refresh area selection interface, the terminal takes the area information of the selected map area as the area information of the refresh area specified by the user, and sends the call request according to the area information of the refresh area specified by the user.
When the server determines the refreshing position information of the escape prop in the virtual environment according to the area information of the refreshing area, the server can acquire the position information of the central point of the refreshing area as the position information of the refreshing place, and simultaneously randomly generate a moving path passing through the refreshing place for the transport carrier.
Or, when the server determines the refresh position information of the escape prop in the virtual environment according to the area information of the refresh area, the server may also randomly determine a place in the refresh area as a refresh place, acquire the position information of the refresh place, and randomly generate a moving path passing through the refresh place for the transport vehicle.
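Both variants of the server-side decision (caller position or user-designated refresh area, plus a random path for the transport vehicle) can be summarised in the following sketch; it is illustrative only, the field names, map size and straight-line path are assumptions rather than the patent's data format:

import random

def plan_escape_prop_refresh(call_request: dict) -> dict:
    """Pick the refresh place from the caller's position or from a
    user-designated refresh area, then generate a random path for the
    transport vehicle that passes through the refresh place."""
    if "refresh_area" in call_request:
        # Either the centre of the area or a random point inside it could be used.
        (x0, y0), (x1, y1) = call_request["refresh_area"]
        place = (random.uniform(x0, x1), random.uniform(y0, y1))
    else:
        place = call_request["caller_position"]

    # Random entry point on the map edge; the path passes through the refresh place.
    entry = (random.uniform(0, 1000), 0.0)
    exit_ = (2 * place[0] - entry[0], 2 * place[1] - entry[1])
    return {"refresh_place": place, "vehicle_path": [entry, place, exit_]}

print(plan_escape_prop_refresh({"caller_position": (420.0, 310.0)}))
print(plan_escape_prop_refresh({"refresh_area": ((100, 100), (200, 200))}))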
In an exemplary scenario, the escape prop may be visible to all virtual objects in the virtual environment, that is, the escape prop may be observed in the virtual environment regardless of the identity of the virtual objects in the virtual environment.
In another exemplary scheme, the escape prop can be an escape prop that can be used by all virtual objects in the virtual environment, that is, the escape prop can be used in the virtual environment regardless of the identities of the virtual objects in the virtual environment.
In another exemplary scenario, the escape prop is a virtual prop that can be used by a virtual object having the target identity. For example, in the virtual environment, a virtual object with the target identity may observe and use the escape prop, while a virtual object without the target identity may not observe the escape prop in the virtual environment, or may observe the escape prop but may not use it.
Step 605, when the escape prop is within the visual field of the first virtual object, the escape prop is displayed in the virtual environment interface.
In this embodiment of the application, the user controls the first virtual object to approach the escape item through the terminal, and when the escape item is within the visual range of the first virtual object, the escape item is displayed in the virtual environment interface displayed by the terminal, and the user can further control the first virtual object to perform a specified operation on the escape item, for example, perform an operation of opening the escape item.
Optionally, when the escape prop is out of the visual range of the first virtual object, prop indication information is displayed in the virtual environment interface; the item indication information is used for indicating at least one of the direction of the escape item relative to the first virtual object and the distance between the escape item and the first virtual object.
Optionally, after the escape prop is refreshed in the virtual environment, when an operation of displaying a map is received, the terminal may display a map interface, where the map interface includes an abbreviated map of the virtual environment; and displaying an abbreviated icon of the escape prop on an abbreviated map of the virtual environment corresponding to the position of the escape prop.
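The prop indication information (distance and direction relative to the first virtual object) can be computed as in the following sketch; it is not part of the patent, and the function name, 2D coordinate convention and example values are assumptions:

import math

def prop_indication(player_pos, player_facing_deg, prop_pos):
    """Compute the distance to the escape prop and its bearing relative
    to the direction the first virtual object is facing, for display as
    prop indication information when the prop is out of view."""
    dx, dz = prop_pos[0] - player_pos[0], prop_pos[1] - player_pos[1]
    distance = math.hypot(dx, dz)
    absolute_bearing = math.degrees(math.atan2(dx, dz)) % 360
    relative_bearing = (absolute_bearing - player_facing_deg) % 360
    return {"distance_m": round(distance, 1), "bearing_deg": round(relative_bearing, 1)}

# Example: prop roughly 100 m to the right of the facing direction.
print(prop_indication(player_pos=(0, 0), player_facing_deg=0, prop_pos=(100, 5)))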
Step 606, when the first virtual object completes the designated operation of the escape prop, triggering the first virtual object to be separated from the virtual environment in the state of not being eliminated.
According to the scheme provided by the embodiment of the application, after the first virtual object controlled by a user with the designated identity collects the target props, the escape prop can be summoned; when the designated operation is executed on the escape prop, the first virtual object executing the designated operation is controlled to leave the virtual environment in the state of not being eliminated. The first virtual object is not required to eliminate all users of other teams, so that the time required for the first virtual object to leave the virtual environment in the state of not being eliminated is greatly shortened, and the electric quantity and the data traffic consumed by the terminal are reduced.
Optionally, when the first virtual object is triggered to leave the virtual environment in the state of not being eliminated, the terminal further displays an escape animation, where the escape animation includes an animation that the first virtual object leaves the virtual environment in the state of not being eliminated.
In a possible implementation manner, when the first virtual object completes the designated operation on the escape prop, the terminal triggers the first virtual object to leave the virtual environment in a state of not being eliminated (i.e., the escape is successful), and the terminal also triggers playback of an animation in which the first virtual object escapes successfully.
For example, please refer to fig. 8 to 10, which illustrate schematic views of an escape screen according to an embodiment of the present application.
As shown in fig. 8, after the first virtual object collects enough target props and calls the escape props, one escape prop (box prop) is refreshed in the virtual environment in an airdrop manner, that is, a virtual object in the form of a flying bird is refreshed in the virtual environment, and when the virtual object in the form of the flying bird flies to a place where the user calls the escape prop, one box prop 81 is airdropped.
As shown in fig. 9, the user controls the first virtual object to approach the escape prop and starts timing after clicking. For example, in fig. 9, after the user controls the first virtual object to approach the escape item and clicks or presses a certain key (e.g., a T key on a keyboard), or clicks or presses the escape item displayed on the screen, the terminal starts timing, and displays timing information 91 in the virtual environment interface.
As shown in fig. 10, if the timing reaches a predetermined duration without being interrupted in the process, an escape animation is triggered; for example, in fig. 10, a flying bird 1001 appears from the box-type prop, and the virtual object controlled by the user automatically grabs the leg of the flying bird and is dragged away from the virtual environment.
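The timed, interruptible "hold to open" behaviour described above can be sketched as follows (illustrative only; the function name, polling interval and the 0.2-second demo value are assumptions, with 8 seconds being the example duration mentioned in the text):

import time

def open_escape_box(hold_seconds: float, interrupted) -> bool:
    """The opening succeeds only if the player keeps holding for
    `hold_seconds` without the `interrupted()` predicate firing."""
    start = time.monotonic()
    while time.monotonic() - start < hold_seconds:
        if interrupted():
            return False        # hit, cancelled, or key released -> timing aborted
        time.sleep(0.05)        # poll at ~20 Hz
    return True                 # escape animation can be triggered

# Example: nothing interrupts the hold, so the escape succeeds.
print(open_escape_box(hold_seconds=0.2, interrupted=lambda: False))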
Optionally, when the first virtual object completes the designated operation on the escape prop, the terminal may further trigger a third virtual object to leave the virtual environment in a state that the third virtual object is not eliminated; the third virtual object is another virtual object that is on the same team as the first virtual object in the virtual environment.
In an exemplary embodiment, for the virtual objects in the same team, when the first virtual object successfully leaves the virtual environment through the escape prop, the other virtual objects in the same team may also leave the virtual environment in the state of not being eliminated, and the terminals of the other virtual objects in the same team trigger and display an animation of the corresponding virtual object leaving the virtual environment in the state of not being eliminated.
For example, when the first virtual object successfully executes the designated operation on the escape prop, the terminal of each other virtual object on the same team triggers and displays the following animation: a flying bird appears near that virtual object, and the virtual object automatically grabs the bird's leg and is dragged away from the virtual environment.
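For illustration only (not part of the patent; the function and team identifiers are invented), the propagation of a successful escape to the whole team can be expressed as:

def propagate_escape(team: dict, escaper: str) -> list:
    """When one member completes the designated operation, every member of
    the same team leaves the virtual environment without being eliminated.
    `team` maps object name -> team id; returns the objects to trigger."""
    team_id = team[escaper]
    return [name for name, t in team.items() if t == team_id]

teams = {"first_obj": "A", "mate_1": "A", "enemy": "B"}
print(propagate_escape(teams, "first_obj"))   # -> ['first_obj', 'mate_1']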
Step 607, displaying a ranking settlement interface, where the ranking settlement interface includes ranking information of the first virtual object in the virtual object having the target identity in the virtual environment.
Optionally, the ranking settlement interface further includes ranking information of the first virtual object in each virtual object in the virtual environment.
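As a purely illustrative sketch of the ranking settlement (the patent does not fix the ranking rule; escape-time ordering, the record format and all names below are assumptions), the two rankings shown in the settlement interface could be computed like this:

def settlement_rankings(records: list, player: str) -> dict:
    """Each record: (name, has_target_identity, escape_time or None for
    objects that did not escape). Ranks are assigned by escape order."""
    def rank(pool):
        escaped = sorted((r for r in pool if r[2] is not None), key=lambda r: r[2])
        names = [r[0] for r in escaped]
        return names.index(player) + 1 if player in names else None

    return {
        "rank_among_target_identity": rank([r for r in records if r[1]]),
        "rank_among_all": rank(records),
    }

records = [("first_obj", True, 95.0), ("mate", True, 120.0), ("other", False, None)]
print(settlement_rankings(records, "first_obj"))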
Please refer to fig. 11, which shows a schematic diagram of an escape process according to an embodiment of the present application. As shown in fig. 11, the escape process can be as follows:
Step 1, judging whether the user collects enough props after entering the battlefield, and if so, entering step 2;
in an exemplary scheme, a battle in the virtual environment starts (S1101), the user controls the first virtual object to collect target props in the virtual environment, and the terminal determines whether the first virtual object has collected a sufficient number of props (S1102); after the first virtual object has collected a sufficient number of target props, the terminal prompts the user that the escape prop can be summoned by using the target props.
Step 2, whether the user successfully uses the prop or not is judged, and if yes, the step 3 is carried out;
when the user needs to call the escape prop, the target prop can be triggered and used through a control in a backpack bar or a virtual environment interface, the terminal can detect whether the user successfully uses the prop, if yes, a call request is sent to the server, and the next step is carried out.
Step 3, it is judged whether the flying bird for dropping the escape box takes off on time and correctly passes over the position where the user used the prop, and drops the escape box; if yes, entering step 4;
in an exemplary scheme, the server refreshes a transport vehicle in the form of a flying bird in the virtual environment, and specifies for it a flight path and a position at which to airdrop the escape box (i.e., the position where the virtual object controlled by the user is located when the user uses the target prop). The terminal refreshes the transport vehicle, which flies toward the airdrop position according to the instruction of the server (S1103); the terminal judges whether the transport vehicle has reached the airdrop position (S1104), and if so, the escape box is airdropped (S1105).
Step 4, whether the escape box is opened successfully by the user or not is judged, and if yes, the step 5 is carried out;
in an exemplary scheme, the user-controlled virtual object needs to perform an operation of opening the escape box to successfully escape, for example, as shown in fig. 8, if the user-controlled virtual object successfully maintains a certain operation within a preset time period (for example, 8 seconds) without being interrupted or actively cancelled, the terminal confirms that the escape box is successfully opened by the user.
Step 5, after the escape box is opened successfully, the escaping flying bird is refreshed and the escape animation of the user is played; if the escape animation has been played, entering step 6;
in the process, the terminal judges whether the operation of opening the escape box by the user is interrupted or not (S1106), if not, the terminal confirms that the escape box is successfully opened by the user, the escape form of the escape prop can be refreshed, for example, a flying bird is refreshed, and an escape animation of a virtual object controlled by the user is played (S1107), for example, the virtual object grasps the leg of the flying bird and is dragged by the flying bird to fly away.
Step 6, the settlement interface pops up for the user at this time, and a single-round settlement reward is performed (S1108).
In an exemplary scheme, a virtual object that leaves the virtual environment through the scheme shown in the application can obtain a ranking among the virtual objects with the target identity and obtain a corresponding reward settlement.
Optionally, in addition to the ranking among the virtual objects having the target identity, the virtual objects that leave the virtual environment through the scheme shown in the present application may also obtain a ranking among all the virtual objects in the virtual environment and obtain the corresponding bonus settlement.
Optionally, in this embodiment of the present application, when the first virtual object escapes successfully, the teammates of the first virtual object may also escape successfully, and obtain the same ranking and reward settlement.
In summary, according to the method for controlling a virtual object in a virtual environment provided by this embodiment, after the second virtual object with the target identity collects the target prop and uses the target prop, the escape prop is refreshed in the virtual environment, and when the first virtual object controlled by the terminal successfully executes the designated operation on the escape prop, the virtual object is triggered to leave the virtual environment in the state of not being eliminated. A scheme is thus provided in which a user can leave the virtual environment without being eliminated and without having to eliminate the users of all other teams, which shortens the time required for leaving the virtual environment in the state of not being eliminated and saves the electric quantity and the data traffic consumed by the terminal in the course of the virtual object controlled by the user escaping successfully.
In addition, according to the scheme shown in the embodiment of the application, when the virtual object is controlled to be separated from the virtual environment under the condition that the virtual object is not eliminated, the corresponding animation effect is displayed, the problem that the picture is more obtrusive when the virtual object is separated from the virtual environment is avoided, and the construction effect of the virtual environment is improved.
In addition, through the scheme shown in the application, the user can obtain a special chance of escaping successfully, which avoids the cruel victory mode in which users are forced to fight until only one person or one team remains, changes the mode of fully searching for and eliminating enemies in a tactical competitive game, effectively improves the game atmosphere and game experience, and increases the user's enjoyment of the game.
It should be noted that, in the scheme shown in fig. 6, only three processes of collecting the target prop, calling the escape prop, and using the escape prop to escape are described as examples, which are executed by the first virtual object. Optionally, the three processes of collecting the target property, calling the escape property, and using the escape property to escape may also be executed by different virtual objects, for example, when the first virtual object and the second virtual object are different virtual objects, the second virtual object with the target identity may collect the target property and successfully call the escape property, and the first virtual object with the target identity may use the escape property to escape. For another example, when the first virtual object, the second virtual object, and the third virtual object are different virtual objects, the second virtual object with the target identity may collect the target prop, the third virtual object with the target identity may successfully call the escape prop using the target prop, and the first virtual object with the target identity may use the escape prop to escape.
Fig. 12 is a block diagram of a virtual object control apparatus in a virtual environment according to an exemplary embodiment of the present application, where the apparatus may be implemented in a terminal, and as shown in fig. 12, the apparatus includes:
an environment interface display module 1201, configured to display a virtual environment interface including a virtual environment picture, where the virtual environment picture is a picture obtained when the virtual environment is observed from a view angle of a first virtual object controlled by the terminal;
a prop display module 1202, configured to display the escape prop in the virtual environment interface when the escape prop is within a visual range of the first virtual object; the escape prop is a virtual prop refreshed in the virtual environment after a second virtual object of the target identity executes a calling operation; the calling operation is an operation which can be executed by the virtual object with the target identity after the collection of the target prop is finished;
a disengagement triggering module 1203, configured to trigger the first virtual object to disengage from the virtual environment in a state that the first virtual object is not eliminated when the first virtual object completes a specified operation on the escape prop.
Optionally, the apparatus further comprises:
the first control display module is used for displaying a first trigger control on the virtual environment interface after the target prop is collected by the first virtual object when the first virtual object and the second virtual object are the same virtual object;
a calling request sending module, configured to send a calling request to a server after receiving a triggering operation on the first trigger control, where the calling request is used to indicate that the first virtual object performs the calling operation;
and the refreshing module is used for refreshing the virtual prop in the virtual environment according to the refreshing instruction sent by the server.
Optionally, the prop display module 1202 includes:
the quantity obtaining unit is used for obtaining the quantity of the target props collected by the first virtual object when the triggering operation of the first triggering control is received;
and the request sending unit is used for sending the call request to the server when the number of the target props collected by the first virtual object is not less than a number threshold.
Optionally, the request sending unit is configured to display a second trigger control on the virtual environment interface when the number of the target props collected by the first virtual object is not less than a number threshold; and when receiving the triggering operation of the second triggering control, sending the calling request to a server.
Optionally, the target prop is a virtual prop visible to and collectable by a virtual object having the target identity.
Optionally, the escape prop is a virtual prop usable by a virtual object having the target identity.
Optionally, the apparatus further comprises:
and the animation display module is used for displaying the escape animation, and the escape animation comprises the animation of the first virtual object which is separated from the virtual environment in the state of not being eliminated.
Optionally, the escape animation further includes an animation in which a second virtual object is separated from the virtual environment in a state of not being eliminated, where the second virtual object is a virtual object on the same team as the first virtual object.
Optionally, the apparatus further comprises:
and the settlement interface display module is used for displaying a ranking settlement interface, and the ranking settlement interface comprises ranking information of the first virtual object in the virtual object with the target identity in the virtual environment.
Optionally, the ranking settlement interface further includes ranking information of the first virtual object in each virtual object in the virtual environment.
In summary, in the virtual object control apparatus in a virtual environment provided in this embodiment, after the second virtual object with the target identity collects the target prop and uses the target prop, the escape prop is refreshed in the virtual environment, and when the first virtual object controlled by the terminal successfully performs the specified operation on the escape prop, the virtual object is triggered to leave the virtual environment in the state of not being eliminated. A scheme is thus provided in which a user can leave the virtual environment without being eliminated and without having to eliminate the users of all other teams, which shortens the time required for leaving the virtual environment in the state of not being eliminated and saves the electric quantity and the data traffic consumed by the terminal in the course of the virtual object controlled by the user escaping successfully.
In addition, according to the scheme shown in the embodiment of the application, when the virtual object is controlled to be separated from the virtual environment under the condition that the virtual object is not eliminated, the corresponding animation effect is displayed, the problem that the picture is more obtrusive when the virtual object is separated from the virtual environment is avoided, and the construction effect of the virtual environment is improved.
Fig. 13 is a block diagram illustrating a terminal 1300 according to an exemplary embodiment of the present invention. The terminal 1300 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III, motion video Experts compression standard Audio Layer 3), an MP4 player (Moving Picture Experts Group Audio Layer IV, motion video Experts compression standard Audio Layer 4), a notebook computer, or a desktop computer. Terminal 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
In general, terminal 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement a virtual object control method in a virtual environment as provided by method embodiments herein.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, touch display 1305, camera 1306, audio circuitry 1307, positioning component 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1305 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or over the surface of the display screen 1305. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1305 may be one, providing the front panel of terminal 1300; in other embodiments, display 1305 may be at least two, either on different surfaces of terminal 1300 or in a folded design; in still other embodiments, display 1305 may be a flexible display disposed on a curved surface or on a folded surface of terminal 1300. Even further, the display 1305 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1300. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuitry 1304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1307 may also include a headphone jack.
The positioning component 1308 is used for positioning the current geographic location of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the GLONASS system of Russia.
Power supply 1309 is used to provide power to various components in terminal 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable. When the power source 1309 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the touch display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user with respect to the terminal 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1313 may be disposed on a side bezel of terminal 1300 and/or underlying touch display 1305. When the pressure sensor 1313 is disposed on the side frame of the terminal 1300, a user's holding signal to the terminal 1300 may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the touch display screen 1305, the processor 1301 controls an operability control on the UI interface according to a pressure operation of the user on the touch display screen 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user, and the processor 1301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical button or vendor Logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor Logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 can control the display brightness of the touch display screen 1305 according to the intensity of the ambient light collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the touch display 1305 is turned down. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
Proximity sensor 1316, also known as a distance sensor, is typically disposed on a front panel of terminal 1300. Proximity sensor 1316 is used to gather the distance between the user and the front face of terminal 1300. In one embodiment, the processor 1301 controls the touch display 1305 to switch from the bright screen state to the dark screen state when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually decreases; the processor 1301 controls the touch display 1305 to switch from the dark screen state to the bright screen state when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually increases.
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting with respect to terminal 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
An embodiment of the present application further provides a computer device, where the computer device includes a memory and a processor, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded by the processor and implements the above-mentioned method for controlling virtual objects in a virtual environment as shown in fig. 5 or fig. 6.
Embodiments of the present application further provide a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the virtual object control method in the virtual environment as described in fig. 5 or fig. 6.
The present application further provides a computer program product, which when run on a computer, causes the computer to execute the method for controlling a virtual object in a virtual environment provided by the above method embodiments.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, which may be a computer readable storage medium contained in a memory of the above embodiments; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer readable storage medium has stored therein at least one instruction, at least one program, a set of codes, or a set of instructions that is loaded and executed by the processor to implement the virtual object control method in a virtual environment as described in fig. 5 or fig. 6.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method of virtual object control in a virtual environment, the method being performed by a terminal, the method comprising:
displaying a virtual environment interface comprising a virtual environment picture, wherein the virtual environment picture is a picture when the virtual environment is observed from the view angle of a first virtual object controlled by the terminal;
when the escape prop is within the visual range of the first virtual object, displaying the escape prop in the virtual environment interface; the escape prop is a virtual prop refreshed in the virtual environment after a second virtual object of the target identity executes a calling operation; the calling operation is an operation which can be executed after the virtual object with the target identity finishes collecting the target prop, the target prop is a virtual prop collected by the virtual object with the target identity, and the target identity is an additional identity randomly distributed by the server;
when the first virtual object finishes the designated operation of the escape prop, triggering the first virtual object and a third virtual object to be separated from the virtual environment in a state of not being eliminated, wherein the third virtual object is other virtual objects which are in the same team as the first virtual object in the virtual environment.
2. The method of claim 1, wherein when the first virtual object and the second virtual object are the same virtual object, the method further comprises:
after the first virtual object collects the target prop, displaying a first trigger control on the virtual environment interface;
after receiving a triggering operation of the first triggering control, sending a calling request to a server, wherein the calling request is used for indicating that the first virtual object executes the calling operation;
and refreshing the virtual prop in the virtual environment according to a refreshing instruction sent by the server.
3. The method of claim 2, wherein sending a summoning request to a server after receiving the triggering operation of the first trigger control comprises:
when receiving a trigger operation on the first trigger control, acquiring the quantity of the target props collected by the first virtual object;
when the number of the target props collected by the first virtual object is not less than a number threshold value, sending the call request to the server.
4. The method of claim 3, wherein said sending the call request to the server when the number of the target props collected by the first virtual object is not less than a number threshold comprises:
when the number of the target props collected by the first virtual object is not less than a number threshold value, displaying a second trigger control on the virtual environment interface;
and when receiving the triggering operation of the second triggering control, sending the calling request to a server.
5. The method of any one of claims 1 to 4, wherein the target prop is a virtual prop visible to and collectable by a virtual object having the target identity.
6. The method of any one of claims 1 to 4, wherein the escape prop is a virtual prop usable by a virtual object having the target identity.
7. The method of any of claims 1 to 4, further comprising:
displaying an escape animation, wherein the escape animation comprises an animation of the first virtual object being separated from the virtual environment in the state of not being eliminated.
8. The method according to claim 7, wherein the escape animation further comprises an animation that a second virtual object is out of the virtual environment in a state that the second virtual object is not eliminated, wherein the second virtual object is a virtual object on the same team as the first virtual object.
9. The method of any of claims 1 to 4, further comprising:
displaying a ranking settlement interface, wherein the ranking settlement interface comprises ranking information of the first virtual object in a virtual object with the target identity in the virtual environment.
10. The method of claim 9, wherein the ranking settlement interface further comprises ranking information of the first virtual object in each virtual object within the virtual environment.
11. An apparatus for controlling a virtual object in a virtual environment, the apparatus being used in a terminal, the apparatus comprising:
the environment interface display module is used for displaying a virtual environment interface containing a virtual environment picture, wherein the virtual environment picture is a picture obtained when the virtual environment is observed from the visual angle of a first virtual object controlled by the terminal;
the property display module is used for displaying the escape property in the virtual environment interface when the escape property is within the visual range of the first virtual object; the escape prop is a virtual prop refreshed in the virtual environment after a second virtual object of the target identity executes a calling operation; the calling operation is an operation which can be executed after the virtual object with the target identity finishes collecting the target prop, the target prop is a virtual prop collected by the virtual object with the target identity, and the target identity is an additional identity randomly distributed by the server;
and the separation triggering module is used for triggering the first virtual object and a third virtual object to separate from the virtual environment in a non-eliminated state when the first virtual object completes the designated operation on the escape prop, wherein the third virtual object is another virtual object in the same team as the first virtual object in the virtual environment.
12. The apparatus of claim 11, further comprising:
the first control display module is used for displaying a first trigger control on the virtual environment interface after the target prop is collected by the first virtual object when the first virtual object and the second virtual object are the same virtual object;
a calling request sending module, configured to send a calling request to a server after receiving a triggering operation on the first trigger control, where the calling request is used to indicate that the first virtual object performs the calling operation;
and the refreshing module is used for refreshing the virtual prop in the virtual environment according to the refreshing instruction sent by the server.
13. The apparatus of claim 12, wherein the prop display module comprises:
the quantity obtaining unit is used for obtaining the quantity of the target props collected by the first virtual object when the triggering operation of the first triggering control is received;
and the request sending unit is used for sending the call request to the server when the number of the target props collected by the first virtual object is not less than a number threshold.
14. A computer device comprising a processor and a memory, said memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by said processor to implement a virtual object control method in a virtual environment according to any one of claims 1 to 10.
15. A computer-readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a virtual object control method in a virtual environment according to any one of claims 1 to 10.
CN201910760234.9A 2019-08-16 2019-08-16 Virtual object control method, device, equipment and storage medium in virtual environment Active CN110478904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910760234.9A CN110478904B (en) 2019-08-16 2019-08-16 Virtual object control method, device, equipment and storage medium in virtual environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910760234.9A CN110478904B (en) 2019-08-16 2019-08-16 Virtual object control method, device, equipment and storage medium in virtual environment

Publications (2)

Publication Number Publication Date
CN110478904A CN110478904A (en) 2019-11-22
CN110478904B true CN110478904B (en) 2021-03-19

Family

ID=68551818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910760234.9A Active CN110478904B (en) 2019-08-16 2019-08-16 Virtual object control method, device, equipment and storage medium in virtual environment

Country Status (1)

Country Link
CN (1) CN110478904B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111330274B (en) * 2020-02-20 2022-02-18 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and storage medium
CN111524398B (en) * 2020-04-14 2021-12-31 天津洪恩完美未来教育科技有限公司 Processing method, device and system of interactive picture book
CN112156470A (en) * 2020-10-21 2021-01-01 腾讯科技(深圳)有限公司 Resource processing method and device based on virtual scene and computer equipment
CN112256251A (en) * 2020-10-29 2021-01-22 北京冰封互娱科技有限公司 Game data processing method, game data processing device, main body object configuration method, main body object configuration device, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000308759A (en) * 1999-04-27 2000-11-07 Konami Co Ltd Control method for video game characters, video game device, and storage medium
CN109107154B (en) * 2018-08-02 2023-04-07 腾讯科技(深圳)有限公司 Virtual item movement control method and device, electronic device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
古风创意新游《代号LN》内容揭秘：吃鸡+无限法则+APEX+武侠乂? [Content reveal of the ancient-style creative new game "代号LN" (Codename LN): battle royale + 无限法则 + APEX + 武侠乂?]; 虎牙丶冥Sir; https://www.bilibili.com/read/cv2861033/; 2019-06-13; pages 1-14 *

Also Published As

Publication number Publication date
CN110478904A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN110433488B (en) Virtual character-based fight control method, device, equipment and medium
CN111589131B (en) Control method, device, equipment and medium of virtual role
CN111249730B (en) Virtual object control method, device, equipment and readable storage medium
CN110755841B (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN111589133B (en) Virtual object control method, device, equipment and storage medium
CN110478904B (en) Virtual object control method, device, equipment and storage medium in virtual environment
CN110465083B (en) Map area control method, apparatus, device and medium in virtual environment
CN111589130B (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN111659119B (en) Virtual object control method, device, equipment and storage medium
CN113289331B (en) Display method and device of virtual prop, electronic equipment and storage medium
CN112083848B (en) Method, device and equipment for adjusting position of control in application program and storage medium
CN111714893A (en) Method, device, terminal and storage medium for controlling virtual object to recover attribute value
CN111672118B (en) Virtual object aiming method, device, equipment and medium
CN113398571A (en) Virtual item switching method, device, terminal and storage medium
CN110448905B (en) Virtual object control method, device, equipment and storage medium in virtual environment
CN111744186A (en) Virtual object control method, device, equipment and storage medium
CN111760278A (en) Skill control display method, device, equipment and medium
CN112691370A (en) Method, device, equipment and storage medium for displaying voting result in virtual game
CN111589144B (en) Virtual character control method, device, equipment and medium
CN110448907B (en) Method and device for displaying virtual elements in virtual environment and readable storage medium
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN112870699A (en) Information display method, device, equipment and medium in virtual environment
CN111330277A (en) Virtual object control method, device, equipment and storage medium
CN113101656B (en) Virtual object control method, device, terminal and storage medium
CN110478899A (en) Map area control method, device, equipment and medium in virtual environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant