CN110465083B - Map area control method, apparatus, device and medium in virtual environment


Info

Publication number
CN110465083B
Authority
CN
China
Prior art keywords
virtual
target
map area
virtual environment
prop
Prior art date
Legal status
Active
Application number
CN201910760226.4A
Other languages
Chinese (zh)
Other versions
CN110465083A (en)
Inventor
刘俊汐
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910760226.4A
Publication of CN110465083A
Application granted
Publication of CN110465083B

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 - Input arrangements for video game devices
    • A63F13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators for displaying an additional top view, e.g. radar screens or maps
    • A63F13/55 - Controlling game characters or game objects based on the game progress
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/69 - Generating or modifying game content before or while executing the game program by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F13/822 - Strategy games; Role-playing games
    • A63F13/843 - Special adaptations for executing a specific game genre or game mode involving concurrently two or more players on the same game device, e.g. requiring the use of a plurality of controllers or of a specific view of game data for each player

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a map area control method, apparatus, device and medium in a virtual environment, and relates to the field of virtual environments. The method comprises the following steps: displaying a user interface, wherein the user interface displays a virtual environment picture generated when the virtual environment is observed from the perspective of a master virtual character, and the virtual environment is a battle environment in which at least two virtual characters compete for a limited number of escape qualifications on a map whose first target map area keeps expanding; controlling the master virtual character to complete a trigger event in the virtual environment; and changing a local area of the common map area of the map into a second target map area according to the trigger event.

Description

Map area control method, apparatus, device and medium in virtual environment
Technical Field
The present application relates to the field of virtual environments, and in particular, to a method, an apparatus, a device, and a medium for controlling a map area in a virtual environment.
Background
A tactical competitive game is a game in which virtual objects are placed in a virtual environment and a series of escape rules is provided; after formulating an escape strategy according to these rules, the player controls a virtual object in the game to escape.
In the virtual environment of a tactical competitive game, the map contains a target map area (also called the danger zone) and a common map area (also called the safe zone). The target map area on the outer ring closes in by one ring every preset period, so that the common map area in the center shrinks. For example, after a waiting time of T1, the outer target map area expands inward over a time of T2, reducing the common map area by 20%; after another waiting time of T1, it expands inward again over T2, reducing the common map area by a further 20%. The duration T1 may be referred to as the waiting period and the duration T2 as the increase period.
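Purely as an illustration of the fixed schedule described above, the following C# sketch alternates a waiting period T1 with an increase period T2 and shrinks the safe area by 20% per cycle; the class name, the concrete durations and the circular-area simplification are assumptions of this example, not part of the described scheme.

    using System;

    // Minimal sketch (assumptions only): a server-driven schedule in which the
    // danger area stays fixed for a waiting period T1 and then expands over an
    // increase period T2, reducing the common (safe) map area by a fixed fraction.
    class RetractionSchedule
    {
        static void Main()
        {
            double safeRadius = 1000.0;       // current radius of the common map area
            const double t1WaitSeconds = 60;  // waiting period T1
            const double t2GrowSeconds = 30;  // increase period T2
            const double shrinkFactor = 0.20; // fraction removed per cycle
            double elapsed = 0;

            for (int cycle = 1; cycle <= 4; cycle++)
            {
                elapsed += t1WaitSeconds;              // danger area unchanged during T1
                safeRadius *= (1.0 - shrinkFactor);    // danger area expands inward during T2
                elapsed += t2GrowSeconds;
                Console.WriteLine($"cycle {cycle}: t = {elapsed} s, safe radius = {safeRadius:F1}");
            }
        }
    }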
The refresh of the current target map area is controlled by the server, and the waiting period and the increase period are relatively fixed, so the duration of a whole match is relatively fixed. When a large number of players are playing at the same time (for example, when hundreds of millions of players are online simultaneously in the evening or on weekends), the server is under heavy load.
Disclosure of Invention
The embodiments of the present application provide a map area control method, apparatus, device and medium in a virtual environment, which can solve the problems that the duration of a whole match is relatively fixed and that the server bears heavy load when many players are in matches at the same time. The technical solution is as follows:
in one aspect, a method for controlling a map area in a virtual environment is provided, where the method is applied to a terminal running a computer program, and the method includes:
displaying a user interface, wherein the user interface displays a virtual environment picture generated when the virtual environment is observed from the perspective of a master virtual character, and the virtual environment is a battle environment in which at least two virtual characters compete for a limited number of escape qualifications on a map whose first target map area keeps expanding;
controlling the master virtual role to complete a trigger event in the virtual environment;
and changing a local area in the common map area of the map into a second target map area according to the trigger event.
In another aspect, a map area control apparatus in a virtual environment is provided, the apparatus running a program based on a virtual environment engine, and the apparatus comprising:
a display module, configured to display a user interface, wherein the user interface displays a virtual environment picture generated when the virtual environment is observed from the perspective of a master virtual character, and the virtual environment is a battle environment in which at least two virtual characters compete for a limited number of escape qualifications on a map whose first target map area keeps expanding;
a control module, configured to control the master virtual character to complete a trigger event in the virtual environment; and
a changing module, configured to change a local area of the common map area of the map into a second target map area according to the trigger event.
In another aspect, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a map area control method in a virtual environment as provided in embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the map area control method in a virtual environment as provided in the embodiments of the present application.
In another aspect, a computer program product is provided, which when run on a computer, causes the computer to execute a map area control method in a virtual environment as provided in the embodiments of the present application described above.
The beneficial effects brought by the technical solutions provided in the embodiments of the present application include at least the following:
The master virtual character is controlled to complete a trigger event in the virtual environment, and a local area of the common map area in the center of the map is changed into the second target map area according to the trigger event. The retraction process of the target map area is therefore no longer limited to periodic control by the system; instead, an influencing factor controlled by the user is added, so the overall duration of different matches can vary greatly. For example, if users change local areas of the common map area into second target map areas many times, the overall duration of a single match can be reduced significantly, which reduces the load on the server.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic diagram of a birth point selection process provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic illustration of a skill presentation of viewing skills provided in an exemplary embodiment of the present application;
FIG. 3 is a process diagram of a viewing perspective conversion method according to an exemplary embodiment of the present application;
FIG. 4 shows a block diagram of an electronic device provided by an exemplary embodiment of the present application;
FIG. 5 illustrates a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 6 is a flow chart of a method for controlling a map area in a virtual environment provided by an exemplary embodiment of the present application;
FIG. 7 is a flowchart of a method for map area control in a virtual environment provided by an exemplary embodiment of the present application;
FIG. 8 is a schematic illustration of a periodic increase in target map area provided by another exemplary embodiment of the present application;
FIG. 9 is a schematic illustration of an interface of a map area control method in a virtual environment, as implemented, according to another exemplary embodiment of the present application;
FIG. 10 is a flow chart of a method for map area control in a virtual environment provided by an exemplary embodiment of the present application;
FIG. 11 is a block diagram illustrating a map area control apparatus in a virtual environment according to another exemplary embodiment of the present application;
FIG. 12 is a block diagram of a terminal according to another exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the following detailed description of the embodiments of the present application will be made with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
tactical competitive game: the game is characterized in that a virtual object is placed in a virtual environment, a series of escape rules are provided, and after a player makes an escape strategy according to the escape rules, the virtual object in the game is controlled to realize escape.
Target map area (also called danger zone): in maps for tactical sports games, map areas that can cause virtual characters to be environmentally damaged. Such as: when the virtual character is in the target map area, blood volume is lost continuously until death.
General map area (also called safe zone): in a map of a tactical competitive game, a map area in which a virtual character is not environmentally damaged is provided.
Retracting: the target map area gradually increases, resulting in a process in which the general map area gradually becomes smaller.
Optionally, in the embodiments of the present application, the escape rules provided by the tactical competitive game include at least one of: birth rules, map area rules, additional identity rules, observation rules and escape rules.
These five rules are explained separately below:
1. Birth rules
The birth rules represent how the position of a virtual character is determined after it enters a match. Optionally, the virtual environment corresponds to a map, and n preset locations in the map correspond to n birth points. The user can select any one of the n birth points when the match starts, and after the match starts, the initial position of the virtual character controlled by the user is the position corresponding to the selected birth point. Alternatively, the candidate birth points provided in each match may be all of the n birth points or some of them, for example: m birth points are determined from the n birth points as the selectable birth points of the current match, and the user selects any one of the m birth points, where 0 < m < n. Optionally, the area formed by connecting the m birth points is displayed as a bar-shaped area in the map corresponding to the virtual environment; optionally, the birth point at the starting position of the bar-shaped area is closest to a first side of the map, the birth point at the ending position of the bar-shaped area is closest to a second side of the map, and the first side and the second side are two opposite sides.
Illustratively, 31 positions in the map correspond to 31 birth points, and 8 birth points are determined for a single match. As shown in FIG. 1, a schematic diagram of a birth point selection process provided by an exemplary embodiment of the present application, the map corresponding to the virtual environment shows 31 birth points 110, and for the current match, 8 birth points 120 located in a bar-shaped area are randomly determined as the birth points selectable by users in this match.
It should be noted that the bar-shaped area is used to gather the virtual objects participating in the match in the bar-shaped area at the beginning of the match, and to have the virtual objects move towards the common map area along relatively consistent paths following the retraction direction of the common map area (for example, in FIG. 1, moving to the left or to the right). It should also be noted that the bar-shaped area may be implemented as an area of another shape, which is not limited in the embodiments of the present application.
2. Map area rules
During the escape process, a virtual character is affected by environmental factors in the virtual environment and needs to move to the common map area, so as to prevent the environmental factors in the target map area from gradually reducing its health until it is eliminated. Optionally, the common map area may retract according to the match progress, according to the user's use of a game prop, or according to the user's use of a skill, and the retraction process has characteristics such as retraction speed, retraction range and retraction interval. Optionally, during retraction, the common map area is gradually reduced from a first area with a larger range to a second area with a smaller range, where the second area is a sub-area of the first area; that is, retraction is the process of gradually shrinking the common map area from the edge of the first area towards a second area determined inside it. The retraction speed represents the time taken for the first area to shrink to the second area; this duration may be fixed, or may be lengthened or shortened according to the user's use of props or skills. The retraction range represents the extent of the second area in the virtual environment after retraction; it may be preset, or enlarged or reduced according to the user's use of props or skills. The retraction interval represents the interval between two adjacent retraction events of the common map area; it may be a fixed duration, or may be lengthened or shortened according to the user's use of props or skills.
The first area may be a regularly-shaped area or an irregularly-shaped area; the second area may be a regularly-shaped area within the first area or an irregularly-shaped area within the first area. Optionally, the map corresponding to the virtual environment divides the virtual environment into a grid of a preset size, for example, each square corresponds to a 100 × 100 area in the virtual environment, and the refresh of the common map area may take the square as the refresh unit. For example, the first area occupies 6400 interconnected squares with an irregular outline, and the second area occupies 3800 interconnected squares among those 6400 squares.
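As a minimal sketch of the grid bookkeeping just described (the 100 × 100 square size is taken from the example above; the method and class names are assumptions), a world position can be mapped to the square that serves as the refresh unit:

    using System;

    // Sketch of the grid bookkeeping described above: the map divides the virtual
    // environment into squares of a preset size (100 x 100 in the example), and
    // area refreshes are carried out with the square as the refresh unit.
    class MapGrid
    {
        const float CellSize = 100f;

        // Maps a world position to the index of the grid square that contains it.
        static (int x, int y) CellOf(float worldX, float worldY)
        {
            return ((int)Math.Floor(worldX / CellSize), (int)Math.Floor(worldY / CellSize));
        }

        static void Main()
        {
            var cell = CellOf(350f, 120f);
            Console.WriteLine($"position (350,120) falls in square {cell}"); // square (3, 1)
        }
    }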
Optionally, in the process of determining the common map areas, an area of a preset size and a preset shape in the virtual environment is first determined as the first common map area obtained by the final refresh; a second common map area surrounding the first common map area is generated on the basis of the first, a third common map area surrounding the second is generated on the basis of the second, and so on, so that the refresh from one common map area to the next is determined according to the number of times the common map area is to be refreshed. For example, if the common map area needs to be refreshed 4 times, the first refresh shrinks the maximum virtual environment range to the fourth common map area, the second refresh shrinks the fourth common map area to the third, the third refresh shrinks the third to the second, and the fourth refresh shrinks the second to the first common map area.
Taking the first refresh, performed over the maximum virtual environment range, as an example: the maximum virtual environment range is a square range, at least one refresh point is randomly determined on each of the four sides of the square, and during the refresh the refresh points on each side gradually move into the square range until the fourth common map area is obtained by the refresh.
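The ordering described above can be sketched as follows; representing each common map area by a single radius is a deliberate simplification, and all names and numbers are assumptions of this example:

    using System;
    using System.Collections.Generic;

    // Sketch of the refresh ordering: the final (innermost) common map area is
    // determined first and successively larger areas are generated around it;
    // at run time the refreshes are applied in the opposite order, from the
    // largest area down to the innermost one.
    class NestedAreas
    {
        static void Main()
        {
            int refreshCount = 4;
            // Generate areas from the innermost (index 0) outwards.
            var radii = new List<double>();
            double r = 100;
            for (int i = 0; i < refreshCount; i++) { radii.Add(r); r *= 2; }

            // Apply refreshes from the outermost area towards the innermost one.
            for (int step = 1; step <= refreshCount; step++)
            {
                double target = radii[refreshCount - step];
                Console.WriteLine($"refresh {step}: common map area shrinks to radius {target}");
            }
        }
    }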
Optionally, the virtual character may also create a target map area of a preset size at any position in the current common map area through props or skills. Illustratively, after virtual character A acquires the prop drilling machine and uses the drilling machine at a first position in the current common map area of the virtual environment, a target map area of a preset shape and a preset size is generated in the common map area with the first position as its starting position.
3. Additional identity rules
Optionally, in the embodiments of the present application, the escape rules provided in the tactical competitive game further include rules corresponding to additional identities of the virtual characters; during escape in the virtual environment, virtual characters with different additional identities have different skills and see different visual content. The additional identity may be randomly assigned by the server to the virtual characters in a match before the match of the tactical competitive game starts, or the user may select from additional identities randomly provided by the server after matching succeeds. Optionally, when the additional identities are randomly assigned by the server, the server assigns them to the virtual characters in the match according to a preset ratio. For example, if the preset ratio of the first, second and third additional identities is 7:2:1 and 100 virtual characters are matched into a match, 70 virtual characters are assigned the first additional identity, 20 the second, and 10 the third. Optionally, when the additional identity is selected by the user, the server keeps the number of virtual characters of each additional identity in the match at the preset ratio. For example, with the ratio 7:2:1 and 100 virtual characters in the match, once 70 virtual characters have selected the first additional identity, the server marks the first additional identity as no longer selectable.
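For the random-assignment case, a possible sketch of distributing additional identities at the 7:2:1 ratio over 100 virtual characters is shown below; the pool-and-shuffle approach and all identifiers are assumptions, not the patent's prescribed implementation:

    using System;
    using System.Collections.Generic;

    // Sketch: the server distributes additional identities over the virtual
    // characters in a match according to a preset ratio (7:2:1 in the example).
    class IdentityAllocator
    {
        static void Main()
        {
            int playerCount = 100;
            var ratio = new (string identity, int parts)[] { ("first", 7), ("second", 2), ("third", 1) };
            int totalParts = 10;

            var pool = new List<string>();
            foreach (var (identity, parts) in ratio)
                for (int i = 0; i < playerCount * parts / totalParts; i++)
                    pool.Add(identity);

            // Shuffle so that which character receives which identity is random.
            var rng = new Random();
            for (int i = pool.Count - 1; i > 0; i--)
            {
                int j = rng.Next(i + 1);
                (pool[i], pool[j]) = (pool[j], pool[i]);
            }
            Console.WriteLine($"assigned {pool.Count} identities (70/20/10 expected split)");
        }
    }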
Optionally, in the tactical competitive game provided in the embodiments of the present application, at least three additional identities are provided, and each additional identity corresponds to a skill set. Illustratively, a first additional identity, a second additional identity and a third additional identity are provided in the tactical competitive game, where the first additional identity corresponds to a first skill set, the second additional identity corresponds to a second skill set, and the third additional identity corresponds to a third skill set. There may be an intersection between the first, second and third skill sets, that is, there may be a target skill that belongs to at least two of them. Optionally, each skill set also includes its own independent skill: the first skill set includes a first skill that belongs to neither the second skill set nor the third skill set; the second skill set includes a second skill that belongs to neither the first skill set nor the third skill set; and the third skill set includes a third skill that belongs to neither the first skill set nor the second skill set.
Illustratively, taking these three additional identities as examples, in the tactical competitive game of the embodiments of the present application each additional identity corresponds to at least one specific skill. The independent skills of each additional identity are described separately below:
A first additional identity (the destroyer identity), whose corresponding first skill includes: destroyer treasure boxes in the virtual environment are visible to it. Three props are provided that are available only to virtual characters with the destroyer identity: 1. an annunciator; 2. a seismograph; 3. a drilling machine. The annunciator is used to obtain additional equipment rewards; illustratively, the annunciator is used to summon higher-grade props (such as helmets, armor or backpacks) and/or weapons with more comprehensive lethal effects. The seismograph is used to change the refresh progress of the common map area; illustratively, when the seismograph is used between two common map area refresh events, the time interval between the two refresh events is reduced by a preset duration, for example: the current common map area would start refreshing to the next common map area after 20 seconds, but when a virtual object uses the seismograph, it starts refreshing after 10 seconds instead. The drilling machine is used to create a target map area of a preset size inside the common map area. Illustratively, after virtual character A acquires the drilling machine and uses it at a first position in the current common map area of the virtual environment, a target map area of a preset shape and a preset size is generated in the common map area with the first position as its starting position.
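The drilling-machine effect can be sketched as creating a danger zone anchored at the use position; the circular shape, the radius value and the type names below are assumptions chosen only to make the example concrete:

    using System;

    // Sketch: using the drilling machine at a position inside the current common
    // map area creates a target (danger) area of a preset size anchored there.
    struct CreatedDangerZone
    {
        public float CenterX, CenterY, Radius;
        public bool Contains(float x, float y)
        {
            float dx = x - CenterX, dy = y - CenterY;
            return dx * dx + dy * dy <= Radius * Radius;
        }
    }

    class DrillProp
    {
        static void Main()
        {
            // Virtual character A uses the drill at position (250, 400).
            var zone = new CreatedDangerZone { CenterX = 250, CenterY = 400, Radius = 50 };
            Console.WriteLine(zone.Contains(260, 410));  // True: inside the new target map area
            Console.WriteLine(zone.Contains(500, 500));  // False: still in the common map area
        }
    }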
A second additional identity (the hunter identity), whose corresponding second skill includes: marking, on the map, the positions of other virtual characters around the position of the virtual character by triggering a prop. Optionally, the map divides the virtual environment into squares of a preset size; when the virtual character triggers the prop, with the square where the virtual character is located as the center square, the positions of virtual characters within the 9 squares around it (including that square) are marked on the map.
Schematically, FIG. 2 is a schematic diagram of the presentation of a third observation skill provided by an exemplary embodiment of the present application. As shown in FIG. 2, the map 200 of the virtual environment divides the virtual environment into tiles of a preset size, and the tile where the current target virtual character is located is the tile 210 shown in FIG. 2. When the target virtual character triggers the third observation skill, the distribution of virtual characters within the 9 tiles around the tile 210 (including the tile 210, shown by the 9 tiles in the dashed box in FIG. 2) is determined with the tile 210 as the center, and the position of each virtual character is marked at the corresponding coordinates in the map, as shown by the mark 220 in FIG. 2.
Optionally, after the hunter has marked, on the map, the positions of the virtual characters within the 9 surrounding squares (including its own square), when a kill event occurs between the hunter and a marked virtual character (the killer kills the victim), the killer obtains a corresponding gain, for example an increased health regeneration speed or an increased movement speed.
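A possible sketch of the hunter's marking skill follows: with the hunter's square as the center, virtual characters inside the surrounding block of nine squares are marked. The square size and data layout are assumptions of this example.

    using System;
    using System.Collections.Generic;

    // Sketch: mark the positions of virtual characters inside the 3 x 3 block of
    // squares (nine squares including the center) around the hunter's square.
    class HunterMark
    {
        const float CellSize = 100f;

        static bool InNineSquares((int x, int y) center, float px, float py)
        {
            int cx = (int)Math.Floor(px / CellSize);
            int cy = (int)Math.Floor(py / CellSize);
            return Math.Abs(cx - center.x) <= 1 && Math.Abs(cy - center.y) <= 1;
        }

        static void Main()
        {
            var hunterCell = (x: 3, y: 1);   // square the hunter is standing in
            var others = new List<(string name, float x, float y)>
            {
                ("B", 360f, 90f),   // adjacent square: marked
                ("C", 900f, 900f),  // far away: not marked
            };
            foreach (var (name, x, y) in others)
                if (InNineSquares(hunterCell, x, y))
                    Console.WriteLine($"mark {name} at ({x},{y}) on the map");
        }
    }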
A third additional identity (the seeker identity), whose corresponding third skill includes: when the virtual character stays alive in the virtual environment and collects a preset number of target props, an escape prop is summoned, and the virtual character that obtains the escape prop, or that virtual character together with its teammates, is determined to have escaped successfully.
4. Observation rules
Optionally, in the tactical competitive game of the embodiments of the present application, at least three special observation skills are provided. Before a match starts, the user selects any one of the at least three special observation skills as the special observation skill with which the master virtual character controlled by the user observes the virtual environment during the match. Schematically, taking three observation skills as an example, the three special observation skills are explained separately:
First observation skill (hawk overlook): the virtual environment is observed from a prop perspective through a first prop. That is, after the first prop (for example, a virtual bird) is triggered, the first prop rises to a preset height in the air of the virtual environment, and the virtual character observes the virtual environment from that height through the first prop. Because the virtual character observes the virtual environment by releasing the first prop into the air, other virtual characters in the match that have the first prop within their line of sight can observe it and thereby determine the position of the virtual character that released it. Referring to FIG. 3, a process diagram of an observation perspective conversion method provided by an exemplary embodiment of the present application: as shown in FIG. 3, the virtual environment is observed from a first perspective of the virtual character 300, and the observation picture 310 includes an object 311 and a hill 312 in the virtual environment; when the user triggers the first prop 320 through an external input device (for example, by pressing the R key on a keyboard), the virtual environment is observed through the first prop 320, which has risen to the preset height of the virtual environment, and the observation picture 330 includes the object 311, the hill 312, and an object 313 on the other side of the hill 312.
Second observation skill (footprint tracking): after the virtual character triggers the skill, the footprints of virtual characters in the area around it in the virtual environment are displayed in the game interface, and the footprints indicate the travelling direction of the virtual characters that passed through the surrounding area within a preset time period (note that when a virtual character walks backwards through the surrounding area, the direction indicated by the footprints is opposite to its actual travelling direction).
Third observation skill (spar probe): weapon spars within a preset range centered on the position of the virtual character are observed by triggering a probe prop. Optionally, a weapon spar is a mark attached to a weapon (the weapon spar may alternatively be implemented as a weapon skin, a weapon accessory, and so on); optionally, the weapon may be a weapon held by a virtual character, or a weapon waiting to be picked up that lies on the ground of the virtual environment. Optionally, when the virtual character triggers this observation skill, the other virtual items in the virtual environment observed by the virtual character are displayed at a certain gray level and transparency, while the weapon spars are displayed at a certain brightness through other virtual items (such as walls, hillsides and floors). Optionally, when the prop is triggered to observe weapon spars, the observation range is a spherical range of a preset radius centered on the position of the virtual character, and the weapon spars within that range are highlighted.
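The spherical observation range of the spar probe can be sketched as a simple distance test; the radius, coordinates and identifiers below are assumptions for illustration only:

    using System;
    using System.Collections.Generic;

    // Sketch: weapon spars whose distance to the virtual character is within a
    // preset radius are highlighted (here simply listed). Coordinates are
    // three-dimensional because the embodiment uses a 3D virtual environment.
    class SparProbe
    {
        static void Main()
        {
            var character = (x: 0f, y: 0f, z: 0f);
            const float radius = 30f;
            var spars = new List<(string id, float x, float y, float z)>
            {
                ("rifle_spar", 10f, 2f, 5f),
                ("pistol_spar", 80f, 0f, 40f),
            };
            foreach (var s in spars)
            {
                float dx = s.x - character.x, dy = s.y - character.y, dz = s.z - character.z;
                if (dx * dx + dy * dy + dz * dz <= radius * radius)
                    Console.WriteLine($"highlight {s.id} through walls and terrain");
            }
        }
    }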
5. Escape rules
Optionally, the escape rules for different additional identities may be the same or different; all additional identities may correspond to the same unified set of escape rules, and a specific additional identity may correspond to an additional set of escape rules. Schematically: 1. for all virtual characters, a virtual character that stays alive until the final common map area and obtains the escape prop, or that virtual character together with its teammates, is determined to have escaped successfully; 2. for virtual characters with the target additional identity (the seeker identity in the additional identity rules), when the virtual character stays alive in the virtual environment and collects a preset number of target props, the escape prop is summoned, and the virtual character that obtains the escape prop, or that virtual character together with its teammates, is determined to have escaped successfully. It should be noted that, for escape in manner 2, the total number of target props in the virtual environment is a preset number, which is used to control the number of virtual characters with the target additional identity that escape in manner 2. Optionally, for virtual characters with the target additional identity, the tactical competitive game can prompt in real time the remaining number and/or the positions of the target props not yet acquired; when the sum of the number of target props held by the virtual character and the remaining number cannot reach the number required to summon the escape prop, the virtual character switches its strategy and escapes in manner 1.
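The strategy switch mentioned for the target additional identity reduces to a count check, sketched below with assumed numbers (the required count, held count and remaining count are illustrative, not values from the embodiments):

    using System;

    // Sketch: if the target props already held plus those still obtainable can no
    // longer reach the number required to summon the escape prop, fall back to
    // escaping in manner 1.
    class EscapeStrategy
    {
        static void Main()
        {
            int required = 5;        // target props needed to summon the escape prop
            int held = 2;            // target props this character already holds
            int remaining = 2;       // target props not yet acquired by anyone

            if (held + remaining >= required)
                Console.WriteLine("keep collecting target props (manner 2 still possible)");
            else
                Console.WriteLine("switch to manner 1: survive to the final common map area");
        }
    }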
Manner 1 and manner 2 above are two parallel schemes; that is, a virtual character with the target additional identity can escape successfully through either manner 1 or manner 2, but it does not need to continue satisfying the escape requirements of manner 2 after escaping successfully through manner 1, nor does it need to continue satisfying the escape requirements of manner 1 after escaping successfully through manner 2. Optionally, the target prop in manner 2 is a prop that is visible in the virtual environment only to virtual characters with the target additional identity, that is, the target prop is invisible in the virtual environment to virtual characters with other additional identities.
Optionally, virtual characters that escape in different manners are shown different match results; optionally, when a virtual character with the target additional identity corresponds to an additional escape rule and escapes successfully according to that rule, an additional display result is shown to it according to the additional escape rule. Illustratively, after a virtual character escapes through the first escape rule, the result interface of the match shows 'Congratulations, you escaped successfully and took first place', and for a virtual character that still has health in the virtual environment the result interface of the match shows 'Congratulations, you took second place'. When a virtual character with the target additional identity escapes through the second escape rule and is the first virtual character in the virtual environment to escape through the second escape rule, the result interface of the match shows 'Congratulations, you are the first of the target additional identity to escape successfully'; for the second virtual character in the virtual environment to escape through the second escape rule, the result interface of the match shows 'Congratulations, you are the second of the target additional identity to escape successfully', and so on.
The terminal in the present application may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on. An application program supporting a virtual environment, such as an application program supporting a three-dimensional virtual environment, is installed and run on the terminal. The application program may be any one of a battle royale game, a virtual reality application program, a three-dimensional map program, a third-person shooter game (TPS), a first-person shooter game (FPS), and a multiplayer online battle arena game (MOBA). Alternatively, the application may be a stand-alone application, such as a stand-alone 3D game program, or a network online application.
In some embodiments, the application program may be a shooting game, a racing game, a battle royale game, and the like. The client can support at least one of the Windows operating system, the Apple operating system, the Android operating system, the iOS operating system and the Linux operating system, and clients on different operating systems can interconnect and interoperate. In some embodiments, the client is a program adapted to a mobile terminal with a touch screen.
In some embodiments, the client is an application developed based on a three-dimensional engine, for example the Unity engine. FIG. 4 is a schematic structural diagram of a terminal provided by an exemplary embodiment of the present application. As shown in FIG. 4, the terminal includes a processor 11, a touch screen 12 and a memory 13.
The processor 11 may be at least one of a single-core processor, a multi-core processor, an embedded chip, and a processor having instruction execution capability.
The touch screen 12 includes a general touch screen or a pressure sensitive touch screen. The ordinary touch screen can measure a pressing operation or a sliding operation applied to the touch screen 12; a pressure sensitive touch screen can measure the degree of pressure applied to the touch screen 12.
The memory 13 stores an executable program for the processor 11. Illustratively, the memory 13 stores a tactical competitive program A, an application program B, an application program C, a touch-and-pressure sensing module 18, and a kernel layer 19 of an operating system. The tactical competitive program A is an application program developed based on the three-dimensional virtual engine 17. Optionally, the tactical competitive program A includes, but is not limited to, at least one of a game program, a virtual reality program, a three-dimensional map program and a three-dimensional presentation program developed with the three-dimensional virtual engine 17. For example, when the operating system of the terminal is the Android operating system, the tactical competitive program A is developed in the Java programming language and the C# language; for another example, when the operating system of the terminal is the iOS operating system, the tactical competitive program A is developed in the Objective-C programming language and the C# language.
The three-dimensional virtual engine 17 is a three-dimensional interactive engine supporting multiple operating system platforms. Illustratively, the three-dimensional virtual engine can be used for program development in multiple fields, such as game development, Virtual Reality (VR) and three-dimensional maps. The specific type of the three-dimensional virtual engine 17 is not limited in the embodiments of the present application, and the following embodiments take the case where the three-dimensional virtual engine 17 is the Unity engine as an example.
The touch (and pressure) sensing module 18 is a module for receiving touch events (and pressure touch events) reported by the touch screen driver 191. A touch event includes the type of the touch event and coordinate values, where the types of touch events include, but are not limited to, a touch start event, a touch move event and a touch down event. A pressure touch event includes a pressure value and coordinate values of the pressure touch event. The coordinate values are used to indicate the touch position of the pressure touch operation on the display screen. Optionally, a horizontal coordinate axis is established along the horizontal direction of the display screen and a vertical coordinate axis along the vertical direction of the display screen to obtain a two-dimensional coordinate system.
Illustratively, the kernel layer 19 includes a touch screen driver 191 and other drivers 192. The touch screen driver 191 is a module for detecting a pressure touch event, and when the touch screen driver 191 detects the pressure touch event, the pressure touch event is transmitted to the pressure sensing module 18.
The other drivers 192 may be drivers associated with the processor 11, drivers associated with the memory 13, drivers associated with network components, drivers associated with sound components, etc.
Those skilled in the art will appreciate that the foregoing is merely a general illustration of the structure of the terminal. A terminal may have more or fewer components in different embodiments. For example, the terminal may further include a gravitational acceleration sensor, a gyro sensor, a power supply, and the like.
Fig. 5 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 500 includes: a first terminal 550, a server cluster 520, a second terminal 530.
A client 511 supporting a virtual environment is installed and run on the first terminal 550, and the client 511 may be a multiplayer online battle program. When the first terminal runs the client 511, a user interface of the client 511 is displayed on the screen of the first terminal 550. The client may be any one of a MOBA game, a tactical competitive game and an SLG game; in this embodiment, the client is illustrated as a MOBA game. The first terminal 550 is a terminal used by the first user 501, and the first user 501 uses the first terminal 550 to control a first virtual character located in the virtual environment to carry out activities; the first virtual character may be referred to as the master virtual character of the first user 501. The activities of the first virtual character include, but are not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking and throwing. Illustratively, the first virtual character is a first virtual person, such as a simulated character or an animated character.
A client 531 supporting a virtual environment is installed and run on the second terminal 530, and the client 531 may be a multiplayer online battle program. When the second terminal 530 runs the client 531, a user interface of the client 531 is displayed on the screen of the second terminal 530. The client may be any one of a MOBA game, a tactical competitive game and an SLG game; in this embodiment, the client is likewise illustrated as a MOBA game. The second terminal 530 is a terminal used by the second user 502, and the second user 502 uses the second terminal 530 to control a second virtual character located in the virtual environment to carry out activities; the second virtual character may be referred to as the master virtual character of the second user 502. Illustratively, the second virtual character is a second virtual person, such as a simulated character or an animated character.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Optionally, the first virtual character and the second virtual character may belong to the same camp, the same team, the same organization, a friend relationship, or temporary communication rights. Alternatively, the first virtual character and the second virtual character may belong to different camps, different teams, different organizations, or have an enemy relationship.
Optionally, the clients installed on the first terminal 550 and the second terminal 530 are the same, or the clients installed on the two terminals are the same type of client on different operating system platforms (android or IOS). The first terminal 550 may generally refer to one of a plurality of terminals, and the second terminal 530 may generally refer to another of the plurality of terminals, and this embodiment is only illustrated by the first terminal 550 and the second terminal 530. The device types of the first terminal 550 and the second terminal 530 are the same or different, and include: at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in FIG. 5, but in different embodiments there are a plurality of other terminals 540 that may access the server cluster 520. In some embodiments, there are also one or more terminals 540 corresponding to developers; a development and editing platform for the client of the virtual environment is installed on the terminal 540, the developer can edit and update the client on the terminal 540 and transmit the updated client installation package to the server cluster 520 through a wired or wireless network, and the first terminal 550 and the second terminal 530 can download the client installation package from the server cluster 520 to update the client.
The first terminal 550, the second terminal 530, and the other terminals 540 are connected to the server cluster 520 through a wireless network or a wired network.
The server cluster 520 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Server cluster 520 is used to provide background services for clients that support a three-dimensional virtual environment. Optionally, the server cluster 520 undertakes primary computing work and the terminals undertake secondary computing work; or, the server cluster 520 undertakes the secondary computing work, and the terminal undertakes the primary computing work; alternatively, a distributed computing architecture is adopted between the server cluster 520 and the terminals (the first terminal 550 and the second terminal 530) to perform the cooperative computing.
Optionally, the terminal and the server are both computer devices.
In one illustrative example, the server cluster 520 includes a server 521 and a server 526, where the server 521 includes a processor 522, a user account database 523, a battle service module 524, and a user-oriented Input/Output Interface (I/O Interface) 525. The processor 522 is configured to load instructions stored in the server 521 and process data in the user account database 523 and the battle service module 524; the user account database 523 is used to store data of the user accounts used by the first terminal 550, the second terminal 530 and the other terminals 540, such as the avatar of the user account, the nickname of the user account, the combat capability index of the user account, and the service area where the user account is located; the battle service module 524 is configured to provide a plurality of battle rooms for users to fight in; and the user-oriented I/O interface 525 is used to establish communication with the first terminal 550 and/or the second terminal 530 through a wireless network or a wired network to exchange data. Optionally, a retraction module 527 is provided in the server 526, and the retraction module 527 is used to implement the map area control method in a virtual environment provided in the following embodiments.
FIG. 6 shows a flowchart of a map area control method in a virtual environment provided by an exemplary embodiment of the present application. The method may be performed by the terminal (the first terminal, the second terminal or another terminal) shown in FIG. 4 or FIG. 5. A computer program runs in the terminal, which may be a program based on a virtual environment engine (located locally or on the server side). The method includes the following steps:
Step 602: displaying a user interface, wherein the user interface displays a virtual environment picture generated when the virtual environment is observed from the perspective of a master virtual character, and the virtual environment is a battle environment in which at least two virtual characters compete for a limited number of escape qualifications on a map whose first target map area keeps expanding;
the virtual environment is a two-dimensional virtual environment, a three-dimensional virtual environment, or a 2.5-dimensional virtual environment. The present embodiment is exemplified in that the virtual environment is a three-dimensional virtual environment. The virtual environment is a virtual environment provided by a virtual environment engine.
Illustratively, the virtual environment is a three-dimensional virtual environment corresponding to a tactical sports game. The user interface is the user interface corresponding to the terminal when the tactical competitive game is operated.
A first target map area and a general map area exist in a map in a virtual environment. The first target map area is a target map area controlled by the server. The first target map area may be periodically increased, and accordingly, the general map area may be periodically decreased. In each periodic indentation process, the first target map area is kept for a constant duration, which is called a waiting duration; the target map area begins a process of increasing, referred to as an "increase period".
Illustratively, the first target map area is a map area located at the outer circle, and the normal map area is a map area located at the center.
Step 604: controlling the master virtual character to complete a trigger event in the virtual environment;
The trigger event is an event that triggers a change in the way the target map area increases. Optionally, the trigger event is any form of event that the master virtual character can accomplish in the virtual environment. For example, the trigger event is the master virtual character moving to a certain place; for another example, the trigger event is the master virtual character killing a specified monster; for another example, the trigger event is the number of other virtual characters eliminated by the master virtual character reaching a target value.
Illustratively, the trigger event is the master virtual character using a target prop in the virtual environment. The target prop is a prop with the ability to change a local area of the common map area into a second target map area.
Step 606: displaying a local area of the common map area of the map as a second target map area according to the trigger event.
Illustratively, the terminal changes a local area of the common map area located at the center of the map into the second target map area according to the trigger event. The second target map area may be a permanent change, or a temporary change that is effective for a target duration.
In one example, the trigger event corresponds to an event position, and the terminal changes the local area of the common map area of the map corresponding to the event position into the second target map area according to the trigger event.
In one example, according to the trigger event, the terminal sends the server a change request for requesting the server to display a local area of the common map area of the map as the second target map area, and the server changes that local area of the common map area of the map into the second target map area according to the change request.
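A rough sketch of this client-server exchange is given below; the message fields, the in-process "server" object and the printed effect are assumptions made only to illustrate the flow of the change request carrying the event position:

    using System;

    // Sketch: the terminal sends a change request with the event position, and the
    // area centered on that position is changed from common map area to the
    // second target map area.
    class ChangeRequest
    {
        public float EventX, EventY;   // position where the trigger event occurred
        public float Size;             // preset size of the second target map area
    }

    class ZoneServer
    {
        public void Apply(ChangeRequest req) =>
            Console.WriteLine($"change common map area around ({req.EventX},{req.EventY}) " +
                              $"to a second target map area of size {req.Size}");
    }

    class Client
    {
        static void Main()
        {
            var server = new ZoneServer();
            // Master virtual character used the target prop at (120, 340).
            server.Apply(new ChangeRequest { EventX = 120, EventY = 340, Size = 50 });
        }
    }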
In summary, in the method provided in this embodiment, the master virtual character is controlled to complete a trigger event in the virtual environment, and a local area of the central common map area of the map is changed into the second target map area according to the trigger event. The retraction process of the target map area is therefore no longer limited to periodic control by the system; instead, an influencing factor controlled by the user is added, so the overall duration of different matches can vary greatly. For example, if users change local areas of the common map area into second target map areas many times, the overall duration of a single match can be reduced significantly, thereby reducing the load on the server.
Fig. 7 is a flowchart illustrating a method for controlling a map area in a virtual environment according to an exemplary embodiment of the present application. The method may be performed by the terminal (first or second terminal or other terminal) shown in fig. 4 or fig. 5. The terminal has a computer program running therein, which may be a virtual environment engine (located locally or on the server side) based program. The method comprises the following steps:
step 701, displaying a user interface, wherein the user interface displays a virtual environment picture generated when a virtual environment is observed by adopting the visual angle of a main control virtual character, and the virtual environment is a fighting environment for enabling at least two virtual characters to rob a limited number of escape eligibility in a map with a continuously enlarged first target map area;
the virtual environment is a two-dimensional virtual environment, a three-dimensional virtual environment, or a 2.5-dimensional virtual environment. The present embodiment is exemplified in that the virtual environment is a three-dimensional virtual environment. The virtual environment is a virtual environment provided by a virtual environment engine.
Illustratively, the virtual environment is a three-dimensional virtual environment corresponding to a tactical competitive game. The user interface is a user interface corresponding to the terminal when the tactical competitive game is operated.
Illustratively, the user interface includes: a virtual environment picture and a Head Up Display (HUD) region. The HUD area comprises a control used for controlling the main control virtual role and a control used for displaying the auxiliary information. The auxiliary information includes: at least one of state information of the master virtual character, state information of teammates, state information of enemies, state information of virtual environments, network information of terminals and hardware performance information.
The master virtual character is a virtual character in the virtual environment that is controlled by the user. The perspective of the master virtual character may be a first perspective. Optionally, the first perspective is the perspective used when the virtual object is observed in the virtual environment along a first perspective direction, and the virtual picture observed from the first perspective includes the virtual object having a first orientation.
Optionally, the first perspective direction is the direction in which a camera model observes the virtual object in the virtual environment. Optionally, the first perspective direction is the direction in which the camera model observes the virtual object from behind the virtual object in the virtual environment, that is, a direction pointing from the back of the virtual object toward the virtual object. Generally, when the virtual object is observed from behind it, its back is what is seen.
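The following is a minimal sketch, assumed for illustration only, of placing a camera model behind a virtual object so that the first perspective direction points from the object's back toward the object; the distance and height values are arbitrary assumptions.

```python
import math

def camera_behind(position, yaw_degrees, distance=4.0, height=1.6):
    """Return a camera position located behind a virtual object.

    position:     (x, y, z) of the virtual object
    yaw_degrees:  horizontal facing of the object, 0 meaning the +x axis
    """
    yaw = math.radians(yaw_degrees)
    # forward vector of the object on the ground plane
    fx, fz = math.cos(yaw), math.sin(yaw)
    x, y, z = position
    # step backwards along the facing direction and raise the camera,
    # so the viewing direction points from the object's back toward it
    return (x - fx * distance, y + height, z - fz * distance)
```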
A first target map area and a general map area exist in the map of the virtual environment. The first target map area periodically increases, and accordingly the general map area periodically decreases. In each periodic retraction, the first target map area first remains unchanged for a constant duration, called a waiting duration; the first target map area then begins a process of increasing, called an increase period.
The target map area is an area of the map in which a virtual character suffers environmental damage. The general map area is an area of the map in which a virtual character does not suffer environmental damage.
Referring to FIG. 8, an overhead view of the map in the virtual environment is shown. Assume that the map of the virtual environment is square. The first target map area 71 is the map area located between a square 73 and a circle 74, and the general map area 72 is the map area located within the circle 74. The first target map area 71 at the outer ring periodically increases (that is, retracts inward), so that the general map area 72 at the center periodically decreases.
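The periodic retraction can be illustrated with a short sketch in which the general map area is a shrinking circle; the waiting duration, increase period, and shrink ratio below are assumed values, not parameters from this application.

```python
class PeriodicZone:
    """Circular general map area that shrinks after each waiting duration."""

    def __init__(self, start_radius, wait_seconds=120.0,
                 shrink_seconds=60.0, shrink_ratio=0.7):
        self.start_radius = start_radius      # radius of circle 74 at the start
        self.wait_seconds = wait_seconds      # duration the area holds still
        self.shrink_seconds = shrink_seconds  # length of one increase period
        self.shrink_ratio = shrink_ratio      # radius multiplier after each full cycle

    def radius_at(self, elapsed):
        # how many full wait+shrink cycles have passed, and where we are now
        cycle = self.wait_seconds + self.shrink_seconds
        full_cycles, t = divmod(elapsed, cycle)
        radius = self.start_radius * (self.shrink_ratio ** int(full_cycles))
        if t > self.wait_seconds:
            # inside an increase period: interpolate the radius downwards
            progress = (t - self.wait_seconds) / self.shrink_seconds
            radius *= 1.0 - (1.0 - self.shrink_ratio) * progress
        return radius

    def in_first_target_area(self, x, z, elapsed):
        # the first target map area is everything outside the current circle
        return (x * x + z * z) ** 0.5 > self.radius_at(elapsed)
```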
Step 702, obtaining a target prop after controlling the master virtual character to kill another virtual character;
The master virtual character can use props such as firearms, knives, and traps to attack enemy virtual characters. When the master virtual character kills another virtual character, there is a certain probability that the target prop is obtained. In this embodiment, the target prop is a prop having the function of changing a local area of the general map area into the second target map area.
It should be noted that this step is optional. It should also be noted that the master virtual character may acquire the target prop in other manners, for example, by calling an air-drop supply package and acquiring the target prop from the air-drop supply package, which is not limited in this embodiment.
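A minimal sketch of the random-drop acquisition described above; the drop probability and the prop name are assumptions made for illustration.

```python
import random

TARGET_PROP = "target_prop"       # assumed prop identifier
DROP_PROBABILITY = 0.15           # assumed chance of a drop per elimination

def on_character_killed(killer_inventory, rng=random.random):
    """Possibly add the target prop to the killer's backpack after an elimination."""
    if rng() < DROP_PROBABILITY:
        killer_inventory.append(TARGET_PROP)
        return True
    return False
```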
Step 703, controlling the main control virtual role to use the target prop in the virtual environment;
After obtaining the target prop, the user can control the master virtual character to use the target prop immediately, or keep it and use it at a later time when it is needed.
When the target prop is not used, the target prop can be stored in the storage space of the main control virtual character. The storage space may be a backpack.
In one example, this step includes, but is not limited to, the following sub-steps:
1. Controlling the master virtual character to enable the target prop in the virtual environment;
Optionally, the user clicks the icon corresponding to the target prop (or presses the shortcut key corresponding to the target prop) to control the master virtual character to enable the target prop in the virtual environment.
2. Controlling the master virtual character to determine the action position (the application range) of the target prop in the virtual environment.
Optionally, after the target prop is enabled, a candidate action position of the target prop is displayed in the virtual environment (or in a map top view of the virtual environment). The user can change the candidate action position through at least one control manner of a touch screen, a mouse, and direction keys, thereby determining the action position of the target prop. The action position is used to determine the position of the local area.
As shown in FIG. 9, after the target prop is enabled, a candidate action position 70 is displayed in the user interface, and the user may drag the candidate action position 70 with the mouse.
In one example, the area of the local region is a preset area. In another example, the area of the local region is dynamically selected by the user within a value range. For example, the user may scroll the mouse wheel to dynamically change the area of the local region within the range allowed by the value range.
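Sub-steps 1 and 2 can be sketched as follows; the value range for the radius, the class name, and the method names are assumptions for illustration only.

```python
class TargetPropSelector:
    MIN_RADIUS, MAX_RADIUS = 10.0, 50.0   # assumed value range for the local area

    def __init__(self):
        self.enabled = False
        self.action_position = None        # (x, z) candidate action position
        self.radius = 25.0

    def enable(self, initial_position):
        # triggered by clicking the prop icon or pressing its shortcut key
        self.enabled = True
        self.action_position = initial_position

    def drag_to(self, position):
        # mouse / touch / direction-key input moves the candidate position
        if self.enabled:
            self.action_position = position

    def scroll(self, wheel_delta):
        # mouse wheel dynamically changes the area within the allowed range
        if self.enabled:
            self.radius = min(self.MAX_RADIUS,
                              max(self.MIN_RADIUS, self.radius + wheel_delta))

    def confirm(self):
        # return the final action position and radius used to place the area
        return self.action_position, self.radius
```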
Step 704, displaying a local area in the general map area of the map as a second target map area;
Illustratively, the terminal changes a local area corresponding to the action position of the target prop in the general map area of the map into the second target map area according to the event of using the target prop.
Alternatively, the terminal sends a change request to the server according to the trigger event, where the change request is used to request the server to display a local area of the general map area of the map as the second target map area. The server changes the local area of the general map area of the map into the second target map area when the change request meets a legitimacy condition.
In one example, the legitimacy condition includes, but is not limited to, one of the following conditions:
First, the master virtual character is the owner of the target prop;
and secondly, the master virtual role is a virtual role with a specified special identity. For example, the master virtual role is a virtual role having a "destroyer" identity.
In one example, the first target map area and the second target map area have the same damage manner; in another example, the first target map area and the second target map area have different damage manners. The damage manner includes at least one of: damaging the blood volume of the virtual character, damaging the life of the virtual character, damaging the magic value of the virtual character, damaging the skill-using capability of the virtual character, damaging the moving speed of the virtual character, and damaging the motion speed of the virtual character.
In one example, the first target map area and the second target map area have the same damage degree; in another example, the first target map area and the second target map area have different damage degrees. For example, the damage degree of the first target map area is that blood volume drops at speed 1, and the damage degree of the second target map area is that blood volume drops at speed 2, where speed 2 is greater than speed 1, or speed 2 is less than speed 1.
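The legitimacy conditions and the differing damage degrees can be sketched as below; the identity name, the Character fields, and the two damage rates are assumptions chosen to mirror the speed 1 / speed 2 example above.

```python
from dataclasses import dataclass

@dataclass
class Character:
    id: str
    identity: str
    health: float

FIRST_AREA_DAMAGE_PER_SECOND = 1.0    # "speed 1" in the example above
SECOND_AREA_DAMAGE_PER_SECOND = 2.0   # "speed 2" in the example above

def change_request_is_legitimate(requester: Character, prop_owner_id: str,
                                 special_identities=("destroyer",)) -> bool:
    # condition 1: the requester owns the target prop, or
    # condition 2: the requester holds a specified special identity
    return requester.id == prop_owner_id or requester.identity in special_identities

def environmental_damage(character: Character, in_second_area: bool,
                         in_first_area: bool, dt: float) -> None:
    # apply blood-volume damage at a rate depending on the target map area
    if in_second_area:
        character.health -= SECOND_AREA_DAMAGE_PER_SECOND * dt
    elif in_first_area:
        character.health -= FIRST_AREA_DAMAGE_PER_SECOND * dt
```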
Step 705, after a target duration elapses, restoring and displaying the second target map area as the general map area.
The change to the second target map area may be permanent, or it may be temporary and valid only for the target duration. When the change to the second target map area is temporary, the terminal restores and displays the second target map area as the general map area after the target duration elapses.
Illustratively, after the target duration elapses, the server sends a frame synchronization instruction to all or some of the clients participating in the current battle, and the clients restore and display the second target map area as the general map area according to the frame synchronization instruction.
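A minimal sketch of the timed restoration: the server tracks when each temporary area expires and broadcasts a restore message to clients. The target duration and the message fields are assumptions, not a definitive protocol.

```python
import time

class TemporaryArea:
    def __init__(self, center, radius, target_duration=60.0, now=time.monotonic):
        self.center = center
        self.radius = radius
        self._now = now
        self.expires_at = now() + target_duration   # assumed target duration

    def expired(self):
        return self._now() >= self.expires_at

def server_tick(active_areas, broadcast):
    # when an area's target duration has passed, instruct the clients in the
    # battle to restore it to the general map area
    for area in list(active_areas):
        if area.expired():
            active_areas.remove(area)
            broadcast({"type": "restore_area",
                       "center": area.center,
                       "radius": area.radius})
```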
In summary, in the method provided in this embodiment, the master virtual character completes the trigger event in the virtual environment, and a local area of the central general map area in the map is changed into the second target map area according to the trigger event. The retraction process of the target map area is therefore no longer limited to periodic control by the system; an influence factor controlled by the user is added, so the overall duration of different matches can change greatly. For example, if the user changes local areas of the general map area into second target map areas many times, the overall duration of a single match can be significantly reduced, thereby reducing the pressure borne by the server.
In the method provided in this embodiment, by using the target prop as the trigger condition, the server can dynamically control the number of target props in each match. For example, when the intensity of a match is low (for example, the number of eliminated players per unit time is small), more target props are issued in the match to increase its intensity; when the intensity of a match is high, fewer target props are issued in the match to reduce its intensity.
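A minimal sketch of such dynamic control, using the elimination rate as the intensity measure; the thresholds and counts are assumptions for illustration.

```python
def props_to_issue(eliminations_last_minute, base_count=2):
    if eliminations_last_minute < 5:
        # low intensity: issue more target props to speed the match up
        return base_count + 2
    if eliminations_last_minute > 15:
        # high intensity: issue fewer target props to calm the match down
        return max(0, base_count - 1)
    return base_count
```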
In the method provided in this embodiment, the target prop can be used only when the master virtual character meets certain use conditions, which prevents the target prop from being abused and the battle from ending abnormally.
In an alternative embodiment based on fig. 6 or fig. 7, the above-mentioned trigger event may also be replaced by other possible implementations, such as any one of the following implementations:
First, controlling the master virtual character to move to a target place in the virtual environment;
In an illustrative example, the target place is a map point preset in the map, for example, the top of a tower or a corner of a city. After the user controls the master virtual character to move to the target place in the virtual environment, the change in the manner in which the target map area increases is triggered.
Second, controlling the master virtual character to obtain a target object in the virtual environment;
In an illustrative example, the target object is an object preset in the map, such as a bead or a piece of jade. After the user controls the master virtual character to obtain the target object in the virtual environment, the change in the manner in which the target map area increases is triggered.
Third, controlling the master virtual character to kill a target Non-Player Character (NPC) in the virtual environment.
In an illustrative example, the target NPC is an NPC preset in the map. After the user controls the master virtual character to kill the target NPC in the virtual environment, the change in the manner in which the target map area increases is triggered.
It should be noted that, the embodiment of the present application does not limit the specific form of the trigger event.
In summary, the method provided in this embodiment changes the manner in which the target map area increases through trigger events of various forms, and is not limited to triggering by the virtual prop, which enriches the specific ways in which the user can change the increasing manner of the target map area.
Fig. 10 is a flowchart illustrating a method for controlling a map area in a virtual environment according to an exemplary embodiment of the present application. The method may be performed by the terminal (first or second terminal or other terminal) shown in fig. 4 or fig. 5. The method comprises the following steps:
step 1001, start;
Step 1002, entering a battlefield battle of a tactical competitive game;
The server matches a plurality of (for example, 100) virtual characters into a battlefield process. The battlefield process may be a battlefield process corresponding to a tactical competitive game.
Step 1003, determining whether the target prop is obtained and used;
When the master virtual character kills another virtual character, obtains the target prop based on the random-drop principle, and uses the target prop, step 1004 is entered; otherwise, the process returns to step 1002.
Step 1004, selecting an action range;
The action range of the target prop can be customized by the user. For example, an area where an enemy is hiding is determined as the action range; as another example, the bridgehead area of a bridge is determined as the action range; as another example, the entrance area of a building is determined as the action range.
Step 1005, determining whether the use is interrupted;
In one example, using the target prop requires an application duration. Within the application duration, the use of the target prop is interrupted if the master virtual character is injured or killed.
If the use is not interrupted within the application duration, step 1006 is entered; if the use is interrupted within the application duration, the use of the target prop is aborted.
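A minimal sketch of the application duration and interruption logic of steps 1005 and 1006; the duration is an assumed value.

```python
class PropApplication:
    def __init__(self, apply_seconds=5.0):
        self.apply_seconds = apply_seconds   # assumed application duration
        self.elapsed = 0.0
        self.interrupted = False

    def on_damaged_or_killed(self):
        # being injured or killed during the application duration interrupts use
        self.interrupted = True

    def update(self, dt):
        """Return "danger_zone" once the application completes, None otherwise."""
        if self.interrupted:
            return None
        self.elapsed += dt
        if self.elapsed >= self.apply_seconds:
            return "danger_zone"   # step 1006: generate the danger area
        return None
```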
Step 1006, generating a danger zone;
A danger area is generated in the action area (the application range) of the target prop.
And step 1007, finishing.
The following are apparatus embodiments of the present application. For details not described in the apparatus embodiments, reference may be made to the method embodiments described above.
Fig. 11 is a block diagram illustrating a map area control apparatus in a virtual environment according to an exemplary embodiment of the present application. The apparatus runs with a program based on a virtual environment engine, the apparatus comprising:
a display module 1120, configured to display, by using a virtual environment engine, a user interface on which a virtual environment picture generated when the virtual environment is observed from the perspective of a master virtual character is displayed, where the virtual environment is a battle environment in which at least two virtual characters compete for a limited number of escape qualifications in a map whose first target map area continuously enlarges;
a control module 1140 for controlling the master virtual role to complete a trigger event in the virtual environment;
a changing module 1160, configured to change a local area in the general map area of the map to a second target map area according to the trigger event.
In one example, the trigger event corresponds to an event location;
the changing module 1160 is configured to change a local area corresponding to the event location in the general map area in the map to a second target map area according to the trigger event.
In one example, the triggering event is an event that uses the target prop;
the changing module 1160 is configured to change, according to the event using the target prop, a local area corresponding to an action position of the target prop in the general map area in the map into a second target map area.
In one example, the apparatus further comprises:
a recovering module 1180, configured to recover the second target map area to the common map area after a target duration elapses.
In one example, the area of the local region is a preset area.
In one example, the control module 1140 is configured to control the master virtual character to use a target prop in the virtual environment, where the target prop is a prop with the ability to change a general map area to a target map area.
In one example, the control module 1140 is configured to control the master virtual character to enable the target prop in the virtual environment; and controlling the main control virtual role to determine the action position of the target prop in the virtual environment, wherein the action position is used for confirming the position of the local area.
In an example, the control module 1140 is configured to control the master virtual character to obtain the target prop after killing other virtual characters.
In one example, the first target map area and the second target map area have the same or different damage manners; or the first target map area and the second target map area have the same or different damage degrees.
Fig. 12 shows a block diagram of a terminal 1200 according to an exemplary embodiment of the present application. The terminal 1200 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1200 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, terminal 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1202 can include one or more computer-readable storage media, which can be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1202 is used to store at least one instruction for execution by processor 1201 to implement a map area control method in a virtual environment as provided by method embodiments herein.
In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, touch display 1205, camera 1206, audio circuitry 1207, and power supply 1209.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1204 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over the surface of the display screen 1205. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1205 may be one, providing the front panel of the terminal 1200; in other embodiments, the display 1205 can be at least two, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 1200. Even more, the display screen 1205 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display panel 1205 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of a terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals and inputting the electric signals to the processor 1201 for processing, or inputting the electric signals to the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of terminal 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker and can also be a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The power supply 1209 is used to provide power to various components within the terminal 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable. When the power source 1209 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect the components of the gravitational acceleration on the three coordinate axes. The processor 1201 may control the touch display 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used to acquire motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211. From the data collected by the gyro sensor 1212, the processor 1201 may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1213 may be disposed on a side bezel of terminal 1200 and/or an underlying layer of touch display 1205. When the pressure sensor 1213 is disposed on the side frame of the terminal 1200, the user's holding signal of the terminal 1200 can be detected, and the processor 1201 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed on the lower layer of the touch display screen 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1205. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the touch display 1205 according to the ambient light intensity collected by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the touch display 1205 is increased; when the ambient light intensity is low, the display brightness of the touch display 1205 is decreased. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 based on the ambient light intensity collected by the optical sensor 1215.
The proximity sensor 1216, also known as a distance sensor, is typically disposed on the front panel of the terminal 1200. The proximity sensor 1216 is used to collect the distance between the user and the front surface of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually decreases, the processor 1201 controls the touch display 1205 to switch from the screen-on state to the screen-off state; when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually increases, the processor 1201 controls the touch display 1205 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting of terminal 1200 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The embodiment of the present application further provides a computer device, where the computer device includes a memory and a processor, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded by the processor and implements the map area control method in the virtual environment provided in any of the above method embodiments.
Embodiments of the present application further provide a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the map area control method in the virtual environment provided in any of the above-mentioned method embodiments.
The present application further provides a computer program product for causing a computer to perform a method for controlling a map area in a virtual environment as provided by any of the above method embodiments when the computer program product runs on the computer.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, which may be a computer readable storage medium contained in a memory of the above embodiments; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer readable storage medium has stored therein at least one instruction, at least one program, set of codes or set of instructions which is loaded and executed by the processor to implement a method of map area control in a virtual environment as provided by any of the various method embodiments described above.
Optionally, the computer-readable storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a Resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (8)

1. A map area control method in a virtual environment, applied to a terminal running a computer program, the method comprising:
displaying a user interface, wherein the user interface displays a virtual environment picture generated when a virtual environment is observed from a perspective of a master virtual character, and the virtual environment is a battle environment in which at least two virtual characters compete for a limited number of escape qualifications in a map with a continuously enlarged first target map area; the master virtual character is a virtual character controlled by a user in the virtual environment, the perspective of the master virtual character is a first perspective, the first perspective is a perspective when a virtual object is observed in the virtual environment along a first perspective direction, a virtual picture observed from the first perspective comprises the virtual object having a first orientation, and the first perspective direction is a direction when the virtual object is observed through a camera model in the virtual environment or a direction when the virtual object is observed through the camera model from behind the virtual object in the virtual environment;
the additional identities corresponding to different virtual characters when escaping in the virtual environment are the same or different, different additional identities have different skills and different visual contents, and the additional identities are randomly distributed by a server to the virtual characters in a match before the match of the tactical competitive game starts, or the additional identities are selected by the user from additional identities randomly distributed by the server after the server successfully matches the virtual characters into the battlefield; the additional identities comprise at least a first additional identity, a second additional identity and a third additional identity, the first additional identity corresponds to a first skill set, the second additional identity corresponds to a second skill set, the third additional identity corresponds to a third skill set, a target skill exists that belongs to at least two of the first skill set, the second skill set and the third skill set, and each skill set further comprises a respective corresponding independent skill; all the additional identities correspond to the same set of unified escape rules, a specific additional identity corresponds to a set of additional escape rules, and when a virtual character obtains an escape prop, the virtual character that obtains the escape prop and the teammates of the virtual character successfully escape;
controlling the master virtual character to complete a trigger event in the virtual environment, wherein the trigger event corresponds to an event position; changing a local area corresponding to the event position in a general map area of the map into a second target map area according to the trigger event; the first target map area and the second target map area have different damage manners, or the first target map area and the second target map area have different damage degrees, and the damage manner comprises: at least one of blood volume damage to the virtual character, life damage to the virtual character, magic value damage to the virtual character, skill-use capability damage to the virtual character, moving speed damage to the virtual character, and motion speed damage to the virtual character;
wherein the controlling the master virtual character to complete the trigger event in the virtual environment comprises:
when it is detected that the user clicks an icon corresponding to a target prop or presses a shortcut key corresponding to the target prop, controlling the master virtual character to enable the target prop in the virtual environment, wherein the target prop is a prop capable of changing the general map area into a target map area, the target prop is obtained after controlling the master virtual character to kill other virtual characters, and the number of target props in each match is dynamically controlled by the server according to the intensity of the match; after the target prop is enabled, displaying a candidate action position of the target prop in a map top view of the virtual environment, and determining the action position of the target prop according to the change made by the user to the candidate action position through at least one control manner of a touch screen, a mouse and direction keys, wherein the action position is used to determine the position of a local area, and the user dynamically changes the area of the local area within a range allowed by a value range by scrolling the mouse wheel; the action range of the target prop is an area in which an enemy is hiding;
if the trigger event is an event in which the master virtual character uses the target prop in the virtual environment, the changing a local area corresponding to the action position of the target prop in the general map area of the map into a second target map area according to the trigger event comprises:
sending a change request to the server according to the event of using the target prop, so that the server changes a local area in the general map area into the second target map area when determining that the change request meets a legitimacy condition, wherein the change request is used for requesting the server to display the local area in the general map area as the second target map area, and the legitimacy condition comprises that the master virtual character is an owner of the target prop and that the master virtual character is a virtual character with a specified special identity.
2. The method of claim 1, further comprising:
restoring the second target map area to the general map area after a target duration elapses.
3. The method of claim 1, wherein before the controlling the master virtual character to use the target prop in the virtual environment, the method further comprises:
controlling the master virtual character to kill other virtual characters, and then obtaining the target prop.
4. An apparatus for controlling a map area in a virtual environment, the apparatus comprising:
a display module, configured to display a user interface, wherein the user interface displays a virtual environment picture generated when a virtual environment is observed from a perspective of a master virtual character, and the virtual environment is a battle environment in which at least two virtual characters compete for a limited number of escape qualifications in a map with a continuously enlarged first target map area; the master virtual character is a virtual character controlled by a user in the virtual environment, the perspective of the master virtual character is a first perspective, the first perspective is a perspective when a virtual object is observed in the virtual environment along a first perspective direction, a virtual picture observed from the first perspective comprises the virtual object having a first orientation, and the first perspective direction is a direction when the virtual object is observed through a camera model in the virtual environment or a direction when the virtual object is observed through the camera model from behind the virtual object in the virtual environment;
the additional identities corresponding to different virtual characters when escaping in the virtual environment are the same or different, different additional identities have different skills and different visual contents, and the additional identities are randomly distributed by a server to the virtual characters in a match before the match of the tactical competitive game starts, or the additional identities are selected by the user from additional identities randomly distributed by the server after the server successfully matches the virtual characters into the battlefield; the additional identities comprise at least a first additional identity, a second additional identity and a third additional identity, the first additional identity corresponds to a first skill set, the second additional identity corresponds to a second skill set, the third additional identity corresponds to a third skill set, a target skill exists that belongs to at least two of the first skill set, the second skill set and the third skill set, and each skill set further comprises a respective corresponding independent skill; all the additional identities correspond to the same set of unified escape rules, a specific additional identity corresponds to a set of additional escape rules, and when a virtual character obtains an escape prop, the virtual character that obtains the escape prop and the teammates of the virtual character successfully escape;
a control module, configured to control the master virtual character to complete a trigger event in the virtual environment, wherein the trigger event corresponds to an event position;
a changing module, configured to change, according to the trigger event, a local area corresponding to the event position in a general map area of the map into a second target map area; the first target map area and the second target map area have different damage manners, or the first target map area and the second target map area have different damage degrees, and the damage manner comprises: at least one of blood volume damage to the virtual character, life damage to the virtual character, magic value damage to the virtual character, skill-use capability damage to the virtual character, moving speed damage to the virtual character, and motion speed damage to the virtual character;
wherein the control module is configured to:
when it is detected that the user clicks an icon corresponding to a target prop or presses a shortcut key corresponding to the target prop, control the master virtual character to enable the target prop in the virtual environment, wherein the target prop is a prop capable of changing the general map area of the map into a target map area, the target prop is obtained after controlling the master virtual character to kill other virtual characters, and the number of target props in each match is dynamically controlled by the server according to the intensity of the match; after the target prop is enabled, display a candidate action position of the target prop in a map top view of the virtual environment, and determine the action position of the target prop according to the change made by the user to the candidate action position through at least one control manner of a touch screen, a mouse and direction keys, wherein the action position is used to determine the position of a local area, and the user dynamically changes the area of the local area within a range allowed by a value range by scrolling the mouse wheel; the action range of the target prop is an area in which an enemy is hiding;
if the trigger event is an event in which the master virtual character uses the target prop in the virtual environment, the changing module is configured to send a change request to the server according to the event of using the target prop, so that the server changes a local area corresponding to the action position of the target prop in the general map area into the second target map area when determining that the change request meets a legitimacy condition, wherein the change request is used for requesting the server to display the local area in the general map area as the second target map area, and the legitimacy condition comprises that the master virtual character is an owner of the target prop and that the master virtual character is a virtual character with a specified special identity.
5. The apparatus of claim 4, further comprising:
a recovering module, configured to restore the second target map area to the general map area after a target duration elapses.
6. The apparatus of claim 4, wherein the control module is further configured to control the master virtual character to kill other virtual characters and then obtain the target prop.
7. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes or set of instructions, the at least one instruction, the at least one program, the set of codes or set of instructions being loaded and executed by the processor to implement a map area control method in a virtual environment as claimed in any one of claims 1 to 3.
8. A computer-readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a map area control method in a virtual environment according to any one of claims 1 to 3.
CN201910760226.4A 2019-08-16 2019-08-16 Map area control method, apparatus, device and medium in virtual environment Active CN110465083B (en)
