CN111097173B - Method and device for acquiring game gain state instance and computer storage medium


Publication number: CN111097173B
Authority: CN (China)
Legal status: Active
Application number: CN201911354252.3A
Other languages: Chinese (zh)
Other versions: CN111097173A (en)
Inventors: 季文彬, 蔡沛程
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911354252.3A
Publication of CN111097173A
Application granted
Publication of CN111097173B

Classifications

    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/58: Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/822: Strategy games; Role-playing games
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/65: Methods for processing data by generating or executing the game program for computing the condition of a game character
    • A63F2300/80: Features of games using an electronically generated display having two or more dimensions specially adapted for executing a specific type of game
    • A63F2300/807: Role playing or strategy games


Abstract

The application relates to a method and a device for acquiring game gain state instances, a computer-readable storage medium, and computer equipment. The method includes: acquiring a first gain state instance; looking up a second gain state instance in a gain state instance list; aligning the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, and acquiring, from the gain effective times of the second gain state instance, the overlap effective times aligned with each gain effective time of the first gain state instance; determining a target gain value according to the gain value of the first gain state instance and the gain value of the second gain state instance; and updating the gain value at each overlap effective time in the second gain state instance to the target gain value to obtain a target gain state instance. The scheme provided by the application reduces the number of times the server calculates game value loss information, reduces high-frequency information transmission, and lowers server pressure and information transmission overhead.

Description

Method and device for acquiring game gain state instance and computer storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for acquiring a game gain state instance, a computer-readable storage medium, a computer device, and a method for processing the life value of a game character.
Background
With the development of computer technology, large electronic games running on terminal devices such as mobile phones and tablet computers are becoming increasingly popular, for example multiplayer online battle arena games. In such games, when a game character is attacked by other game characters or affected by spell effects of the virtual environment, effects such as a Buff (gain effect) or a Debuff (negative gain effect) are applied to it; the server then calculates game value loss information according to the gain state instance corresponding to the Buff or Debuff, so that the game character loses a certain amount of a game value, such as the life value, at regular intervals. When a game character takes part in a large-scale group battle in an electronic game, it is often attacked continuously by other game characters, so that a large number of gain state instances with different generation times and different values act on it. The server then has to calculate game value loss information every few tens of milliseconds and transmit it to the client, which sharply increases server pressure and information transmission overhead and can even bring the server down.
Disclosure of Invention
Based on this, it is necessary to provide a method and an apparatus for acquiring game gain state instances, a computer-readable storage medium, a computer device, and a method for processing the life value of a game character, to address the technical problem that the generation of a large number of gain state instances leads to a sudden increase in server pressure and in information transmission overhead.
A method for acquiring game gain state instances comprises the following steps:
acquiring a first gain state instance;
looking up a second gain state instance in the gain state instance list;
aligning the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, and acquiring, from the gain effective times of the second gain state instance, the overlap effective times aligned with each gain effective time of the first gain state instance;
determining a target gain value according to the gain value of the first gain state instance and the gain value of the second gain state instance;
and updating the gain value at each overlap effective time in the second gain state instance to the target gain value to obtain a target gain state instance.
A method for processing the life value of a game character comprises the following steps:
determining a target gain state instance of the game character, wherein the target gain state instance is obtained according to the above method for acquiring a game gain state instance;
when a gain effective time of the target gain state instance arrives, acquiring the gain value corresponding to that gain effective time;
and adjusting the life value of the game character according to the gain value.
An apparatus for obtaining game gain state instances, the apparatus comprising:
a state instance obtaining module, configured to obtain a first gain state instance;
a state instance searching module for searching a second gain state instance in the gain state instance list;
an overlap time determining module, configured to align the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, and acquire, from the gain effective times of the second gain state instance, the overlap effective times aligned with each gain effective time of the first gain state instance;
a gain value determination module for determining a target gain value according to the gain value of the first gain state instance and the gain value of the second gain state instance;
and a state instance merging module, configured to update the gain value at each overlap effective time in the second gain state instance to the target gain value to obtain a target gain state instance.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring a first gain state instance;
looking up a second gain state instance in the gain state instance list;
aligning the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, and acquiring, from the gain effective times of the second gain state instance, the overlap effective times aligned with each gain effective time of the first gain state instance;
determining a target gain value according to the gain value of the first gain state instance and the gain value of the second gain state instance;
and updating the gain value at each overlap effective time in the second gain state instance to the target gain value to obtain a target gain state instance.
A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring a first gain state instance;
looking up a second gain state instance in the gain state instance list;
aligning the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, and acquiring, from the gain effective times of the second gain state instance, the overlap effective times aligned with each gain effective time of the first gain state instance;
determining a target gain value according to the gain value of the first gain state instance and the gain value of the second gain state instance;
and updating the gain value at each overlap effective time in the second gain state instance to the target gain value to obtain a target gain state instance.
After the first gain state instance is acquired and the second gain state instance is found in the gain state instance list, the gain effective times of the first gain state instance are aligned with those of the second gain state instance, so that every gain effective time of the first gain state instance coincides with a gain effective time of the second gain state instance. The overlap effective times aligned with each gain effective time of the first gain state instance are then obtained from the gain effective times of the second gain state instance, a target gain value is determined from the gain values of the two instances, and the gain value at each overlap effective time in the second gain state instance is updated to the target gain value to obtain the target gain state instance. In this way, when gain state instances generated at different times act on the same game character, two or more gain state instances are merged into one, which reduces the number of times the server calculates game value loss information, reduces high-frequency information transmission, and lowers server pressure and information transmission overhead.
Drawings
FIG. 1 is a diagram of an application environment of a method for acquiring a game gain state instance in an embodiment of the present application;
FIG. 2 is a schematic flowchart of a method for acquiring a game gain state instance in an embodiment of the present application;
FIG. 3 is a schematic diagram of a gain state instance in an embodiment of the present application;
FIG. 4a is a schematic diagram before the gain effective times of a first gain state instance are aligned with the gain effective times of a second gain state instance in an embodiment of the present application;
FIG. 4b is a schematic diagram after the gain effective times of the first gain state instance are aligned with the gain effective times of the second gain state instance in an embodiment of the present application;
FIG. 5 is a schematic flowchart of the steps of obtaining a target gain state instance in an embodiment of the present application;
FIG. 6a is a schematic diagram before the gain effective times of a first gain state instance are aligned with the gain effective times of a second gain state instance in another embodiment of the present application;
FIG. 6b is a schematic diagram after the gain effective times of the first gain state instance are aligned with the gain effective times of the second gain state instance in another embodiment of the present application;
FIG. 7 is a schematic flowchart of the step of aligning the gain effective times of the first gain state instance with the gain effective times of the second gain state instance in an embodiment of the present application;
FIG. 8 is a schematic flowchart of the step of looking up a second gain state instance in the gain state instance list in an embodiment of the present application;
FIG. 9 is a schematic flowchart of the step of looking up a second gain state instance in the gain state instance list in another embodiment of the present application;
FIG. 10a is a schematic flowchart of a method for processing the life value of a game character in an embodiment of the present application;
FIG. 10b is a schematic diagram of a game interface showing a loss of life value of a game character in an embodiment of the present application;
FIG. 11 is a block diagram of an apparatus for acquiring a game gain state instance in an embodiment of the present application;
FIG. 12 is a block diagram of an overlap time determining module in an embodiment of the present application;
FIG. 13 is a block diagram of a state instance searching module in an embodiment of the present application;
FIG. 14 is a block diagram of an apparatus for processing the life value of a game character in an embodiment of the present application;
FIG. 15 is a block diagram of a computer device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In electronic games, Buff is a common term meaning that a gain effect is applied to a game character to increase its attribute values, ability values, and the like, so that the target game object enters a continuous gain state. Correspondingly, Debuff is also a common term, meaning that a reduction effect is applied to a game character to decrease its attribute values and ability values, so that the target game object enters a continuous negative state. Note that DOT (Damage Over Time, a sustained damage effect) refers to a single effect, in an electronic game with a life value setting, in which the life value of a game character is reduced by a certain amount at regular intervals; HOT (Heal Over Time, a sustained healing effect) refers to a single effect in which the life value of a game character is increased by a certain amount at regular intervals. DOT-type gain state instances may include, for example, bleeding-effect, current-effect, and electric-effect gain state instances. The game character may be any character in the game, including the virtual player character representing the user in the electronic game as well as non-player characters such as mounts in the game. It should be understood that when a DOT-type Debuff effect or a HOT-type Buff effect is applied to a game character, the server generates a corresponding gain state instance and calculates life value loss information according to that instance, so that the game character loses (or gains) a certain life value at regular intervals.
For example, under one setting in an electronic game, after a game character comes under the "Shenwei Gate" passive skill, every time the character suffers life value damage from another game character, 30% of the damage value is converted into a continuous bleeding effect, and this bleeding effect is implemented through a DOT-type gain state instance. For instance, when a game character under the Shenwei Gate passive skill suffers 100 points of life value damage, the server, based on the mechanism of that passive skill, directly deducts 70 points from the character and deducts the remaining 30 points of life value over 7.5 seconds, once every 1.5 seconds and 6 points each time; the deduction over 7.5 seconds at 1.5-second intervals is implemented by the server according to a DOT-type gain state instance.
FIG. 1 is a diagram of the application environment of the method for acquiring a game gain state instance in one embodiment. Referring to FIG. 1, the method is applied to an electronic game system. The electronic game system includes a terminal 110 and a server 120 connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as an independent server or as a server cluster composed of a plurality of servers. A user controls, through the terminal 110, the game character representing the user in the electronic game. When that game character is attacked by other game characters and enters a Debuff effect, the server 120 generates a corresponding first gain state instance, looks up a second gain state instance in the gain state instance list, aligns the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, and acquires, from the gain effective times of the second gain state instance, the overlap effective times aligned with each gain effective time of the first gain state instance; determines a target gain value according to the gain value of the first gain state instance and the gain value of the second gain state instance; and updates the gain value at each overlap effective time in the second gain state instance to the target gain value to obtain a target gain state instance. Subsequently, when a gain effective time of the target gain state instance arrives, the server 120 acquires the gain value corresponding to that time, adjusts the game value of the game character according to the gain value, and transmits the adjusted value to the terminal 110 for display, so that the user can know the state of the game character being operated.
As shown in FIG. 2, in one embodiment, a method for acquiring a game gain state instance is provided. The embodiment is mainly illustrated by applying the method to the server 120 in FIG. 1. Referring to FIG. 2, the method for acquiring a game gain state instance specifically includes the following steps:
in step S202, a first gain state instance is obtained.
Here, a gain state instance is an instance of a mechanism that, when applied to a game character, continuously increases or decreases an attribute value or ability value of the character, such as a DOT (Damage Over Time) type Debuff instance or a HOT (Heal Over Time) type Buff instance. For example, when a DOT-type Debuff is applied to a game character, a gain state instance corresponding to the DOT-type Debuff effect is usually added, and the server calculates life value loss information according to the gain state instance, so that the game character loses a certain life value at regular intervals.
It should be understood that the data of a gain state instance includes its gain effective times, gain value, gain effective period, gain effective count, and gain duration. After a gain state instance applied to a game character takes effect, the server increases (or decreases) the character's game value by the corresponding gain value once at each gain effective time within the gain duration of the instance, where the time interval between adjacent gain effective times is constant. In other words, within the gain duration, the character's game value is increased (or decreased) by the corresponding gain value once per gain effective period, and the total number of increases (or decreases) equals the gain effective count. As shown in FIG. 3, which is a schematic diagram of a gain state instance in an embodiment of the present application, the gain value of the instance is A, the gain effective period is T, the gain effective count is N (an integer), and the gain duration is S, where S divided by T is an integer. Assuming the initial effective time of the instance is a, its gain effective times are a, (a + T), (a + 2T), and so on.
In one embodiment, the first gain state instance may be a newly generated gain state instance for a game character. For example, when a game character suffers 100 points of life value damage and a sustained damage effect equal to 55% of the damage value, i.e., 55 points, is generated, the sustained damage lasts 6 seconds and ticks every 1.5 seconds, each tick dealing 11 points of damage to the character. The server correspondingly generates a first gain state instance with gain value A equal to 11, gain effective period T equal to 1.5 seconds, gain duration S equal to 6 seconds, and gain effective count equal to 5. Assuming the initial gain effective time is 0 seconds, the first gain effective time is 0 seconds, the second is 1.5 seconds, the third is 3 seconds, and so on, giving all the gain effective times of the first gain state instance.
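To make this data layout concrete, the following Python sketch (illustrative only, not the patent's code; the names GainStateInstance, gain_value, period, duration, start_time, and tick_values are assumptions) models a gain state instance by the fields described above, with the per-tick gain values derived from them. The tick count follows the example above, which counts both endpoints of the gain duration.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GainStateInstance:
        gain_value: float   # gain value A applied at each gain effective time
        period: float       # gain effective period T, in seconds
        duration: float     # gain duration S, in seconds (S / T is an integer)
        start_time: float   # initial gain effective time a
        tick_values: List[float] = field(default_factory=list)  # per-tick gain values (they diverge after merging)

        def __post_init__(self):
            if not self.tick_values:
                # gain effective count N, counting both endpoints as in the example above
                count = int(round(self.duration / self.period)) + 1
                self.tick_values = [self.gain_value] * count

        def effective_times(self) -> List[float]:
            # gain effective times a, a + T, a + 2T, ...
            return [self.start_time + i * self.period for i in range(len(self.tick_values))]

    # The example above: A = 11, T = 1.5 s, S = 6 s, starting at 0 s, giving 5 ticks
    bleed = GainStateInstance(gain_value=11, period=1.5, duration=6.0, start_time=0.0)
    assert bleed.effective_times() == [0.0, 1.5, 3.0, 4.5, 6.0]
    assert bleed.tick_values == [11, 11, 11, 11, 11]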
In step S204, a second gain state instance is searched in the gain state instance list.
The gain state instance list records the gain state instances that have been applied to the game character. In one embodiment, the existing gain state instances in the gain state instance list may be grouped according to their gain effective periods, each group including multiple types of gain state instances; for example, the types of gain state instances include, but are not limited to, DOT-type Debuff instances and HOT-type Buff instances acting on the life value of the game character. It should be understood that the second gain state instance found in the gain state instance list has the same gain state type as the first gain state instance.
After the first gain state instance is obtained, the server looks up the second gain state instance in the gain state instance list. Specifically, a gain state instance whose gain effective period is in a multiple relationship with the gain effective period of the first gain state instance may be found in the list and used as the second gain state instance. Further, the gain effective period of the first gain state instance is an integer multiple of the gain effective period of the second gain state instance. By finding the second gain state instance in the list and merging the first gain state instance into it, the server can apply the gain state instance to the game character without increasing its information transmission overhead.
Step S206, aligning the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, and acquiring, from the gain effective times of the second gain state instance, the overlap effective times aligned with each gain effective time of the first gain state instance.
A gain effective time is a time at which the game value corresponding to the game character is increased or decreased by the gain state instance; an overlap effective time is a gain effective time at which the first gain state instance and the second gain state instance are aligned with each other one by one.
After the first gain state instance and the second gain state instance are obtained, the server aligns the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, so that every gain effective time of the first gain state instance coincides with a gain effective time of the second gain state instance, and the effective times at which the two instances overlap one by one are obtained.
Specifically, to align the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, the next gain effective time of the second gain state instance may be determined as the first gain effective time of the first gain state instance; that is, the initial gain effective time of the first gain state instance is delayed backward to the nearest upcoming gain effective time of the second gain state instance. As described above, the gain effective period of the first gain state instance is an integer multiple of the gain effective period of the second gain state instance, so by delaying the initial gain effective time of the first gain state instance to the next gain effective time of the second gain state instance, every gain effective time of the first gain state instance can be aligned with a gain effective time of the second gain state instance.
For example, take the first and second gain state instances shown in FIG. 4a: the gain effective period of the second gain state instance is T seconds and its gain duration is 9T seconds; the gain effective period of the first gain state instance is 2T seconds and its gain duration is 6T seconds. Taking the initial gain effective time of the second gain state instance as 0 seconds, the server generates the newly added first gain state instance at second a. Aligning the first gain state instance with the second gain state instance aligns the gain effective times t1, t2, t3, and t4 of the first gain state instance one by one with the gain effective times t3, t5, t7, and t9 of the second gain state instance, as shown in FIG. 4b; the overlap effective times aligned with each gain effective time of the first gain state instance, taken from the gain effective times of the second gain state instance, are therefore times t3, t5, t7, and t9.
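A minimal sketch of this alignment, reusing the assumed GainStateInstance structure from the earlier sketch (the helper name align and the index-based return value are also assumptions): the first instance's initial gain effective time is delayed to the second instance's next gain effective time, and because the first period is an integer multiple of the second, every tick of the first instance then lands on the second instance's tick grid.

    import math

    def align(first: GainStateInstance, second: GainStateInstance, now: float) -> list:
        # index of the second instance's next gain effective time at or after `now`
        k = math.ceil((now - second.start_time) / second.period)
        first.start_time = second.start_time + k * second.period   # delay the first instance
        step = round(first.period / second.period)                 # integer multiple by construction
        # overlap effective times, expressed as tick indices on the second instance's grid;
        # indices beyond the second instance's own duration are handled later (step S214)
        return [k + i * step for i in range(len(first.tick_values))]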
In step S208, a target gain value is determined according to the gain value of the first gain state instance and the gain value of the second gain state instance.
The target gain value is the gain value at an overlap effective time after the first gain state instance and the second gain state instance are merged.
The target gain value is determined according to the gain value of the first gain state instance and the gain value of the second gain state instance; specifically, the sum of the two gain values may be calculated and determined as the target gain value. For example, if the gain value of the first gain state instance is A1 points, i.e., the game value of the game character decreases by A1 points at each of its gain effective times, and the gain value of the second gain state instance is A2 points, then the target gain value is (A1 + A2) points; that is, after the two instances are merged, the game value of the game character decreases by (A1 + A2) points at each overlap effective time.
Step S210, updating the gain value at each overlap effective time in the second gain state instance to the target gain value to obtain a target gain state instance.
After the overlap effective times and the target gain value are obtained, the gain value corresponding to each overlap effective time in the second gain state instance is updated to the target gain value, and the gain state instance obtained after the update is determined as the target gain state instance. It should be understood that when the server adjusts the game value of the game character according to the target gain state instance, the effects of the first gain state instance and the second gain state instance on the character are achieved at the same time, without changing the information transmission overhead of the server.
For example, take the first and second gain state instances shown in FIG. 4a and FIG. 4b, where the gain value of the first gain state instance is A1 points and the gain value of the second gain state instance is A2 points, so the target gain value is (A1 + A2) points. After the two instances are aligned, the overlap effective times aligned with each gain effective time of the first gain state instance are times t3, t5, t7, and t9. The gain values corresponding to these overlap effective times in the second gain state instance are updated to the target gain value (A1 + A2), and the updated gain state instance is determined as the target gain state instance, as shown in FIG. 4b. When the server adjusts the game value of the game character according to the target gain state instance, it increases (or decreases) the character's game value by (A1 + A2) points at times t3, t5, t7, and t9.
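A sketch of steps S208 and S210 under the same assumed structures (merge_overlaps is an assumed helper name): the target gain value is the sum of the two instances' gain values, and the second instance's gain value at every overlap effective time is updated to that sum. Overlap times are passed as the tick indices produced by the align() sketch above.

    def merge_overlaps(first: GainStateInstance, second: GainStateInstance,
                       overlap_indices: list) -> GainStateInstance:
        target_gain = first.gain_value + second.gain_value   # target gain value A1 + A2
        for idx in overlap_indices:
            if idx < len(second.tick_values):                 # only ticks within the second instance
                second.tick_values[idx] = target_gain
        return second                                         # the target gain state instance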
With the above method for acquiring a game gain state instance, after the first gain state instance is obtained and the second gain state instance is found in the gain state instance list, the gain effective times of the first gain state instance are aligned with those of the second gain state instance, so that every gain effective time of the first gain state instance coincides with a gain effective time of the second gain state instance, and the overlap effective times aligned with each gain effective time of the first gain state instance are obtained from the gain effective times of the second gain state instance. A target gain value is then determined according to the gain values of the two instances, and finally the gain value at each overlap effective time in the second gain state instance is updated to the target gain value to obtain the target gain state instance. The first and second gain state instances are thus superimposed: when gain state instances generated at different times act on the same game character, two or more gain state instances are merged into one, which reduces the number of times the server calculates game value loss information, reduces high-frequency information transmission, and lowers server pressure and information transmission overhead. At the same time, not every gain state instance needs to be recorded and stored separately, which reduces storage consumption in the server.
In addition, in the conventional technology there are two ways of handling gain state instances such as DOT-type instances. The first is to generate gain state instances directly and set an upper limit on their number; when the upper limit is exceeded, the priority of each gain state instance is judged according to a preset rule and the instance with the lowest priority is deleted. The second is that, when a new gain state instance is generated and its gain value is larger than the game value loss of the original gain state instance, the new instance directly overwrites the original one. Both conventional approaches cause a large amount of game value gain or loss that should take effect to become ineffective, leading to errors in the calculation of game values such as the life values of game characters during the game, and in extreme cases making them incalculable, which creates difficulties for the rationality and balance design of the game mechanism. In the present scheme, the first and second gain state instances are superimposed and merged, so that when gain state instances generated at different times act on the same game character, two or more gain state instances are merged into one, reducing the number of times the server calculates game value loss information and reducing high-frequency information transmission, while losing no accuracy in the game value calculation.
In one embodiment, as shown in FIG. 5, after the gain value at each overlap effective time in the second gain state instance is updated to the target gain value in step S210, the method further includes:
step S212, determining the remaining gain duration in the second gain state instance.
The gain duration is the length of time for which a gain state instance takes effect. Since the second gain state instance was generated on the game character some time earlier, it has already been in effect for a period of time when the first gain state instance is acquired; at this point the server acquires the remaining gain duration of the second gain state instance.
For example, take the first and second gain state instances in FIG. 6a: the gain effective period of the second gain state instance is T, its gain duration is 5T seconds, and its gain value is A2; the gain effective period of the first gain state instance is T, its gain duration is 5T seconds, and its gain value is A1. Taking the initial gain effective time of the second gain state instance as 0 seconds, the server generates the newly added first gain state instance at second a; that is, when the first gain state instance is generated, the second gain state instance has already been in effect for a seconds, and its remaining gain duration is (5T - a) seconds.
Step S214, when the gain duration of the first gain state instance is longer than the remaining gain duration of the second gain state instance, determining the gain effective times of the first gain state instance other than the overlap effective times as the newly added gain effective times of the target gain state instance, and determining the gain value of the first gain state instance as the gain value at those newly added gain effective times.
After the remaining gain duration of the second gain state instance is determined, it is compared with the gain duration of the first gain state instance. When the gain duration of the first gain state instance is longer than the remaining gain duration of the second gain state instance, i.e., after the second gain state instance finishes, the first gain state instance is still within its effective duration and a certain number of its gain ticks have not yet taken effect, the remaining gain effective times and gain value of the first gain state instance are determined as the newly added gain effective times and gain values of the target gain state instance.
For example, take the first and second gain state instances in FIG. 6a: after the two instances are aligned, the gain effective times t1, t2, t3, and t4 of the first gain state instance fall one by one on times t3, t4, t5, and t6 of the second gain state instance's tick grid, as shown in FIG. 6b. The gain value corresponding to each overlap effective time in the second gain state instance is updated to the target gain value (A1 + A2), and the updated gain state instance is determined as the target gain state instance. Meanwhile, since the gain duration of the first gain state instance is longer than the remaining gain duration (5T - a) seconds of the second gain state instance, i.e., after the second gain state instance finishes, the first gain state instance still has gain ticks (corresponding to times t5 and t6) that have not taken effect, the newly added gain effective times and gain values of the target gain state instance are determined according to the remaining gain effective times (t5 and t6) of the first gain state instance and its gain value, as shown in FIG. 6b.
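A sketch of steps S212 and S214, continuing the assumed helpers above (extend_with_remaining is an assumed name): when the first instance outlives the second, its ticks beyond the second instance's last tick become newly added gain effective times of the target instance, carrying the first instance's own gain value. Padding any intermediate ticks with 0 is an assumption made here; the text does not specify that case.

    def extend_with_remaining(first: GainStateInstance, second: GainStateInstance,
                              overlap_indices: list) -> GainStateInstance:
        last_idx = len(second.tick_values) - 1
        extra = {idx for idx in overlap_indices if idx > last_idx}   # ticks beyond the second instance
        if extra:
            for idx in range(last_idx + 1, max(extra) + 1):
                second.tick_values.append(first.gain_value if idx in extra else 0.0)
            second.duration = (len(second.tick_values) - 1) * second.period
        return second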
In one embodiment, as shown in FIG. 7, the step of aligning the gain effective times of the first gain state instance with the gain effective times of the second gain state instance includes:
Step S702, determining the next gain effective time of the second gain state instance.
The next gain effective time is the next time, relative to the current time, at which the corresponding gain value will be added to (or deducted from) the game value of the game character. Because the second gain state instance was generated on the game character some time earlier, it has already been in effect for a period of time when the first gain state instance is acquired; at this point the server obtains the next gain effective time of the second gain state instance.
Step S704, aligning the initial gain effective time of the first gain state instance with the next gain effective time of the second gain state instance.
The initial gain effective time is the time at which the gain takes effect for the first time in a gain state instance, i.e., the first time the corresponding gain value is added to (or deducted from) the game value of the game character. After the next gain effective time of the second gain state instance is obtained, that time is determined as the initial gain effective time of the first gain state instance. By delaying the initial gain effective time of the first gain state instance backward to the next gain effective time of the second gain state instance, every gain effective time of the first gain state instance can be aligned with a gain effective time of the second gain state instance, and the effective times at which the two instances overlap one by one are obtained.
For example, take the first and second gain state instances in FIG. 4a and FIG. 4b: the gain effective period of the second gain state instance is T, its gain duration is 9T seconds, and its gain value is A2; the gain effective period of the first gain state instance is 2T, its gain duration is 6T seconds, and its gain value is A1. Taking the initial gain effective time of the second gain state instance as 0 seconds, the server generates the newly added first gain state instance at second a. The server then determines the next gain effective time of the second gain state instance, (2T) seconds, and by delaying the initial gain effective time of the first gain state instance backward to (2T) seconds, the gain effective times t1, t2, t3, and t4 of the first gain state instance are aligned with the gain effective times t3, t5, t7, and t9 of the second gain state instance, respectively.
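A worked check of the FIG. 4 example using the sketches above, with T taken as 1 second and the gain values A1 and A2 as arbitrary placeholders (all names come from the assumed helpers, not the patent):

    second = GainStateInstance(gain_value=2, period=1.0, duration=9.0, start_time=0.0)  # A2, period T, duration 9T
    first = GainStateInstance(gain_value=1, period=2.0, duration=6.0, start_time=0.0)   # A1, period 2T, duration 6T
    overlap = align(first, second, now=1.3)   # the first instance is generated at a = 1.3 s
    assert first.start_time == 2.0            # delayed to the second instance's next tick, (2T) seconds
    assert overlap == [2, 4, 6, 8]            # 0-based indices, i.e. the second instance's ticks t3, t5, t7, t9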
In one embodiment, the gain state instance list comprises a plurality of existing gain state instances; as shown in FIG. 8, the step of looking up the second gain state instance in the gain state instance list comprises:
Step S802, acquiring the first gain effective period of the first gain state instance and the second gain effective period of each gain state instance already existing in the gain state instance list.
Step S804, screening, from the existing gain state instances, the target existing gain state instances for which the quotient of the first gain effective period divided by the second gain effective period is an integer.
Step S806, when a target existing gain state instance exists among the existing gain state instances, determining the target existing gain state instance as the second gain state instance.
An existing gain state instance is a gain state instance that has already been applied to the game character; the existing gain state instances of a game character are recorded in that character's corresponding gain state instance list.
Specifically, after the first gain state instance is obtained, the first gain effective period of the first gain state instance and the second gain effective period of each existing gain state instance are determined. The first gain effective period is compared with the second gain effective period of each existing gain state instance, and the target existing gain state instances for which the quotient of the first gain effective period divided by the second gain effective period is an integer, i.e., whose gain effective periods are in a multiple relationship with that of the first gain state instance, are screened out; a target existing gain state instance is then determined as the second gain state instance.
For example, assume the gain state instance list includes an existing gain state instance A with a gain effective period of 1.5, an existing gain state instance B with a gain effective period of 2, an existing gain state instance C with a gain effective period of 2.5, and an existing gain state instance D with a gain effective period of 10. At this point the game character is attacked by other game characters, and a new first gain state instance E with a gain effective period of 5 is correspondingly generated. Comparing the gain effective period of the first gain state instance with that of each existing gain state instance, only the gain effective period of the first gain state instance E divided by that of the existing gain state instance C gives an integer (5 / 2.5 = 2), so the existing gain state instance C is determined as the second gain state instance, and the first gain state instance E is merged with the existing gain state instance C to obtain the target gain state instance.
Further, in one embodiment, when there are multiple target existing gain state instances for which the quotient of the first gain effective period divided by the second gain effective period is an integer, the target existing gain state instance with the largest quotient may be determined as the second gain state instance.
In one embodiment, after the step of obtaining the second gain effective period of each existing gain state instance in the gain state instance list, the method further includes: when no target existing gain state instance exists among the existing gain state instances, directly determining the newly added first gain state instance as the target gain state instance and writing it into the gain state instance list.
Specifically, when no target existing gain state instance exists among the existing gain state instances, i.e., no existing gain state instance has a second gain effective period that divides the first gain effective period evenly, there is no second gain state instance in the gain state instance list into which the first gain state instance can be merged, and the server may write the first gain state instance directly into the gain state instance list.
In one embodiment, the step of determining the target existing gain state instance as the second gain state instance comprises: acquiring the gain effective count threshold of the target existing gain state instance; and when the gain effective count of the first gain state instance is smaller than the gain effective count threshold, determining the target existing gain state instance as the second gain state instance.
The gain effective count is the number of times a gain state instance takes effect. The gain effective count threshold is the upper limit on that number for an instance. Specifically, the server may store, in a circular queue, the gain value to be applied at each gain effective time within the gain duration of a gain state instance applied to a game character, and then, through a timer, add the corresponding gain value to (or deduct it from) the character's game value at each gain effective time; the gain effective count threshold can therefore be understood as the upper limit of the storage space of the corresponding circular queue.
When the gain effective count of the first gain state instance is smaller than the gain effective count threshold of the target existing gain state instance, the target existing gain state instance is determined as the second gain state instance.
Further, when the gain effective count of the first gain state instance is greater than the gain effective count threshold of the target existing gain state instance, there is no second gain state instance in the gain state instance list into which the first gain state instance can be merged, and the server may write the first gain state instance directly into the gain state instance list.
In one embodiment, as shown in FIG. 9, the step of looking up the second gain state instance in the gain state instance list comprises:
Step S902, acquiring the first gain effective period of the first gain state instance and the second gain effective period of each existing gain state instance in the gain state instance list;
Step S904, screening, from the existing gain state instances, the target existing gain state instances for which the quotient of the first gain effective period divided by the second gain effective period is an integer; when a target existing gain state instance exists among the existing gain state instances, performing step S906, and when no target existing gain state instance exists, performing step S910;
Step S906, acquiring the gain effective count threshold of the target existing gain state instance, and performing step S908 when the gain effective count of the first gain state instance is smaller than the gain effective count threshold; when the gain effective count of the first gain state instance is greater than the gain effective count threshold, performing step S910;
Step S908, determining the target existing gain state instance as the second gain state instance.
Step S910, directly determining the newly added first gain state instance as the target gain state instance and writing it into the gain state instance list.
This embodiment is described with a specific example. Assume the gain state instance list includes an existing gain state instance A with a gain effective period of 1.5, an existing gain state instance B with a gain effective period of 2, and an existing gain state instance C with a gain effective period of 10. At this point the game character is attacked by other game characters, and a new first gain state instance D with a gain effective period of 5 is correspondingly generated. Comparing the gain effective period of the first gain state instance with that of each existing gain state instance, no existing gain state instance has a gain effective period that divides the gain effective period of the first gain state instance D evenly, i.e., there is no existing gain state instance in the list into which the first gain state instance can be merged, so the newly added first gain state instance is directly determined as the target gain state instance and written into the gain state instance list.
For another example, assume the gain state instance list includes an existing gain state instance A with a gain effective period of 1.5, an existing gain state instance B with a gain effective period of 2, and an existing gain state instance C with a gain effective period of 2.5. At this point the game character is attacked by other game characters, and a new first gain state instance D with a gain effective period of 5 is correspondingly generated. Comparing the gain effective period of the first gain state instance with that of each existing gain state instance, the gain effective period of the first gain state instance D divided by that of the existing gain state instance C is an integer. The gain effective count threshold of the existing gain state instance C, i.e., the upper limit of the storage space of its corresponding circular queue, is then acquired, and when the gain effective count of the first gain state instance D is smaller than that threshold, the existing gain state instance C is determined as the second gain state instance; finally, the first gain state instance D is merged with the existing gain state instance C to obtain the target gain state instance.
Further, when the gain effective count of the first gain state instance D is greater than the gain effective count threshold of the existing gain state instance C, i.e., the circular queue corresponding to the existing gain state instance C does not have enough storage space to store the gain value of the first gain state instance D for each of its gain effective times, there is no existing gain state instance in the list into which the first gain state instance can be merged, and the newly added first gain state instance is directly determined as the target gain state instance and written into the gain state instance list.
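A sketch of the lookup flow of FIG. 8 and FIG. 9 under the same assumed structures: filter the existing instances whose gain effective period divides that of the new instance evenly, prefer the largest quotient (following the tie-break described above), and check the gain effective count against the candidate's threshold. The capacity parameter stands in for the upper limit of the candidate's circular queue and is an assumption; returning None means the new instance is simply written into the list as its own target instance.

    from typing import Optional

    def find_second_instance(first: GainStateInstance,
                             instance_list: list,
                             capacity: int) -> Optional[GainStateInstance]:
        candidates = []
        for existing in instance_list:
            quotient = first.period / existing.period
            if abs(quotient - round(quotient)) < 1e-9:      # existing period divides the first period evenly
                candidates.append((quotient, existing))
        if not candidates:
            return None                                     # no mergeable instance: write `first` directly
        candidates.sort(key=lambda c: c[0], reverse=True)   # prefer the largest quotient
        best = candidates[0][1]
        if len(first.tick_values) >= capacity:              # gain effective count exceeds the threshold
            return None
        return best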
In one embodiment, the second gain state instance comprises a second gain value queue, and each queue element of the second gain value queue stores the gain value of the second gain state instance at one gain effective time. The step of updating the gain value at each overlap effective time in the second gain state instance to the target gain value to obtain the target gain state instance then comprises: updating the gain value corresponding to each overlap effective time in the second gain value queue to the target gain value. The step of determining the gain effective times of the first gain state instance other than the overlap effective times as the newly added gain effective times of the target gain state instance, and determining the gain value of the first gain state instance as the gain value at those newly added gain effective times, comprises: generating newly added queue elements in the second gain value queue according to the gain effective times of the first gain state instance other than the overlap effective times and the gain value of the first gain state instance.
The server may store, in a circular queue, the gain value to be applied at each gain effective time within the gain duration of a gain state instance applied to a game character, and then, through a timer, add the corresponding gain value to (or deduct it from) the character's game value at each gain effective time. The second gain value queue is the circular queue corresponding to the second gain state instance, in which each queue element corresponds to the gain value at one gain effective time of the second gain state instance.
Specifically, the gain value of the queue element corresponding to each overlap effective time in the second gain state instance is updated to the target gain value; and when the gain duration of the first gain state instance is longer than the remaining gain duration of the second gain state instance, new queue elements are appended to the second gain value queue according to the gain effective times of the first gain state instance other than the overlap effective times and the gain value of the first gain state instance, where the gain value of each newly added queue element is the gain value of the first gain state instance and its effective time is a gain effective time of the first gain state instance other than the overlap effective times.
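A sketch of the second gain value queue described above, with assumptions: each queue element stores one (gain effective time, gain value) pair, collections.deque stands in for the patent's circular queue, elements at the overlap effective times are updated to the target gain value, and new elements are appended for the newly added gain effective times.

    from collections import deque

    def merge_into_queue(second_queue: deque, overlap_times: set,
                         target_gain: float, new_ticks: list) -> deque:
        merged = deque()
        for t, value in second_queue:
            merged.append((t, target_gain if t in overlap_times else value))
        for t, value in new_ticks:    # newly added gain effective times and their gain values
            merged.append((t, value))
        return merged

    # Usage: update the ticks at times 2.0 and 3.0 to the target gain value 3, then append
    # two newly added ticks carried over from the first instance (all values are placeholders).
    queue = deque([(0.0, 2), (1.0, 2), (2.0, 2), (3.0, 2)])
    queue = merge_into_queue(queue, {2.0, 3.0}, target_gain=3, new_ticks=[(4.0, 1), (5.0, 1)])
    assert list(queue) == [(0.0, 2), (1.0, 2), (2.0, 3), (3.0, 3), (4.0, 1), (5.0, 1)]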
In one embodiment, the method for acquiring the game gain state instance comprises the following steps:
1. A first gain state instance is acquired.
2. A second gain state instance is looked up in a gain state instance list, the gain state instance list comprising a plurality of existing gain state instances.
2-1. Acquire a first gain effective period of the first gain state instance and a second gain effective period of each existing gain state instance in the gain state instance list.
2-2. Screen, from the existing gain state instances, a target existing gain state instance for which the quotient of the second gain effective period divided by the first gain effective period is an integer.
2-3a. When a target existing gain state instance exists among the existing gain state instances, determine the target existing gain state instance as the second gain state instance.
2-3a-1. Acquire the gain effective times threshold of the target existing gain state instance.
2-3a-2. When the number of gain effective times of the first gain state instance is smaller than the gain effective times threshold, determine the target existing gain state instance as the second gain state instance.
2-3b. When no target existing gain state instance exists among the existing gain state instances, directly determine the first gain state instance as the target gain state instance and write it into the gain state instance list.
3. Align the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, and acquire, from the gain effective times of the second gain state instance, the overlapping effective times aligned with each gain effective time of the first gain state instance.
3-1. Determine the next gain effective time of the second gain state instance.
3-2. Align the initial gain effective time of the first gain state instance with the next gain effective time of the second gain state instance.
4. Determine a target gain value according to the gain value of the first gain state instance and the gain value of the second gain state instance.
5. Update the gain value at the overlapping effective time in the second gain state instance to the target gain value to obtain the target gain state instance.
6. Determine the remaining gain duration of the second gain state instance.
7. When the gain duration of the first gain state instance is longer than the remaining gain duration of the second gain state instance, determine the gain effective times of the first gain state instance other than the overlapping effective times as newly added gain effective times in the target gain state instance, and determine the gain value of the first gain state instance as the gain value at those newly added gain effective times.
When there are multiple target existing gain state instances for which the quotient of the second gain effective period divided by the first gain effective period is an integer, the target existing gain state instance with the largest quotient may be determined as the second gain state instance. A simplified sketch of this lookup is given below.
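The sketch covers steps 2-1 to 2-3b under stated assumptions: each existing instance is summarized by a plain dict with the hypothetical keys 'period' (second gain effective period) and 'max_ticks' (gain effective times threshold); neither these names nor the dict representation come from the patent.

```python
from typing import List, Optional


def find_second_instance(first_period: float,
                         first_tick_count: int,
                         existing: List[dict]) -> Optional[dict]:
    """Pick an existing instance into which the first instance can be merged.

    An instance qualifies when its period divided by the first instance's
    period is an integer and the first instance's number of gain effective
    times stays below the instance's threshold; among several candidates the
    one with the largest quotient is chosen.
    """
    candidates = []
    for inst in existing:
        quotient = inst['period'] / first_period
        if quotient != int(quotient):
            continue                       # step 2-2: quotient must be an integer
        if first_tick_count >= inst['max_ticks']:
            continue                       # step 2-3a-2: respect the count threshold
        candidates.append((quotient, inst))
    if not candidates:
        return None                        # step 2-3b: caller keeps the first instance as-is
    return max(candidates, key=lambda c: c[0])[1]
```

When the function returns None, the caller writes the first gain state instance into the gain state instance list directly, mirroring step 2-3b above.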
As shown in FIG. 10a, in one embodiment, a method for processing the life value of a game character is provided. The embodiment is mainly illustrated by applying the method to the server 120 in fig. 1. Referring to fig. 10a, the method for processing the life value of the game character specifically includes the following steps:
step S1002, determining a target gain state instance of a game character, wherein the target gain state instance is obtained according to the method for acquiring a game gain state instance in any one of the above embodiments;
step S1004, when a gain effective time of the target gain state instance arrives, acquiring the gain value corresponding to that gain effective time;
step S1006, adjusting the life value of the game character according to the gain value.
The game character may be an avatar in the electronic game that virtually represents the user, or an avatar in the electronic game that virtually represents a character interacting with the user. The target gain state instance refers to a debuff effect or a buff effect applied to the game character, and may be a target gain state instance obtained by merging two gain state instances; the target gain state instance may be a DOT (damage over time) type gain state instance.
Specifically, the server determines the target gain state instance from the gain state instance list of the game character; when a gain effective time of the target gain state instance arrives, the server acquires the gain value corresponding to that gain effective time in the target gain state instance and adjusts the life value of the game character according to the gain value.
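As a rough illustration of steps S1002 to S1006, the sketch below applies the gain value of a target instance to a character's life value at a gain effective time. Character, life_value and the debuff (subtraction) behaviour are assumptions made for the example; GainStateInstance is the simplified class sketched earlier.

```python
from dataclasses import dataclass


@dataclass
class Character:
    life_value: float


def on_gain_effective(character: Character,
                      target_instance: "GainStateInstance",
                      now: float) -> None:
    """Apply the gain value whose effective time is `now`, if any.

    A debuff is assumed here, so the gain value is subtracted from the life
    value; a buff would add it instead.
    """
    gain_value = target_instance.gain_values.get(now)
    if gain_value is not None:
        character.life_value -= gain_value
```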
Taking the target gain state instance in fig. 4b as an example, the gain effective period of the target gain state instance in the figure is T seconds and the gain duration is 2S seconds. Taking the initial gain effective time of the target gain state instance as 0 seconds: when the time reaches t3 seconds, the third gain effective time of the target gain state instance arrives, the gain value (A1+A2) of the target gain state instance at t3 seconds is acquired, and (A1+A2) is then subtracted from (or added to) the life value of the game character; when the time reaches t4 seconds, the fourth gain effective time of the target gain state instance arrives, the gain value A2 of the target gain state instance at t4 seconds is acquired, and A2 is then subtracted from (or added to) the life value of the game character.
Furthermore, while adjusting the life value of the game character according to the gain value, the server generates life loss information according to the gain value and transmits the life loss information to the terminal for display, so that the user can know the state of the game character that the user operates. Specifically, as shown in fig. 10b, when the game character A representing the user is attacked by the game character B and enters a debuff effect, the server generates a corresponding first gain state instance, searches for a second gain state instance in the gain state instance list of the game character A, and then aligns and merges the gain effective times of the first gain state instance and the second gain state instance to obtain the target gain state instance. The server obtains the target gain state instance of the game character A; when a gain effective time of the target gain state instance arrives, the server acquires the gain value corresponding to that gain effective time, adjusts the game value of the game character A according to the gain value, generates life loss information of the game character A according to the gain value, and transmits the loss information to the terminal for display. As shown in fig. 10b, the game character A loses 385 life points at the current gain effective time and lost 300 life points at the previous gain effective time.
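The life loss information pushed to the terminal could be as simple as the dictionary built below; the patent does not specify the message format, so every field name here is an assumption.

```python
def make_life_loss_message(character_id: str,
                           gain_value: float,
                           effective_time: float) -> dict:
    """Build the life-loss notification sent to the terminal (field names assumed)."""
    return {
        "character_id": character_id,    # which game character lost life
        "life_loss": gain_value,         # e.g. 385 in the fig. 10b example
        "effective_time": effective_time # when the gain took effect
    }
```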
In one embodiment, the target gain state instance comprises a gain value queue, and each queue element of the gain value queue stores the gain value of the target gain state instance at each gain effective time. The step of acquiring the gain value corresponding to the gain effective time when the gain effective time of the target gain state instance arrives includes: when the gain effective time of the target gain state instance arrives, acquiring the gain value corresponding to that gain effective time from the gain value queue.
The server may store, in a circular queue, the gain value that the gain state instance applies to the game character at each gain effective time within the gain duration, and then, through a timer, add the corresponding gain value to (or subtract it from) the game value of the game character at each gain effective time. Specifically, the target gain state instance is determined; when a gain effective time of the target gain state instance arrives, the gain value corresponding to the current gain effective time is acquired from the gain value queue corresponding to the target gain state instance, and that gain value is subsequently added to (or subtracted from) the life value of the game character.
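The timer-driven consumption of the gain value queue could look roughly like the generator below; start, period and the deque of values are assumed parameters, and a real server-side timer would replace the generator, firing once the game clock reaches each yielded effective time.

```python
from collections import deque


def schedule_gain_ticks(start: float, period: float, values: deque):
    """Yield (effective_time, gain_value) pairs in queue order.

    Each pair corresponds to one gain effective time; the caller applies the
    gain value to the character's life value when that time arrives.
    """
    t = start
    while values:
        yield t, values.popleft()
        t += period
```

For instance, schedule_gain_ticks(0.0, 2.0, deque([30, 30, 30])) yields ticks at 0, 2 and 4 seconds, each carrying a gain value of 30.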
In one embodiment, after the step of acquiring the gain value corresponding to the gain effective time from the gain value queue, the method further includes: deleting the gain value queue corresponding to the target gain state instance when no gain value remains in the gain value queue.
When no gain value remains in the gain value queue, that is, the target gain state instance has no further gain effective times, the target gain state instance applied to the game character has ended, and the server deletes the gain value queue corresponding to the target gain state instance.
Again taking the target gain state instance in fig. 4b as an example, and taking the initial gain effective time of the target gain state instance as 0 seconds: when the time reaches t3 seconds, the third gain effective time of the target gain state instance arrives, the gain value (A1+A2) of the target gain state instance at t3 seconds is acquired, and (A1+A2) is then subtracted from (or added to) the life value of the game character; when the time reaches t4 seconds, the fourth gain effective time of the target gain state instance arrives, the gain value A2 of the target gain state instance at t4 seconds is acquired, and A2 is then subtracted from (or added to) the life value of the game character; and so on, until the time reaches t10 seconds, when the tenth gain effective time of the target gain state instance arrives, the gain value A2 of the target gain state instance at t10 seconds is acquired, and A2 is subtracted from (or added to) the life value of the game character. At this point no gain value remains in the gain value queue corresponding to the target gain state instance, that is, the target gain state instance has no further gain effective times, the target gain state instance applied to the game character has ended, and the server deletes the gain value queue corresponding to the target gain state instance.
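The deletion step can be sketched as follows; instance_list, key and the guard conditions are assumptions made for illustration, and the function simply removes the queue once its last gain value has been consumed, matching the behaviour described above.

```python
def apply_next_tick(character: "Character", instance_list: dict, key: str) -> None:
    """Apply the next gain value of the instance stored under `key`, then clean up.

    instance_list is assumed to map instance identifiers to their gain value
    queues (deques); once a queue runs empty, the target gain state instance
    is removed from the list.
    """
    queue = instance_list[key]
    if not queue:
        return
    character.life_value -= queue.popleft()   # assumes a debuff
    if not queue:
        del instance_list[key]                # no gain values remain: delete the instance
```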
It should be understood that, although the steps in the above flowcharts are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 11, an apparatus 1100 for acquiring a game gain state instance is provided, the apparatus comprising: a state instance acquisition module 1102, a state instance lookup module 1104, an overlap time determination module 1106, a gain value determination module 1108, and a state instance merging module 1110, wherein:
a state instance obtaining module 1102, configured to obtain a first gain state instance;
a state instance lookup module 1104 for looking up a second gain state instance in the gain state instance list;
an overlap time determining module 1106, configured to align the gain validation times of the first gain state instance with the gain validation times of the second gain state instance, and obtain, from the gain validation times of the second gain state instance, the overlap validation times aligned with each gain validation time of the first gain state instance;
a gain value determination module 1108 for determining a target gain value based on the gain value of the first gain state instance and the gain value of the second gain state instance;
the state instance merging module 1110 is configured to update the gain value at the overlapping effective time in the second gain state instance to a target gain value, so as to obtain a target gain state instance.
In one embodiment, the state instance merging module is configured to: determine the remaining gain duration of the second gain state instance; and when the gain duration of the first gain state instance is longer than the remaining gain duration of the second gain state instance, determine the gain effective times of the first gain state instance other than the overlapping effective times as newly added gain effective times in the target gain state instance, and determine the gain value of the first gain state instance as the gain value at those newly added gain effective times.
In one embodiment, as shown in FIG. 12, the overlap time determination module 1106 includes:
a time determination module 1106a, configured to determine a next gain validation time of the second gain state instance;
a time alignment module 1106b, configured to align an initial gain validation time of the first gain state instance with a next gain validation time of the second gain state instance.
In one embodiment, as shown in FIG. 13, the list of gain state instances includes a plurality of existing gain state instances; a state instance lookup module 1104, comprising:
a period determining module 1104a, configured to obtain a first gain effective period of the first gain state instance and a second gain effective period of an existing gain state instance in the gain state instance list;
an example screening module 1104b, configured to screen, from the existing gain state examples, a target existing gain state example in which a quotient of the second gain effective period and the first gain effective period is an integer;
an instance determination module 1104c, configured to determine the target existing gain state instance as the second gain state instance when the target existing gain state instance exists among the existing gain state instances.
In one embodiment, the instance determination module is specifically configured to: acquire the gain effective times threshold of the target existing gain state instance; and when the number of gain effective times of the first gain state instance is smaller than the gain effective times threshold, determine the target existing gain state instance as the second gain state instance.
In one embodiment, the instance determination module is further configured to, when no target existing gain state instance exists among the existing gain state instances, directly determine the first gain state instance as the target gain state instance and write the target gain state instance into the gain state instance list.
In one embodiment, the second gain state instance comprises a second gain value queue, and each queue element of the second gain value queue stores the gain value of the second gain state instance at each gain effective time. The state instance merging module is configured to: update the gain value corresponding to the overlapping effective time in the second gain value queue to the target gain value; determine the remaining gain duration of the second gain state instance; and when the gain duration of the first gain state instance is longer than the remaining gain duration of the second gain state instance, generate newly added queue elements in the second gain value queue according to the gain effective times of the first gain state instance other than the overlapping effective times and the gain value of the first gain state instance.
In one embodiment, as shown in fig. 14, there is provided a game character life value processing apparatus 1400, comprising: a gain state instance obtaining module 1402, a gain value obtaining module 1404, and a life value adjusting module 1406, wherein:
a gain state instance obtaining module 1402, configured to determine a target gain state instance of the game character; wherein the target gain state instance is obtained according to the method for acquiring a game gain state instance in any one of the above embodiments;
a gain value obtaining module 1404, configured to obtain a gain value corresponding to a gain effective time when the gain effective time of the target gain state instance arrives;
and a life value adjusting module 1406, configured to adjust the life value of the game character according to the gain value.
In one embodiment, the target gain state instance comprises a gain value queue, and each queue element of the gain value queue stores the gain value of the target gain state instance at each gain effective time. The gain value obtaining module is specifically configured to acquire the gain value corresponding to the gain effective time from the gain value queue when the gain effective time of the target gain state instance arrives.
In one embodiment, the apparatus for processing the life value of the game character further includes a gain state instance deleting module, configured to delete the gain value queue corresponding to the target gain state instance when no gain value remains in the gain value queue.
FIG. 15 is a diagram showing an internal structure of a computer device in one embodiment. The computer device may specifically be the server 120 in fig. 1. As shown in fig. 15, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may further store a computer program that, when executed by the processor, causes the processor to implement a method of acquiring an instance of a game gain state or a method of processing a life value of a game character. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform a method of obtaining an instance of a game gain state or a method of processing a life value of a game character. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 15 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the apparatus for acquiring the game gain state instance provided in the present application may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 15. The memory of the computer device may store the program modules constituting the apparatus for acquiring the game gain state instance, such as the state instance acquisition module 1102, the state instance lookup module 1104, the overlap time determination module 1106, the gain value determination module 1108, and the state instance merging module 1110 shown in fig. 11. The computer program constituted by these program modules causes the processor to execute the steps in the method for acquiring a game gain state instance of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 15 may execute step S202 by a state instance acquisition module in the game gain state instance acquisition means shown in fig. 11. The computer device may perform step S204 by the state instance lookup module. The computer device may perform step S206 by the overlap time determination module. The computer device may perform step S208 by the gain value determination module. The computer device may perform step S210 through the state instance merging module.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above-described method of obtaining an instance of a game gain state. Here, the steps of the method for acquiring the game gain state instance may be steps in the method for acquiring the game gain state instance of the above-described embodiments.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of the above-described method for obtaining an instance of a game gain state. Here, the steps of the method for acquiring the game gain state instance may be steps in the method for acquiring the game gain state instance of the above-described embodiments.
In one embodiment, the apparatus for processing the life value of the game character provided by the present application may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 15. The memory of the computer device may store the program modules constituting the apparatus for processing the life value of the game character, such as the gain state instance obtaining module 1402, the gain value obtaining module 1404, and the life value adjusting module 1406 shown in fig. 14. The computer program constituted by these program modules causes the processor to execute the steps in the method for processing the life value of the game character of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 15 may execute step S1002 through the gain state instance obtaining module in the apparatus for processing the life value of the game character shown in fig. 14. The computer device may perform step S1004 through the gain value obtaining module. The computer device may perform step S1006 through the life value adjusting module.
In one embodiment, a computer device is provided, which includes a memory and a processor, the memory storing a computer program, the computer program, when executed by the processor, causing the processor to perform the steps of the method for processing a life value of a game character described above. The steps of the game character life value processing method herein may be the steps in the game character life value processing methods of the respective embodiments described above.
In one embodiment, a computer-readable storage medium is provided, which stores a computer program, which, when executed by a processor, causes the processor to execute the steps of the above-described method for processing a game character life value. The steps of the game character life value processing method herein may be the steps in the game character life value processing methods of the respective embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus DRAM (RDRAM), and Direct Rambus DRAM (DRDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (19)

1. A method for acquiring game gain state instances comprises the following steps:
acquiring a first gain state instance;
looking up a second gain state instance in the gain state instance list;
aligning the gain effective times of the first gain state instance with the gain effective times of the second gain state instance, and acquiring, from the gain effective times of the second gain state instance, the overlapping effective times aligned with each gain effective time of the first gain state instance;
determining a target gain value according to the gain value of the first gain state instance and the gain value of the second gain state instance;
and updating the gain value at the overlapping effective time in the second gain state instance to the target gain value to obtain a target gain state instance.
2. The method of claim 1, wherein the step of updating the gain value at the overlap effective time in the second gain state instance to the target gain value further comprises:
determining a gain duration remaining in the second gain state instance;
when the gain duration of the first gain state example is longer than the remaining gain duration of the second gain state example, determining the gain effective time of the first gain state example except the overlap effective time as the newly-added gain effective time of the target gain state example, and determining the gain value of the first gain state example as the gain value of the newly-added gain effective time of the target gain state example.
3. The method of claim 1, wherein aligning the gain validation time of the first gain state instance with the gain validation time of the second gain state instance comprises:
determining a next gain validation time of the second gain state instance;
aligning an initial gain-in-effect time of the first gain state instance with a next gain-in-effect time of the second gain state instance.
4. The method of claim 1, wherein the list of gain state instances comprises a plurality of existing gain state instances;
the step of looking up a second gain state instance in the list of gain state instances comprises:
acquiring a first gain effective period of the first gain state instance and a second gain effective period of an existing gain state instance in the gain state instance list;
screening, from the existing gain state instances, a target existing gain state instance for which the quotient of the second gain effective period divided by the first gain effective period is an integer;
and when the target existing gain state instance exists among the existing gain state instances, determining the target existing gain state instance as the second gain state instance.
5. The method of claim 4, wherein the step of determining the target existing gain state instance as a second gain state instance comprises:
acquiring a gain effective times threshold of the target existing gain state instance;
and when the number of gain effective times of the first gain state instance is smaller than the gain effective times threshold, determining the target existing gain state instance as the second gain state instance.
6. The method of claim 4, wherein the step of obtaining the second gain validation period of the existing gain state instances in the gain state instance list further comprises:
and when no target existing gain state instance exists among the existing gain state instances, directly determining the first gain state instance as the target gain state instance, and writing the target gain state instance into the gain state instance list.
7. The method of claim 2, wherein the second gain state instance comprises a second queue of gain values; wherein each queue element of the second gain value queue stores a gain value of a second gain state instance at each gain validation time;
the step of updating the gain value at the overlapped effective moment in the second gain state instance to the target gain value to obtain a target gain state instance includes:
updating the gain value corresponding to the overlapped effective moment in the second gain value queue to be the target gain value;
the step of determining the gain effective time in the first gain state instance except the overlap effective time as the newly added gain effective time in the target gain state instance, and determining the gain value of the first gain state instance as the gain value of the newly added gain effective time in the target gain state instance, includes:
and generating a newly added queue element in the second gain value queue according to the gain effective time except the overlapping effective time in the first gain state example and the gain value of the first gain state example.
8. A method for processing life value of a game character is characterized by comprising the following steps:
determining a target gain state instance of the game character; wherein the target gain state instance is obtained according to the method of any one of the preceding claims 1 to 7;
when the gain effective time of the target gain state example arrives, acquiring a gain value corresponding to the gain effective time;
and adjusting the life value of the game role according to the gain value.
9. The method of claim 8, wherein the target gain state instance comprises a queue of gain values; wherein each queue element of the gain value queue stores the gain value of the target gain state instance at each gain effective moment;
the step of obtaining the gain value corresponding to the gain effective time when the gain effective time of the target gain state instance is reached includes:
and when the gain effective time of the target gain state example arrives, obtaining the gain value corresponding to the gain effective time from the gain value queue.
10. The method according to claim 9, wherein after the step of obtaining the gain value corresponding to the moment when the gain is valid from the gain value queue, the method further comprises:
and deleting the gain value queue corresponding to the target gain state example when no gain value remains in the gain value queue.
11. An apparatus for obtaining an instance of a game gain state, the apparatus comprising:
a state instance obtaining module, configured to obtain a first gain state instance;
a state instance searching module for searching a second gain state instance in the gain state instance list;
an overlap time determining module, configured to align the gain validation times of the first gain state instance with the gain validation times of the second gain state instance, and obtain, from the gain validation times of the second gain state instance, the overlap validation times aligned with each gain validation time of the first gain state instance;
a gain value determination module for determining a target gain value according to the gain value of the first gain state instance and the gain value of the second gain state instance;
and the state instance merging module is used for updating the gain value at the overlapping effective time in the second gain state instance to the target gain value to obtain a target gain state instance.
12. The apparatus of claim 11, wherein the state instance merging module is configured to: determine the remaining gain duration of the second gain state instance; and when the gain duration of the first gain state instance is longer than the remaining gain duration of the second gain state instance, determine the gain effective times of the first gain state instance other than the overlapping effective times as newly added gain effective times in the target gain state instance, and determine the gain value of the first gain state instance as the gain value at those newly added gain effective times.
13. The apparatus of claim 11, wherein the overlap time determining module comprises:
a time determination module, configured to determine a next gain validation time of the second gain state instance;
and the time alignment module is used for aligning the initial gain effective time of the first gain state example with the next gain effective time of the second gain state example.
14. The apparatus of claim 11, wherein the list of gain state instances comprises a plurality of existing gain state instances; the state instance lookup module comprises: a period determining module, configured to obtain a first gain effective period of the first gain state instance and a second gain effective period of an existing gain state instance in the gain state instance list; an instance screening module, configured to screen, from the existing gain state instances, a target existing gain state instance for which the quotient of the second gain effective period divided by the first gain effective period is an integer; and an instance determination module, configured to determine, when the target existing gain state instance exists among the existing gain state instances, the target existing gain state instance as the second gain state instance.
15. The apparatus of claim 14, wherein the instance determination module is specifically configured to: acquire a gain effective times threshold of the target existing gain state instance; and when the number of gain effective times of the first gain state instance is smaller than the gain effective times threshold, determine the target existing gain state instance as the second gain state instance.
16. The apparatus of claim 14, wherein the instance determining module is further configured to determine the first gain state instance directly as a target gain state instance and write the target gain state instance to a gain state instance list when no target existing gain state instance exists in the existing gain state instances.
17. The apparatus of claim 12, wherein the second gain state instance comprises a second gain value queue; wherein each queue element of the second gain value queue stores the gain value of the second gain state instance at each gain effective time; the state instance merging module is configured to: update the gain value corresponding to the overlapping effective time in the second gain value queue to the target gain value; determine the remaining gain duration of the second gain state instance; and when the gain duration of the first gain state instance is longer than the remaining gain duration of the second gain state instance, generate newly added queue elements in the second gain value queue according to the gain effective times of the first gain state instance other than the overlapping effective times and the gain value of the first gain state instance.
18. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 10.
19. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 10.
CN201911354252.3A 2019-12-25 2019-12-25 Method and device for acquiring game gain state example and computer storage medium Active CN111097173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911354252.3A CN111097173B (en) 2019-12-25 2019-12-25 Method and device for acquiring game gain state example and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911354252.3A CN111097173B (en) 2019-12-25 2019-12-25 Method and device for acquiring game gain state example and computer storage medium

Publications (2)

Publication Number Publication Date
CN111097173A CN111097173A (en) 2020-05-05
CN111097173B (en) 2021-04-20

Family

ID=70425024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911354252.3A Active CN111097173B (en) 2019-12-25 2019-12-25 Method and device for acquiring game gain state example and computer storage medium

Country Status (1)

Country Link
CN (1) CN111097173B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104801040A (en) * 2014-01-28 2015-07-29 玩酷科技股份有限公司 Game skill casting method with spirit collection operation
CN110585709A (en) * 2019-09-20 2019-12-20 腾讯科技(深圳)有限公司 Skill attribute adjusting method and device for virtual role

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3827190B2 (en) * 1999-03-31 2006-09-27 株式会社スクウェア・エニックス GAME DEVICE, GAME CONTROL METHOD, AND RECORDING MEDIUM
CN108310772A (en) * 2018-01-22 2018-07-24 腾讯科技(深圳)有限公司 The execution method and apparatus and storage medium of attack operation, electronic device
CN110393920A (en) * 2019-07-29 2019-11-01 网易(杭州)网络有限公司 State generation method, device, electronic equipment and storage medium in game

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Grim Dawn: introduction to DOT (damage over time), its calculation, and methods to increase DOT damage; how DOT damage is calculated in Grim Dawn; eszvgy; 《https://wap.gamersky.com/gl/Content-977601.html》; 20171112; full text *
MMORPG combat system design; langresser; 《https://blog.csdn.net/langresser_king/article/details/17266893?utm_source=app》; 20131211; full text *

Also Published As

Publication number Publication date
CN111097173A (en) 2020-05-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant