CN115120974A - Method and electronic device for controlling virtual object - Google Patents
- Publication number
- CN115120974A (Application CN202210861715.0A)
- Authority
- CN
- China
- Prior art keywords
- virtual object
- target
- skill
- identification
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
- A63F13/533—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/822—Strategy games; Role-playing games
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/807—Role playing or strategy games
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present disclosure relate to a method, apparatus, electronic device, computer-readable storage medium, and computer program product for controlling a virtual object. The method comprises: displaying an object identifier and additional state information of at least one virtual object in a first area of a display interface, wherein the additional state information comprises gain information and/or reduction information; displaying at least one skill identification in a second area of the display interface; acquiring a trigger operation of a user for a target skill identification among the at least one skill identification and a target object identification of a target virtual object among the at least one virtual object; and releasing the target skill to the target virtual object in response to the trigger operation. In this way, a skill can be released based on the user's trigger operation on the skill identification and the object identification, which simplifies the way the user releases skills, facilitates the user's control of the virtual object, and improves the user experience.
Description
Technical Field
Embodiments of the present disclosure relate generally to the field of computer technology, and more particularly, relate to a method, apparatus, electronic device, computer-readable storage medium, and computer program product for controlling a virtual object.
Background
A battle game, also referred to as a competitive game, may refer to a game in which a plurality of virtual objects compete in the same virtual world (e.g., the same game scene). For example, battle games may include multiplayer online battle arena (MOBA) games, card games, simulation games (SLG), and the like.
Virtual objects may include hero characters controlled by user operations and enemy characters not controlled by user operations, such as various monsters in the game, level bosses (BOSS), and the like. The terminal device can perform control based on the user's operations so that the hero characters battle the enemy characters, and during the battle the additional state information (including gain and/or reduction information, etc.) of the hero characters and enemy characters can be changed through skills or props.
In existing games, the operation for releasing a virtual object's skill is overly complicated, which degrades the user's battle experience.
Disclosure of Invention
According to an example embodiment of the present disclosure, a scheme for controlling a virtual object is provided, which is capable of releasing a skill based on a user's trigger operation on a skill identification and an object identification, thereby simplifying a user operation.
In a first aspect of the present disclosure, there is provided a method for controlling a virtual object, comprising: displaying an object identifier and additional state information of at least one virtual object in a first area of a display interface, wherein the additional state information comprises gain information and/or reduction information; displaying at least one skill identification in a second area of the display interface; acquiring a trigger operation of a user for a target skill identification in at least one skill identification and a target object identification of a target virtual object in at least one virtual object; and releasing the target skill to the target virtual object in response to the triggering operation.
In a second aspect of the present disclosure, there is provided an electronic device comprising: at least one processing unit; at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit causing the electronic device to perform acts comprising: displaying an object identifier and additional state information of at least one virtual object in a first area of a display interface, wherein the additional state information comprises gain information and/or reduction information; displaying at least one skill identification in a second area of the display interface; acquiring a trigger operation of a user for a target skill identification in at least one skill identification and a target object identification of a target virtual object in at least one virtual object; and releasing the target skill to the target virtual object in response to the triggering operation.
In a third aspect of the present disclosure, there is provided an apparatus for controlling a virtual object, comprising: a first display module configured to display an object identifier and additional state information of at least one virtual object in a first area of a display interface, the additional state information including gain information and/or reduction information; a second display module configured to display at least one skill identification in a second area of the display interface; the obtaining operation module is configured to obtain a trigger operation of a user for a target skill identifier in the at least one skill identifier and a target object identifier of a target virtual object in the at least one virtual object; and a release skill module configured to release the target skill to the target virtual object in response to the triggering operation.
In a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon machine-executable instructions that, when executed by an apparatus, cause the apparatus to perform the method described in accordance with the first aspect of the present disclosure.
In a fifth aspect of the disclosure, a computer program product is provided, comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement the method described according to the first aspect of the disclosure.
A sixth aspect of the present disclosure provides an electronic device, including: processing circuitry configured to perform the method described according to the first aspect of the present disclosure.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a flow diagram of an example process, according to some embodiments of the present disclosure;
FIG. 2 illustrates a schematic diagram of a display interface, according to some embodiments of the present disclosure;
FIG. 3 illustrates a schematic diagram of a display interface, according to some embodiments of the present disclosure;
FIG. 4 illustrates a schematic diagram of a display interface, in accordance with some embodiments of the present disclosure;
FIG. 5 illustrates a schematic diagram of a display interface, according to some embodiments of the present disclosure;
FIG. 6 shows a block diagram of an example apparatus according to an embodiment of the present disclosure; and
FIG. 7 illustrates a block diagram of an example device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In embodiments of the present disclosure, the term "virtual environment" refers to a virtualized environment that is presented or provided by an application running on a terminal device. The virtual environment may be a simulation of a real world, a semi-simulation semi-fictional world, or a pure fictional world. The virtual environment may provide virtual resources that are available for use by the at least two virtual objects such that the at least two virtual objects compete against each other in the virtual environment. By way of example, the virtual environment may include virtual buildings such as a base, a city pool, and the like that are virtual.
In embodiments of the present disclosure, the term "virtual object" refers to a movable object in a virtual environment. For example, when the virtual environment is a three-dimensional environment, the virtual object may be a three-dimensional virtual model. The virtual objects in the virtual environment may be controlled by the user through his terminal device or may be controlled by the server.
In embodiments of the present disclosure, the term "character" refers to a virtual object that is controlled by user operations, and may include, for example, a hero character.
In an embodiment of the present disclosure, the term "additional state information" refers to a gain (buff) or a reduction (de-buff) effect acting on a virtual object, which may represent a change in the skill or state of the virtual object. For example, the virtual object (such as hero character) can be controlled to release related skills during the fight process, additional state information can be applied to the virtual object itself or an enemy virtual object, gain information can be applied to the virtual object itself (such as attack increase, defense enhancement, resistance enhancement of certain elements and the like), and reduction information can be applied to the enemy (such as injury deepening, defense reduction, resistance reduction of certain elements and the like).
In embodiments of the present disclosure, the term "combat attributes" refers to numerical values of the combat power of a virtual object, which may be related to at least one of: the level of the virtual object, the type, number, level, etc. of the possessed skills, the type, number, level, etc. of the possessed equipment, the type, number, level, etc. of the possessed gains, etc.
In embodiments of the present disclosure, the term "control" refers to a visual control presented on a display interface of an application, such as a picture, a text box, a button, a label, a menu bar, an input bar, and the like.
In current games, a user may release skills for one or more virtual objects through specific operations so as to change the additional state information, such as gains or reductions, of the virtual objects in the battle. However, the operations for releasing skills are currently cumbersome, and the additional state information carried by a virtual object cannot be confirmed quickly; for example, the user may first need to locate, in the interface, the virtual object to which the skill is to be applied, and then release the skill at or near the location of that virtual object. This operation mode reduces the efficiency with which the user releases skills and easily leads to mistaken skill releases, which in turn affects game efficiency and results in poor user experience.
In order to at least partially address the deficiencies in the above technical solutions, embodiments of the present disclosure provide a scheme for controlling a virtual object, which can release skills based on a user's trigger operation on a skill identification and an object identification, so that the way the user releases skills can be simplified, thereby improving game efficiency and user experience.
Fig. 1 illustrates a flow diagram of an example process 100, in accordance with some embodiments of the present disclosure. At block 110, an object identification and additional state information of at least one virtual object is displayed in a first region of a display interface, the additional state information including gain information and/or reduction information. At block 120, at least one skill identification is displayed in a second area of the display interface. At block 130, a trigger action of the user for a target skill identification of the at least one skill identification and a target object identification of a target virtual object of the at least one virtual object is obtained. At block 140, in response to the triggering operation, the target skill is released to the target virtual object.
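The four blocks of process 100 can be pictured with a short sketch; the helper names below (renderFirstArea, renderSecondArea, applySkill, handleTrigger) are hypothetical placeholders for whatever rendering and game-engine calls a concrete client would use, not an API defined by this disclosure.

```typescript
// A minimal sketch of process 100; all helpers are assumed stand-ins, not a real engine API.
interface SkillIdent { id: string; releasable: boolean; }
interface ObjectEntry { id: string; additionalState: string[]; }

const renderFirstArea = (objects: ObjectEntry[]) =>        // block 110: identifiers + state info
  console.log("first area:", objects);
const renderSecondArea = (skills: SkillIdent[]) =>          // block 120: skill identifications
  console.log("second area:", skills);
const applySkill = (skill: SkillIdent, objectId: string) => // block 140: release the target skill
  console.log(`release ${skill.id} on ${objectId}`);

function handleTrigger(skill: SkillIdent, targetObjectId: string, objects: ObjectEntry[]): void {
  // block 130: a trigger operation names a target skill and a target object identification
  const target = objects.find(o => o.id === targetObjectId);
  if (target && skill.releasable) applySkill(skill, target.id);
}

// Example wiring: display both areas, then react to a user trigger.
const objects = [{ id: "boss-1", additionalState: ["burn +30%, 20s"] }];
const skills = [{ id: "flame-strike", releasable: true }];
renderFirstArea(objects);
renderSecondArea(skills);
handleTrigger(skills[0], "boss-1", objects);
```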
Alternatively or additionally, in some embodiments, the object identification and additional state information of the at least one virtual object may be displayed in the first area of the display interface based on an information viewing instruction of the user. Illustratively, an information viewing instruction from the user may be obtained; the information viewing instruction may be, for example, clicking (single-clicking or double-clicking) a specific button, hovering the mouse at a specific position for longer than a preset duration, or the like. Illustratively, the information viewing instruction may be used for viewing the object identification and additional state information of at least one virtual object, where the object identification may be a virtual avatar, a hero ID, etc., and the additional state information may be gain information, reduction information, etc.
Specifically, in current games, gain/reduction information of a virtual object is usually displayed near the corresponding character or avatar, and a user can display more complete gain/reduction information on the interface by clicking (for example, long-pressing or hovering the mouse over) the virtual object to be viewed and the corresponding gain/reduction icon. However, for a novice player or a player unfamiliar with gain/reduction information, the player who wants to release a skill first needs to check and understand the gain/reduction information of the controlled character, so the required operations are too cumbersome and game efficiency suffers. In the embodiments of the present disclosure, by contrast, the user can conveniently view the gain/reduction information through the information viewing instruction; even a novice player can, with a simple click operation on the information panel, view more detailed gain/reduction information and learn the gain/reduction information of each character on the own side and the enemy side. This avoids the excessive time otherwise spent viewing gain/reduction information through cumbersome operations, and improves the efficiency of viewing such information.
For example, FIG. 2 shows an example of a display interface 200, where the interface 200 includes an information panel 210. In some examples, a user may click on the information panel 210, and a user clicking on the information panel 210 may be considered an information viewing instruction for the user.
Optionally, the interface 200 may present other information, such as the duration of the game 220, the setting button 230, the avatar 241 of the virtual object controlled by the user, the virtual environment 250 of the game, and the like. Optionally, an avatar 241 may be displayed in the second region 240 of the interface 200. It is to be understood that the locations, etc. of the controls and scenes shown in the interface 200 of fig. 2 are merely illustrative and should not be construed as limiting embodiments of the present disclosure.
In some embodiments of the present disclosure, additional status information of at least one virtual object of the own party and/or additional status information of at least one virtual object of the enemy may be displayed in the first area based on the information viewing instruction of the user.
It will be appreciated that the game is still in progress while the additional status information is displayed. Illustratively, the virtual environment in which the own party battles the enemy may continue to be displayed on the display interface.
In some examples, the first region may be displayed in a floating window manner. For example, the first region may be a floating layer region displayed on an upper layer of the virtual environment. Alternatively, the floating layer region may be rendered with a certain transparency. Illustratively, the transparency of the floating layer region may be adjusted based on a transparency adjustment instruction of the user. For example, the user may adjust the transparency of the floating layer region in a "settings" panel by dragging a transparency progress bar associated with it or by entering a transparency value.
For example, FIG. 3 illustrates an example of a display interface 300. The main area 310 of the interface 300 may display a virtual environment (or battle scene, or other name), and the content displayed by the area 310 may, for example, refer to the interface 200 shown in FIG. 2. Referring to FIG. 3, additional status information may be displayed in a floating layer region 320 of the interface 300. The floating layer region 320 may include a first sub-region 321 and a second sub-region 322, where the first sub-region 321 displays additional status information of one or more virtual objects of the own party (my team) and the second sub-region 322 displays additional status information of one or more virtual objects of the enemy (enemy team). For example, the second sub-region 322 displays an avatar of a certain virtual object in the enemy team together with additional status information 323 indicating "damage taken from flame-type skills increased by 10%, 20 seconds (s) remaining".
Optionally, the floating layer region 320 may be presented with a transparency set or input by the user, so that the user can roughly view changes (e.g., color, brightness, etc.) in the virtual environment even when the floating layer region 320 is displayed.
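As a sketch of the transparency adjustment mentioned above, the snippet below clamps a user-supplied value and applies it to a floating-layer element; the use of a DOM element and CSS opacity is an assumption about one possible client, not something specified by the disclosure.

```typescript
// Clamp a user-chosen transparency (0-100%) and apply it to the floating-layer region.
// Higher transparency means lower CSS opacity; the DOM-based rendering is an assumption.
function setFloatLayerTransparency(floatLayer: HTMLElement, transparencyPercent: number): void {
  const clamped = Math.min(100, Math.max(0, transparencyPercent));
  floatLayer.style.opacity = String(1 - clamped / 100);
}
```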
In this way, by displaying the first area in a floating window manner, the user can view, fully and in detail, real-time additional status information of all or part of the virtual objects in the first area. Even a novice user, or a user unfamiliar with the application, can thereby quickly understand what each gain/reduction means. In addition, displaying the gain/reduction information globally helps the user make decisions, such as determining when and which skills should be used on which virtual object.
It can be understood that, in the embodiments of the present disclosure, when gain information is displayed, the own-party gain information may be displayed while the enemy gain information is not displayed, or vice versa. That is, the user may adjust the displayed content as needed, for example displaying only the own-party gain information or only the enemy gain information, which is not limited in the present disclosure.
Additionally or alternatively, in other examples, the position and size of the first region are adjustable. For example, the user may move the first area by dragging it. For example, the user may adjust the position of the boundary of the first area to change its size.
In some examples, the first region may be located to the side of the display interface. Illustratively, the middle of the interface displays the virtual environment and the left or right side displays the first region. In some examples, the size of the first region may be less than a preset threshold. Illustratively, the ratio of the area of the first region to the area of the display interface is smaller than a preset value. For example, the preset value is 30% or other values, which the present disclosure is not limited to.
In this way, occlusion of the virtual environment by the first area can be reduced, reducing the impact on the game. The user's gaming experience can thus be ensured; in particular, for mobile games, this avoids the user having an insufficient view of the virtual environment. Optionally, the size of the first area may also be enlarged when appropriate, enabling the user to view the additional status information more clearly, which is feasible and beneficial especially for large-screen display devices.
Optionally, in some embodiments, the user may turn off the display of the additional state information for one or more virtual objects in the first region. For example, the first area may initially display the object identifications of N virtual objects and the respective additional state information, and through a close or cancel operation the user may cause the first area to display only the object identifications and respective additional state information of M of those N virtual objects, where M < N. In this way, the user may view additional state information only for the part of the virtual objects the user cares about or is interested in, rather than displaying additional state information for all of the virtual objects. A sketch of this filtering is given below.
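The following is a minimal sketch of the close/cancel behaviour, where the first area shrinks from the identifiers of N virtual objects to the M the user keeps visible; the hiddenIds set is an assumed client-side bookkeeping detail, not part of the disclosed method.

```typescript
// Keep only the entries the user has not closed (M of the original N remain, M < N).
function visibleEntries<T extends { id: string }>(allEntries: T[], hiddenIds: Set<string>): T[] {
  return allEntries.filter(entry => !hiddenIds.has(entry.id));
}
```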
Optionally, in some embodiments, the user may also close or hide the first area, and may thereafter display it again via the information viewing instruction.
In some embodiments, the additional state information of the at least one virtual object displayed in the first area may be updated in real time based on the real-time state of the game. For example, the gain information and/or the reduction information may be updated in real time. In this way, the gain/reduction information can be dynamically updated, refreshed, and displayed in real time according to the game situation, which facilitates the user viewing real-time gain/reduction information.
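One way to realise the real-time update, sketched under the assumption that each effect carries a remaining duration: on every game tick the durations are decremented and expired gains/reductions are removed before the first area is re-rendered. The Effect shape and the tick-based update are assumptions; an event-driven client could refresh differently.

```typescript
// Advance effect timers by one tick and drop expired gains/reductions before re-rendering.
interface TimedEffect { description: string; remainingSeconds: number; }

function tickEffects(effects: TimedEffect[], tickSeconds: number): TimedEffect[] {
  return effects
    .map(e => ({ ...e, remainingSeconds: e.remainingSeconds - tickSeconds }))
    .filter(e => e.remainingSeconds > 0);
}
```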
In some embodiments of the present disclosure, at least one skill identification may also be displayed in a second area of the display interface. It should be noted that blocks 110 and 120 shown in FIG. 1 do not imply an order between the two operations; in fact, the operations at blocks 110 and 120 are independent and have no interdependency.
For example, the skill identification displayed in the second area may be based on the hero associated with the virtual object controlled by the user. For example, the user may determine one or more heroes at the time of character selection and setup, and optionally, may display the associated one or more hero cards in the second area.
In some embodiments of the present disclosure, the skill identification may indicate whether the corresponding skill is capable of being released, e.g., in a state to be released. Illustratively, the skill identification may also be implemented as an avatar identification or other type of identification of the corresponding hero, etc., to which the present disclosure is not limited. For example, in a card game, the state of the avatar corresponding to the virtual object is usually related to the skill, and the skill identifier can be represented by the avatar identifier.
For example, FIG. 4 illustrates an example of a display interface 400. The main area 410 of the interface 400 may display a virtual environment (or battle scene or other name, etc.), for example, the content displayed by the area 410 may refer to the interface 200 shown in fig. 2.
Referring to fig. 4, an object identification and additional state information of at least one virtual object may be displayed in a first area 420 of the interface 400. For example, in fig. 4, the first area 420 includes two entries 421 and 422, including an avatar 1 and additional state information 1, an avatar 2 and additional state information 2, respectively.
Referring to FIG. 4, at least one skill identification may be displayed in the second area 240 of the interface 400. For example, in FIG. 4, the second area 240 includes three hero cards 431-433, whose hero names are, in order, "a", "b", and "c". Cards 431 and 432 are currently in a state in which their skills cannot be released, and card 433 is currently in the to-be-released state. For example, card 433 may be highlighted while cards 431 and 432 are displayed conventionally. For example, card 433 may be displayed in color while cards 431 and 432 are displayed in gray. In FIG. 4, cards 431 and 432 are shown with dashed boxes, indicating that their skills cannot be released, and card 433 is shown with a solid box, indicating that its skill can be released. It will be appreciated that the state of a card (i.e., whether its skill can be released or activated, etc.) can be displayed in different ways, which this disclosure does not exhaustively enumerate.
Optionally, in some embodiments, the target virtual object may be determined based on additional state information of the at least one virtual object in the first region, in case a target skill identification of the at least one skill identification in the second region is determined. Further, a target virtual object may be indicated in the first area.
For example, the second area may display a plurality of skill identifiers, and the user may select the target skill identifier through a triggering operation, such as a single click. For example, the second area may display a plurality of skill identifiers, and if only one of the skill identifiers is in a to-be-released state, the skill identifier in the to-be-released state may be defaulted as the target skill identifier, while the remaining skill identifiers are not activated.
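The default-selection rule in the last example can be sketched as follows; the Skill shape and its releasable flag are assumed names used only for illustration.

```typescript
// If exactly one skill identification is in the to-be-released state, treat it as the target
// skill by default; otherwise wait for the user to select one explicitly.
interface Skill { id: string; releasable: boolean; }

function defaultTargetSkill(skills: Skill[]): Skill | undefined {
  const releasable = skills.filter(s => s.releasable);
  return releasable.length === 1 ? releasable[0] : undefined;
}
```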
For example, the target virtual object may be indicated by prompting the user through highlighting, flashing, or the like. In conjunction with FIG. 4, assuming that the determined target virtual object is associated with entry 421, avatar 1 and additional state information 1 may be displayed in a particular manner, such as an animated manner, which the present disclosure does not limit.
Optionally, in some examples, the user may also adjust or modify the target virtual object when the first region indicates the target virtual object. For example, the user may perform a trigger operation on an object identifier or the like of another virtual object in the first area, so as to use the other virtual object as the adjusted target virtual object. For example, in connection with FIG. 4, when avatar 1 and additional state information 1 are indicated in an animated manner, the user may click on "avatar 2" to modify the target virtual object to 422 associated with avatar 2.
In some embodiments of the present disclosure, the user may release the target skill corresponding to the target skill identification to the target virtual object by a triggering operation. In some examples, the trigger operation may be a drag operation, a single click operation, a double click operation, and the like, to which the present disclosure is not limited.
For example, where a target skill identification is determined and a target virtual object is indicated, the user may release the target skill by, for example, double-clicking on the target skill identification.
For example, the user may perform a drag operation from the display location of the target skill identification to the display location of the target object identification to release the skill. Referring to fig. 4, the drag operation is shown by arrow 440.
For example, the user may release skills by sequential click operations on the target skill identification and on the target object identification.
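The three trigger operations just described can be normalised into a single release request, as in the sketch below; the event shapes and the rule that a double-click falls back to the currently indicated target object are assumptions for illustration, not behaviour fixed by the disclosure.

```typescript
// Resolve a recognised trigger operation into a (skill, object) release request, or null if
// no target object can be determined; the event shapes here are assumed.
type TriggerOp =
  | { kind: "doubleClick"; skillId: string }
  | { kind: "drag"; skillId: string; objectId: string }
  | { kind: "sequentialClick"; skillId: string; objectId: string };

function resolveRelease(
  op: TriggerOp,
  indicatedObjectId: string | null,
): { skillId: string; objectId: string } | null {
  if (op.kind === "doubleClick") {
    // a double-click on the skill releases it onto the currently indicated target, if any
    return indicatedObjectId ? { skillId: op.skillId, objectId: indicatedObjectId } : null;
  }
  // drag and sequential-click operations name the target object identification directly
  return { skillId: op.skillId, objectId: op.objectId };
}
```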
In this way, user operations can be simplified and the user can release skills more quickly and conveniently, improving game efficiency and user experience.
Optionally, in some embodiments of the present disclosure, the display of the first area may also be cancelled after the target skill is released. For example, the target virtual object may be displayed in the virtual environment so that the user can visually observe the state of the target virtual object after the skill has been released. For example, if releasing the target skill leaves the target virtual object severely injured (e.g., only 2% of its health remaining, or some other value), the user may further release the next skill to finish off the target virtual object. For example, if releasing the target skill causes the target virtual object to die and the target virtual object is an enemy boss, the battle ends.
Optionally, in some embodiments of the present disclosure, the method may further include: displaying, in the first area, prompt information associated with the additional status information. Illustratively, the prompt information is used to prompt skill information related to the additional status information.
For example, if a "burn" is reduced in some virtual object displayed in the first region: the injury caused by the used flame-like skill is increased by 30%, and the prompting message can prompt the flame-like skill.
For example, the prompt information may be displayed in association with additional state information, for example, corresponding prompt information may be displayed in association with the additional state information of a virtual object.
Optionally, in some examples, the prompt may also indicate the injury that using the prompted skill would cause to the virtual object. For example, the damage the prompted skill may cause to the virtual object can be calculated and displayed quantitatively based on the additional state information of that virtual object (or in combination with the additional state information of the virtual object controlled by the user). For example, the injury information may be "vitality -20", "blood loss 20%", and so on.
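A sketch of how such a quantitative prompt could be computed; the base damage, the 30% "burn" bonus, and the health value in the usage line are made-up numbers chosen only to reproduce the "blood loss 20%" style of output described above.

```typescript
// Scale a skill's base damage by the target's damage-taken bonus from its reduction effect
// and express the result as prompt text; all numeric inputs here are illustrative.
function promptedDamage(baseDamage: number, damageTakenBonus: number, targetHp: number): string {
  const damage = baseDamage * (1 + damageTakenBonus);
  const hpLossPercent = Math.min(100, (damage / targetHp) * 100);
  return `vitality -${Math.round(damage)}, blood loss ${hpLossPercent.toFixed(0)}%`;
}

// e.g. a 100-damage flame skill against a burned target (+30% damage taken) with 650 HP:
// "vitality -130, blood loss 20%"
console.log(promptedDamage(100, 0.3, 650));
```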
For example, fig. 5 illustrates an example of a display interface 500. The main area 510 of the interface 500 may display a virtual environment (or battle scene or other name, etc.), for example, the contents displayed by the area 510 may refer to the interface 400 shown in FIG. 4. Referring to fig. 5, an avatar 521 of a virtual object, additional state information 522, and corresponding prompt information 523 may be displayed in a first region 520 of an interface 500.
Alternatively or additionally, in some embodiments of the present disclosure, new prompt information may also be updated and displayed in real time based on the state of the game.
Through the scheme of the embodiments of the present disclosure, a skill can be released based on the user's trigger operation on the target skill identification and the target object identification, so that the way the user releases skills can be simplified, game efficiency can be improved, and user experience is improved. In some embodiments, prompt information may also be displayed to give the user a reference for releasing skills and for performing subsequent operations.
It should be understood that the display interfaces described above in connection with fig. 2-5 are merely illustrative and should not be construed as limiting embodiments of the present disclosure. For example, in actual scenarios, more or fewer content and/or controls may be presented on the display interface, and portions of the illustrated content and/or controls may be removed or replaced, such modifications and variations also being within the scope of embodiments of the present disclosure.
It should be understood that in the embodiments of the present disclosure, "first", "second", "third", etc. are only for indicating that a plurality of objects may be different, but at the same time do not exclude the same between two objects, and should not be construed as any limitation to the embodiments of the present disclosure.
It should also be understood that the manner, the case, the category, and the division of the embodiments of the present disclosure are for convenience of description only and should not be construed as a particular limitation, and features of various manners, categories, cases, and embodiments may be combined with each other in a case where they are logically consistent.
It should also be understood that the above-described contents are only for helping those skilled in the art to better understand the embodiments of the present disclosure, and are not intended to limit the scope of the embodiments of the present disclosure. Various modifications or changes or combinations may occur to those skilled in the art in light of the foregoing description. Such modifications, variations, or combinations are also within the scope of the embodiments of the present disclosure.
It should also be understood that the above description focuses on highlighting the differences before the various embodiments, and that the same or similar parts may be referred to or referred to each other, and for brevity, are not described in detail herein.
Fig. 6 illustrates a schematic block diagram of an example apparatus 600, in accordance with some embodiments of the present disclosure. The apparatus 600 may be implemented by software, hardware or a combination of both. In some embodiments, apparatus 600 may be implemented as a terminal device. In the embodiment of the present disclosure, the terminal device may be a desktop computer, a tablet computer, a smart phone, and the like, which is not limited in the present disclosure.
As shown in FIG. 6, the apparatus 600 includes a first display module 610, a second display module 620, an obtaining operation module 630, and a release skill module 640. The first display module 610 is configured to display an object identification and additional state information of at least one virtual object in a first region of a display interface, the additional state information including gain information and/or reduction information. The second display module 620 is configured to display at least one skill identification in a second area of the display interface. The obtaining operation module 630 is configured to obtain a trigger operation of the user for a target skill identification of the at least one skill identification and a target object identification of a target virtual object of the at least one virtual object. The release skill module 640 is configured to release the target skill to the target virtual object in response to the trigger operation.
In some embodiments of the present disclosure, the apparatus 600 may further include a cancellation display module (not shown in the figures) configured to cancel displaying the first area after releasing the target skill.
In some embodiments of the present disclosure, the first display module 610 is further configured to determine a target virtual object based on additional state information of respective ones of the at least one virtual object in response to the target skill identification being determined in the second region; and indicating the target virtual object in the first area.
Illustratively, the target skill identification being determined includes: the target skill identification is selected by the user through a trigger operation, or the target skill identification is in the to-be-released state.
Optionally, the first display module 610 is further configured to modify the indicated target virtual object in response to a user triggering operation on an object identification of another virtual object of the at least one virtual object.
Illustratively, the triggering operation includes: a double-click operation by the user on the target skill identification, a drag operation by the user from the display position of the target skill identification to the display position of the target object identification, or sequential click operations by the user on the target skill identification and the target object identification of the target virtual object.
In some embodiments of the present disclosure, the first display module 610 is further configured to display prompt information associated with the additional status information in the first area, wherein the prompt information is for prompting skill information related to the additional status information.
In some embodiments of the present disclosure, the first display module 610 is configured to display the object identification and the additional state information of the at least one virtual object in the first area of the display interface in response to an information viewing instruction of the user.
Illustratively, the apparatus 600 may further include an updating module configured to update in real-time additional state information of the at least one virtual object displayed in the first area based on the state of the game.
Optionally, the first area is displayed in a floating window manner. Optionally, the apparatus 600 may further include an adjusting module (not shown in the figure) configured to adjust the transparency of the first area in response to a transparency adjustment instruction of the user.
Optionally, in some examples, a ratio of an area of the first region to an area of the display interface is less than a preset value.
The apparatus 600 of fig. 6 can be used to implement the process 100 described above in conjunction with fig. 1, and for the sake of brevity, will not be described again here.
The division of the modules or units in the embodiments of the present disclosure is schematic, and is only a logical function division, and in actual implementation, there may be another division manner, and in addition, each functional unit in the embodiments of the present disclosure may be integrated into one unit, may also exist alone physically, or may be integrated into one unit by two or more units. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Fig. 7 illustrates a block diagram of an example device 700 that may be used to implement embodiments of the present disclosure. It should be understood that the device 700 illustrated in fig. 7 is merely exemplary and should not constitute any limitation as to the functionality or scope of the implementations described herein. For example, the process 100 described above may be performed using the apparatus 700.
As shown in fig. 7, device 700 is in the form of a general purpose computing device. Components of computing device 700 may include, but are not limited to, one or more processors or processing units 710, memory 720, storage 730, one or more communication units 740, one or more input devices 750, and one or more output devices 760. The processing unit 710 may be a real or virtual processor and may be capable of performing various processes according to programs stored in the memory 720. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capabilities of computing device 700.
The computing device 700 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 720 may include a computer program product 725 having one or more program modules configured to perform the various methods or acts of the various implementations of the disclosure.
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions is provided, wherein the computer-executable instructions are executed by a processor to implement the above-described method. According to an exemplary implementation of the present disclosure, there is also provided a computer program product, tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions, which are executed by a processor to implement the method described above. According to an exemplary implementation of the present disclosure, a computer program product is provided, on which a computer program is stored which, when being executed by a processor, carries out the above-described method.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure, and the above description is illustrative, not exhaustive, and not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen in order to best explain the principles of various implementations, the practical application, or improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand various implementations disclosed herein.
Claims (16)
1. A method for controlling a virtual object, comprising:
displaying an object identifier and additional state information of at least one virtual object in a first area of a display interface, wherein the additional state information comprises gain information and/or reduction information;
displaying at least one skill identification in a second area of the display interface;
acquiring a trigger operation of a user for a target skill identification in the at least one skill identification and a target object identification of a target virtual object in the at least one virtual object;
releasing the target skill to the target virtual object in response to the triggering operation.
2. The method of claim 1, further comprising:
canceling the display of the first area after releasing the target skill.
3. The method of claim 1 or 2, wherein acquiring the trigger operation further comprises:
in response to the target skill identification in the second region being determined, determining the target virtual object based on additional state information for each of the at least one virtual object; and
indicating the target virtual object in the first area.
4. The method of claim 3, wherein the target skill identification being determined comprises:
the target skill identification is selected by the user through a trigger operation, or,
the target skill identification is in a to-be-released state.
5. The method of claim 3 or 4, further comprising:
modifying the indicated target virtual object in response to a trigger operation by the user on the object identification of another virtual object of the at least one virtual object.
6. The method of any of claims 1-5, the triggering operation comprising:
a double-click operation of the target skill identification by the user,
a drag operation by the user from the display location of the target skill identification to the display location of the target object identification, or,
a sequential click operation by the user on the target skill identification and on the target object identification of the target virtual object.
7. The method of any of claims 1 to 6, further comprising:
displaying prompt information associated with the additional state information in the first area, wherein the prompt information is used for prompting skill information related to the additional state information.
8. The method of any of claims 1-7, the displaying the object identification and additional state information of the at least one virtual object comprising:
in response to a user's information viewing instruction, displaying an object identification and additional state information of the at least one virtual object in the first area of the display interface.
9. The method of any of claims 1 to 8, further comprising:
updating the additional state information of the at least one virtual object displayed in the first area in real-time based on a state of a game.
10. The method of any of claims 1-9, wherein the first region is displayed in a floating window manner.
11. The method of any of claims 1 to 10, further comprising:
adjusting the transparency of the first area in response to a transparency adjustment instruction of the user.
12. The method of any of claims 1-11, wherein a ratio of an area of the first region to an area of the display interface is less than a preset value.
13. An electronic device, comprising:
at least one processing unit;
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit causing the electronic device to perform acts comprising:
displaying an object identifier and additional state information of at least one virtual object in a first area of a display interface, wherein the additional state information comprises gain information and/or reduction information;
displaying at least one skill identification in a second area of the display interface;
acquiring a trigger operation of a user for a target skill identification in the at least one skill identification and a target object identification of a target virtual object in the at least one virtual object; and
releasing the target skill to the target virtual object in response to the triggering operation.
14. An apparatus for controlling a virtual object, comprising:
a first display module configured to display an object identifier and additional state information of at least one virtual object in a first area of a display interface, the additional state information including gain information and/or reduction information;
a second display module configured to display at least one skill identification in a second area of the display interface;
an obtaining operation module configured to obtain a trigger operation of a user for a target skill identification in the at least one skill identification and a target object identification of a target virtual object in the at least one virtual object; and
a release skill module configured to release the target skill to the target virtual object in response to the triggering operation.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 12.
16. A computer program product having a computer program stored thereon, which when executed by a processor, implements the method according to any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210861715.0A CN115120974B (en) | 2022-07-20 | 2022-07-20 | Method for controlling virtual object and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210861715.0A CN115120974B (en) | 2022-07-20 | 2022-07-20 | Method for controlling virtual object and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115120974A true CN115120974A (en) | 2022-09-30 |
CN115120974B CN115120974B (en) | 2024-08-13 |
Family
ID=83384746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210861715.0A Active CN115120974B (en) | 2022-07-20 | 2022-07-20 | Method for controlling virtual object and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115120974B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024104008A1 (en) * | 2022-11-17 | 2024-05-23 | 腾讯科技(深圳)有限公司 | Equipment suggestion method and apparatus for virtual object, equipment adjustment method and apparatus for virtual object, and medium and product |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106237615A (en) * | 2016-07-22 | 2016-12-21 | 广州云火信息科技有限公司 | Many unity elements skill operation mode |
CN108287657A (en) * | 2018-01-25 | 2018-07-17 | 网易(杭州)网络有限公司 | Technical ability applying method and device, storage medium, electronic equipment |
JP2021108792A (en) * | 2020-01-07 | 2021-08-02 | 株式会社カプコン | Game program, game device, and game system |
CN113827970A (en) * | 2021-09-28 | 2021-12-24 | 网易(杭州)网络有限公司 | Information display method and device, computer readable storage medium and electronic equipment |
CN113941149A (en) * | 2021-09-26 | 2022-01-18 | 网易(杭州)网络有限公司 | Game behavior data processing method, nonvolatile storage medium and electronic device |
CN114367111A (en) * | 2022-01-06 | 2022-04-19 | 腾讯科技(深圳)有限公司 | Game skill control method, related device, equipment and storage medium |
WO2022105552A1 (en) * | 2020-11-20 | 2022-05-27 | 腾讯科技(深圳)有限公司 | Information processing method and apparatus in virtual scene, and device, medium and program product |
- 2022-07-20: CN application CN202210861715.0A, patent CN115120974B (en), status: Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106237615A (en) * | 2016-07-22 | 2016-12-21 | 广州云火信息科技有限公司 | Many unity elements skill operation mode |
CN108287657A (en) * | 2018-01-25 | 2018-07-17 | 网易(杭州)网络有限公司 | Technical ability applying method and device, storage medium, electronic equipment |
JP2021108792A (en) * | 2020-01-07 | 2021-08-02 | 株式会社カプコン | Game program, game device, and game system |
WO2022105552A1 (en) * | 2020-11-20 | 2022-05-27 | 腾讯科技(深圳)有限公司 | Information processing method and apparatus in virtual scene, and device, medium and program product |
CN113941149A (en) * | 2021-09-26 | 2022-01-18 | 网易(杭州)网络有限公司 | Game behavior data processing method, nonvolatile storage medium and electronic device |
CN113827970A (en) * | 2021-09-28 | 2021-12-24 | 网易(杭州)网络有限公司 | Information display method and device, computer readable storage medium and electronic equipment |
CN114367111A (en) * | 2022-01-06 | 2022-04-19 | 腾讯科技(深圳)有限公司 | Game skill control method, related device, equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
不懂得珍惜吧11: "How to release a skill onto a designated enemy hero in Honor of Kings (王者荣耀中如何让技能释放到指定敌方英雄身上)", HTTPS://JINGYAN.BAIDU.COM/ARTICLE/466506585829FBB548E5F823.HTML, 13 September 2019 (2019-09-13), pages 1-3 *
Tencent Games (腾讯游戏): "[Designer Reveal] Last-hit key attack: precisely selecting the attack target (【策划爆料】补刀键攻击-精确选取攻击对象)", HTTPS://PVP.QQ.COM/WEBPLAT/INFO/NEWS_VERSION3/15592/18024/18025/18028/M13207/201605/459663.SHTML, 3 May 2016 (2016-05-03), pages 1-3 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024104008A1 (en) * | 2022-11-17 | 2024-05-23 | 腾讯科技(深圳)有限公司 | Equipment suggestion method and apparatus for virtual object, equipment adjustment method and apparatus for virtual object, and medium and product |
Also Published As
Publication number | Publication date |
---|---|
CN115120974B (en) | 2024-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113101660B (en) | Game display control method and device | |
CN111298449B (en) | Control method and device in game, computer equipment and storage medium | |
CN111672116B (en) | Method, device, terminal and storage medium for controlling virtual object release technology | |
CN108114467A (en) | Switching method, device, processor and the terminal of reality-virtualizing game scene | |
CN107596691A (en) | Play strategic exchange method and device | |
JP7416944B2 (en) | Interaction methods, equipment and electronic devices for tactical planning in games | |
US20220266139A1 (en) | Information processing method and apparatus in virtual scene, device, medium, and program product | |
JP7571989B2 (en) | Method and device for displaying a chessboard screen, terminal device, and computer program | |
US20230078340A1 (en) | Virtual object control method and apparatus, electronic device, storage medium, and computer program product | |
WO2023138192A1 (en) | Method for controlling virtual object to pick up virtual prop, and terminal and storage medium | |
US20230330534A1 (en) | Method and apparatus for controlling opening operations in virtual scene | |
CN111481930B (en) | Virtual object control method and device, computer equipment and storage medium | |
CN113476825B (en) | Role control method, role control device, equipment and medium in game | |
CN115120974A (en) | Method and electronic device for controlling virtual object | |
CN111803960B (en) | Method and device for starting preset flow | |
CN109091864B (en) | Information processing method and device, mobile terminal and storage medium | |
WO2024011785A1 (en) | Information processing method and apparatus, and electronic device and readable storage medium | |
WO2023024078A1 (en) | Virtual object control method and apparatus, electronic device, and storage medium | |
CN115337638A (en) | Information control display control method and device in game and electronic equipment | |
CN113694521A (en) | Injury processing method, apparatus, electronic device and storage medium | |
US20240342606A1 (en) | Method and apparatus for requesting and discarding virtual consumable, terminal, and storage medium | |
CN113694520B (en) | Prop effect processing method and device, electronic equipment and storage medium | |
US20240316458A1 (en) | Information display | |
WO2024067009A1 (en) | Virtual character display method and apparatus, and storage medium and electronic device | |
CN114146413B (en) | Virtual object control method, device, equipment, storage medium and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |