CN113476823B - Virtual object control method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN113476823B
CN113476823B (application CN202110790206.9A)
Authority
CN
China
Prior art keywords
control
action
virtual object
combination
combined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110790206.9A
Other languages
Chinese (zh)
Other versions
CN113476823A (en)
Inventor
李雪妹
王子奕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110790206.9A priority Critical patent/CN113476823B/en
Publication of CN113476823A publication Critical patent/CN113476823A/en
Application granted granted Critical
Publication of CN113476823B publication Critical patent/CN113476823B/en

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads the surface being also a display device, e.g. touch screens
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/837Shooting of targets
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1068Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted to detect the point of contact of the player on a surface, e.g. floor mat, touch pad
    • A63F2300/1075Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted to detect the point of contact of the player on a surface, e.g. floor mat, touch pad using a touch screen
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308Details of the user interface
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076Shooting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to the technical field of man-machine interaction, and in particular to a virtual object control method and device, a storage medium, and an electronic device. The virtual object control method comprises the following steps: displaying the virtual object in an initial action on a graphical user interface; in response to a first touch operation acting on a first control, controlling the virtual object to switch from the initial action to a first action; receiving a second touch operation acting on the first control and, when the second touch operation meets a touch threshold, providing a combination control corresponding to a secondary combined action at a third position, where the combination control comprises a first sub-control whose secondary combined action is the superposition of the first action triggered by the first control and the second action triggered by the second control; and in response to a third touch operation acting on the combination control, controlling the virtual object to execute the secondary combined action corresponding to the combination control. The virtual object control method provided by the invention solves the problem of controlling combined actions of a virtual character.

Description

Virtual object control method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of man-machine interaction, in particular to a virtual object control method, a virtual object control device, a storage medium and electronic equipment.
Background
With the development of man-machine interaction technology, games for various touch devices emerge endlessly. In shooting, action, or other similar games, it is often necessary to combine different posture behaviors when controlling the virtual character, such as turning around while squatting.
In the prior art, touch controls for different actions are located in different touch areas of the interactive interface, and combined action control requires at least two fingers to operate the corresponding controls in those different areas. Moreover, most players currently use two-finger operation, which means that other key inputs must be abandoned to control an action combination; the cost is extremely high, so combined virtual-character actions are rarely performed in games.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure aims to provide a virtual object control method and device, a storage medium, and an electronic device, so as to solve the problem of controlling combined actions of a virtual character.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to an aspect of the disclosed embodiments, there is provided a virtual object control method in which a first control corresponding to a first action is provided at a first position and a second control corresponding to a second action is provided at a second position through a graphical user interface, the method including: displaying the virtual object in an initial action on the graphical user interface; in response to a first touch operation acting on the first control, controlling the virtual object to switch from the initial action to the first action; receiving a second touch operation acting on the first control and, when the second touch operation meets a touch threshold, providing a combination control corresponding to a secondary combined action at a third position, where the combination control comprises a first sub-control whose secondary combined action is the superposition of the first action triggered by the first control and the second action triggered by the second control; and in response to a third touch operation acting on the combination control, controlling the virtual object to execute the secondary combined action corresponding to the combination control.
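The claimed flow can be sketched as a minimal event handler. This is an illustrative sketch only: the class, the action labels, and the `LONG_PRESS_THRESHOLD` value are assumptions for illustration, not part of the claim.

```python
import time

LONG_PRESS_THRESHOLD = 0.5  # seconds; assumed value, the claim only requires "a touch threshold"

class VirtualObjectController:
    """Minimal sketch of the claimed control flow."""

    def __init__(self):
        self.action = "initial"      # virtual object starts in the initial action
        self.combo_visible = False   # combination control not yet provided
        self._press_start = None

    def on_first_control_down(self):
        # First touch operation: switch from the initial action to the first action.
        self.action = "first"
        self._press_start = time.monotonic()

    def on_first_control_hold(self, now=None):
        # Second touch operation: once the hold meets the touch threshold,
        # provide the combination control at the third position.
        now = time.monotonic() if now is None else now
        if self._press_start is not None and now - self._press_start >= LONG_PRESS_THRESHOLD:
            self.combo_visible = True

    def on_combo_control_selected(self):
        # Third touch operation: execute the secondary combined action,
        # i.e. the superposition of the first and second actions.
        if self.combo_visible:
            self.action = "first+second"
```

A usage pass would call `on_first_control_down`, poll `on_first_control_hold` while the finger stays down, and finally `on_combo_control_selected` once the finger reaches the combination control.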
According to some embodiments of the disclosure, based on the foregoing solution, the combination control further includes a second sub-control, where the secondary combined action corresponding to the second sub-control is the initial action.
According to some embodiments of the disclosure, based on the foregoing solution, the combination control further includes a third sub-control, where the secondary combined action corresponding to the third sub-control is the superposition of the initial action and the second action triggered separately by the second control.
According to some embodiments of the present disclosure, based on the foregoing solution, after providing the combination control corresponding to the secondary combined action at the third position, the method further includes: determining a mutual-exclusion control corresponding to the combination control; and masking the mutual-exclusion control in the graphical user interface.
According to some embodiments of the disclosure, based on the foregoing solution, determining the mutual-exclusion control corresponding to the combination control includes: determining an associated action of the secondary combined action based on the secondary combined action corresponding to the combination control; and configuring the control corresponding to the associated action as the mutual-exclusion control.
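One way to implement the mutual-exclusion determination above can be sketched as follows; the `ASSOCIATED_ACTIONS` table and its concrete action pairs are hypothetical examples, since the patent does not fix them:

```python
# Hypothetical mapping from a secondary combined action to its associated
# (mutually exclusive) actions; the concrete pairs are illustrative only.
ASSOCIATED_ACTIONS = {
    "squat+left_probe": {"prone", "jump"},  # poses that conflict with squatting
}

def mask_mutex_controls(combined_action, controls):
    """Return the set of control names to mask (hide/disable) in the GUI.

    `controls` maps a control name to the single action it triggers.
    """
    associated = ASSOCIATED_ACTIONS.get(combined_action, set())
    return {name for name, action in controls.items() if action in associated}
```

With this shape, masking reduces to one dictionary lookup per displayed control, which keeps the check cheap enough to run every time a combination control appears.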
According to some embodiments of the disclosure, based on the foregoing solution, when the third touch operation is a sliding operation, controlling the virtual object to execute the secondary combined action corresponding to the combination control includes: acquiring a sliding touch point of the sliding operation; and when the sliding touch point falls on the combination control and meets a selection condition, controlling the virtual object to execute the secondary combined action corresponding to the combination control.
According to some embodiments of the present disclosure, based on the foregoing solution, when the third touch operation is a sliding operation, the method further includes: when the end of the third touch operation is detected, controlling the virtual object to execute the initial action.
According to some embodiments of the present disclosure, based on the foregoing solution, when the third touch operation is a click operation, the method further includes: when the end of the third touch operation is detected, receiving a third touch operation directed at another combination control.
According to some embodiments of the present disclosure, based on the foregoing solution, a third control corresponding to a third action is provided at a fourth position through the graphical user interface, and the method further includes: when the third touch operation is detected to meet the touch threshold, providing a combination control corresponding to a tertiary combined action at a fifth position, where the tertiary combined action is the superposition of the first action triggered by the first control, the second action triggered by the second control, and the third action triggered by the third control; and in response to a fourth touch operation on the combination control corresponding to the tertiary combined action, controlling the virtual object to execute the tertiary combined action.
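The progression from single actions to two- and three-action superpositions can be sketched as follows; the helper names and the "+"-joined labels are purely illustrative assumptions:

```python
def combo_label(*actions):
    # Superposition of individually-triggered actions, kept in a stable order.
    return "+".join(actions)

def expand_combos(held_actions, available_actions):
    """Given the actions already held (e.g. ['squat'] after the first control)
    and the other single-action controls, enumerate the next-level combined
    actions that the new combination control could offer (illustrative)."""
    return [combo_label(*held_actions, a)
            for a in available_actions
            if a not in held_actions]
```

Applied once this yields the secondary combined actions; applied again to a held secondary combination it yields the tertiary ones, mirroring the recursive structure of this embodiment.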
According to a second aspect of the embodiments of the present disclosure, there is provided a virtual object control apparatus in which a first control corresponding to a first action is provided at a first position and a second control corresponding to a second action is provided at a second position through a graphical user interface, the apparatus including: an initial module for displaying the virtual object in an initial action on the graphical user interface; a switching module for controlling, in response to a first touch operation acting on the first control, the virtual object to switch from the initial action to the first action; a providing module for receiving a second touch operation acting on the first control and, when the second touch operation meets a touch threshold, providing a combination control corresponding to a secondary combined action at a third position, where the combination control comprises a first sub-control whose secondary combined action is the superposition of the first action triggered by the first control and the second action triggered by the second control; and a response module for controlling, in response to a third touch operation acting on the combination control, the virtual object to execute the secondary combined action corresponding to the combination control.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a virtual object control method as in the above embodiments.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic device, including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the virtual object control method as in the above embodiments.
Exemplary embodiments of the present disclosure may have some or all of the following advantages:
in some embodiments of the present disclosure, in response to a first touch operation acting on a first control, the virtual object is controlled to switch from an initial action to a first action; then, when a second touch operation on the first control meets a touch threshold, a combination control corresponding to a secondary combined action is provided at a third position, where the combination control comprises a first sub-control corresponding to the superposition of the first action triggered by the first control and the second action triggered by the second control; finally, in response to a third touch operation acting on the combination control, the virtual object is controlled to execute the secondary combined action corresponding to the combination control. With this virtual object control method, on the one hand, an additional function is added to an existing action control: when the touch operation meets the touch threshold, a combination control corresponding to a secondary combined action is provided at the third position and can be selected to execute that action, so the user can control the secondary combined action of the virtual object by operating only the first control, improving operation efficiency; on the other hand, different sub-controls correspond to different secondary combined actions, such as the combined action formed by superposing the first action and the second action, making the virtual object's action types richer.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 schematically illustrates a graphical user interface of the prior art;
FIG. 2 schematically illustrates a flow diagram of a virtual object control method in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a graphical user interface diagram in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates another graphical user interface diagram in an exemplary embodiment of the present disclosure;
fig. 5 schematically illustrates a composition diagram of a virtual object control apparatus in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the present disclosure;
Fig. 7 schematically illustrates a structural diagram of a computer system of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
With the development of man-machine interaction technology, games for various touch devices emerge endlessly. In shooting, action, or other similar games, it is often necessary to combine different posture behaviors when controlling the virtual character.
In the prior art, touch controls for different actions are located in different touch areas of the interactive interface, and combined action control requires at least two fingers to operate the corresponding controls in those different areas.
Fig. 1 schematically shows a graphical interactive interface of the prior art. Referring to fig. 1, the interface includes a game scene and a virtual object located in the game scene, where the virtual object is in an initial action. The "left probe" and "right probe" controls are on the left side of the interface, the "squat", "prone", and "jump" controls are on the right side, and the interface also provides controls such as "shoot" and "aim". Performing a probe action while controlling the virtual character to squat requires at least two fingers, for example the right hand holding "squat" while the left hand holds "left probe".
However, most players use two-finger operation, meaning that to achieve combined control of these two actions they must abandon other core actions such as shooting and aiming; the cost is extremely high. Even for the smaller proportion of high-end players, devoting two fingers to this is too costly, so the operation rarely occurs in games.
Therefore, the present disclosure provides a virtual object control method that generates a combination control within a preset range of a first control, so that different action combinations of the virtual character can be controlled with a single finger.
Implementation details of the technical solutions of the embodiments of the present disclosure are set forth in detail below.
Fig. 2 schematically illustrates a flowchart of a virtual object control method in an exemplary embodiment of the present disclosure. As shown in fig. 2, the virtual object control method includes steps S1 to S4:
Step S1, displaying a virtual object in an initial action on the graphical user interface;
Step S2, in response to a first touch operation acting on the first control, controlling the virtual object to switch from the initial action to the first action;
Step S3, receiving a second touch operation acting on the first control and, when the second touch operation meets a touch threshold, providing a combination control corresponding to a secondary combined action at a third position, where the combination control comprises a first sub-control whose secondary combined action is the superposition of the first action triggered by the first control and the second action triggered by the second control;
Step S4, in response to a third touch operation acting on the combination control, controlling the virtual object to execute the secondary combined action corresponding to the combination control.
In some embodiments of the present disclosure, in response to a first touch operation acting on a first control, the virtual object is controlled to switch from an initial action to a first action; then, when a second touch operation on the first control meets a touch threshold, a combination control corresponding to a secondary combined action is provided at a third position, where the combination control comprises a first sub-control corresponding to the superposition of the first action triggered by the first control and the second action triggered by the second control; finally, in response to a third touch operation acting on the combination control, the virtual object is controlled to execute the secondary combined action corresponding to the combination control. With this virtual object control method, on the one hand, an additional function is added to an existing action control: when the touch operation meets the touch threshold, a combination control corresponding to a secondary combined action is provided at the third position and can be selected to execute that action, so the user can control the secondary combined action of the virtual object by operating only the first control, improving operation efficiency; on the other hand, different sub-controls correspond to different secondary combined actions, such as the combined action formed by superposing the first action and the second action, making the virtual object's action types richer.
Hereinafter, each step of the virtual object control method in the present exemplary embodiment will be described in more detail with reference to the accompanying drawings and examples.
And step S1, displaying the virtual object in the initial action on the graphical user interface.
In one embodiment of the present disclosure, referring to fig. 1, the interface includes a game scene and a virtual object located in the game scene; the player controls the virtual object to move in the game scene, and the virtual object starts in the initial action.
The virtual scene is the scene displayed when an application program runs on a terminal or server. Optionally, the virtual scene is a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment; the virtual environment can be sky, land, ocean, and so on, where the land may include environmental elements such as deserts and cities.
A virtual object is a dynamic object that can be controlled in the virtual scene. Optionally, the dynamic object may be a virtual character, a virtual animal, a cartoon character, or the like. The virtual object may be a character controlled by a player through an input device, an artificial intelligence (AI) trained for virtual-environment combat, or a non-player character (NPC) placed in the virtual-environment combat. Optionally, the virtual object is a virtual character competing in the virtual scene.
In the virtual scene, the virtual object is initially in the initial action, such as standing while holding a weapon. The initial action can be customized as required.
And step S2, responding to a first touch operation acted on the first control, and controlling the virtual object to switch from the initial action to the first action.
In one embodiment of the present disclosure, the first control is an action control corresponding to a first action. When a first touch operation triggering the first control is detected, a posture adjustment instruction is triggered to control the virtual object to switch from the initial action in its current state to the first action corresponding to the first control.
Referring to fig. 1, the first control may be "squat", "prone", or "jump" on the right side of the interface; of course, the first control may also be a control such as "left probe" or "right probe" on the left side of the interface. Triggering any of these controls individually performs the corresponding action, and the disclosure is not particularly limited here.
Step S3, receiving a second touch operation acting on the first control and, when the second touch operation meets a touch threshold, providing a combination control corresponding to a secondary combined action at a third position; the combination control comprises a first sub-control, and the secondary combined action corresponding to the first sub-control is the superposition of the first action triggered by the first control and the second action triggered by the second control.
In one embodiment of the present disclosure, the conditions under which a combined control appears need to be set. Whether the combined control appears can be judged according to the duration for which the control is pressed, or according to a pressure value. For example, the first touch operation may be the user pressing the first control; that is, after the user presses "squat", the virtual object performs the "squat" action. If the user does not release the finger after pressing "squat" and the hold time exceeds a preset value, the touch threshold is judged to be met, and the combined control corresponding to the secondary combined action can then be provided at the third position.
Of course, other touch thresholds may be set: for example, after pressing "squat", the user presses it with greater force, or slides along a preset track. The touch threshold is used to judge whether the condition for providing the combined control is met, so the present disclosure does not specifically limit it.
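The threshold check described above can be sketched as follows. The concrete values (a 0.5 s long press and a 0.6 normalized pressure) are illustrative assumptions only, since the disclosure deliberately leaves the threshold open.

```python
# Illustrative sketch of the touch-threshold check; the concrete values
# (0.5 s long press, 0.6 normalized pressure) are assumptions, not values
# fixed by the disclosure.
LONG_PRESS_SECONDS = 0.5
PRESSURE_THRESHOLD = 0.6  # normalized press force in [0, 1]

def meets_touch_threshold(press_duration, pressure=0.0):
    """True when the second touch operation should reveal the combined
    controls: either the hold exceeds the preset time or the press is
    firm enough."""
    return press_duration >= LONG_PRESS_SECONDS or pressure >= PRESSURE_THRESHOLD
```

Either condition alone suffices, mirroring the text's "duration or pressure value" alternatives.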
In one embodiment of the present disclosure, the combined control needs to be provided at a third position. To improve the user's one-handed operation efficiency, the user should be able to control the combined action with one finger, so the third position may lie within a preset range of the first control; the third position can therefore be determined from the first position of the first control and the single-finger operation distance.
The first position is the position coordinate of the first control on the graphical user interface, and the single-finger operation distance is the farthest distance a single finger can typically reach; it may be set to a preset value based on historical data, or customized by the user. A preset range can thus be defined with the first position of the first control as the center and the single-finger operation distance as the radius, and the third position of the combined control must fall within this preset range.
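The geometry just described (first position as center, single-finger operation distance as radius) can be sketched as below; the placement angle and the 0.8 reach fraction are hypothetical parameters introduced only for illustration.

```python
import math

def within_reach(first_pos, candidate_pos, single_finger_distance):
    """A candidate third position is valid when it lies inside the circle
    centered on the first control with the single-finger operation
    distance as radius."""
    dx = candidate_pos[0] - first_pos[0]
    dy = candidate_pos[1] - first_pos[1]
    return math.hypot(dx, dy) <= single_finger_distance

def place_combined_control(first_pos, angle_deg, single_finger_distance, fraction=0.8):
    """Place a combined control at a given angle from the first control,
    a fraction of the maximum single-finger reach away (fraction is an
    assumed tuning parameter)."""
    r = single_finger_distance * fraction
    rad = math.radians(angle_deg)
    return (first_pos[0] + r * math.cos(rad), first_pos[1] + r * math.sin(rad))
```

Placing each combined control at a distinct angle inside the circle yields the hot-zone ring seen around the first control.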
It should be noted that the secondary combined action is determined based on primary actions. A primary action is action content already existing in the current game scene, that is, an action that an action control can execute when triggered alone, such as "left probe" or "jump", as well as the initial action, which requires no control to trigger.
One or more combined controls may be generated, depending on the number of secondary combined actions determined; each secondary combined action corresponds to one combined control.
For example, the combined control includes a first sub-control, and the secondary combined action corresponding to the first sub-control is a superposition of the first action executed by triggering the first control and the second action executed by triggering the second control. The first action corresponding to the first control and the second action corresponding to the second control are primary actions, that is, actions obtained by triggering an action control alone.
The combined control further comprises a second sub-control, and the secondary combined action corresponding to the second sub-control is the initial action. The initial action is a primary action that already exists; by setting a combined control corresponding to the initial action, the initial action can be combined with other actions to obtain further secondary combined actions.
The combined control further comprises a third sub-control, and the secondary combined action corresponding to the third sub-control is a superposition of the initial action and the second action executed by triggering the second control alone. Both the initial action and the second action are primary actions.
Fig. 3 schematically illustrates a graphical user interface diagram in an exemplary embodiment of the present disclosure. Referring to fig. 3, after the user triggers the first control "squat" to generate the combined controls, a hot zone is formed centered on the "squat" button, and the combined controls "squat and left probe" and "squat and right probe" appear on its left and right sides. At the same time, combined controls for "stand", "stand and left probe" and "stand and right probe" are also generated.
"Squat and left probe" and "squat and right probe" correspond to secondary combined actions of the first sub-control, that is, combined controls obtained by superposing the "squat" control with the "left probe" and "right probe" controls respectively. These probe actions are primary actions corresponding to existing action controls, and the generated combined controls can control the virtual object to execute the secondary combined action obtained by combining and superposing the two actions.
"Stand" corresponds to the secondary combined action of the second sub-control. "Stand and left probe" and "stand and right probe" correspond to secondary combined actions of the third sub-control, obtained by superposing the initial action with the "left probe" and "right probe" controls respectively.
Fig. 4 schematically illustrates another graphical user interface diagram in an exemplary embodiment of the present disclosure. Referring to fig. 4, a "prone and head up" combined control is provided on the upper side, "prone and left probe" on the left side, and "prone and right probe" on the right side.
The three generated combined controls all belong to the first sub-control type; their action data correspond to combined controls formed by superposing the first control "prone" with the second controls "left probe", "head up" and "right probe".
Of course, in other embodiments of the disclosure, the combined control may further include a fourth sub-control and a fifth sub-control, where the secondary combined action corresponding to the fourth sub-control is an action mutually exclusive with the first action, and the secondary combined action corresponding to the fifth sub-control is a combined action in which that mutually exclusive action is superposed with another action. For example, referring to fig. 3, controls such as "prone" and "jump" may be placed in the hot zone as required, and further combined controls such as "prone and left probe" and "prone and right probe" may be generated at the same time.
It should be noted that the combined controls shown in fig. 3 and fig. 4 are only exemplary; secondary combined actions may be obtained through different combinations with other existing primary actions, and the embodiments of the present disclosure are not limited in this respect.
Based on the above method, combined controls corresponding to secondary combined actions are provided on the basis of existing action controls. On the one hand, a secondary combined action is obtained by combining existing actions, which enriches the available actions while reducing the amount of action development. On the other hand, since the combined controls are provided at the third position, the user obtains them simply by operating the first control, and can execute different combined actions through the combined controls without separately operating the controls of two or more primary actions; this further improves the user's operation efficiency and experience.
Meanwhile, based on the idea of the method provided by the present disclosure, more than two actions can be combined. For example, the graphical user interface provides not only a first control corresponding to a first action and a second control corresponding to a second action, but also a third control corresponding to a third action, and the method further includes: when a third touch operation is detected to meet the touch threshold, providing a combined control corresponding to a three-level combined action at a fifth position, the three-level combined action being a superposition of the first action executed by triggering the first control, the second action executed by triggering the second control, and the third action executed by triggering the third control; and in response to a fourth touch operation on the combined control corresponding to the three-level combined action, controlling the virtual object to execute the three-level combined action.
That is, a primary action is superposed on an existing secondary combined action, or three primary actions are combined and superposed. For example, superposing the first action, the second action and the third action yields a three-level combined action, so that the combined actions the user can control with one hand are richer and one-handed operation is more efficient.
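A three-level combined action is simply one more superposition. In this sketch, actions are again represented as strings joined by " and " (an illustrative encoding), and "aim" is a hypothetical third action not named in the disclosure.

```python
def overlay(*primary_actions):
    """Superpose any number of primary actions into one combined action:
    two components give a secondary combined action, three give a
    three-level combined action."""
    assert len(primary_actions) >= 2, "a combined action needs at least two components"
    return " and ".join(primary_actions)
```

The same function covers both the secondary and the three-level case, which matches the text's point that the three-level action is obtained either from a secondary action plus one primary action or from three primary actions at once.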
In one embodiment of the present disclosure, after the combined control corresponding to the secondary combined action is provided at the third position, the method further includes: determining a mutual exclusion control corresponding to the combined control; and masking the mutual exclusion control in the graphical user interface.
Specifically, because the created combined control is used to control the action of the virtual object, to ensure the logical accuracy of that control, the mutual exclusion controls related to the combined control need to be masked, preventing a user's touch operation on a mutual exclusion control from conflicting with the touch operation on the combined control.
Referring to fig. 1 and fig. 3, fig. 1 includes the "left probe" and "right probe" controls. After the combined controls are provided in fig. 3, the probe actions have already been combined into the combined controls, so the "left probe" and "right probe" controls need to be masked and their corresponding functions disabled. For example, after the user selects "squat and left probe" among the combined controls, a further tap on "right probe" has no effect, which ensures the accuracy of the control logic.
Referring to fig. 4, after the combined controls are provided, the head-up and probe actions are controlled through the combined controls, so the "head up", "left probe" and "right probe" action controls need to be masked.
In one embodiment of the disclosure, determining the mutual exclusion control corresponding to the combined control includes: determining, based on the secondary combined action corresponding to the combined control, the associated actions corresponding to that secondary combined action; and configuring the controls corresponding to the associated actions as the mutual exclusion controls.
Specifically, each combined control corresponds to a secondary combined action, and the possible contents of a secondary combined action (a combination of the first action and the second action, the initial action alone, or a combination of the initial action with other actions) have been described above.
Thus, the associated actions, that is, the primary actions constituting the secondary combined action, can be determined from its action data. These include the first and second actions mentioned above, which are actions that an action control performs when triggered alone, such as "squat" or "left probe", and may also include the initial action, such as "stand", which requires no control to trigger.
The specific associated actions are determined according to the action data involved in the combined controls, which the present disclosure does not specifically limit.
Referring to fig. 3, the associated actions involved in the combined controls include "squat", "stand", "left probe" and "right probe", and the controls corresponding to these actions are configured as mutual exclusion controls. Referring to fig. 4, the associated actions involved in the combined controls include "left probe", "right probe" and "head up".
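The associated-action rule amounts to: any standalone control whose action appears as a component of a provided combined control becomes a mutual exclusion control. This sketch assumes each combined control is encoded as a string of its component actions joined by " and ", which is an illustrative convention rather than anything the disclosure prescribes.

```python
def mutex_controls(combination_controls, all_controls):
    """Collect the standalone primary-action controls that must be masked:
    those whose action already appears as a component of one of the
    provided combined controls."""
    masked = set()
    for combo in combination_controls:
        for part in combo.split(" and "):  # component (associated) actions
            if part in all_controls:
                masked.add(part)
    return masked
```

Applied to the fig. 3 layout, the "left probe", "right probe" and "squat" standalone controls are masked while unrelated controls such as "jump" stay active.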
In addition, determining the mutual exclusion control based on the generated combined control further includes determining the mutual exclusion control according to the area occupied by the combined control and masking it. A combined control necessarily occupies an area of the graphical user interface when generated, and any control completely covered by that area also needs to be masked.
It should be noted that, instead of being masked, the mutual exclusion control may be retained with a user confirmation mechanism added. For example, when the user's touch operations fall on the combined control and a mutual exclusion control at the same time, a message pops up for the user to confirm.
Step S4: in response to a third touch operation acting on the combined control, controlling the virtual object to execute the secondary combined action corresponding to the combined control.
Specifically, each combined control corresponds to a secondary combined action. After the combined controls are provided, a third touch operation of the user is received, the combined control selected by the third touch operation is determined, and the virtual object is then controlled to execute the corresponding secondary combined action.
The third touch operation may be a sliding operation or a clicking operation.
In one embodiment of the present disclosure, when the third touch operation is a sliding operation, the user may select the combination control through sliding. Specifically, the method comprises: acquiring a sliding touch point of the sliding operation; and when the sliding touch point falls on the combination control and meets the selected condition, controlling the virtual object to execute the secondary combination action corresponding to the combination control.
Taking the situation shown in fig. 3 as an example, when the user's finger slides leftwards from "squat" to "squat and left probe" and the sliding touch point stays there beyond a preset time, the selected condition is met and "squat and left probe" is executed. Similarly, when the user's finger is pulled back from "squat and left probe" to "squat", the virtual object resumes the "squat" action; sliding rightwards into the "squat and right probe" hot zone executes that action, and pulling back restores "squat". Above the control is the "stand" hot zone, with "stand and left probe" and "stand and right probe" at the upper left and upper right; when the user's finger slides into a hot zone, the corresponding control is executed.
It should be noted that when selecting a combined control, only the selected condition needs to be satisfied; there is no requirement on the sliding track. That is, the user can slide directly from "squat" to "stand and left probe".
In addition, the combined controls can be selected multiple times, allowing the virtual character to switch between different secondary combined actions. For example, the user slides from "squat" to "squat and left probe" to execute that action, then slides up to "stand and left probe" to execute that action, and then slides to "squat and right probe" to execute that action.
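The slide-selection logic can be sketched as a hot-zone hit test plus a dwell check for the selected condition. The rectangular zone coordinates and the 0.2 s dwell threshold are illustrative assumptions; the disclosure only requires that the touch point fall on the control and meet a selected condition.

```python
def hit_test(touch_point, hot_zones):
    """Return the name of the combined control whose rectangular hot zone
    contains the sliding touch point, or None if none does."""
    x, y = touch_point
    for name, (left, top, right, bottom) in hot_zones.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

def selected_action(touch_point, hot_zones, dwell, dwell_threshold=0.2):
    """A combined control counts as selected once the sliding touch point
    has dwelt on its hot zone past the threshold (the 'selected
    condition'); the threshold value is assumed."""
    name = hit_test(touch_point, hot_zones)
    if name is not None and dwell >= dwell_threshold:
        return name
    return None
```

Because only the current touch point is tested, the sliding track is irrelevant, matching the note above that the user may slide directly between any hot zones.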
In one embodiment of the present disclosure, when the third touch operation is a sliding operation, the method further includes: and when the third touch operation is detected to be finished, controlling the virtual object to execute the initial action.
Specifically, since the third touch operation is a sliding operation, the user controls the virtual character to perform the secondary combined action through a continuous operation; when the user lifts the finger, that is, when the third touch operation ends, the virtual object resumes the initial action.
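The restore-on-release behavior amounts to a small state machine; the action names used here are illustrative.

```python
class ActionState:
    """Minimal state machine for the sliding case: a slide selects a
    secondary combined action, and lifting the finger (end of the third
    touch operation) restores the initial action."""

    def __init__(self, initial_action="stand"):
        self.initial_action = initial_action
        self.current = initial_action

    def on_slide_select(self, combined_action):
        # Sliding onto a hot zone that meets the selected condition.
        self.current = combined_action

    def on_touch_end(self):
        # Finger lifted: the virtual object resumes the initial action.
        self.current = self.initial_action
```

In the click-operation variant described next, `on_touch_end` would simply be omitted, since ending a click does not restore the initial action.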
In one embodiment of the present disclosure, when the third touch operation is a click operation, the method further includes: and when the third touch operation is detected to be ended, receiving the third touch operation aiming at other combined controls.
Specifically, since the third touch operation is a click operation, the operation is not continuous once a combined control has been selected; therefore, when the click operation ends, other combined controls can still be selected.
In one embodiment of the present disclosure, a lightweight virtual object control method may also be provided. Specifically: a second touch operation acting on the first control is received, and when the second touch operation meets the touch threshold, the second control is provided at the third position; in response to a touch operation acting on the second control at the third position, the action corresponding to the second control is executed.
Specifically, take the first action as "squat" and the second actions as "left probe" and "right probe", which can be combined with the first action. In response to a first touch operation acting on "squat", the virtual object is controlled to perform the "squat" action. When the second touch operation meets the touch threshold, second controls are provided at third positions: "left probe" to the left of "squat" and "right probe" to the right. When the user's finger slides leftwards to "left probe", the virtual character performs the left probe in the squat posture, fully consistent with the function of the "left probe" action control; when the finger is pulled back from "left probe" to "squat", the left probe is cancelled. Similarly, sliding rightwards to "right probe" executes the right probe, and pulling back cancels it.
In this method, no new second action needs to be generated; instead, a second control that was far from the first control is moved into the preset range of the first control, so that the user can operate with a single hand more conveniently.
In the virtual object control method in one embodiment of the present disclosure, the virtual object control method may be executed on a terminal device or a server. The terminal device may be a local terminal device. When the virtual object control method runs on the server, the virtual object control method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications, such as cloud gaming, may be run under the cloud interaction system. Taking cloud gaming as an example, cloud gaming refers to a game mode based on cloud computing. In the cloud gaming operation mode, the execution body of the game program is separated from the presentation body of the game picture: the storage and execution of the virtual object control method are completed on the cloud gaming server, while the client device receives and sends data and presents the game picture. For example, the client device may be a display device with a data transmission function near the user side, such as a mobile terminal, a television, a computer, or a handheld computer, but the terminal device performing the information processing is the cloud gaming server in the cloud. When playing, the player operates the client device to send an operation instruction to the cloud gaming server; the server runs the game according to the instruction, encodes and compresses data such as game pictures, and returns them over the network to the client device, which finally decodes the data and outputs the game pictures.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and presents the game picture. The local terminal device interacts with the player through the graphical user interface; that is, the game program is conventionally downloaded, installed and run on the electronic device. The local terminal device may provide the graphical user interface to the player in various ways: for example, it may be rendered on a display screen of the terminal, or presented to the player by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including game visuals, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
Fig. 5 schematically illustrates a composition diagram of a virtual object control apparatus in an exemplary embodiment of the present disclosure, and as shown in fig. 5, the virtual object control apparatus 500 may include an initial module 501, a switching module 502, a providing module 503, and a response module 504. Wherein:
an initial module 501, configured to display a virtual object in an initial action on the graphical user interface;
A switching module 502, configured to control the virtual object to switch from the initial action to the first action in response to a first touch operation acting on the first control;
a providing module 503, configured to receive a second touch operation acting on the first control, and provide, at a third position, a combined control corresponding to a secondary combined action when the second touch operation meets a touch threshold; the combined control comprises a first sub-control, and the secondary combined action corresponding to the first sub-control is a superposition of the first action executed by triggering the first control and the second action executed by triggering the second control;
and a response module 504, configured to control the virtual object to execute the secondary combined action corresponding to the combined control in response to a third touch operation acting on the combined control.
According to an exemplary embodiment of the present disclosure, the second action includes an action obtained by combining the first action with other actions.
According to an exemplary embodiment of the present disclosure, the combined control further includes a second sub-control, and the secondary combined action corresponding to the second sub-control is the initial action.
According to an exemplary embodiment of the disclosure, the combined control further includes a third sub-control, and the secondary combined action corresponding to the third sub-control is a superposition of the initial action and the second action executed by triggering the second control alone.
According to an exemplary embodiment of the present disclosure, the providing module 503 further includes a masking unit (not shown in the figure) configured to determine, after the combined control corresponding to the secondary combined action is provided at the third position, a mutual exclusion control corresponding to the combined control, and mask the mutual exclusion control in the graphical user interface.
According to an exemplary embodiment of the present disclosure, the masking unit is further configured to determine, based on the secondary combined action corresponding to the combined control, the associated actions corresponding to that secondary combined action; and configure the controls corresponding to the associated actions as the mutual exclusion controls.
According to an exemplary embodiment of the present disclosure, the response module 504 includes a sliding response unit (not shown in the figure) configured to, when the third touch operation is a sliding operation, obtain a sliding touch point of the sliding operation; and when the sliding touch point falls on the combined control and meets the selected condition, control the virtual object to execute the secondary combined action corresponding to the combined control.
According to an exemplary embodiment of the present disclosure, the response module 504 further includes a first ending unit (not shown in the figure) configured to control the virtual object to perform the initial action when the third touch operation is detected to be ended.
According to an exemplary embodiment of the present disclosure, the response module 504 further includes a second ending unit (not shown in the figure) configured to, when the third touch operation is a click operation and the end of the third touch operation is detected, receive a third touch operation for another combined control.
According to an exemplary embodiment of the present disclosure, the virtual object control device 500 further includes a third control module (not shown in the figure) configured to provide, at a fifth position, a combined control corresponding to a three-level combined action when a third touch operation is detected to meet the touch threshold; the three-level combined action is a superposition of the first action executed by triggering the first control, the second action executed by triggering the second control, and the third action executed by triggering the third control; and in response to a fourth touch operation on the combined control corresponding to the three-level combined action, control the virtual object to execute the three-level combined action.
The specific details of each module in the above-mentioned virtual object control apparatus 500 are already described in detail in the corresponding virtual object control method, and thus are not described herein again.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In an exemplary embodiment of the present disclosure, a storage medium capable of implementing the above method is also provided. Fig. 6 schematically illustrates a schematic diagram of a computer-readable storage medium in an exemplary embodiment of the present disclosure, as shown in fig. 6, depicting a program product 600 for implementing the above-described method according to an embodiment of the present disclosure, which may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a cell phone. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. Fig. 7 schematically illustrates a structural diagram of a computer system of an electronic device in an exemplary embodiment of the present disclosure.
It should be noted that, the computer system 700 of the electronic device shown in fig. 7 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
As shown in fig. 7, the computer system 700 includes a central processing unit (Central Processing Unit, CPU) 701 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage section 708 into a random access Memory (Random Access Memory, RAM) 703. In the RAM 703, various programs and information required for the system operation are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704. An Input/Output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. The drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, the processes described below with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. When executed by a Central Processing Unit (CPU) 701, performs the various functions defined in the system of the present disclosure.
It should be noted that, the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Whereas in the present disclosure a computer-readable signal medium may comprise an information signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware, and the described units may also be provided in a processor. In some cases, the names of the units do not constitute a limitation on the units themselves.
As another aspect, the present disclosure also provides a computer-readable medium that may be contained in the electronic device described in the above embodiments; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied as a software product, which may be stored in a non-volatile storage medium (e.g., a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (e.g., a personal computer, a server, a touch terminal, or a network device) to perform the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. A virtual object control method, wherein a first control corresponding to a first action is provided at a first position and a second control corresponding to a second action is provided at a second position through a graphical user interface, the method comprising:
displaying the virtual object in an initial action on the graphical user interface;
in response to a first touch operation acting on the first control, controlling the virtual object to switch from the initial action to the first action;
receiving a second touch operation acting on the first control, and, when the second touch operation meets a touch threshold, providing a combined control corresponding to a secondary combined action at a third position; wherein the combined control comprises a first sub-control, and the secondary combined action corresponding to the first sub-control is a superposition of the first action separately triggered by the first control and the second action separately triggered by the second control;
receiving a third touch operation; when the third touch operation is a sliding operation, acquiring a sliding touch point of the sliding operation; and when the sliding touch point falls on the combined control and meets a selected condition, controlling the virtual object to execute the secondary combined action corresponding to the combined control.
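The claimed interaction sequence can be sketched as a small state machine. The following is a minimal, hypothetical illustration: all names (`VirtualObjectController`, `Rect`, the 0.5-second long-press threshold, and the action labels) are assumptions for illustration and do not appear in the patent.

```python
from dataclasses import dataclass


@dataclass
class Rect:
    """Axis-aligned screen bounds of a control (illustrative helper)."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, point):
        # True when the touch point lies inside this control's bounds.
        px, py = point
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h


class VirtualObjectController:
    """Tracks the action state of a virtual object driven by touch controls."""

    LONG_PRESS_THRESHOLD = 0.5  # seconds; an assumed value for the "touch threshold"

    def __init__(self):
        self.current_action = "initial"        # the object starts in its initial action
        self.combined_control_visible = False

    def on_first_touch(self):
        # First touch operation on the first control: switch the virtual
        # object from the initial action to the first action.
        self.current_action = "first"

    def on_second_touch(self, press_duration):
        # Second touch operation on the first control: when it meets the
        # touch threshold (modelled here as a long press), provide the
        # combined control at a third position.
        if press_duration >= self.LONG_PRESS_THRESHOLD:
            self.combined_control_visible = True

    def on_slide(self, touch_point, combined_control_rect):
        # Third touch operation as a slide: when the sliding touch point
        # falls on the combined control, execute the secondary combined
        # action, i.e. the superposition of the first and second actions.
        if self.combined_control_visible and combined_control_rect.contains(touch_point):
            self.current_action = "first+second"
```

Under this sketch, a long press on the first control reveals the combined control, and sliding onto it replaces the single first action with the superposed action.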
2. The virtual object control method of claim 1, wherein the combined control further comprises a second sub-control, and the secondary combined action corresponding to the second sub-control is the initial action.
3. The virtual object control method of claim 1, wherein the combined control further comprises a third sub-control, and the secondary combined action corresponding to the third sub-control is a superposition of the initial action and the second action separately triggered by the second control.
4. The virtual object control method of claim 1, wherein after providing the combined control corresponding to the secondary combined action at the third position, the method further comprises:
determining a mutually exclusive control corresponding to the combined control; and
masking the mutually exclusive control in the graphical user interface.
5. The virtual object control method of claim 4, wherein determining the mutually exclusive control corresponding to the combined control comprises:
determining, based on the secondary combined action corresponding to the combined control, an associated action corresponding to the secondary combined action; and
configuring the control corresponding to the associated action as the mutually exclusive control.
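The mutual-exclusion logic of claims 4 and 5 can be illustrated as a lookup from a combined action to its incompatible actions, whose controls are then hidden. All names and the association table below are illustrative assumptions, not part of the patent.

```python
# Controls currently shown in the GUI, keyed by the action each one triggers
# (hypothetical example data).
controls = {"crouch": "crouch_button", "jump": "jump_button", "fire": "fire_button"}

# Actions deemed incompatible with each secondary combined action
# (an assumed association table standing in for the claimed "associated action").
ASSOCIATED_ACTIONS = {
    "crouch+fire": ["jump"],  # e.g. jumping conflicts with a crouch-and-fire combination
}

def mutually_exclusive_controls(combined_action):
    # Claim 5: from the combined action, determine its associated actions,
    # then configure the controls triggering those actions as exclusive.
    return [controls[a] for a in ASSOCIATED_ACTIONS.get(combined_action, []) if a in controls]

def mask_controls(visible, combined_action):
    # Claim 4: mask (hide) the mutually exclusive controls in the interface,
    # leaving the remaining controls visible in their original order.
    excluded = set(mutually_exclusive_controls(combined_action))
    return [c for c in visible if c not in excluded]
```

For example, once the hypothetical "crouch+fire" combined control is shown, the jump button would be masked while the other controls stay visible.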
6. The virtual object control method of claim 1, wherein when the third touch operation is a sliding operation, the method further comprises:
controlling the virtual object to execute the initial action when the end of the third touch operation is detected.
7. The virtual object control method of claim 1, wherein when the third touch operation is a click operation, the method further comprises:
receiving, after the end of the third touch operation is detected, a third touch operation directed at another combined control.
8. The virtual object control method of claim 1, wherein a third control corresponding to a third action is provided at a fourth position through the graphical user interface, the method further comprising:
when the third touch operation is detected to meet the touch threshold, providing a combined control corresponding to a three-level combined action at a fifth position; wherein the three-level combined action comprises a superposition of the first action separately triggered by the first control, the second action separately triggered by the second control, and the third action separately triggered by the third control; and
in response to a fourth touch operation on the combined control corresponding to the three-level combined action, controlling the virtual object to execute the three-level combined action.
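The escalation from a secondary (two-action) combination to a three-level combination in claim 8 can be modelled as extending an unordered set of component actions. The set-based representation and function names below are assumptions for illustration only.

```python
def combine(*actions):
    # A combined action is modelled as the unordered set of its component
    # actions, since superposition does not depend on trigger order.
    return frozenset(actions)

def escalate(combined_action, extra_action):
    # Claim 8: when a further touch operation meets the touch threshold,
    # the existing combination is extended with the third control's action.
    return combined_action | {extra_action}

secondary = combine("first", "second")            # claim 1: two-action combination
tertiary = escalate(secondary, "third")           # claim 8: three-level combination
```

In this model the secondary combination is strictly contained in the tertiary one, which matches the claim's description of the three-level action as a superposition of all three separately triggered actions.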
9. A virtual object control apparatus, wherein a first control corresponding to a first action is provided at a first position and a second control corresponding to a second action is provided at a second position through a graphical user interface, the apparatus comprising:
an initial module, configured to display the virtual object in an initial action on the graphical user interface;
a switching module, configured to control, in response to a first touch operation acting on the first control, the virtual object to switch from the initial action to the first action;
a providing module, configured to receive a second touch operation acting on the first control and, when the second touch operation meets a touch threshold, provide a combined control corresponding to a secondary combined action at a third position; wherein the combined control comprises a first sub-control, and the secondary combined action corresponding to the first sub-control is a superposition of the first action separately triggered by the first control and the second action separately triggered by the second control; and
a response module, configured to acquire a sliding touch point of a sliding operation when a third touch operation is the sliding operation, and to control, when the sliding touch point falls on the combined control and meets a selected condition, the virtual object to execute the secondary combined action corresponding to the combined control.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the virtual object control method of any one of claims 1 to 8.
11. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the virtual object control method of any one of claims 1 to 8.
CN202110790206.9A 2021-07-13 2021-07-13 Virtual object control method and device, storage medium and electronic equipment Active CN113476823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110790206.9A CN113476823B (en) 2021-07-13 2021-07-13 Virtual object control method and device, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN113476823A CN113476823A (en) 2021-10-08
CN113476823B true CN113476823B (en) 2024-02-27

Family

ID=77938371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110790206.9A Active CN113476823B (en) 2021-07-13 2021-07-13 Virtual object control method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113476823B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114225372B (en) * 2021-10-20 2023-06-27 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal, storage medium and program product
CN113975803B (en) * 2021-10-28 2023-08-25 腾讯科技(深圳)有限公司 Virtual character control method and device, storage medium and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450812A (en) * 2017-06-26 2017-12-08 网易(杭州)网络有限公司 Virtual object control method and device, storage medium, electronic equipment
CN108144293A (en) * 2017-12-15 2018-06-12 网易(杭州)网络有限公司 Information processing method, information processing device, electronic equipment and storage medium
CN108553891A (en) * 2018-04-27 2018-09-21 腾讯科技(深圳)有限公司 Object method of sight and device, storage medium and electronic device
WO2019174443A1 (en) * 2018-03-12 2019-09-19 网易(杭州)网络有限公司 Information processing method and apparatus, and storage medium and electronic device
CN110420462A (en) * 2018-10-25 2019-11-08 网易(杭州)网络有限公司 The method and device of virtual objects locking, electronic equipment, storage medium in game
WO2020168877A1 (en) * 2019-02-21 2020-08-27 腾讯科技(深圳)有限公司 Object control method and apparatus, storage medium and electronic apparatus
CN111921194A (en) * 2020-08-26 2020-11-13 腾讯科技(深圳)有限公司 Virtual environment picture display method, device, equipment and storage medium
CN113082688A (en) * 2021-03-31 2021-07-09 网易(杭州)网络有限公司 Method and device for controlling virtual character in game, storage medium and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9901824B2 (en) * 2014-03-12 2018-02-27 Wargaming.Net Limited User control of objects and status conditions



Similar Documents

Publication Publication Date Title
CN113476823B (en) Virtual object control method and device, storage medium and electronic equipment
CN112416196B (en) Virtual object control method, device, equipment and computer readable storage medium
US20240264740A1 (en) Adaptive display method and apparatus for virtual scene, electronic device, storage medium, and computer program product
CN113350779A (en) Game virtual character action control method and device, storage medium and electronic equipment
US11803301B2 (en) Virtual object control method and apparatus, device, storage medium, and computer program product
CN111298431B (en) Construction method and device in game
CN112306351B (en) Virtual key position adjusting method, device, equipment and storage medium
CN112402959A (en) Virtual object control method, device, equipment and computer readable storage medium
WO2023066003A1 (en) Virtual object control method and apparatus, and terminal, storage medium and program product
CN111346369A (en) Shooting game interaction method and device, electronic equipment and storage medium
CN113827970B (en) Information display method and device, computer readable storage medium and electronic equipment
KR20240033087A (en) Control methods, devices, devices, storage media and program products of opening operations in hypothetical scenarios
CN114011062A (en) Information processing method, information processing device, electronic equipment and storage medium
CN111766989B (en) Interface switching method and device
CN113680062B (en) Information viewing method and device in game
CN115300904A (en) Recommendation method and device, electronic equipment and storage medium
CN115120979A (en) Display control method and device of virtual object, storage medium and electronic device
WO2023226569A9 (en) Message processing method and apparatus in virtual scenario, and electronic device, computer-readable storage medium and computer program product
Sheng et al. A novel menu interaction method using head-mounted display for smartphone-based virtual reality
CN114210046A (en) Virtual skill control method, device, equipment, storage medium and program product
CN115501599A (en) Virtual object control method, device, medium and equipment
CN115779432A (en) Virtual object control method, device, storage medium and electronic equipment
CN115944916A (en) Sound effect determination method and device, electronic equipment and storage medium
CN117122895A (en) Method, device, equipment and storage medium for controlling role evasion
CN118349637A (en) Man-machine interaction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant