CN117582656A - Interactive control method and device for game and electronic equipment


Info

Publication number: CN117582656A
Application number: CN202311458943.4A
Priority application: CN202311458943.4A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 陈希昂
Assignee (original and current): Netease Hangzhou Network Co Ltd
Prior art keywords: action, virtual character, controlled virtual, scene, control
Legal status: Pending


Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 - Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 - Controlling game characters or game objects based on the game progress
    • A63F13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an interactive control method and apparatus for a game, and an electronic device. The method includes the following steps: displaying a scene picture of a game scene in a graphical user interface, where an action control is displayed in the graphical user interface; in response to a triggering operation on the action control, controlling a controlled virtual character to perform a target action; when the operation type of the triggering operation is determined to be a first operation type, acquiring the scene position of the controlled virtual character after the target action is performed; judging whether a shelter model exists in a designated scene area corresponding to the scene position to obtain a judgment result; and, based on the judgment result, controlling the controlled virtual character to perform a follow-up action corresponding to the target action, where the follow-up action includes an action of entering the shelter model or a standing action. In this way, the smoothness and convenience of the player's game operations are improved, misoperation is avoided, and the player's game experience is improved.

Description

Interactive control method and device for game and electronic equipment
Technical Field
The disclosure relates to the technical field of games, and in particular relates to an interactive control method and device for a game and electronic equipment.
Background
A shelter model is typically provided in a game scene, and a player-controlled virtual character may enter the shelter to hide its whereabouts. To perform the continuous action of entering a shelter after rolling, the player must first operate a roll control and then operate a shelter-entry control. Some games instead set the virtual character to enter a shelter automatically after rolling to its vicinity; if the player does not wish to enter the shelter, an additional operation to leave the shelter is required. This style of operation involves many steps, easily produces misoperation, and lacks smoothness.
Disclosure of Invention
Accordingly, the present invention aims to provide an interactive control method and apparatus for a game, and an electronic device, so as to improve the smoothness and convenience of the player's game operations, avoid misoperation, and improve the player's game experience.
In a first aspect, an embodiment of the present invention provides an interactive control method for a game. The method includes: displaying a scene picture of a game scene in a graphical user interface, where the game scene includes a controlled virtual character and an action control is displayed in the graphical user interface; in response to a triggering operation on the action control, controlling the controlled virtual character to perform a target action; when the operation type of the triggering operation is determined to be a first operation type, acquiring the scene position of the controlled virtual character after the target action is performed; judging whether a shelter model exists in a designated scene area corresponding to the scene position to obtain a judgment result; and, based on the judgment result, controlling the controlled virtual character to perform a follow-up action corresponding to the target action, where the follow-up action includes an action of entering the shelter model or a standing action.
In a second aspect, an embodiment of the present disclosure further provides an interactive control apparatus for a game, including: a first display module configured to display a scene picture of a game scene in a graphical user interface, where the game scene includes a controlled virtual character and an action control is displayed in the graphical user interface; a first control module configured to, in response to a triggering operation on the action control, control the controlled virtual character to perform a target action, and, when the operation type of the triggering operation is determined to be a first operation type, acquire the scene position of the controlled virtual character after the target action is performed; and a second control module configured to judge whether a shelter model exists in a designated scene area corresponding to the scene position to obtain a judgment result, and, based on the judgment result, control the controlled virtual character to perform a follow-up action corresponding to the target action, where the follow-up action includes an action of entering the shelter model or a standing action.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions executable by the processor, and the processor executes the machine executable instructions to implement an interactive control method for the game.
In a fourth aspect, embodiments of the present disclosure provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement an interactive control method for a game as described above.
The embodiment of the invention has the following beneficial effects:
According to the interactive control method and apparatus for a game and the electronic device, a scene picture of a game scene is displayed in a graphical user interface, where the game scene includes a controlled virtual character and an action control is displayed in the graphical user interface; in response to a triggering operation on the action control, the controlled virtual character is controlled to perform a target action; when the operation type of the triggering operation is determined to be a first operation type, the scene position of the controlled virtual character after the target action is acquired; whether a shelter model exists in a designated scene area corresponding to the scene position is judged to obtain a judgment result; and, based on the judgment result, the controlled virtual character is controlled to perform a follow-up action corresponding to the target action, where the follow-up action includes an action of entering the shelter model or a standing action. In this way, with only a single triggering operation, the player can have the controlled virtual character automatically enter the shelter model or stand up after completing the target action, according to whether a shelter model exists in the designated scene area corresponding to the character's post-action scene position. This improves the convenience and smoothness of the player's game operations, avoids misoperation, and improves the player's game experience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an interactive control method for a game provided in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an operation on the display area of an action control provided by an embodiment of the present disclosure;
FIG. 3 is a schematic illustration of another operation for an action control provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an interactive control device for a game according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the related art, a roll control and a shelter control are provided on the game interface. The player can trigger the roll control to make the virtual character perform a rolling action, but when the virtual character rolls to the vicinity of a shelter and is meant to enter it, the shelter control must be tapped again, so the player's operation is broken up and not smooth. Some games allow the virtual character to enter the shelter directly after rolling, i.e., if a shelter is near the virtual character after the roll, the character enters it automatically; however, forcing the virtual character into the shelter after rolling may not be the operation the player expected, resulting in misoperation.
Based on the above, the method, the device and the electronic equipment for controlling interaction of the game provided by the embodiment of the disclosure can be applied to games with shelter shooting as a main battle mode.
In an alternative embodiment, various cloud applications, such as cloud games, may run under a cloud interaction system. Taking a cloud game as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: the storage and execution of the interactive control method for the game are completed on a cloud game server, while the client device is used to receive and send data and present the game picture. For example, the client device may be a display device with data transmission capability close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer; the device that performs the information processing, however, is the cloud game server in the cloud. When playing, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the instruction, encodes and compresses data such as the game picture, and returns the data to the client device over the network; finally, the client device decodes the data and outputs the game picture.
In an alternative embodiment, taking a local game as an example, the local terminal device stores the game program and is used to present the game picture. The local terminal device interacts with the player through a graphical user interface, i.e., the game program is downloaded, installed, and run on the electronic device in the conventional way. The local terminal device may provide the graphical user interface to the player in a variety of ways: for example, it may be rendered and displayed on the display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game picture, and a processor for running the game, generating the graphical user interface, and controlling its display on the display screen.
In a possible implementation manner, the embodiment of the disclosure provides an interactive control method for a game, and a graphical user interface is provided through terminal equipment; the terminal device may be the aforementioned local terminal device, or may be the aforementioned client device in the cloud interaction system. A graphical user interface is provided through the terminal device on which interface content, e.g., game scene images, communication interactive windows, etc., may be displayed depending on the type of application being launched.
For the convenience of understanding the present embodiment, first, a detailed description will be given of a game interaction control method disclosed in the present embodiment, as shown in fig. 1, where the game interaction control method includes the following steps:
step S102, displaying a scene picture of a game scene in a graphical user interface; wherein the game scene comprises controlled virtual roles; the graphical user interface is displayed with an action control;
the action control is a trigger control for controlling the controlled virtual character to execute corresponding actions, the action control can comprise a plurality of types, different action controls can be used for triggering the controlled virtual character to execute different actions, the action control can be a rolling control, and the controlled virtual character is triggered to execute rolling actions; the action control may be a jump control that triggers the controlled virtual character to perform a jump action. And after receiving the triggering operation of the action control, controlling the controlled virtual character to execute the action corresponding to the action control.
A scene picture of the game scene is displayed in the graphical user interface provided by the terminal device. The game scene includes the controlled virtual character, which the player can control to move in the virtual scene. The graphical user interface also displays the action control, and the player triggers it to control the controlled virtual character to perform the corresponding action.
Step S104, in response to a triggering operation on the action control, controlling the controlled virtual character to perform a target action; and, when the operation type of the triggering operation is determined to be a first operation type, acquiring the scene position of the controlled virtual character after the target action is performed;
the target action is the action executed by the controlled virtual character after the user triggers the action control, and the target action can be rolling, jumping and the like according to different action controls. The first operation type may be a sliding operation acting inside the display area of the action control, and when the action control includes a plurality of control sub-areas, the first operation type may also be a sliding operation from one sub-area to another sub-area.
After the triggering operation on the action control is received, the controlled virtual character is controlled to perform the target action. When the terminal device recognizes that the operation type of the triggering operation is the first operation type, the scene position in the game scene at which the controlled virtual character ends up after performing the target action needs to be acquired. This scene position may be determined from the position of the controlled virtual character before the target action and the movement direction of the controlled virtual character while performing it.
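Purely as an illustrative aid (not part of the patent disclosure), the following minimal Python sketch shows one way the post-action scene position could be estimated from the pre-action position and the movement direction; the Vec2 type, the landing_position helper, and the fixed action distance are assumed names for the purpose of the example.

```python
from dataclasses import dataclass
import math

@dataclass
class Vec2:
    x: float
    y: float

def landing_position(start: Vec2, move_dir: Vec2, action_distance: float) -> Vec2:
    """Estimate the scene position of the controlled virtual character after the target action."""
    length = math.hypot(move_dir.x, move_dir.y) or 1.0  # guard against a zero direction vector
    return Vec2(start.x + move_dir.x / length * action_distance,
                start.y + move_dir.y / length * action_distance)
```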
Step S106, judging whether a shelter model exists in a designated scene area corresponding to the scene position to obtain a judgment result; and, based on the judgment result, controlling the controlled virtual character to perform a follow-up action corresponding to the target action; wherein the follow-up action includes: an action of entering the shelter model, or a standing action.
The designated scene area is an area whose distance from the scene position of the controlled virtual character after the target action does not exceed a specified distance. The shape of the designated scene area is not limited; it may be a circle, a sector, or an irregular shape. For example, the designated scene area may be a circular area centered on the scene position with the specified distance as its radius. The shelter model is a virtual model used by the controlled virtual character to avoid attacks from other virtual characters; it may be a building model such as a wall, or a virtual vehicle model such as a car. Entering the shelter model may mean that the controlled virtual character enters the shelter model to hide, for example squatting behind a building model or entering a virtual vehicle model. A standing action may be understood as the posture the controlled virtual character assumes when it does not enter the shelter model after performing the target action.
After the scene position of the controlled virtual character after the target action is acquired, whether a shelter model exists in the designated scene area corresponding to that scene position is judged, the follow-up action corresponding to the target action is determined from the judgment result, and the controlled virtual character is controlled to continue with that follow-up action. Here, the follow-up action includes an action of entering the shelter model, or a standing action. In one case, when a shelter model exists in the designated scene area corresponding to the scene position, the follow-up action of the controlled virtual character is determined to be the action of entering the shelter model; in this case the controlled virtual character automatically enters the shelter model after completing the target action, for example squatting behind a building model or entering a virtual vehicle. In the other case, when no shelter model exists in the designated scene area corresponding to the scene position, the follow-up action of the controlled virtual character is determined to be a standing action; in this case the controlled virtual character automatically stands up after completing the target action.
In this step, whether the controlled virtual character enters a shelter model after performing the target action is determined by whether a shelter model exists in the designated scene area corresponding to the scene position of the controlled virtual character after the target action, which meets the player's personalized needs and avoids the misoperation of being forced into a shelter model. In addition, the player only needs a single triggering operation to have the controlled virtual character automatically enter a shelter model or stand up after completing the target action, which improves the smoothness and convenience of the player's game operations and improves the player's game experience.
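To make the decision in step S106 concrete, here is a minimal Python sketch that continues the hypothetical Vec2 helper above. It treats the designated scene area as a circle of a specified radius around the landing position and picks the follow-up action accordingly; the function names, the returned action strings, and the 0.5 m radius (taken from the embodiment described later) are assumptions for illustration only.

```python
import math
from typing import Iterable, Optional

# Reuses the Vec2 dataclass from the previous sketch.

COVER_RADIUS = 0.5  # metres; the "specified distance" used in the later embodiment

def find_shelter(landing: "Vec2", shelters: Iterable["Vec2"],
                 radius: float = COVER_RADIUS) -> Optional["Vec2"]:
    """Return a shelter model located inside the designated (circular) scene area, if any."""
    for shelter in shelters:
        if math.hypot(shelter.x - landing.x, shelter.y - landing.y) <= radius:
            return shelter
    return None

def follow_up_action(landing: "Vec2", shelters: Iterable["Vec2"]) -> str:
    """Step S106: enter the shelter model if one is in the designated area, otherwise stand."""
    return "enter_shelter" if find_shelter(landing, shelters) is not None else "stand"
```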
According to the interactive control method for a game described above, a scene picture of a game scene is displayed in a graphical user interface, where the game scene includes a controlled virtual character and an action control is displayed in the graphical user interface; in response to a triggering operation on the action control, the controlled virtual character is controlled to perform a target action; when the operation type of the triggering operation is determined to be a first operation type, the scene position of the controlled virtual character after the target action is acquired; whether a shelter model exists in a designated scene area corresponding to the scene position is judged to obtain a judgment result; and, based on the judgment result, the controlled virtual character is controlled to perform a follow-up action corresponding to the target action, where the follow-up action includes an action of entering the shelter model or a standing action. In this way, with only a single triggering operation, the player can have the controlled virtual character automatically enter the shelter model or stand up after completing the target action, according to whether a shelter model exists in the designated scene area corresponding to the character's post-action scene position. This improves the convenience and smoothness of the player's game operations, avoids misoperation, and improves the player's game experience.
In one mode, for a triggering operation on the action control, when the operation type of the triggering operation is determined to be a second operation type, the controlled virtual character is controlled to perform a standing action after performing the target action; wherein the second operation type is different from the first operation type.
The second operation type may be a click operation on the action control, such as a single click or a double click, and is different from the first operation type.
That is, when a triggering operation on the action control is received and its operation type is recognized as the second operation type, the controlled virtual character is controlled to perform the target action and then the standing action. In other words, when the operation type is the second operation type, the controlled virtual character does not enter a shelter model after performing the target action, regardless of whether a shelter model is nearby, but instead remains in a standing posture.
In one form, the second operation type includes: click operations on the action controls.
That is, when the player performs a click operation on the action control, the operation type of the triggering operation is determined to be the second operation type, and the controlled virtual character is controlled to remain standing after performing the target action; at this time the controlled virtual character does not enter a shelter model regardless of whether one exists nearby.
In this way, when the player does not want the controlled virtual character to enter a shelter model, a single click on the action control is enough to keep the controlled virtual character standing after the target action without entering the shelter model. This meets the player's personalized needs, keeps the overall operation in line with the player's expectations, and avoids misoperation.
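The branch between the two operation types can be summarised in a short dispatcher. This continues the Python sketch above; OperationType, perform_target_action, and the returned action strings are illustrative assumptions, not the patent's own API.

```python
from enum import Enum, auto
from typing import Iterable

# Reuses Vec2 and follow_up_action from the previous sketches.

class OperationType(Enum):
    FIRST = auto()   # sliding operation inside the control's display area
    SECOND = auto()  # click / tap on the control

def perform_target_action() -> None:
    """Placeholder for the engine-side roll or jump animation."""
    pass

def on_action_control_triggered(op_type: OperationType, landing: "Vec2",
                                shelters: Iterable["Vec2"]) -> str:
    perform_target_action()
    if op_type is OperationType.FIRST:
        # First operation type: decide between entering a shelter and standing.
        return follow_up_action(landing, shelters)
    # Second operation type: always stand, even if a shelter model is nearby.
    return "stand"
```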
The following embodiments provide specific implementations of determining a follow-up action corresponding to a target action.
In one approach, it is determined that a shelter model exists within a designated scene area, and the controlled virtual character is controlled to perform an action into the shelter model.
In this manner, when a shelter model exists within the designated scene area, the controlled virtual character is controlled to perform the action of entering the shelter model. Here, entering the shelter model may be squatting behind a virtual building model or entering a virtual vehicle such as a car. The position of the controlled virtual character relative to the shelter model after entering it may be determined from where the controlled virtual character is located near the shelter model after performing the target action. For example, when the target action is a rolling action and the controlled virtual character rolls towards the left side of the shelter model, it automatically squats at the left side of the shelter, or enters the virtual vehicle at a position towards the left.
In another approach, it is determined that there is no shelter model within the designated scene area, and the controlled avatar is controlled to perform a standing action.
In this manner, the controlled virtual character is controlled to perform a standing action when it is determined that no shelter model exists within the designated scene area. That is, when the controlled virtual character has performed the target action and no shelter model exists in the designated scene area, the controlled virtual character is controlled to perform the standing action. For example, when the target action is a rolling action and no shelter model is found in the designated scene area corresponding to the scene position after the roll, the controlled virtual character remains standing after rolling.
In one form, the first operation type includes: a first sliding operation within the display area of the action control.
It can be appreciated that the action control may be a single control or a combined control composed of multiple sub-controls. The display area of the action control is the area of the graphical user interface in which the action control is displayed. The triggering operation on the action control may be a first sliding operation inside the display area of the action control.
When the terminal device recognizes a first sliding operation inside the display area of the action control, the operation type of the triggering operation is determined to be the first operation type. For example, where the action control is a roll control and a vehicle shelter model is ahead, a bottom-to-top sliding operation inside the display area of the roll control controls the controlled virtual character to roll forward and then enter the vehicle shelter model ahead to hide.
In a specific mode, the display area of the action control includes an initial position area of the action control and a target action control area; the first operation type is a sliding operation that starts in the initial position area of the action control and ends in the target action control area.
In this manner, the display area of the action control can be understood to consist of two parts, namely the initial position area of the action control and the target action control area, as shown in FIG. 2. The operation corresponding to the first operation type may be a sliding operation that starts in the initial position area of the action control and ends in the target action control area. When the terminal device recognizes a sliding operation that starts at any point in the initial position area of the action control and ends at any point in the target action control area, the operation type of the triggering operation is determined to be the first operation type; the controlled virtual character is controlled to perform the target action in a preset movement direction, and after the target action is performed, the follow-up action corresponding to the target action is performed automatically. The preset movement direction here may be controlled by a direction control.
Further, in response to the operation contact point of the sliding operation being located in the target action control area, the display state of the target action control area is updated.
For example, while the operation contact point of the sliding operation slides from the initial position area of the action control into the target action control area, the target action control area is highlighted whenever the contact point is located inside it.
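A minimal sketch of how the two sub-areas might be hit-tested and the gesture classified, continuing the Python examples above; the Rect type, the touch-down/touch-up interface, and the highlight helper are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Reuses Vec2 and OperationType from the previous sketches.

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, p: "Vec2") -> bool:
        return self.x <= p.x <= self.x + self.w and self.y <= p.y <= self.y + self.h

def classify_gesture(down: "Vec2", up: "Vec2",
                     initial_area: Rect, target_area: Rect) -> Optional["OperationType"]:
    """Map a touch-down/touch-up pair onto the two operation types."""
    if initial_area.contains(down) and target_area.contains(up):
        return OperationType.FIRST    # slide from the initial position area into the target area
    if initial_area.contains(down) and initial_area.contains(up):
        return OperationType.SECOND   # tap that starts and ends inside the initial position area
    return None                       # e.g. an outward slide, handled separately

def should_highlight(contact: "Vec2", target_area: Rect) -> bool:
    """Highlight the target action control area while the contact point is inside it."""
    return target_area.contains(contact)
```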
The following embodiments provide implementations for controlling the controlled virtual character to enter a shelter model.
In one case, when the judgment result indicates that multiple shelter models exist within the designated scene area, a target shelter model is determined from the multiple shelter models based on the orientation of the controlled virtual character, where the target shelter model is located in the direction the controlled virtual character is facing; the controlled virtual character is then controlled to perform the action of entering the target shelter model.
That is, when multiple shelters exist in the designated scene area, after the controlled virtual character performs the target action, the shelter model located in the direction the controlled virtual character is facing is determined as the target shelter model, and the controlled virtual character is then automatically controlled to perform the action of entering the target shelter model. For example, when the target action is a rolling action, whichever shelter model the controlled virtual character rolls towards is the one it is automatically controlled to enter.
In still another case, if multiple shelter models exist in the designated scene area and the distance between a first shelter model and the controlled virtual character is smaller than the distance between a second shelter model and the controlled virtual character, the controlled virtual character can be controlled, by performing the triggering operation corresponding to the first operation type twice in succession, to enter the first shelter model after performing the target action and then to enter the second shelter model.
For example, where the triggering operation corresponding to the first operation type is a first sliding operation inside the display area of the roll control, the controlled virtual character is controlled to roll forward into the nearer shelter model and then into the farther shelter model.
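One plausible way to pick the target shelter among several candidates is to score each one by how closely it lies along the character's facing direction; the dot-product scoring below is an illustrative choice, not something the patent specifies, and it reuses the hypothetical Vec2 type from the earlier sketches.

```python
import math
from typing import List, Optional

# Reuses Vec2 from the earlier sketch; "facing" is assumed to be a unit vector.

def choose_target_shelter(position: "Vec2", facing: "Vec2",
                          candidates: List["Vec2"]) -> Optional["Vec2"]:
    """Pick the shelter model lying in the direction the controlled character is facing."""
    best, best_score = None, 0.0
    for shelter in candidates:
        dx, dy = shelter.x - position.x, shelter.y - position.y
        dist = math.hypot(dx, dy) or 1.0
        score = (facing.x * dx + facing.y * dy) / dist  # cosine of the angle to the shelter
        if score > best_score:                          # keep only shelters in front of the character
            best, best_score = shelter, score
    return best
```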
In one mode, in response to a triggering operation on the action control, the operation type of the triggering operation is determined to be a second operation type, and the controlled virtual character is controlled to start performing the target action; in response to a shelter model existing on the execution path of the target action, the scene position at which the controlled virtual character should stop is determined based on the model position of the shelter model; and the controlled virtual character is controlled to stop performing the target action at that scene position.
The execution path of the target action is a movement path in the movement direction when the controlled virtual character executes the target action, and the length of the execution path is the distance of the controlled virtual character moving for executing the target action once. The scene position to be stopped refers to the position of the controlled virtual character in the game scene after the target action is executed, and the scene position to be stopped is determined according to the model position of the shelter model.
That is, when the operation type of the triggering operation is the second operation type and a shelter model is detected on the execution path of the target action, the scene position in the game scene at which the controlled virtual character completes the target action must be determined from the model position of the shelter model, and the controlled virtual character is controlled to stop performing the target action at that scene position so that it does not enter the shelter model.
In one embodiment, the action control is a roll control. When it is recognized that the player clicks the roll control, the operation type of the triggering operation is determined to be the second operation type, and the controlled virtual character should perform a rolling action of rolling forward 1 m; however, if a shelter model is located 0.6 m in front of the controlled virtual character, the controlled virtual character is controlled to roll only 0.6 m, then stop rolling and remain standing.
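A minimal sketch of this stop-short behaviour, reusing landing_position from the earlier sketch; the function name and the optional shelter-distance parameter are assumptions made for the example.

```python
from typing import Optional

# Reuses Vec2 and landing_position from the earlier sketches.

def stop_position(start: "Vec2", direction: "Vec2", roll_distance: float,
                  shelter_distance: Optional[float]) -> "Vec2":
    """Second operation type: stop the roll at the shelter instead of rolling into it."""
    travelled = roll_distance
    if shelter_distance is not None and shelter_distance < roll_distance:
        travelled = shelter_distance   # e.g. stop at 0.6 m instead of the full 1 m roll
    return landing_position(start, direction, travelled)
```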
In one manner, the triggering operation further includes a second sliding operation that starts on the action control and slides to an area outside the action control. In response to the second sliding operation on the action control, the controlled virtual character is controlled to start performing the target action; the movement direction of the operation contact point of the triggering operation is determined as the movement direction of the target action, and the controlled virtual character is controlled to perform an action of crossing over the shelter model.
That is, when a second sliding operation that starts on the action control and slides to an area outside it is performed, the movement direction of the operation contact point of the second sliding operation is determined as the movement direction of the target action, and the controlled virtual character is controlled to cross over the shelter model while performing the target action.
Taking the roll control as an example, the player slides from any position in the display area of the roll control; when the operation contact point of the sliding operation slides from inside the display area of the roll control to directly below it, the controlled virtual character is controlled to perform a rolling action of rolling 1 m straight backwards. If a shelter model is 0.6 m behind the controlled virtual character, the controlled virtual character is controlled to roll over the shelter model and remain standing after covering the 1 m rolling distance.
In this way, when the controlled virtual character rolls backwards, the situation where the overall operation does not match the player's expectation because the player cannot see the view behind is avoided, and misoperation is prevented.
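A minimal sketch of the outward-slide case, again reusing the earlier helpers; traverse_shelter and on_outward_slide are hypothetical names, with the former standing in for the engine-side vault animation.

```python
# Reuses Vec2 and landing_position from the earlier sketches.

def traverse_shelter() -> None:
    """Placeholder for the engine-side 'roll over the shelter' animation."""
    pass

def on_outward_slide(start: "Vec2", slide_vector: "Vec2", roll_distance: float,
                     shelter_on_path: bool) -> "Vec2":
    """Second sliding operation: the slide direction sets the roll direction; shelters are vaulted."""
    if shelter_on_path:
        traverse_shelter()  # cross over the shelter model instead of entering it
    return landing_position(start, slide_vector, roll_distance)
```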
In a specific embodiment, the action control is a roll control, and the display area of the roll control includes an initial position area of the roll control and a target action control area.
1) When it is recognized that the player performs a sliding operation that starts in the initial position area of the roll control and ends in the target action control area, the operation type of the triggering operation on the roll control is determined to be the first operation type, as shown in (a) of FIG. 3. The controlled virtual character is controlled to perform one rolling action; a circular area is obtained with the controlled virtual character's post-roll scene position as its center and 0.5 m as its radius, and the inside of this circular area is determined as the designated scene area.
When a shelter model exists in the designated scene area, the controlled virtual character is controlled to enter the shelter model after performing the rolling action, for example by rolling into the shelter;
when no shelter model exists in the designated scene area, the controlled virtual character is controlled to maintain a standing posture after performing the rolling action.
2) When it is recognized that the player clicks the initial position area of the roll control, the operation type of the triggering operation on the roll control is determined to be the second operation type, as shown in (b) of FIG. 3. After the controlled virtual character is controlled to perform one rolling action, it remains in a standing posture regardless of whether a shelter model exists nearby and does not enter the shelter model.
In this way, whether the controlled virtual character performs the operation of entering the shelter model is distinguished by the click and slide operation states of the roll control, which meets the player's personalized needs, avoids misoperation, and makes the overall game operation smoother.
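For illustration only, the sketches above can be exercised end to end with the numbers from this embodiment (0.5 m designated area, 1 m roll); the scene values below are invented for the example and are not taken from the patent.

```python
# Reuses the definitions from the sketches above.

start = Vec2(0.0, 0.0)
facing = Vec2(0.0, 1.0)                # the character faces straight ahead
shelters = [Vec2(0.0, 1.3)]            # one wall 1.3 m ahead of the character

landing = landing_position(start, facing, 1.0)                 # roll 1 m forward
print(follow_up_action(landing, shelters))                     # "enter_shelter": wall is within 0.5 m
print(on_action_control_triggered(OperationType.SECOND,
                                  landing, shelters))          # "stand" despite the nearby wall
```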
Corresponding to the above method embodiment, referring to fig. 4, there is shown a schematic diagram of an interactive control device for a game, where the device includes:
a first display module 402, configured to display a scene picture of a game scene in a graphical user interface; wherein the game scene includes a controlled virtual character, and an action control is displayed in the graphical user interface;
a first control module 404, configured to control the controlled virtual character to perform a target action in response to a triggering operation on the action control, and to determine that the operation type of the triggering operation is a first operation type and acquire the scene position of the controlled virtual character after the target action is performed;
a second control module 406, configured to judge whether a shelter model exists in a designated scene area corresponding to the scene position to obtain a judgment result, and, based on the judgment result, to control the controlled virtual character to perform a follow-up action corresponding to the target action; wherein the follow-up action includes: an action of entering the shelter model, or a standing action.
In this way, after the controlled virtual character performs the target action, it can be controlled to automatically enter the shelter model or stand up according to whether a shelter model exists in the designated scene area corresponding to its scene position, which improves the smoothness and convenience of the player's game operations, avoids misoperation, and improves the player's game experience.
The apparatus further includes a third control module configured to determine that the operation type of the triggering operation is a second operation type and, after controlling the controlled virtual character to perform the target action, control the controlled virtual character to perform the standing action; wherein the second operation type is different from the first operation type.
The second operation type includes: click operations on the action controls.
The second control module is further used for determining that a shelter model exists in the appointed scene area and controlling the controlled virtual character to execute the action of entering the shelter model.
And the second control module is also used for determining that a shelter model does not exist in the appointed scene area and controlling the controlled virtual character to execute standing action.
The first operation type includes: a first sliding operation within the display area of the action control.
The display area of the action control comprises: an initial position area and a target action control area of the action control; the first operation type is a sliding operation with an initial position area of the motion control as a start point and a target motion control area as an end point.
The apparatus further includes a first updating module configured to update the display state of the target action control area in response to the operation contact point of the sliding operation being located in the target action control area.
The second control module is further configured to determine that the judgment result indicates that a plurality of shelter models exist in the specified scene area, and determine a target shelter model from the plurality of shelter models based on the orientation of the controlled virtual character; wherein the target shelter model is located in the direction of orientation of the controlled virtual character; the controlled virtual character is controlled to perform an action into the target shelter model.
The first control module is further configured to, in response to a trigger operation for the action control, determine that an operation type of the trigger operation is a second operation type, and control the controlled virtual character to start executing the target action; responding to the existence of a shelter model on an execution path of the target action, and determining the scene position of the controlled virtual character to be stopped based on the model position of the shelter model; and controlling the controlled virtual character to stop executing the target action at the scene position.
The triggering operation further includes: a second sliding operation of sliding to the area outside the action control by taking the action control as a starting point; the first control module is further used for responding to a second sliding operation aiming at the action control and controlling the controlled virtual character to start executing the target action; and determining the moving direction of the operating contact point of the second sliding operation as the moving direction of the target action, and controlling the controlled virtual character to execute the action of crossing the shelter model.
The embodiment also provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the processor executes the machine executable instructions to implement the interactive control method of the game. The electronic device may be a server or a terminal device.
Referring to fig. 5, the electronic device includes a processor 100 and a memory 101, the memory 101 storing machine executable instructions that can be executed by the processor 100, the processor 100 executing the machine executable instructions to implement the interactive control method of the game described above.
Further, the electronic device shown in fig. 5 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed random access memory (RAM, random Access Memory), and may further include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the internet, a wide area network, a local network, a metropolitan area network, etc. Bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in FIG. 5, but not only one bus or type of bus.
The processor 100 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, performs the steps of the method of the previous embodiment.
The processor in the electronic device may, by executing the machine-executable instructions, implement the following operations of the interactive control method for a game: displaying a scene picture of a game scene in a graphical user interface, where the game scene includes a controlled virtual character and an action control is displayed in the graphical user interface; in response to a triggering operation on the action control, controlling the controlled virtual character to perform a target action; when the operation type of the triggering operation is determined to be a first operation type, acquiring the scene position of the controlled virtual character after the target action is performed; judging whether a shelter model exists in a designated scene area corresponding to the scene position to obtain a judgment result; and, based on the judgment result, controlling the controlled virtual character to perform a follow-up action corresponding to the target action, where the follow-up action includes an action of entering the shelter model or a standing action.
In this way, after the controlled virtual character performs the target action, it can be controlled to automatically enter the shelter model or stand up according to whether a shelter model exists in the designated scene area corresponding to its scene position, which improves the smoothness and convenience of the player's game operations, avoids misoperation, and improves the player's game experience.
The processor in the electronic device may, by executing the machine-executable instructions, also implement the following operations: when the operation type of the triggering operation is determined to be a second operation type, controlling the controlled virtual character to perform a standing action after performing the target action; wherein the second operation type is different from the first operation type.
The second operation type includes: click operations on the action controls.
The processor in the electronic device may, by executing the machine-executable instructions, also implement the following operations: determining that a shelter model exists in the designated scene area, and controlling the controlled virtual character to perform the action of entering the shelter model.
The processor in the electronic device may, by executing the machine-executable instructions, also implement the following operations: determining that no shelter model exists in the designated scene area, and controlling the controlled virtual character to perform a standing action.
The first operation type includes: a sliding operation on the action control.
The display area of the action control comprises: an initial position area and a target action control area of the action control; the first operation type is a slide operation with an initial position area of the action control as a start point and a target action control area as an end point.
The processor in the electronic device may, by executing the machine-executable instructions, also implement the following operations: in response to the operation contact point of the sliding operation being located in the target action control area, updating the display state of the target action control area.
The processor in the electronic device may, by executing the machine-executable instructions, also implement the following operations: determining that the judgment result indicates that multiple shelter models exist in the designated scene area, and determining a target shelter model from the multiple shelter models based on the orientation of the controlled virtual character, where the target shelter model is located in the direction the controlled virtual character is facing; and controlling the controlled virtual character to perform the action of entering the target shelter model.
The processor in the electronic device may, by executing the machine-executable instructions, also implement the following operations: in response to a triggering operation on the action control, determining that the operation type of the triggering operation is a second operation type and controlling the controlled virtual character to start performing the target action; in response to a shelter model existing on the execution path of the target action, determining the scene position at which the controlled virtual character should stop based on the model position of the shelter model; and controlling the controlled virtual character to stop performing the target action at that scene position.
The processor in the electronic device may, by executing the machine-executable instructions, also implement the following operations: the triggering operation further includes a second sliding operation that starts on the action control and slides to an area outside the action control; in response to the second sliding operation on the action control, controlling the controlled virtual character to start performing the target action; and determining the movement direction of the operation contact point of the second sliding operation as the movement direction of the target action and controlling the controlled virtual character to perform an action of crossing over the shelter model.
The present embodiment also provides a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the interactive control method for a game described above.
When invoked and executed by a processor, the machine-executable instructions stored on the machine-readable storage medium implement the following operations of the interactive control method for a game: displaying a scene picture of a game scene in a graphical user interface, where the game scene includes a controlled virtual character and an action control is displayed in the graphical user interface; in response to a triggering operation on the action control, controlling the controlled virtual character to perform a target action; when the operation type of the triggering operation is determined to be a first operation type, acquiring the scene position of the controlled virtual character after the target action is performed; judging whether a shelter model exists in a designated scene area corresponding to the scene position to obtain a judgment result; and, based on the judgment result, controlling the controlled virtual character to perform a follow-up action corresponding to the target action, where the follow-up action includes an action of entering the shelter model or a standing action.
In this way, after the controlled virtual character performs the target action, it can be controlled to automatically enter the shelter model or stand up according to whether a shelter model exists in the designated scene area corresponding to its scene position, which improves the smoothness and convenience of the player's game operations, avoids misoperation, and improves the player's game experience.
When invoked and executed by a processor, the machine-executable instructions stored on the machine-readable storage medium also implement the following operations: when the operation type of the triggering operation is determined to be a second operation type, controlling the controlled virtual character to perform a standing action after performing the target action; wherein the second operation type is different from the first operation type.
The second operation type includes: click operations on the action controls.
The machine-executable instructions stored on the machine-readable storage medium may implement the following operations in the interactive control method for a game by executing the machine-executable instructions: and determining that a shelter model exists in the appointed scene area, and controlling the controlled virtual character to execute the action of entering the shelter model.
The machine-executable instructions stored on the machine-readable storage medium may implement the following operations in the interactive control method for a game by executing the machine-executable instructions: and determining that a shelter model does not exist in the appointed scene area, and controlling the controlled virtual character to execute standing action.
The first operation type includes: a sliding operation on the action control.
The display area of the action control includes an initial position area of the action control and a target action control area; the first operation type is a sliding operation that takes the initial position area of the action control as a starting point and the target action control area as an end point.
When executed, the machine-executable instructions stored on the machine-readable storage medium may further implement the following operations of the interactive control method for the game: in response to the operation contact point of the sliding operation being located in the target action control area, updating the display state of the target action control area.
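As a rough sketch of how the two operation types and the target-area highlight described above might be derived from raw touch input; the Rect hit-testing helper, the area layout, and the return values are assumptions of this example, not part of the source.

```python
from dataclasses import dataclass


@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h


@dataclass
class ActionControl:
    initial_area: Rect          # area where the action control initially sits
    target_action_area: Rect    # area the player slides into to arm the follow-up behaviour
    highlighted: bool = False

    def on_touch_move(self, px: float, py: float) -> None:
        # Update the display state while the contact point is inside the target action control area.
        self.highlighted = self.target_action_area.contains(px, py)

    def classify(self, down, up) -> str:
        """Return 'first' for a slide from the initial area into the target area,
        and 'second' for a touch that starts and ends inside the control (a click)."""
        if self.initial_area.contains(*down) and self.target_action_area.contains(*up):
            return "first"      # first operation type: sliding operation
        return "second"         # second operation type: e.g. a click on the control
```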
When executed, the machine-executable instructions stored on the machine-readable storage medium may further implement the following operations of the interactive control method for the game: determining that the determination result indicates that a plurality of shelter models exist in the designated scene area, and determining a target shelter model from the plurality of shelter models based on the orientation of the controlled virtual character, wherein the target shelter model is located in the direction in which the controlled virtual character faces; and controlling the controlled virtual character to execute the action of entering the target shelter model.
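One plausible way to resolve "the shelter model located in the direction the controlled virtual character faces" is to score candidate shelters by the angle between the facing vector and the direction to each shelter, as in the assumption-laden sketch below; the field-of-view threshold and the .position attribute are inventions of this example, not of the source.

```python
import math


def choose_target_shelter(character_pos, facing, shelters, fov_degrees=90.0):
    """Pick the shelter whose direction best matches the character's facing.

    character_pos, facing: (x, y) tuples; facing need not be normalized.
    shelters: iterable of objects with a hypothetical .position attribute.
    Returns None if no shelter lies within the forward field of view.
    """
    fx, fy = facing
    fnorm = math.hypot(fx, fy) or 1.0
    best, best_cos = None, math.cos(math.radians(fov_degrees / 2))

    for shelter in shelters:
        sx, sy = shelter.position
        dx, dy = sx - character_pos[0], sy - character_pos[1]
        dist = math.hypot(dx, dy) or 1.0
        cos_angle = (dx * fx + dy * fy) / (dist * fnorm)
        if cos_angle >= best_cos:      # in front of the character and closest to the facing direction so far
            best, best_cos = shelter, cos_angle
    return best
```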
When executed, the machine-executable instructions stored on the machine-readable storage medium may further implement the following operations of the interactive control method for the game: in response to the triggering operation on the action control, determining that the operation type of the triggering operation is a second operation type, and controlling the controlled virtual character to start executing the target action; in response to a shelter model existing on the execution path of the target action, determining, based on the model position of the shelter model, the scene position at which the controlled virtual character is to stop; and controlling the controlled virtual character to stop executing the target action at that scene position.
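The stop-short-of-cover behaviour described above could be approximated as follows; the axis-aligned bounding box, the stop margin, and the sampling loop are all assumptions standing in for whatever collision query a real engine would use.

```python
def stop_position_before_shelter(start, end, shelter_aabb, stop_margin=0.5, steps=100):
    """Return the point on the segment start -> end at which the character should stop,
    i.e. just short of the shelter's bounding box, or `end` if the path is clear.

    start, end: (x, y) tuples; shelter_aabb: (min_x, min_y, max_x, max_y).
    The sampling loop stands in for a swept-volume or raycast query in a real engine.
    """
    min_x, min_y, max_x, max_y = shelter_aabb
    for i in range(1, steps + 1):
        t = i / steps
        x = start[0] + (end[0] - start[0]) * t
        y = start[1] + (end[1] - start[1]) * t
        if min_x <= x <= max_x and min_y <= y <= max_y:
            # First sample inside the shelter: back off along the path by the stop margin.
            dx, dy = x - start[0], y - start[1]
            length = (dx * dx + dy * dy) ** 0.5 or 1.0
            back = max(0.0, length - stop_margin)
            return (start[0] + dx / length * back, start[1] + dy / length * back)
    return end
```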
When executed, the machine-executable instructions stored on the machine-readable storage medium may further implement the following operations of the interactive control method for the game: the triggering operation further includes a second sliding operation that takes the action control as a starting point and slides to an area outside the action control; in response to the second sliding operation on the action control, controlling the controlled virtual character to start executing the target action; and determining the moving direction of the operating contact point of the second sliding operation as the moving direction of the target action, and controlling the controlled virtual character to execute an action of crossing the shelter model.
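Finally, a hedged sketch of the second sliding operation: the swipe direction supplies the moving direction of the target action, and a shelter hit along that direction triggers the crossing action. The drag threshold and the start_target_action, raycast_shelter, and cross_shelter helpers are hypothetical names introduced for this example only.

```python
import math


def on_second_slide(character, touch_start, touch_current, min_drag=24.0):
    """Hypothetical handler for the second sliding operation: the slide leaves the action
    control, its direction becomes the moving direction of the target action, and the
    character crosses (vaults over) a shelter model hit along that direction."""
    dx = touch_current[0] - touch_start[0]
    dy = touch_current[1] - touch_start[1]
    length = math.hypot(dx, dy)
    if length < min_drag:
        return                                   # drag too short to define a direction (threshold is an assumption)

    direction = (dx / length, dy / length)
    character.start_target_action(direction)     # start executing the target action

    shelter = character.scene.raycast_shelter(character.scene_position(), direction)
    if shelter is not None:
        character.cross_shelter(shelter)         # action of crossing the shelter model
```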
The computer program product of the interactive control method and apparatus, the electronic device, and the storage medium for a game provided in the embodiments of the present invention includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the method described in the foregoing method embodiments; for specific implementation, reference may be made to the method embodiments, which will not be described herein again.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted" and "connected" are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific implementations of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field may still, within the technical scope disclosed by the present invention, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some of the technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A method of interactive control of a game, the method comprising:
displaying a scene picture of a game scene in a graphical user interface; wherein the game scene comprises a controlled virtual character, and an action control is displayed in the graphical user interface;
in response to a triggering operation on the action control, controlling the controlled virtual character to execute a target action; determining that an operation type of the triggering operation is a first operation type, and acquiring a scene position of the controlled virtual character after the target action is executed;
determining whether a shelter model exists in a designated scene area corresponding to the scene position, and obtaining a determination result; and based on the determination result, controlling the controlled virtual character to execute a follow-up action corresponding to the target action; wherein the follow-up action comprises: an action of entering the shelter model, or a standing action.
2. The method according to claim 1, wherein the method further comprises:
determining that the operation type of the triggering operation is a second operation type, and controlling the controlled virtual character to execute a standing action after the controlled virtual character executes the target action; wherein the second operation type is different from the first operation type.
3. The method of claim 2, wherein the second operation type comprises: a click operation on the action control.
4. The method according to claim 1, wherein the step of controlling the controlled virtual character to execute the follow-up action corresponding to the target action based on the determination result includes:
determining that a shelter model exists in the designated scene area, and controlling the controlled virtual character to execute an action of entering the shelter model.
5. The method according to claim 1, wherein the step of controlling the controlled virtual character to execute the follow-up action corresponding to the target action based on the determination result includes:
determining that no shelter model exists in the designated scene area, and controlling the controlled virtual character to execute a standing action.
6. The method of claim 1, wherein the first operation type comprises: a first sliding operation performed within the display area of the action control.
7. The method of claim 1, wherein the display area of the action control comprises: an initial position area of the action control and a target action control area; and
the first operation type is a sliding operation that takes the initial position area of the action control as a starting point and the target action control area as an end point.
8. The method of claim 7, wherein the method further comprises:
and updating the display state of the target action control area in response to the operation contact point of the sliding operation being positioned in the target action control area.
9. The method according to claim 1, wherein the step of controlling the controlled virtual character to execute the follow-up action corresponding to the target action based on the determination result includes:
determining that the determination result indicates that a plurality of shelter models exist in the designated scene area, and determining a target shelter model from the plurality of shelter models based on the orientation of the controlled virtual character; wherein the target shelter model is located in the direction in which the controlled virtual character faces;
and controlling the controlled virtual character to execute the action of entering the target shelter model.
10. The method of claim 1, wherein the step of controlling the controlled virtual character to perform a target action in response to a triggering operation for the action control comprises:
in response to the triggering operation on the action control, determining that the operation type of the triggering operation is a second operation type, and controlling the controlled virtual character to start executing a target action;
in response to a shelter model existing on an execution path of the target action, determining, based on a model position of the shelter model, a scene position at which the controlled virtual character is to stop;
and controlling the controlled virtual character to stop executing the target action at the scene position.
11. The method of claim 1, wherein the triggering operation further comprises: a second sliding operation that takes the action control as a starting point and slides to an area outside the action control; and the step of controlling the controlled virtual character to execute a target action in response to the triggering operation on the action control comprises:
in response to the second sliding operation on the action control, controlling the controlled virtual character to start executing a target action;
and determining the moving direction of the operating contact point of the second sliding operation as the moving direction of the target action, and controlling the controlled virtual character to execute the action of crossing the shelter model.
12. An interactive control device for a game, the device comprising:
the first display module is configured to display a scene picture of a game scene in a graphical user interface; wherein the game scene comprises a controlled virtual character, and an action control is displayed in the graphical user interface;
the first control module is configured to control the controlled virtual character to execute a target action in response to a triggering operation on the action control; determine that an operation type of the triggering operation is a first operation type, and acquire a scene position of the controlled virtual character after the target action is executed;
the second control module is configured to determine whether a shelter model exists in the designated scene area corresponding to the scene position, and obtain a determination result; and, based on the determination result, control the controlled virtual character to execute a follow-up action corresponding to the target action; wherein the follow-up action comprises: an action of entering the shelter model, or a standing action.
13. An electronic device, comprising a processor and a memory, wherein the memory stores machine-executable instructions executable by the processor, and the processor executes the machine-executable instructions to implement the interactive control method for a game according to any one of claims 1 to 11.
14. A machine-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the interactive control method of a game as claimed in any one of claims 1 to 11.
Application CN202311458943.4A (priority date 2023-11-02, filing date 2023-11-02): Interactive control method and device for game and electronic equipment. Status: Pending. Publication: CN117582656A (en).

Priority Applications (1)

Application Number: CN202311458943.4A | Publication: CN117582656A (en) | Priority Date: 2023-11-02 | Filing Date: 2023-11-02 | Title: Interactive control method and device for game and electronic equipment

Applications Claiming Priority (1)

Application Number: CN202311458943.4A | Publication: CN117582656A (en) | Priority Date: 2023-11-02 | Filing Date: 2023-11-02 | Title: Interactive control method and device for game and electronic equipment

Publications (1)

Publication Number: CN117582656A | Publication Date: 2024-02-23

Family

ID=89915795

Family Applications (1)

Application Number: CN202311458943.4A | Status: Pending | Publication: CN117582656A (en) | Priority Date: 2023-11-02 | Filing Date: 2023-11-02 | Title: Interactive control method and device for game and electronic equipment

Country Status (1)

Country: CN (1) | Publication: CN117582656A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination