CN113893522A - Virtual skill control method, device, equipment, storage medium and program product - Google Patents


Info

Publication number
CN113893522A
Authority
CN
China
Prior art keywords
virtual, touch operation, touch, target, operation sequence
Prior art date
Legal status
Withdrawn
Application number
CN202111211673.8A
Other languages
Chinese (zh)
Inventor
仇健
张晓斐
谢中水
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202111211673.8A
Priority to CN202111668429.4A (publication CN114210046A)
Publication of CN113893522A
Current legal status: Withdrawn

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20: Input arrangements for video game devices
    • A63F 13/21: Input arrangements characterised by their sensors, purposes or types
    • A63F 13/214: Input arrangements for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F 13/2145: Input arrangements for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F 13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42: Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/10: Features characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F 2300/1068: Input arrangements specially adapted to detect the point of contact of the player on a surface, e.g. floor mat, touch pad
    • A63F 2300/1075: Input arrangements specially adapted to detect the point of contact of the player on a surface, using a touch screen
    • A63F 2300/30: Features characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/303: Output arrangements for displaying additional data, e.g. simulating a Head Up Display
    • A63F 2300/308: Details of the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a control method, apparatus, device, computer-readable storage medium and computer program product for virtual skills. The method comprises: presenting, in an interface of a virtual scene, a target virtual object and a virtual joystick for controlling the target virtual object to release virtual skills, the virtual joystick comprising at least two virtual key positions; receiving a trigger operation for the virtual joystick, the trigger operation comprising a first touch operation sequence for virtual key positions in the virtual joystick; and in response to the trigger operation, when the first touch operation sequence is determined to match a sub-operation sequence of a second touch operation sequence corresponding to a target skill, controlling the target virtual object to release the target skill, so as to assist the target virtual object in interacting with other virtual objects in the virtual scene. Through the application, the control success rate of virtual skills and the efficiency of human-computer interaction can be improved.

Description

Virtual skill control method, device, equipment, storage medium and program product
Technical Field
The present application relates to human-computer interaction technologies, and in particular, to a method, an apparatus, a device, a computer-readable storage medium, and a computer program product for controlling virtual skills.
Background
In many virtual scenes, such as applications that port an arcade game to a mobile terminal (for example, a mobile phone), the input device changes: the physical joystick of the arcade cabinet becomes a virtual joystick on the mobile terminal, and a player needs to trigger hidden virtual skills through a series of touch operations on key positions in the virtual joystick. However, because the virtual joystick provides no tactile feedback, after the arcade game is ported to the mobile terminal it is difficult to hit accurately every key position required by a command input (combo), and performing the input successfully is hard. As a result, virtual skills are difficult to release successfully, the player has to perform interactive operations many times to achieve a given interactive purpose, and human-computer interaction efficiency is low.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment, a computer readable storage medium and a computer program product for controlling virtual skills, which can improve the control success rate and the human-computer interaction efficiency of the virtual skills.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a method for controlling virtual skills, which comprises the following steps:
presenting, in an interface of a virtual scene, a target virtual object and a virtual joystick for controlling the target virtual object to release virtual skills, wherein the virtual joystick comprises at least two virtual key positions;
receiving a trigger operation for the virtual joystick, wherein the trigger operation comprises a first touch operation sequence for virtual key positions in the virtual joystick;
and in response to the trigger operation, when the first touch operation sequence is determined to match a sub-operation sequence of a second touch operation sequence corresponding to a target skill, controlling the target virtual object to release the target skill, so as to assist the target virtual object in interacting with other virtual objects in the virtual scene.
An embodiment of the present application provides a control device for virtual skills, including:
an interface presentation module, configured to present, in an interface of a virtual scene, a target virtual object and a virtual joystick for controlling the target virtual object to release virtual skills, the virtual joystick comprising at least two virtual key positions;
a trigger receiving module, configured to receive a trigger operation for the virtual joystick, the trigger operation comprising a first touch operation sequence for virtual key positions in the virtual joystick;
and a skill control module, configured to, in response to the trigger operation, control the target virtual object to release the target skill when the first touch operation sequence is determined to match a sub-operation sequence of a second touch operation sequence corresponding to the target skill, so as to assist the target virtual object in interacting with other virtual objects in the virtual scene.
In the above scheme, the trigger receiving module is further configured to receive at least two touch operations on virtual key positions in the virtual joystick;
and when the time interval between the touch moments of any two adjacent touch operations among the at least two touch operations is below a first duration, determine that the at least two touch operations are continuous touch operations, and take the first touch operation sequence formed by the at least two continuous touch operations as the trigger operation.
In the above scheme, the trigger receiving module is further configured to take the touch moment of the first touch operation on the virtual joystick as a starting point and the moment a second duration after that touch moment as an end point, and collect touch operations on virtual key positions in the virtual joystick from the starting point until collection stops at the end point;
and when the number of collected touch operations is at least two, take the first touch operation sequence formed by the at least two collected touch operations as the trigger operation.
In the above scheme, the trigger receiving module is further configured to, when a first touch operation on a virtual key position in the virtual joystick is collected, continue collecting touch operations on virtual key positions in the virtual joystick starting from the touch moment of that first touch operation;
while the time interval between collected adjacent touch operations does not exceed a third duration, keep collecting touch operations on virtual key positions in the virtual joystick, and stop collecting when the interval between a collected touch operation and the previous touch operation exceeds the third duration;
and when the number of collected touch operations is at least two, take the first touch operation sequence formed by the at least two collected touch operations as the trigger operation.
In the foregoing solution, before the controlling the target virtual object to release the target skill, the apparatus further includes:
the matching determination module is used for matching the first touch operation sequence with the second touch operation sequence to obtain a matching degree;
and when the matching degree reaches a matching degree threshold, determine that the first touch operation sequence matches a sub-operation sequence of the second touch operation sequence.
In the foregoing solution, before determining that the first touch operation sequence matches with a sub-operation sequence in a second touch operation sequence corresponding to a target skill, the apparatus further includes:
a subsequence generating module, configured to obtain the virtual key position corresponding to each touch operation in the second touch operation sequence;
extract, from the plurality of virtual key positions corresponding to the second touch operation sequence, at least two critical key positions corresponding to the target skill;
and generate, based on the extracted at least two critical key positions, at least two touch operation sequences as sub-operation sequences of the second touch operation sequence.
In the above scheme, the matching determining module is further configured to match the first touch operation sequence with each touch operation sequence, respectively, to obtain a matching result;
and when the matching result represents that a touch operation sequence consistent with the first touch operation sequence exists in the at least two touch operation sequences, determining that the first touch operation sequence is matched with a sub-operation sequence in a second touch operation sequence corresponding to the target skill.
In the above scheme, the apparatus further comprises:
a total sequence generation module, configured to obtain a plurality of virtual key positions for triggering the target skill and the touch order of each virtual key position;
and sort the touch operations corresponding to those virtual key positions according to the touch order to obtain the second touch operation sequence;
correspondingly, the subsequence generating module is further configured to perform, multiple times, a selection of non-critical key positions from the plurality of virtual key positions;
combine the at least two critical key positions with the non-critical key positions selected each time to obtain at least two key position sets;
and sort the touch operations corresponding to each key position set according to the touch order of the virtual key positions, to obtain a touch operation sequence corresponding to that key position set as a sub-operation sequence of the second touch operation sequence.
In the foregoing solution, the subsequence generating module is further configured to obtain reference information, the reference information comprising at least one of an attribute of the target virtual object and the key operation habits of the user corresponding to the currently logged-in account;
determine, according to the reference information, the positional relationship between the non-critical key positions to be selected and the critical key positions;
and select, based on the positional relationship, non-critical key positions from the plurality of virtual key positions multiple times.
In the above scheme, the apparatus further comprises:
the operation control module is used for determining a target virtual key position corresponding to the last touch operation in the first touch operation sequence when the first touch operation sequence is not matched with the sub-operation sequence in the second touch operation sequence corresponding to the target skill;
and controlling the target virtual object to execute the operation indicated by the operation instruction corresponding to the target virtual key position.
In the foregoing solution, after the controlling the target virtual object to release the target skill, the apparatus further includes:
a secondary control module, configured to present, during the interaction of the target virtual object with the other virtual objects, a skill control for releasing the target skill again;
and controlling the target virtual object to release the target skill again in response to the triggering operation of the skill control.
In the foregoing solution, after the controlling the target virtual object to release the target skill again, the apparatus further includes:
a display canceling module, configured to determine the display duration of the skill control, and cancel display of the skill control when the display duration reaches a target duration; or
determine the number of times the target skill has been released based on the skill control, and cancel display of the skill control when the number of releases reaches a target number; or
cancel display of the skill control when the target virtual object finishes interacting with the other virtual objects.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the control method of the virtual skill provided by the embodiment of the application when the executable instruction stored in the memory is executed.
The embodiment of the present application provides a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the method for controlling virtual skills provided in the embodiments of the present application.
The embodiment of the present application provides a computer program product comprising a computer program or instructions that, when executed by a processor, implement the method for controlling virtual skills provided in the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
When a first touch operation sequence for virtual key positions in the virtual joystick matches a sub-operation sequence of the second touch operation sequence corresponding to a target skill, the target virtual object can be controlled to release the target skill. This raises the probability that the player's input sequence matches an operation sequence corresponding to the target skill, lowers the difficulty of the command input, and improves the release success rate of virtual skills; fewer interactive operations are needed to achieve a given interactive purpose, so human-computer interaction efficiency is improved.
Drawings
Fig. 1 is a schematic architecture diagram of a control system 100 for virtual skills according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a method for controlling virtual skills according to an embodiment of the present application;
fig. 4 is a schematic diagram of a touch operation sequence provided in the present embodiment;
FIG. 5 is a schematic diagram of an interface provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a touch operation sequence corresponding to a target skill provided in the embodiment of the present application;
fig. 7 is a schematic diagram of a touch operation sequence of user input provided in an embodiment of the present application;
fig. 8 is a schematic flowchart of a method for controlling virtual skills according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a virtual skill control device according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail below with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, the terms "first" and "second" are used merely to distinguish between similar objects and do not represent a particular ordering of the objects; it should be understood that "first" and "second" may be interchanged in a particular order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms and expressions used in the embodiments of the present application are explained; the following interpretations apply to these terms and expressions.
1) The client, an application program running in the terminal for providing various services, such as a video playing client, a game client, etc.
2) "In response to" indicates the condition or state on which a performed operation depends: when the condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) The virtual scene is a virtual scene displayed (or provided) when an application program runs on a terminal, and the virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application.
For example, when the virtual scene is a three-dimensional virtual space, the three-dimensional virtual space may be an open space, and the virtual scene may be used to simulate a real environment in reality, for example, the virtual scene may include sky, land, sea, and the like, and the land may include environmental elements such as a desert, a city, and the like. Of course, the virtual scene may also include virtual objects, for example, buildings, vehicles, or props such as weapons required for arming themselves or fighting with other virtual objects in the virtual scene, and the virtual scene may also be used to simulate real environments in different weathers, for example, weather such as sunny days, rainy days, foggy days, or dark nights. The user may control the movement of the virtual object in the virtual scene.
4) Virtual objects, the appearance of various people and objects in the virtual scene that can interact, or movable objects in the virtual scene. The movable object can be a virtual character, a virtual animal, an animation character, a virtual prop and the like, such as a character, an animal, a plant, a prop, an oil drum, a wall, a stone and the like displayed in a virtual scene. The virtual object may be an avatar in the virtual scene that is virtual to represent the user. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
For example, the virtual object may be a user Character controlled by an operation on the client, an Artificial Intelligence (AI) set in a virtual scene match by training, or a Non-user Character (NPC) set in a virtual scene interaction. Alternatively, the virtual object may be a virtual character that is confrontationally interacted with in a virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control a virtual object to freely fall, glide, open a parachute to fall, run, jump, climb, bend over, and move on the land, or control a virtual object to swim, float, or dive in the sea, or the like, but the user may also control a virtual object to move in the virtual scene by riding a virtual vehicle, for example, the virtual vehicle may be a virtual car, a virtual aircraft, a virtual yacht, and the like, and the above-mentioned scenes are merely exemplified, and the present invention is not limited to this. The user can also control the virtual object to carry out antagonistic interaction with other virtual objects through the virtual prop, for example, the virtual prop can be a throwing type virtual prop such as a grenade, a beaming grenade and a viscous grenade, and can also be a shooting type virtual prop such as a machine gun, a pistol and a rifle, and the control type of the virtual prop is not specifically limited in the application.
5) Virtual skills: special abilities of various kinds that can assist a target virtual object in interacting with other virtual objects in a virtual scene. In practice, some virtual skills are released by triggering a corresponding skill control, and others are released by performing the touch operation sequence associated with the skill, where that touch operation sequence is obtained by ordering, in a certain touch order, the trigger operations of a key combination composed of a plurality of virtual key positions.
A virtual skill in the embodiments of the present application refers to a skill released by performing the corresponding touch operation sequence, that is, a skill that is released only after a successful command input. In general, different virtual skills correspond to different touch operation sequences; when the user's touch operation sequence matches the touch operation sequence corresponding to a target skill, the command input is considered successful, that is, the user's touch operations can successfully release the target skill.
Command input (combo input) means that, in a fighting, arcade or level-clearing game, different skills are released by key combinations: when the player performs touch operations on the corresponding key positions in the order of the combination, the corresponding skill is released. For example, the touch operation sequence corresponding to the virtual skill "Whirlwind Kick" is ↓, ↘, →, B; when the player performs the corresponding touch operations in this order, the command input is considered successful and the "Whirlwind Kick" skill is released.
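As a purely illustrative sketch (the directional token names and the Python representation below are assumptions, not part of the embodiment), such a command can be modelled as an ordered tuple of inputs and checked against the player's presses:

```python
# Command for the "Whirlwind Kick" example above: down, down-forward, forward, B
WHIRLWIND_KICK = ("down", "down_forward", "forward", "B")

def command_input_succeeds(player_inputs, command=WHIRLWIND_KICK):
    """The command input succeeds when the player pressed the command's
    inputs in exactly the configured order."""
    return tuple(player_inputs) == command

print(command_input_succeeds(["down", "down_forward", "forward", "B"]))  # True
print(command_input_succeeds(["down", "forward", "B"]))                  # False
```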
6) Scene data, representing various features that objects in the virtual scene are exposed to during the interaction, may include, for example, the location of the objects in the virtual scene. Of course, different types of features may be included depending on the type of virtual scene; for example, in a virtual scene of a game, scene data may include a time required to wait for various functions provided in the virtual scene (depending on the number of times the same function can be used within a certain time), and attribute values indicating various states of a game character, for example, a life value (energy value, also referred to as red value) and a magic value (also referred to as blue value).
Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of a control system 100 for virtual skills according to an embodiment of the present application, in which terminals (illustratively, terminal 400-1 and terminal 400-2) are connected to a server 200 via a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two, and uses a wireless or wired link to implement data transmission.
The terminal can be various types of user terminals such as a smart phone, a tablet computer, a notebook computer and the like, and can also be a desktop computer, a game machine, a television or a combination of any two or more of the data processing devices; the server 200 may be a single server configured to support various services, may also be configured as a server cluster, may also be a cloud server, and the like.
In practical applications, the terminal installs and runs an application program supporting a virtual scene, where the application program may be any one of a first-person shooter (FPS) game, a third-person shooter game, a multiplayer online battle arena (MOBA) game, a two-dimensional (2D) game application, a three-dimensional (3D) game application, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game; the application may also be a stand-alone application, such as a stand-alone 3D game program.
The virtual scene in the embodiments of the present application may be used to simulate a three-dimensional virtual space, which may be an open space, and the virtual scene may be used to simulate a real environment; for example, the virtual scene may include sky, land, sea and the like, and may further include virtual articles, such as buildings, tables and vehicles, as well as items such as weapons with which virtual objects in the virtual scene arm themselves or fight other virtual objects. The virtual scene can also be used to simulate real environments under different weather, such as sunny days, rainy days, foggy days or dark nights. In practical implementation, the user may use the terminal to control the virtual object to perform activities in the virtual scene, including but not limited to: adjusting body posture, crawling, running, riding, jumping, driving, picking up items, shooting, attacking, throwing, and slashing or stabbing.
Taking an electronic game scene as an exemplary scene, a user may operate on the terminal in advance, and after detecting the operation of the user, the terminal may download a game configuration file of the electronic game, where the game configuration file may include an application program, interface display data, virtual scene data, or the like of the electronic game, so that the user (or a player) may invoke the game configuration file when logging in the electronic game on the terminal to render and display an electronic game interface. The method comprises the steps that a user can perform touch operation on a terminal, the terminal can send an acquisition request of game data corresponding to the touch operation to a server after detecting the touch operation, the server determines the game data corresponding to the touch operation based on the acquisition request and returns the game data to the terminal, the terminal performs rendering display on the game data, and the game data can comprise virtual scene data, behavior data of virtual objects in a virtual scene and the like.
In practical application, the terminal presents, in an interface of a virtual scene, a target virtual object and a virtual joystick for controlling the target virtual object to release virtual skills, the virtual joystick comprising at least two virtual key positions; receives a trigger operation for the virtual joystick, the trigger operation comprising a first touch operation sequence for virtual key positions in the virtual joystick; and, in response to the trigger operation, controls the target virtual object to release the target skill when the first touch operation sequence is determined to match a sub-operation sequence of the second touch operation sequence corresponding to the target skill, so as to assist the target virtual object in interacting with other virtual objects in the virtual scene.
Taking a military virtual simulation application as an exemplary scene, virtual scene technology lets a trainee experience the battlefield environment realistically in sight and hearing and become familiar with the environmental characteristics of the area of operations, and the trainee interacts with objects in the virtual environment through the necessary equipment. An implementation of the virtual battlefield environment can create, through background generation and image synthesis based on a corresponding three-dimensional battlefield graphic image library that includes combat backgrounds, battlefield scenes, various weapons and equipment, combatants and the like, a three-dimensional battlefield environment that is fraught with danger and nearly real.
In actual implementation, the terminal presents, in an interface of a virtual scene, a target virtual object (such as a simulated combatant) and a virtual joystick for controlling the target virtual object to release virtual skills, the virtual joystick comprising at least two virtual key positions; receives a trigger operation for the virtual joystick, the trigger operation comprising a first touch operation sequence for virtual key positions in the virtual joystick; and, in response to the trigger operation, controls the target virtual object to release the target skill when the first touch operation sequence is determined to match a sub-operation sequence of the second touch operation sequence corresponding to the target skill, so as to assist the target virtual object in interacting with other virtual objects (such as a simulated enemy) in the virtual scene.
Referring to fig. 2, which is a schematic structural diagram of an electronic device 500 provided in the embodiment of the present application: in practical applications, the electronic device 500 may be the terminal 400-1, the terminal 400-2, or the server in fig. 1; taking the electronic device being the terminal 400-1 or the terminal 400-2 shown in fig. 1 as an example, the electronic device implementing the method for controlling virtual skills in the embodiment of the present application is described. The electronic device 500 shown in fig. 2 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among these components; in addition to a data bus, the bus system 540 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 540 in fig. 2.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating to other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the virtual skill control apparatus provided in the embodiments of the present application may be implemented in software, and fig. 2 illustrates a virtual skill control apparatus 555 stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: an interface presenting module 5551, a trigger receiving module 5552 and a skill control module 5553, which are logical and thus may be arbitrarily combined or further split according to the implemented functions, the functions of the respective modules will be described below.
In some embodiments, the electronic device may implement the method for controlling virtual skills provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or software module in an operating system; a native application (APP), i.e. a program that must be installed in the operating system to run, such as a game APP; an applet, i.e. a program that only needs to be downloaded into a browser environment to run; or a game applet that can be embedded in any APP. In general, the computer program may be any form of application, module or plug-in.
Next, a description will be given of a method for controlling a virtual skill provided in the embodiment of the present application, and in actual implementation, the method may be implemented by a server or a terminal alone, or may be implemented by a server and a terminal in cooperation. Referring to fig. 3, fig. 3 is a schematic flow chart of a method for controlling virtual skills according to an embodiment of the present application, and the steps shown in fig. 3 will be described.
Step 101: the terminal presents, in an interface of a virtual scene, a target virtual object and a virtual joystick for controlling the target virtual object to release virtual skills, the virtual joystick comprising at least two virtual key positions.
A client supporting the virtual scene is installed on the terminal. When the user opens the client on the terminal and the terminal runs it, the terminal presents an interface of the virtual scene observed from the perspective of a target virtual object, where the target virtual object is the virtual object in the virtual scene corresponding to the current user account. Based on this interface, the user (also called a player) can control the target virtual object to interact with other virtual objects in the virtual scene, for example by controlling the target virtual object to hold a virtual shooting prop (such as a virtual sniper rifle, virtual submachine gun or virtual shotgun) and shoot at other virtual objects, or by controlling the target virtual object to release a virtual skill that acts on other virtual objects. The other virtual objects are virtual objects in the virtual scene corresponding to user accounts other than the current user account, and may be in a hostile relationship with the target virtual object.
When a virtual scene such as an arcade game is ported to a mobile terminal (such as a mobile phone), the virtual joystick takes over the function of the physical joystick: control of the target virtual object, control of the release of virtual skills, and so on are realized through touch operations on the virtual key positions in the virtual joystick.
Step 102: receiving a trigger operation for the virtual joystick, the trigger operation comprising a first touch operation sequence for virtual key positions in the virtual joystick.
In some embodiments, the terminal may receive the trigger operation for the virtual joystick as follows: receiving at least two touch operations on virtual key positions in the virtual joystick; and when the time interval between the touch moments of any two adjacent touch operations among the at least two touch operations is below a first duration, determining that the at least two touch operations are continuous touch operations, and taking the first touch operation sequence formed by the at least two continuous touch operations as the trigger operation for the virtual joystick.
Here, in practical applications, the terminal may receive touch operations in the form of a touch operation set, in which the touch operations are arranged in the order of their touch moments. After receiving the touch operation set, the terminal determines whether the touch operations in it belong to the same group by examining them pairwise in sequence: it obtains the time interval between the touch moments of every two adjacent touch operations and compares each interval with a preset first duration. When every interval is below the first duration, all touch operations in the set are determined to be one group, and the set is taken as a first touch operation sequence. When some interval reaches or exceeds the first duration, the two touch operations on either side of that interval cannot be treated as one group, and the set is split into several touch operation sequences at such intervals. For example, suppose the touch operation set is [A, B, C, D, E, F, G], the first duration is 0.3 seconds, and the pairwise intervals are, in order, 0.1 s, 0.2 s, 0.2 s, 0.4 s, 0.1 s and 0.2 s. Then A, B, C, D form one continuous touch operation sequence and E, F, G form another, so the set is divided into the two sequences [A, B, C, D] and [E, F, G], each of which is taken as a trigger operation for the virtual joystick.
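A minimal Python sketch of this grouping rule, using the illustrative keys, timestamps and 0.3-second threshold from the example above (the data representation is an assumption):

```python
from typing import List, Tuple

def split_into_sequences(touches: List[Tuple[str, float]],
                         first_duration: float) -> List[List[str]]:
    """Split (key, touch moment) pairs into continuous touch operation
    sequences: adjacent touches stay in the same sequence only while the
    interval between their touch moments is below `first_duration`."""
    sequences: List[List[str]] = []
    current: List[str] = []
    prev_moment = None
    for key, moment in touches:
        if prev_moment is not None and moment - prev_moment >= first_duration:
            sequences.append(current)  # interval too long: close the current group
            current = []
        current.append(key)
        prev_moment = moment
    if current:
        sequences.append(current)
    return sequences

# Intervals of 0.1, 0.2, 0.2, 0.4, 0.1 and 0.2 s with a 0.3 s first duration
touches = [("A", 0.0), ("B", 0.1), ("C", 0.3), ("D", 0.5),
           ("E", 0.9), ("F", 1.0), ("G", 1.2)]
print(split_into_sequences(touches, first_duration=0.3))
# [['A', 'B', 'C', 'D'], ['E', 'F', 'G']]
```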
In some embodiments, the terminal may receive the trigger operation for the virtual joystick as follows: taking the touch moment of the first touch operation on the virtual joystick as a starting point and the moment a second duration after that touch moment as an end point, collecting touch operations on virtual key positions in the virtual joystick from the starting point until collection stops at the end point; and when the number of collected touch operations is at least two, taking the first touch operation sequence formed by the at least two collected touch operations as the trigger operation for the virtual joystick.
Here, in practical applications, the first touch operation sequence may also be composed of all touch operations collected within a preset time window starting from the touch moment of the first touch operation on the virtual joystick; for example, all touch operations collected within 1 second of the starting point are combined, in the order of their touch moments, into a first touch operation sequence that serves as the trigger operation for the virtual joystick.
In some embodiments, the terminal may receive the trigger operation for the virtual joystick as follows: when a first touch operation on a virtual key position in the virtual joystick is collected, continuing to collect touch operations on virtual key positions in the virtual joystick starting from the touch moment of that first touch operation; while the time interval between collected adjacent touch operations does not exceed a third duration, continuing to collect touch operations on virtual key positions in the virtual joystick, and stopping collection when the interval between a collected touch operation and the previous touch operation exceeds the third duration; and when the number of collected touch operations is at least two, taking the first touch operation sequence formed by the at least two collected touch operations as the trigger operation for the virtual joystick.
Here, in practical applications, the terminal may determine the interval between adjacent collected touch operations in real time: collection starts from the first touch operation and continues as long as the interval between adjacent touch operations does not exceed the preset third duration; once the interval exceeds the third duration, collection stops, and all touch operations collected in this stage are combined, in the order of their touch moments, into a first touch operation sequence that serves as the trigger operation for the virtual joystick.
It should be noted that, in practical applications, each touch operation on a virtual key position carries information indicating the corresponding virtual key position and the corresponding touch moment. Since the touch operation sequence corresponding to a virtual skill is a finite set, the collection duration can additionally be bounded: when the interval between the touch moments of any two adjacent continuously collected touch operations is below the preset third duration, the touch operations collected within a preset duration (e.g., 1 second) from the starting point are combined into one touch operation sequence that serves as a trigger operation for the virtual joystick, the touch operations collected within the following preset duration (e.g., 1 second) are combined into another trigger operation for the virtual joystick, and so on.
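A minimal sketch of the gap-based variant, assuming the touch operations arrive as an iterable of (key, touch moment) pairs; the fixed-window variant described earlier would simply stop at `start + second_duration` instead:

```python
from typing import Iterable, List, Tuple

def collect_trigger_sequence(touch_stream: Iterable[Tuple[str, float]],
                             third_duration: float) -> List[str]:
    """Collect one candidate trigger sequence: start at the first touch and
    stop as soon as the gap to the previous touch exceeds `third_duration`."""
    sequence: List[str] = []
    prev_moment = None
    for key, moment in touch_stream:
        if prev_moment is not None and moment - prev_moment > third_duration:
            break  # gap too long: this touch belongs to the next collection
        sequence.append(key)
        prev_moment = moment
    # Only a sequence of at least two touch operations counts as a trigger operation
    return sequence if len(sequence) >= 2 else []
```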
Step 103: and responding to the trigger operation, and controlling the target virtual object to release the target skill when the first touch operation sequence is determined to be matched with the sub-operation sequence in the second touch operation sequence corresponding to the target skill so as to assist the target virtual object to interact with other virtual objects in the virtual scene.
When the player's first touch operation sequence on the virtual joystick matches a sub-operation sequence (a simplified operation sequence) corresponding to the target skill, the command input can be considered successful, that is, release of the target skill is successfully triggered.
For example, referring to fig. 4, which is a schematic diagram of a touch operation sequence provided in the embodiment of the present application, for a target skill whose second touch operation sequence is a-b-c-d-e, the sub-operation sequences may be set as a-c-d-e, b-c-d-e and a-b-d.
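A minimal sketch of this relaxed matching rule using the example sequences above (the tuple/set representation is an assumption; in the embodiment the sequences are configured per skill):

```python
# Full (second) touch operation sequence for the target skill and its
# configured sub-operation sequences, per the a-b-c-d-e example above.
SECOND_SEQUENCE = ("a", "b", "c", "d", "e")
SUB_SEQUENCES = {("a", "c", "d", "e"), ("b", "c", "d", "e"), ("a", "b", "d")}

def should_release_target_skill(first_sequence) -> bool:
    """Release the target skill when the player's first touch operation
    sequence equals the full sequence or any configured sub-operation sequence."""
    candidate = tuple(first_sequence)
    return candidate == SECOND_SEQUENCE or candidate in SUB_SEQUENCES

print(should_release_target_skill(["a", "c", "d", "e"]))  # True: simplified input succeeds
print(should_release_target_skill(["a", "c", "e"]))       # False: no release
```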
In some embodiments, before the terminal controls the target virtual object to release the target skill, the first touch operation sequence may be matched against the second touch operation sequence to obtain a matching degree; when the matching degree reaches a matching degree threshold, it is determined that the first touch operation sequence matches a sub-operation sequence of the second touch operation sequence.
In practical implementation, after the terminal obtains the first touch operation sequence, the terminal matches the first touch operation sequence with the second touch operation sequence corresponding to the target skill, and when the matching degree reaches a matching degree threshold (e.g., 90%), it may be determined that the first touch operation sequence can successfully trigger the release of the target skill.
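The embodiment does not fix how the matching degree is computed; the sketch below uses Python's difflib similarity ratio purely as a stand-in for whatever measure an implementation chooses, with the 90% threshold mentioned above as an illustrative value:

```python
from difflib import SequenceMatcher

MATCH_THRESHOLD = 0.9  # illustrative 90% matching degree threshold

def matching_degree(first_sequence, second_sequence) -> float:
    """One possible similarity measure between two touch operation sequences."""
    return SequenceMatcher(None, list(first_sequence), list(second_sequence)).ratio()

def sequences_match(first_sequence, second_sequence) -> bool:
    return matching_degree(first_sequence, second_sequence) >= MATCH_THRESHOLD
```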
In some embodiments, after obtaining the first touch operation sequence, the terminal may match it against the second touch operation sequence corresponding to the target skill; when the first touch operation sequence is identical to the second touch operation sequence, the two are determined to match and release of the target skill can be triggered, in which case no matching against the sub-operation sequences corresponding to the target skill is needed. When the first touch operation sequence is not identical to the second touch operation sequence, it is matched against the sub-operation sequences; when the first touch operation sequence is identical to one of the sub-operation sequences, the match is determined to be successful and release of the target skill can be triggered.
In some embodiments, before determining that the first touch operation sequence matches a sub-operation sequence of the second touch operation sequence corresponding to the target skill, the terminal may generate the sub-operation sequences as follows: obtaining the virtual key position corresponding to each touch operation in the second touch operation sequence; extracting, from the plurality of virtual key positions corresponding to the second touch operation sequence, at least two critical key positions corresponding to the target skill; and generating, based on the extracted at least two critical key positions, at least two touch operation sequences as sub-operation sequences of the second touch operation sequence.
Correspondingly, the terminal may determine that the first touch operation sequence matches a sub-operation sequence as follows: matching the first touch operation sequence against each of the generated touch operation sequences to obtain a matching result; and when the matching result indicates that one of the at least two generated touch operation sequences is identical to the first touch operation sequence, determining that the first touch operation sequence matches a sub-operation sequence of the second touch operation sequence corresponding to the target skill.
For example, for a target skill whose second touch operation sequence is a-b-c-d-e-f, each element in the sequence indicates the virtual key position of a touch operation, and a critical key position is a virtual key position that plays a decisive role in successfully triggering the release of the target skill. If the critical key positions are a and d, touch operation sequences containing these critical key positions are generated as sub-operation sequences, for example a-d, a-b-d-e, a-b-c-d-f, a-d-e-f, and the like. After receiving the first touch operation sequence, the terminal matches it against the sub-operation sequences corresponding to the target skill; when one of the sub-operation sequences is identical to the first touch operation sequence, it is determined that the first touch operation sequence matches a sub-operation sequence of the second touch operation sequence corresponding to the target skill, and release of the target skill can be triggered.
In some embodiments, the terminal may obtain the second touch operation sequence as follows: obtaining the plurality of virtual key positions that trigger the target skill and the touch order of each virtual key position; and sorting the touch operations corresponding to those virtual key positions according to the touch order to obtain the second touch operation sequence. Correspondingly, the terminal may generate, based on the extracted at least two critical key positions, at least two touch operation sequences as sub-operation sequences of the second touch operation sequence as follows: selecting non-critical key positions from the plurality of virtual key positions multiple times; combining the at least two critical key positions with the non-critical key positions selected each time to obtain at least two key position sets; and sorting the touch operations corresponding to each key position set according to the touch order of the virtual key positions, to obtain a touch operation sequence for that key position set as a sub-operation sequence of the second touch operation sequence.
Still taking the second touch operation sequence a-b-c-d-e-f as an example, each element in the sequence indicates a touch operation on a virtual key position and carries the information of that key position and the corresponding touch order; for example, touch operation a corresponds to virtual key position 1, and touch operation a is ordered before touch operation b. In practical application, the touch operations in the sequence may instead carry their touch moments, and the touch order can be determined from the touch moments.
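A minimal sketch of this generation step, using the a-b-c-d-e-f example with a and d as the critical key positions; the exhaustive enumeration of non-critical selections is an assumption, and in practice the selections can be narrowed using the reference information described next:

```python
from itertools import combinations

FULL_SEQUENCE = ["a", "b", "c", "d", "e", "f"]   # second touch operation sequence
CRITICAL_KEYS = {"a", "d"}                       # critical key positions

def generate_sub_sequences(full_sequence, critical_keys):
    """Build sub-operation sequences: every key position set containing all
    critical key positions plus a selection of non-critical ones, sorted by
    the touch order of the full sequence."""
    order = {key: i for i, key in enumerate(full_sequence)}
    non_critical = [k for k in full_sequence if k not in critical_keys]
    candidates = set()
    for r in range(len(non_critical) + 1):           # select r non-critical key positions
        for picked in combinations(non_critical, r):
            key_set = set(picked) | set(critical_keys)
            candidates.add(tuple(sorted(key_set, key=order.__getitem__)))
    return candidates

subs = generate_sub_sequences(FULL_SEQUENCE, CRITICAL_KEYS)
print(("a", "d") in subs, ("a", "b", "d", "e") in subs)  # True True
```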
In some embodiments, the terminal may perform the selection of the non-critical key multiple times from among the plurality of virtual key bits by: acquiring reference information, wherein the reference information comprises at least one of the attribute of a target virtual object and the key operation habit of a user corresponding to a current login account; determining the position relation between the selected non-key position and the key position according to the reference information; and based on the position relation, selecting the non-key position from the plurality of virtual key positions for a plurality of times.
In actual implementation, the attribute in the reference information refers to an interaction characteristic of the target virtual object, such as an attack power attribute or a dodging attribute, and the key operation habit is obtained from the historical operations of the user corresponding to the current login account. According to the reference information, non-key positions frequently used by the user, together with their positional relation to the key positions, are screened out from all the virtual key positions in the virtual rocker to generate the sub-operation sequences, and the success rate of the combo input is improved based on the generated sub-operation sequences.
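The sketch below shows one possible reading of this selection step, scoring non-key positions by how often the user touches them and whether they lie next to a key position; the scoring rule, function name and sample data are assumptions for illustration only.

```python
def select_non_key_positions(all_keys, key_positions, usage_counts, adjacency, top_n=2):
    """Prefer non-key positions the user touches often and that lie next to a key position."""
    candidates = [k for k in all_keys if k not in key_positions]

    def score(k):
        near_key = any(k in adjacency.get(key, ()) for key in key_positions)
        return (usage_counts.get(k, 0), near_key)

    return sorted(candidates, key=score, reverse=True)[:top_n]

all_keys = ["a", "b", "c", "d", "e", "f"]
key_positions = ["a", "d"]
usage = {"b": 40, "c": 5, "e": 25, "f": 10}   # hypothetical key operation habit
adjacency = {"a": {"b"}, "d": {"c", "e"}}      # hypothetical positional relation
print(select_non_key_positions(all_keys, key_positions, usage, adjacency))  # ['b', 'e']
```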
In some embodiments, when the first touch operation sequence does not match any sub-operation sequence in the second touch operation sequence corresponding to the target skill, the terminal may determine the target virtual key position corresponding to the last touch operation in the first touch operation sequence, and control the target virtual object to execute the operation indicated by the operation instruction corresponding to the target virtual key position.
Here, when the first touch operation sequence is consistent with neither the second touch operation sequence nor any sub-operation sequence, the target virtual object is controlled to execute the operation indicated by the target virtual key position corresponding to the last touch operation in the first touch operation sequence. For example, if the first touch operation sequence is a-b-c-d-e-f, the target virtual key position is the virtual key position corresponding to touch operation f; if the virtual key position corresponding to touch operation f indicates moving right, the target virtual object is controlled to move right.
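A minimal sketch of this fallback path follows; the instruction table and names are hypothetical and only illustrate executing the operation bound to the last touched key position when no sequence matches.

```python
KEY_INSTRUCTIONS = {"f": "move_right", "e": "move_left", "a": "jump"}  # hypothetical bindings

def handle_unmatched(first_sequence):
    """When no sequence matches, execute the instruction of the last touch operation's key position."""
    last_key = first_sequence[-1]
    return KEY_INSTRUCTIONS.get(last_key, "idle")

print(handle_unmatched(["a", "b", "c", "d", "e", "f"]))  # 'move_right'
```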
In some embodiments, after the terminal controls the target virtual object to release the target skill, a skill control for releasing the target skill again may be presented while the target virtual object interacts with other virtual objects; and in response to a trigger operation on the skill control, the target virtual object is controlled to release the target skill again.
Here, after the target skill is released through a successful combo input on the virtual rocker, a skill control for releasing the target skill again can be presented in the interface of the virtual scene. When the user wants to control the target virtual object to release the target skill again, the user only needs to trigger the skill control, without executing the series of touch operations in the sequence corresponding to the target skill, which further improves the release efficiency of the skill and the human-computer interaction efficiency.
In some embodiments, after the terminal controls the target virtual object to release the target skill again, the skill control is no longer displayed when at least one of the following conditions is met: the presentation duration of the skill control is determined, and the skill control is cancelled from being displayed when the presentation duration reaches a target duration; or the number of times the target skill is released based on the skill control is determined, and the skill control is cancelled from being displayed when the number of releases reaches a target number; or the skill control is cancelled from being displayed when the target virtual object finishes interacting with the other virtual objects.
In practical application, the display duration or the number of uses of the skill control (that is, the number of releases of the skill) can be limited; when the display duration reaches the preset target duration, or the number of uses of the skill control reaches the preset target number, the skill control is cancelled from being displayed. In this case, if the target skill is to be used subsequently, the combo input still needs to be performed, that is, touch operations are executed on the virtual key positions in the virtual rocker, and the target skill can be released only when the touch operation sequence on the virtual key positions successfully matches the touch operation sequence of the target skill, i.e., only after the combo input succeeds. In this way, the utilization rate of combo inputs and the enthusiasm for interaction can be improved, and the ecological balance of the virtual scene can be maintained.
In addition, the presentation duration of the skill control can also be related to the interaction situation of the target virtual object; for example, the skill control is cancelled from being displayed when the target virtual object finishes interacting with other virtual objects, such as when the target virtual object defeats the other virtual objects and enters the next round, or when the target virtual object is defeated in the virtual scene. It can be understood that if the target virtual object enters the next round or the user starts a new round, and the user wants to continue using the target skill, the above combo input still needs to be performed, and the target skill can be released only after the combo input succeeds; this not only improves the utilization rate of combo inputs and the enthusiasm for interaction, but also maintains the ecological balance of the virtual scene.
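For illustration, the three hide conditions for the re-release skill control can be combined as in the sketch below; the thresholds and names are assumptions, not values from this application.

```python
def should_hide_skill_control(presented_seconds, release_count, interaction_finished,
                              target_seconds=30.0, target_releases=3):
    """Hide the control when its presentation duration, use count, or the interaction ending requires it."""
    return (presented_seconds >= target_seconds
            or release_count >= target_releases
            or interaction_finished)

print(should_hide_skill_control(12.0, 3, False))  # True: the release count has reached the target number
print(should_hide_skill_control(5.0, 1, False))   # False: keep presenting the control
```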
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described. Take a virtual scene of an arcade-port game as an example, where an arcade-port game refers to an arcade game ported to a mobile phone. Because of the change of the input device, the situation differs from the traditional arcade game: in the arcade game, the input device for combo inputs is a physical joystick, and the player performs a combo by holding and moving the physical joystick; after the game is ported to the mobile phone, the input device for combo inputs becomes a virtual rocker on the mobile phone platform, and the player generally performs a combo by sliding a single finger on the virtual rocker. Compared with the rotating operation of holding a physical joystick, sliding a single finger to a specified direction within a limited space is much more difficult, and given how closely the virtual key positions in the virtual rocker are spaced relative to a single finger, it is difficult to accurately touch each key position required for an arcade combo. As a result, successfully performing a combo is difficult, the virtual skill is hard to release, the player needs to control the terminal to execute interactive operations many times in order to achieve a certain interactive purpose, and the human-computer interaction efficiency is low.
Referring to fig. 5, fig. 5 is a schematic interface diagram provided in the embodiment of the present application. The related art reduces the precision requirement of the operation by enlarging the virtual rocker, so as to improve the success rate of combo inputs. However, an overly large virtual rocker cannot meet the differentiated requirements brought by different player devices, players' hand sizes and personal operation habits, and therefore lacks universality; moreover, an overly large virtual rocker occupies too much display space, which not only wastes display resources but also affects the aesthetics of the whole interface. More importantly, enlarging the virtual rocker does not essentially solve the input problem of the virtual rocker relative to the physical joystick, and the improvement in the combo success rate is limited.
Therefore, the embodiment of the application provides a method for controlling virtual skills: a first touch operation sequence formed by the touch operations received for the virtual key positions in the virtual rocker is matched with the simplified operation sequences (the sub-operation sequences) corresponding to the target skill, and when the matching succeeds, the target virtual object can be controlled to release the target skill.
In actual implementation, before the first touch operation sequence for releasing the target skill is received, a combo dictionary for the target skill is set, where the combo dictionary includes the complete touch operation sequence and a plurality of simplified sub-touch operation sequences. When the user wants to use the target skill, the user can touch the virtual key positions in the virtual rocker; the terminal receives the first touch operation sequence for the virtual key positions in the virtual rocker, matches the received first touch operation sequence with the touch operation sequences in the combo dictionary, and, when the matching succeeds, controls the target virtual object to release the target skill, so as to assist the target virtual object in interacting with other virtual objects in the game.
Referring to fig. 6, fig. 6 is a schematic view of the touch operation sequences corresponding to a target skill provided in the embodiment of the present application. Assume that, for the target skill, the complete touch operation sequence that can trigger release of the target skill is 1-2-3-4-5-6, where each number indicates the touch order of the corresponding virtual key position in the virtual rocker. Key positions 1, 2, 4 and 6 are extracted from the virtual key positions of the virtual rocker according to at least one of the attribute of the target virtual object and the key operation habit of the player corresponding to the current login account, and simplified touch operation sequences that can successfully trigger release of the target skill, such as 1-2-4-6, 1-2-3-4-6 and 1-2-4-5-6, are generated based on the extracted key positions.
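A minimal sketch of generating such a combo dictionary from the key positions of fig. 6 is given below: it keeps the key positions 1, 2, 4 and 6 and adds every subset of the non-key positions 3 and 5, ordered by touch order. The function name and dictionary shape are assumptions for illustration.

```python
from itertools import combinations

def build_combo_dictionary(full_sequence, key_positions):
    """Return the complete sequence plus simplified sequences built around the key positions."""
    non_key = [k for k in full_sequence if k not in key_positions]
    simplified = set()
    for r in range(len(non_key) + 1):
        for extra in combinations(non_key, r):
            keep = set(key_positions) | set(extra)
            simplified.add(tuple(k for k in full_sequence if k in keep))
    return {"full": tuple(full_sequence), "simplified": simplified}

d = build_combo_dictionary([1, 2, 3, 4, 5, 6], {1, 2, 4, 6})
print(sorted(d["simplified"]))
# [(1, 2, 3, 4, 5, 6), (1, 2, 3, 4, 6), (1, 2, 4, 5, 6), (1, 2, 4, 6)]
```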
Referring to fig. 7, fig. 7 is a schematic diagram of touch operation sequences input by a user according to an embodiment of the present application, in which the player's key input sequence is denoted by letters for ease of correspondence. Fig. 7 illustrates various cases in which the touch operation sequence input by the player can be considered to successfully trigger release of the target skill. In (i), the touch operation sequence input by the player is a-b-c-d, where a, b, c and d respectively correspond to 1, 2, 4 and 6 in fig. 6, so the key positions of the target skill are fully covered; that is, the input touch operation sequence (a-b-c-d) is consistent with a sub-touch operation sequence, and the combo input is considered successful. In (ii), the touch operation sequence input by the player is a-b-c-d, where a, b and c correspond to 1, 2 and 4 in fig. 6 respectively, and the end point d deviates from 6 in fig. 6; although the touch operation at the end point is not in place, the combo input may still be considered successful in view of the continuity of the overall touch operations. In (iii), the touch operation sequence input by the player is a-b-c-d, where b, c and d correspond to 2, 4 and 6 in fig. 6 respectively, and the starting point a deviates from 1 in fig. 6; although the touch operation at the starting point is not in place, the combo input may still be considered successful in view of the continuity of the overall touch operations. In (iv), the touch operation sequence input by the player is a-b-c-d, where b and c correspond to 2 and 4 in fig. 6 respectively, and the starting point a and the end point d deviate from 1 and 6 in fig. 6 respectively; although the touch operations at the starting point and the end point are not in place, the combo input may still be considered successful in view of the continuity of the overall touch operations. In (v), the touch operation sequence input by the player is a-b-c-d-e, where b, d and e correspond to 2, 4 and 6 in fig. 6 respectively, the starting point a deviates from 1 in fig. 6, and c is a non-key position; that is, the player touches a non-key position in addition to the key positions. Although the touch operation at the starting point is not in place, the combo input may still be considered successful in view of the continuity of the overall touch operations.
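The sketch below is one possible reading of the tolerant matching behind the cases of fig. 7: touches on known non-key positions are ignored, and a deviated touch at the start and/or end of the sequence is tolerated as long as the remaining touches cover the remaining key positions in order. The rule, names and examples are assumptions for illustration only.

```python
def fuzzy_match(input_seq, key_seq, non_key_positions):
    """Return True if the input covers the key positions, allowing start/end deviation and non-key extras."""
    # Ignore touches on known non-key positions (case v in fig. 7).
    core = [k for k in input_seq if k not in non_key_positions]
    # Tolerate a deviated touch at the start and/or the end (cases ii-iv).
    for drop_start in (0, 1):
        for drop_end in (0, 1):
            trimmed_input = core[drop_start:len(core) - drop_end]
            trimmed_keys = key_seq[drop_start:len(key_seq) - drop_end]
            if trimmed_input and trimmed_input == trimmed_keys:
                return True
    return False

KEYS = [1, 2, 4, 6]   # key positions of the target skill (fig. 6)
NON_KEYS = {3, 5}
print(fuzzy_match([1, 2, 4, 6], KEYS, NON_KEYS))     # case (i): True
print(fuzzy_match([1, 2, 4, 9], KEYS, NON_KEYS))     # case (ii), end deviates: True
print(fuzzy_match([9, 2, 3, 4, 6], KEYS, NON_KEYS))  # case (v), start deviates, 3 is non-key: True
print(fuzzy_match([1, 9, 9, 6], KEYS, NON_KEYS))     # middle key positions missed: False
```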
Referring to fig. 8, fig. 8 is a schematic flow chart of a method for controlling virtual skills provided in an embodiment of the present application, where the method includes:
step 201: the terminal determines a picking dictionary corresponding to the target skill, wherein the picking dictionary comprises a complete touch operation sequence and a plurality of simplified touch operation sequences capable of triggering release of the target skill.
Step 202: when the first touch operation aiming at the virtual key position in the virtual rocker is collected, the touch operation aiming at the virtual key position in the virtual rocker is collected by taking the touch time of the first touch operation as a starting point.
Step 203: and judging whether the acquired time interval between the adjacent touch operations exceeds the target duration.
Here, after an editing layer (e.g., a cos layer) of the game on the terminal collects the touch operations input by the user, it judges whether the time interval between adjacent touch operations exceeds the target duration; this judgment determines whether the adjacent touch operations can be regarded as belonging to one touch operation sequence. The target duration can be set; for example, assuming that the target duration is 0.3 seconds and the two adjacently collected touch operations are A and B, if the time interval between A and B is less than or equal to 0.3 seconds, touch operations A and B are regarded as belonging to the same touch operation sequence A-B, and if the time interval between A and B is greater than 0.3 seconds, touch operations A and B are regarded as two separate touch operation sequences A and B.
When the time interval between adjacent touch operations does not exceed the target duration, step 204 is executed; when the time interval between adjacent touch operations exceeds the target duration, step 205 is executed.
Step 204: and continuously acquiring touch operation aiming at the virtual key position in the virtual rocker.
Here, if the time interval between the currently collected touch operation and the previously collected touch operation does not exceed the target duration, the collection of the touch operation input by the player is continued until the time interval between the touch operation input by the player and the previously input touch operation exceeds the target duration, or the collection is stopped when the player stops inputting the touch operation.
Step 205: and stopping collecting touch operation aiming at the virtual key position in the virtual rocker.
Step 206: and combining the collected multiple touch operations according to the touch sequence to obtain a first touch operation sequence.
Step 207: and judging whether the first touch operation sequence is matched with the complete touch operation sequence of the target skill.
Here, the collected first touch operation sequence is matched with the touch operation sequences in the combo dictionary of the target skill. During matching, the first touch operation sequence may first be matched with the complete touch operation sequence of the target skill; when the two match, step 209 is executed; otherwise, step 208 is executed.
Step 208: and judging whether the first touch operation sequence is matched with the simplified touch operation sequence of the target skill.
Here, when the first touch operation sequence does not match with the complete touch operation sequence of the target skill, further, the first touch operation sequence is matched with the simplified operation sequence in the bidding dictionary of the target skill, and when the first touch operation sequence matches with the simplified operation sequence, step 209 is executed; otherwise, 210 is performed.
Step 209: and controlling the target virtual object to release the target skill.
Here, the editing layer transmits the judgment result to a simulator layer (such as a rom layer), and the simulator layer executes corresponding operation according to the judgment result, for example, when the judgment result represents successful twisting, the simulator layer controls the target virtual object to release the target skill; when the judgment result represents that the find fails, the simulator layer controls the target virtual object to perform other operations (i.e. execute step 210).
Step 210: the control target virtual object performs other operations.
For example, when the first touch operation sequence does not match the simplified operation sequence, the terminal may determine a target virtual key corresponding to the last touch operation in the first touch operation sequence, and control the target virtual object to perform the operation indicated by the operation instruction corresponding to the target virtual key.
Here, after the current first touch operation sequence is processed, the terminal may continue to acquire other touch operations, and process the subsequently acquired touch operations according to the above flow.
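The following compact sketch strings the flow of fig. 8 together: touch operations are grouped into one sequence while adjacent touches stay within the target interval (steps 202-206), and the sequence is then matched against the combo dictionary, full sequence first and simplified sequences second, before falling back to the last key position (steps 207-210). It reuses the dictionary shape from the sketch after fig. 6; the names, return values and the 0.3-second threshold are illustrative assumptions.

```python
TARGET_INTERVAL = 0.3  # example threshold between adjacent touches, in seconds

def collect_sequence(timed_touches):
    """timed_touches: (key, touch_moment) pairs already ordered by time."""
    sequence, last_time = [], None
    for key, moment in timed_touches:
        if last_time is not None and moment - last_time > TARGET_INTERVAL:
            break  # interval exceeded: the current sequence ends here (step 205)
        sequence.append(key)
        last_time = moment
    return sequence

def dispatch(sequence, combo_dictionary, key_instructions):
    """Full match, then simplified match, then fall back to the last key position's instruction."""
    if tuple(sequence) == combo_dictionary["full"]:
        return "release_target_skill"                      # step 207 matched
    if tuple(sequence) in combo_dictionary["simplified"]:
        return "release_target_skill"                      # step 208 matched
    return key_instructions.get(sequence[-1], "idle") if sequence else "idle"  # step 210 fallback
```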
In this manner, the first touch operation sequence input by the player is matched, in a fuzzy matching manner, with the complete touch operation sequence or the simplified touch operation sequences corresponding to the target skill, and when the first touch operation sequence successfully matches any one of them, the combo input is considered successful. This weakens the impact of players finding certain key positions on the mobile phone hard to touch, which otherwise makes successful combo inputs difficult, and thus improves the success rate of combo inputs. For example, before the fuzzy judgment provided by the embodiment of the present application is adopted, the combo success rate is mostly below 40%, and in some settings even below 20%; after the fuzzy judgment scheme is adopted, the combo success rate is above 60% overall, and above 80% in some settings.
In addition, the embodiment of the application does not need to present an oversized virtual rocker in the interface, which saves display resources and makes the display interface more reasonable. Moreover, because the player's combo input has a specific target move in most cases, the fuzzy matching manner restores the player's arcade-hall game experience without violating fairness.
Continuing with the exemplary structure of the virtual skill control device 555 implemented as a software module provided by the embodiments of the present application, in some embodiments, referring to fig. 9, where fig. 9 is a schematic structural diagram of the virtual skill control device provided by the embodiments of the present application, the software module stored in the virtual skill control device 555 in the memory 550 of fig. 2 may include:
an interface presenting module 5551, configured to present a target virtual object in an interface of a virtual scene, and a virtual joystick for controlling the target virtual object to release a virtual skill, where the virtual joystick includes at least two virtual key locations;
a trigger receiving module 5552, configured to receive a trigger operation for the virtual joystick, where the trigger operation includes a first sequence of touch operations for a virtual key in the virtual joystick;
the skill control module 5553 is configured to, in response to the trigger operation, control the target virtual object to release the target skill when it is determined that the first touch operation sequence matches a sub-operation sequence in a second touch operation sequence corresponding to the target skill, so as to assist the target virtual object to interact with other virtual objects in the virtual scene.
In some embodiments, the trigger receiving module is further configured to receive at least two touch operations for a virtual key in the virtual joystick;
when the time interval between the touch moments corresponding to any two adjacent touch operations in the at least two touch operations is lower than a first time length, determining that the at least two touch operations are continuous touch operations, and taking a first touch operation sequence formed by the continuous at least two touch operations as the trigger operation.
In some embodiments, the trigger receiving module is further configured to collect, with the touch time of a first touch operation on the virtual joystick as a starting point and the moment that is a second duration apart from the touch time as an end point, the touch operations on the virtual keys in the virtual joystick from the starting point until collection stops at the end point;
and when the number of the acquired touch operations is at least two, taking a first touch operation sequence formed by the acquired at least two touch operations as the trigger operation.
In some embodiments, the trigger receiving module is further configured to, when a first touch operation for a virtual key in the virtual joystick is acquired, continue to acquire the touch operation for the virtual key in the virtual joystick with a touch time of the first touch operation as a starting point;
when the time interval between the collected adjacent touch operations does not exceed a third time length, continuously collecting the touch operations aiming at the virtual key position in the virtual rocker, and stopping collecting when the time interval between the collected touch operations and the previous touch operation exceeds the third time length;
and when the number of the acquired touch operations is at least two, taking a first touch operation sequence formed by the acquired at least two touch operations as the trigger operation.
In some embodiments, before said controlling said target virtual object to release said target skill, said apparatus further comprises:
the matching determination module is used for matching the first touch operation sequence with the second touch operation sequence to obtain a matching degree;
and when the matching degree reaches a threshold value of the matching degree, determining that the sub-operation sequences in the first touch operation sequence and the second touch operation sequence are matched.
In some embodiments, before determining that the first sequence of touch operations matches a sub-sequence of operations in a second sequence of touch operations corresponding to a target skill, the apparatus further comprises:
a subsequence generating module, configured to obtain a virtual key position corresponding to each touch operation in the second touch operation sequence;
extracting at least two key positions corresponding to the target skill from a plurality of virtual key positions corresponding to the second touch operation sequence;
and generating at least two touch operation sequences as sub-operation sequences of the second touch operation based on the extracted at least two key positions.
In some embodiments, the matching determining module is further configured to match the first touch operation sequence with each touch operation sequence, respectively, to obtain a matching result;
and when the matching result represents that a touch operation sequence consistent with the first touch operation sequence exists in the at least two touch operation sequences, determining that the first touch operation sequence is matched with a sub-operation sequence in a second touch operation sequence corresponding to the target skill.
In some embodiments, the apparatus further comprises:
the total sequence generation module is used for acquiring a plurality of virtual key positions for triggering the target skill and a touch sequence corresponding to each virtual key position;
based on the touch sequence, sorting the touch operation corresponding to the corresponding virtual key positions to obtain a second touch operation sequence;
correspondingly, the subsequence generating module is further configured to perform the selection of non-key positions from the plurality of virtual key positions for a plurality of times;
combining the at least two key positions with the non-key positions selected each time to obtain at least two key position sets;
and sequencing the touch operation sets corresponding to the key position sets according to the touch sequence corresponding to the virtual key positions to obtain a touch operation sequence corresponding to the key position sets, wherein the touch operation sequence is used as a sub-operation sequence of the second touch operation.
In some embodiments, the subsequence generation module is further configured to obtain reference information, where the reference information includes: at least one of the attribute of the target virtual object and the key operation habit of the user corresponding to the current login account;
determining the position relation between the selected non-key position and the key position according to the reference information;
and based on the position relation, selecting non-key keys from the virtual keys for multiple times.
In some embodiments, the apparatus further comprises:
the operation control module is used for determining a target virtual key position corresponding to the last touch operation in the first touch operation sequence when the first touch operation sequence is not matched with the sub-operation sequence in the second touch operation sequence corresponding to the target skill;
and controlling the target virtual object to execute the operation indicated by the operation instruction corresponding to the target virtual key position.
In some embodiments, after said controlling said target virtual object to release said target skill, said apparatus further comprises:
a secondary control module, configured to present a skill control for releasing the target skill again in a process of interacting the target virtual object with the other virtual object;
and controlling the target virtual object to release the target skill again in response to the triggering operation of the skill control.
In some embodiments, after said controlling again said target virtual object to release said target skill, said apparatus further comprises:
the display canceling module is used for determining the display duration of the skill control, and canceling the display of the skill control when the display duration reaches the target duration; alternatively, the first and second electrodes may be,
determining the releasing times of releasing the target skill based on the skill control, and canceling the skill control from being displayed when the releasing times reaches the target times; alternatively, the first and second electrodes may be,
and when the target virtual object finishes interacting with the other virtual objects, canceling the display of the skill control.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device executes the virtual skill control method described in the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, will cause the processor to execute a control method of virtual skills provided by embodiments of the present application, for example, a method as shown in fig. 3.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (16)

1. A method of controlling a virtual skill, the method comprising:
presenting a target virtual object and a virtual rocker for controlling the target virtual object to release virtual skills in an interface of a virtual scene, wherein the virtual rocker comprises at least two virtual key positions;
receiving a trigger operation aiming at the virtual rocker, wherein the trigger operation comprises a first touch operation sequence aiming at a virtual key position in the virtual rocker;
and responding to the trigger operation, and controlling the target virtual object to release the target skill when the first touch operation sequence is determined to be matched with a sub-operation sequence in a second touch operation sequence corresponding to the target skill so as to assist the target virtual object to interact with other virtual objects in the virtual scene.
2. The method of claim 1, wherein the receiving a trigger operation for the virtual rocker comprises:
receiving at least two touch operations aiming at the virtual key positions in the virtual rocker;
when the time interval between the touch moments corresponding to any two adjacent touch operations in the at least two touch operations is lower than a first time length, determining that the at least two touch operations are continuous touch operations, and taking a first touch operation sequence formed by the continuous at least two touch operations as the trigger operation.
3. The method of claim 1, wherein the receiving a trigger operation for the virtual rocker comprises:
taking the touch time of the first touch operation for the virtual rocker as a starting point and the moment that is a second duration apart from the touch time as an end point, and collecting the touch operations for the virtual key positions in the virtual rocker from the starting point until collection stops at the end point;
and when the number of the acquired touch operations is at least two, taking a first touch operation sequence formed by the acquired at least two touch operations as the trigger operation.
4. The method of claim 1, wherein the receiving a trigger operation for the virtual rocker comprises:
when the first touch operation aiming at the virtual key position in the virtual rocker is acquired, taking the touch time of the first touch operation as a starting point, and continuing to acquire the touch operation aiming at the virtual key position in the virtual rocker;
when the time interval between the collected adjacent touch operations does not exceed a third time length, continuously collecting the touch operations aiming at the virtual key position in the virtual rocker, and stopping collecting when the time interval between the collected touch operations and the previous touch operation exceeds the third time length;
and when the number of the acquired touch operations is at least two, taking a first touch operation sequence formed by the acquired at least two touch operations as the trigger operation.
5. The method of claim 1, wherein prior to said controlling said target virtual object to release said target skill, said method further comprises:
matching the first touch operation sequence with the second touch operation sequence to obtain a matching degree;
and when the matching degree reaches a threshold value of the matching degree, determining that the sub-operation sequences in the first touch operation sequence and the second touch operation sequence are matched.
6. The method of claim 1, wherein prior to determining that the first sequence of touch operations matches a sequence of sub-operations in a second sequence of touch operations corresponding to a target skill, the method further comprises:
acquiring a virtual key position corresponding to each touch operation in the second touch operation sequence;
extracting at least two key positions corresponding to the target skill from a plurality of virtual key positions corresponding to the second touch operation sequence;
and generating at least two touch operation sequences as sub-operation sequences of the second touch operation based on the extracted at least two key positions.
7. The method of claim 6, wherein the method further comprises:
matching the first touch operation sequence with each touch operation sequence respectively to obtain a matching result;
and when the matching result represents that a touch operation sequence consistent with the first touch operation sequence exists in the at least two touch operation sequences, determining that the first touch operation sequence is matched with a sub-operation sequence in a second touch operation sequence corresponding to the target skill.
8. The method of claim 6, wherein the method further comprises:
acquiring a plurality of virtual key positions for triggering the target skill and a touch sequence corresponding to each virtual key position;
based on the touch sequence, sorting the touch operation corresponding to the corresponding virtual key positions to obtain a second touch operation sequence;
generating at least two touch operation sequences based on the extracted at least two key positions as sub-operation sequences of the second touch operation, including:
selecting non-key keys from the plurality of virtual keys for a plurality of times;
combining the at least two key positions with the non-key positions selected each time to obtain at least two key position sets;
and sequencing the touch operation sets corresponding to the key position sets according to the touch sequence corresponding to the virtual key positions to obtain a touch operation sequence corresponding to the key position sets, wherein the touch operation sequence is used as a sub-operation sequence of the second touch operation.
9. The method of claim 8, wherein the selecting non-key keys from the plurality of virtual keys for a plurality of times comprises:
acquiring reference information, wherein the reference information comprises: at least one of the attribute of the target virtual object and the key operation habit of the user corresponding to the current login account;
determining the position relation between the selected non-key position and the key position according to the reference information;
and based on the position relation, selecting non-key keys from the virtual keys for multiple times.
10. The method of claim 1, wherein the method further comprises:
when the first touch operation sequence is not matched with a sub-operation sequence in a second touch operation sequence corresponding to a target skill, determining a target virtual key position corresponding to the last touch operation in the first touch operation sequence;
and controlling the target virtual object to execute the operation indicated by the operation instruction corresponding to the target virtual key position.
11. The method of claim 1, wherein after said controlling said target virtual object to release said target skill, said method further comprises:
presenting a skill control for releasing the target skill again in the process of interacting the target virtual object with the other virtual objects;
and controlling the target virtual object to release the target skill again in response to the triggering operation of the skill control.
12. The method of claim 11, wherein after said again controlling said target virtual object to release said target skill, said method further comprises:
determining the presentation time length of the skill control, and canceling the display of the skill control when the presentation time length reaches the target time length; or,
determining the releasing times of releasing the target skill based on the skill control, and canceling the skill control from being displayed when the releasing times reaches the target times; or,
when the target virtual object finishes interacting with the other virtual objects, canceling the display of the skill control.
13. An apparatus for controlling virtual skills, the apparatus comprising:
the interface presentation module is used for presenting a target virtual object in an interface of a virtual scene and a virtual rocker for controlling the target virtual object to release virtual skills, and the virtual rocker comprises at least two virtual key positions;
the trigger receiving module is used for receiving trigger operation aiming at the virtual rocker, and the trigger operation comprises a first touch operation sequence aiming at a virtual key position in the virtual rocker;
and the skill control module is used for controlling the target virtual object to release the target skill when the first touch operation sequence is determined to be matched with a sub-operation sequence in a second touch operation sequence corresponding to the target skill in response to the trigger operation, so as to assist the target virtual object to interact with other virtual objects in the virtual scene.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of controlling virtual skills of any of claims 1 to 12 when executing executable instructions stored in said memory.
15. A computer-readable storage medium storing executable instructions for implementing the method of controlling virtual skills of any of claims 1 to 12 when executed by a processor.
16. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the method of controlling virtual skills of any of claims 1 to 12.
CN202111211673.8A 2021-10-18 2021-10-18 Virtual skill control method, device, equipment, storage medium and program product Withdrawn CN113893522A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111211673.8A CN113893522A (en) 2021-10-18 2021-10-18 Virtual skill control method, device, equipment, storage medium and program product
CN202111668429.4A CN114210046A (en) 2021-10-18 2021-12-31 Virtual skill control method, device, equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111211673.8A CN113893522A (en) 2021-10-18 2021-10-18 Virtual skill control method, device, equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN113893522A true CN113893522A (en) 2022-01-07

Family

ID=79192601

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111211673.8A Withdrawn CN113893522A (en) 2021-10-18 2021-10-18 Virtual skill control method, device, equipment, storage medium and program product
CN202111668429.4A Pending CN114210046A (en) 2021-10-18 2021-12-31 Virtual skill control method, device, equipment, storage medium and program product

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111668429.4A Pending CN114210046A (en) 2021-10-18 2021-12-31 Virtual skill control method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (2) CN113893522A (en)

Also Published As

Publication number Publication date
CN114210046A (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN112076473B (en) Control method and device of virtual prop, electronic equipment and storage medium
CN112402960B (en) State switching method, device, equipment and storage medium in virtual scene
CN112295230B (en) Method, device, equipment and storage medium for activating virtual props in virtual scene
CN112416196B (en) Virtual object control method, device, equipment and computer readable storage medium
CN113797536B (en) Control method, device, equipment and storage medium for objects in virtual scene
CN112138385B (en) Virtual shooting prop aiming method and device, electronic equipment and storage medium
WO2022257653A1 (en) Virtual prop display method and apparatus, electronic device and storage medium
CN112402959A (en) Virtual object control method, device, equipment and computer readable storage medium
CN113633964B (en) Virtual skill control method, device, equipment and computer readable storage medium
KR20230007392A (en) Method and apparatus, device, and storage medium for displaying a virtual environment picture
CN112057860B (en) Method, device, equipment and storage medium for activating operation control in virtual scene
CN112306351A (en) Virtual key position adjusting method, device, equipment and storage medium
CN113262488A (en) Control method, device and equipment for virtual object in virtual scene and storage medium
CN113559510A (en) Virtual skill control method, device, equipment and computer readable storage medium
CN114344905A (en) Team interaction processing method, device, equipment, medium and program for virtual object
CN113144603A (en) Method, device, equipment and storage medium for switching call objects in virtual scene
WO2024098628A1 (en) Game interaction method and apparatus, terminal device, and computer-readable storage medium
CN115634449A (en) Method, device, equipment and product for controlling virtual object in virtual scene
CN113769379B (en) Method, device, equipment, storage medium and program product for locking virtual object
CN113144617B (en) Control method, device and equipment of virtual object and computer readable storage medium
CN113893522A (en) Virtual skill control method, device, equipment, storage medium and program product
CN114146414A (en) Virtual skill control method, device, equipment, storage medium and program product
CN113769396B (en) Interactive processing method, device, equipment, medium and program product of virtual scene
CN113769392B (en) Method and device for processing state of virtual scene, electronic equipment and storage medium
CN112891930B (en) Information display method, device, equipment and storage medium in virtual scene

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20220107
