WO2021244237A1 - Virtual object control method and apparatus, computer device, and storage medium - Google Patents

Virtual object control method and apparatus, computer device, and storage medium

Info

Publication number
WO2021244237A1
Authority
WO
WIPO (PCT)
Prior art keywords
touch
virtual object
area
positions
target
Prior art date
Application number
PCT/CN2021/093061
Other languages
English (en)
French (fr)
Inventor
胡勋
万钰林
翁建苗
粟山东
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to JP2021563357A priority Critical patent/JP7384521B2/ja
Priority to EP21782630.4A priority patent/EP3939679A4/en
Priority to KR1020217034213A priority patent/KR102648249B1/ko
Priority to US17/507,965 priority patent/US20220040579A1/en
Publication of WO2021244237A1 publication Critical patent/WO2021244237A1/zh

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/426Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/56Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads the surface being also a display device, e.g. touch screens
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5375Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/822Strategy games; Role-playing games
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1068Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted to detect the point of contact of the player on a surface, e.g. floor mat, touch pad
    • A63F2300/1075Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted to detect the point of contact of the player on a surface, e.g. floor mat, touch pad using a touch screen
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Definitions

  • the embodiments of the present application relate to the field of computer technology, and in particular, to a virtual object control method, device, computer equipment, and storage medium.
  • the skill release operation is a common operation.
  • the user can control the virtual object to release the skill according to the aiming direction, and the aiming direction needs to be determined before the skill is released.
  • the user usually uses a finger to perform a touch operation in the touch area, thereby determining the aiming direction according to the touch position of the touch operation.
  • the touch position is difficult to control, which easily causes the actual touch position to be inconsistent with the touch position expected by the user, resulting in inaccurate aiming direction.
  • the embodiments of the present application provide a virtual object control method, device, computer equipment, and storage medium, which improve the accuracy of the aiming direction.
  • Embodiments provide a virtual object control method, the method includes:
  • in response to a touch operation on the touch area, determining at least two touch positions passed through by the touch operation, where the at least two touch positions are a preset number of touch positions through which the touch operation last passed;
  • merging the at least two touch positions according to a preset strategy, so as to determine a target touch position of the touch operation; determining a first aiming direction indicated by the target touch position; and controlling the first virtual object to perform a skill release operation according to the first aiming direction.
  • Each embodiment also provides a device for acquiring aiming information, the device including:
  • the touch position determination module is configured to determine, in response to a touch operation on the touch area, at least two touch positions passed through by the touch operation, the at least two touch positions being a preset number of touch positions through which the touch operation last passed;
  • the target position determining module is configured to merge the at least two touch positions according to a preset strategy, so as to determine the target touch position of the touch operation;
  • a first direction determining module configured to determine the first aiming direction indicated by the target touch position
  • the first control module is configured to control the first virtual object to perform a release skill operation according to the first aiming direction.
  • Each embodiment further provides a computer device, the computer device including a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the operations performed in the virtual object control method described above.
  • Each embodiment further provides a computer-readable storage medium storing at least one instruction, the at least one instruction being loaded and executed by a processor to implement the operations performed in the virtual object control method described above.
  • The method, device, computer equipment, and storage medium provided by the embodiments of the present application no longer determine the aiming direction only according to the last touch position of the touch operation in the touch area; instead, at least two touch positions of the touch operation are determined and considered together to determine a target touch position, which avoids a mismatch between the last touch position, caused by user misoperation, and the touch position expected by the user.
  • Because the obtained target touch position reflects the touch position expected by the user, the aiming direction indicated by the target touch position better meets the user's needs, improving the accuracy of the aiming direction.
  • The first virtual object is then controlled to perform the skill release operation according to this aiming direction, so the control of the first virtual object's skill release operation is also more accurate.
  • FIG. 1A is an architecture diagram of an implementation environment to which an embodiment of the present application is applicable
  • FIG. 1B is a flowchart of a virtual object control method provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of another virtual object control method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a virtual scene interface provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a touch area provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another virtual scene interface provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another virtual scene interface provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another virtual scene interface provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another virtual scene interface provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another virtual scene interface provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another virtual scene interface provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another virtual scene interface provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another virtual scene interface provided by an embodiment of the present application.
  • FIG. 13 is a flowchart of controlling virtual object release skills provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of another touch area provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of another touch area provided by an embodiment of the present application.
  • FIG. 16 is a flowchart of determining a target touch position provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a virtual object control device provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of another virtual object control device provided by an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • The terms "first", "second", and so on used in this application may be used herein to describe various concepts, but unless otherwise specified these concepts are not limited by the terms; the terms are only used to distinguish one concept from another. For example, the first virtual object may be referred to as the second virtual object, and the second virtual object may be referred to as the first virtual object.
  • Multiplayer online tactical competition: in the virtual scene, at least two rival camps occupy their respective map areas and compete with a certain victory condition as the goal.
  • the victory conditions include but are not limited to: occupying a stronghold or destroying the enemy camp’s stronghold, killing the virtual object of the enemy camp, ensuring one’s own survival in a specified scene and time, grabbing a certain resource, and surpassing the opponent’s score within a specified time
  • Tactical competition can be carried out in units of rounds, and the map of each round of tactical competition can be the same or different.
  • Each virtual team includes one or more virtual objects, such as 1, 2, 3, or 5.
  • MOBA (Multiplayer Online Battle Arena): multiplayer online tactical competition.
  • a MOBA game can divide the virtual objects of multiple users into two rival camps, and scatter the virtual objects in the virtual scene to compete with each other to destroy or occupy all the enemy's strongholds as a victory condition.
  • MOBA games are based on rounds. The duration of a round of MOBA games is from the moment the game starts to the moment when a certain faction achieves the victory conditions.
  • FIG. 1A is a structural diagram of an implementation environment to which an embodiment of the present application is applicable. As shown in FIG. 1A, the implementation environment may include a server 10 and a plurality of terminals, such as terminals 30, 40, and 50, which communicate through a network 20.
  • the server 10 may be a game server that provides online game services.
  • the terminals 30, 40, and 50 may be computing devices capable of running online games, such as smart phones, PCs, tablet computers, game consoles, and so on. Users of multiple terminals can access the same online game provided by the server 10 through the network 20 to conduct online battles.
  • Virtual object refers to a movable object in a virtual scene.
  • the movable object can be in any form, such as a virtual character, a virtual animal, an animation character, etc.
  • When the virtual scene is a three-dimensional virtual scene, the virtual object may be a three-dimensional model; each virtual object has its own shape and volume in the three-dimensional virtual scene and occupies a part of the space in the three-dimensional virtual scene.
  • a virtual object is a three-dimensional character constructed based on the technology of three-dimensional human bones.
  • the virtual object realizes different external images by wearing different skins.
  • the virtual object may also be implemented using a 2.5-dimensional or 2-dimensional model, which is not limited in the embodiment of the present application.
  • a virtual scene is a virtual scene displayed (or provided) when the application is running on the terminal.
  • the virtual scene can be used to simulate a three-dimensional virtual space.
  • the three-dimensional virtual space can be an open space.
  • The virtual scene may be a simulation of a real-world environment, a semi-simulated and semi-fictional scene, or a completely fictional scene, and may be any of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene.
  • the virtual scene may include rivers, grass, land, buildings, etc.
  • the virtual scene is used for battles between at least two virtual objects.
  • the virtual scene also includes virtual resources available to the at least two virtual objects, such as weapons and props used by the virtual objects to arm themselves or to fight other virtual objects.
  • the virtual scene can be a virtual scene in any electronic game.
  • the virtual scene is provided with a square map.
  • the square map includes a symmetric lower-left area and upper-right area belonging to two hostile camps; each camp occupies one of the areas, and the goal of victory is to destroy the target building in the opponent's area.
  • the virtual object control method provided by the embodiments of the present application can be applied in a variety of scenarios.
  • the terminal displays the first virtual object, and the user controls the first virtual object to perform the skill release operation.
  • the virtual object control method provided in the embodiments of this application can be used to determine the aiming direction of the first virtual object when the skill release operation is performed.
  • Fig. 1B is a flowchart of a virtual object control method provided by an embodiment of the present application.
  • the execution subject of the embodiments of the present application is a terminal, and the terminal can be a portable, pocket-sized, handheld, and other types of terminals, such as mobile phones, computers, and tablet computers.
  • the method includes:
  • 101. In response to a touch operation on the touch area, the terminal determines at least two touch positions passed through by the touch operation.
  • The at least two touch positions are a preset number of touch positions through which the touch operation last passed.
  • the terminal displays a virtual scene interface.
  • the virtual scene interface includes a virtual scene.
  • the virtual scene may include a first virtual object, and may also include rivers, grasses, land, buildings, virtual resources used by virtual objects, etc.
  • the virtual scene interface may also include touch buttons, touch areas, etc., so that the user can control the first virtual object to perform operations through the touch buttons or touch areas.
  • the virtual object can be controlled to perform operations such as adjusting posture, crawling, walking, running, riding, flying, jumping, driving, picking, etc., and the virtual object can also be controlled to perform skill release operations or other operations.
  • the first virtual object is a virtual object controlled by the user, and the virtual scene may also include other virtual objects besides the first virtual object.
  • The other virtual objects may be virtual objects controlled by other users, or virtual objects automatically controlled by the terminal, such as monsters, soldiers, and neutral creatures in the virtual scene.
  • When the first virtual object performs the skill release operation, it needs to release the skill to another virtual object, in a certain direction, or at a certain position; in any of these cases, the aiming direction at the time of release is determined first.
  • the touch area in the embodiment of the present application is used to trigger the release of the skill operation, and has the effect of adjusting the aiming direction.
  • the user's finger touches the touch area and performs a touch operation in the touch area, thereby generating a touch position.
  • the touch position indicates the aiming direction when the skill is released.
  • The user can select the desired aiming direction by moving the finger in the touch area. Once the user lifts the finger, the terminal determines the aiming direction indicated by the touch position at the moment the finger is lifted, and controls the first virtual object to perform the skill release operation according to that aiming direction.
  • the user's finger touches the touch area, the finger moves in the touch area, and then the user lifts the finger.
  • In the related art, the terminal controls the first virtual object to perform the skill release operation according to the aiming direction indicated by the last touch position. In actual applications, however, when the user lifts the finger, the finger may move slightly, displacing the touch position so that a new touch position is generated after the touch position the user intended; as a result, the aiming direction indicated by the actual last touch position is not the aiming direction expected by the user.
  • at least two touch positions of the touch operation can be determined, and then these touch positions are comprehensively considered to determine a more accurate aiming direction.
  • 102. The terminal merges the at least two touch positions according to a preset strategy, so as to determine the target touch position of the touch operation.
  • 103. The terminal determines the first aiming direction indicated by the target touch position.
  • the terminal determines a target touch position according to at least two touch positions, and uses the aiming direction indicated by the target touch position as the first aiming direction. Since the at least two touch positions are more likely to include the touch position desired by the user, compared with the last touch position, the target touch position can better reflect the needs of the user, and the accuracy of the aiming direction is improved.
  • the target touch position is used to indicate the first aiming direction of the first virtual object.
  • The first aiming direction can be any direction in the virtual scene. For example, taking the first virtual object as the origin, the first aiming direction may be to the left, upper right, or lower right of the first virtual object, or, expressed more precisely, the 30-degree direction, the 90-degree direction, and so on of the first virtual object.
  • 104. The terminal controls the first virtual object to perform the skill release operation according to the first aiming direction.
  • After determining the first aiming direction, the terminal controls the first virtual object to perform the skill release operation in the first aiming direction.
  • the first virtual object may have different types of skills, for example, it may include directional skills, object skills, and location skills.
  • The release target differs by skill type. For an object-type skill, the first virtual object is controlled to perform the skill release operation on a virtual object located in the aiming direction in the virtual scene; for a position-type skill, the first virtual object is controlled to perform the skill release operation at a position in the aiming direction in the virtual scene; for a direction-type skill, the first virtual object is controlled to perform the skill release operation toward the aiming direction in the virtual scene.
  • The method provided by the embodiments of the present application no longer determines the aiming direction only according to the last touch position of the touch operation in the touch area; instead, at least two touch positions of the touch operation are determined and considered together to determine the target touch position, which avoids a mismatch between the last touch position, caused by user misoperation, and the touch position expected by the user.
  • Because the obtained target touch position reflects the touch position desired by the user, the aiming direction indicated by the target touch position better meets the user's needs and the accuracy of the aiming direction is improved.
  • The first virtual object is then controlled to perform the skill release operation according to this aiming direction, so the control of the first virtual object's skill release operation is also more accurate.
  • Fig. 2 is a flowchart of another virtual object control method provided by an embodiment of the present application.
  • the execution subject of this embodiment is a terminal.
  • the method includes:
  • 201. The terminal displays the skill release button of the first virtual object through the virtual scene interface corresponding to the first virtual object.
  • the virtual scene interface is used to display the virtual scene within the field of view of the first virtual object.
  • The virtual scene interface may include the skill release button of the first virtual object, and may also include the first virtual object and other virtual objects, as well as rivers, grass, land, buildings, virtual resources used by the virtual objects, and so on.
  • virtual objects can be divided into multiple types of virtual objects.
  • the virtual objects can be classified into multiple types according to the shape of the virtual object, the skills of the virtual object, or other classification criteria.
  • virtual objects are classified into multiple types according to their skills, and virtual objects may include fighter-type virtual objects, mage-type virtual objects, auxiliary virtual objects, shooter-type virtual objects, and assassin-type virtual objects.
  • the first virtual object in the embodiment of the present application may be any type of virtual object.
  • There may be one or more skill release buttons of the first virtual object, and different skill release buttons correspond to different skills.
  • the release skill button includes text or image, and the text or image is used to describe the skill corresponding to the release skill button.
  • the embodiment of the present application only takes any one skill release button of the first virtual object as an example for description.
  • As shown in FIG. 3, the virtual scene interface 300 includes a first virtual object 301 and a second virtual object 302, and the first virtual object 301 and the second virtual object 302 belong to different camps.
  • The virtual scene interface 300 also includes a plurality of skill release buttons 303 located at the lower right corner of the virtual scene interface, and a complete map of the virtual scene is displayed in the upper left corner of the virtual scene interface.
  • 202. In response to the trigger operation on the skill release button, the terminal displays the touch area through the virtual scene interface.
  • After the terminal displays the skill release button of the first virtual object through the virtual scene interface, the user performs a trigger operation on the skill release button; the terminal detects this trigger operation and displays the touch area corresponding to the skill release button through the virtual scene interface.
  • the trigger operation may be a click operation, a sliding operation or other operations.
  • the user performs a trigger operation on any skill release button, and the terminal displays the touch area corresponding to the skill release button through the virtual scene interface.
  • the touch area may be circular, square or other shapes, and the touch area may be located at any position of the virtual scene, for example, at the lower right corner, lower left corner, etc. of the virtual scene.
  • the touch area includes a first touch sub-area and a second touch sub-area, and the second touch sub-area is located outside the first touch sub-area.
  • The user's finger touches the touch area. If the touch position when the user lifts the finger is within the first touch sub-area, the terminal controls the first virtual object to quickly release the skill; if the touch position when the finger is lifted is within the second touch sub-area, the terminal controls the first virtual object to aim actively to obtain the aiming direction.
  • As shown in FIG. 4, the touch area 400 is a circular area, the shaded part is the first touch sub-area, the blank part is the second touch sub-area, and the dot indicates the touch position when the finger is pressed.
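  • As a minimal sketch of the sub-area check described above, assuming a circular touch area whose inner radius separates the two sub-areas (the function name and threshold are illustrative, not taken from the application):

```python
import math

def classify_touch(x, y, center_x, center_y, inner_radius):
    """Return which sub-area of a circular touch area a point falls in.

    Points within inner_radius of the center are treated as the first
    (quick-release) sub-area; points outside it as the second
    (active-aiming) sub-area.
    """
    distance = math.hypot(x - center_x, y - center_y)
    return "first" if distance <= inner_radius else "second"
```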
  • 203. In response to the touch operation on the touch area, the terminal determines at least two touch positions of the touch operation.
  • The terminal displays the touch area; the user's finger touches the touch area, and the terminal detects the touch point corresponding to the finger. As the finger moves in the touch area, the position of the touch point changes accordingly, until the finger is lifted and the touch operation on the touch area is complete. By detecting the touch operation, the terminal can determine at least two touch positions of the touch operation.
  • If the end touch position of the touch point is located in the second touch sub-area, the terminal determines the target touch position of the touch operation according to the at least two touch positions; that is, if the end touch position is located in the second touch sub-area, the user is considered to expect the aiming direction to be determined according to the user's operation.
  • The target touch position determined by the terminal may be located in the second touch sub-area or in the first touch sub-area. If the target touch position is located in the second touch sub-area, the terminal controls the first virtual object to release the skill according to the aiming direction indicated by the target touch position; if the target touch position is located in the first touch sub-area, the terminal controls the first virtual object to quickly release the skill.
  • the touch operation includes a pressing operation of a touch point, a sliding operation of the touch point, and a lifting operation of the touch point.
  • The terminal, in response to the pressing operation of the touch point in the touch area, determines the initial touch position corresponding to the pressing operation; in response to the sliding operation of the touch point, determines at least one intermediate touch position passed through while sliding in the touch area; and in response to the lifting operation of the touch point in the touch area, determines the end touch position corresponding to the lifting operation.
  • the preset number of touch positions may be determined from the at least one intermediate touch position and the end touch position.
  • When the terminal detects the pressing operation of the touch point in the touch area, it indicates that it is ready to perform the skill release operation.
  • the sliding process of the touch point is the adjustment of the aiming direction.
  • When the lifting operation of the touch point is detected, it indicates that the adjustment of the aiming direction is complete.
  • After responding to the pressing operation of the touch point in the touch area, the terminal determines whether the touch point belongs to the touch area; if so, it determines the initial touch position corresponding to the pressing operation, and if not, it does not perform the skill release operation.
  • the terminal responds to the pressing operation corresponding to the touch point in the touch area, and assigns a touch identifier to the touch point.
  • The position of the touch point in the touch area can change, and the terminal can detect each touch position and its corresponding touch identifier. The terminal determines at least two touch positions detected in the touch area that match the touch identifier, ensuring that the determined touch positions belong to the same finger, that is, to the same touch operation.
  • the touch point is used to refer to the position point generated by the user's finger contacting the display screen, and the touch identification may be a fingerID or other types of identification.
  • the position of the touch point when the user's finger is pressed is the initial touch position.
  • During the movement of the user's finger, the touch identifier of the touch point does not change, while the touch position can change and more touch positions are generated; each of these touch positions matches the touch identifier of the touch point.
  • The position of the touch point when the user's finger is lifted is the end touch position, which is the last touch position during the touch operation.
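  • The identifier matching described above can be sketched as follows, assuming each touch event carries a fingerID-style identifier; the event structure here is hypothetical:

```python
def collect_touch_positions(events, touch_id):
    """Keep only the positions whose identifier matches the tracked touch point.

    `events` is assumed to be a chronological list of (identifier, x, y) tuples
    produced between the pressing and lifting operations.
    """
    return [(x, y) for identifier, x, y in events if identifier == touch_id]
```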
  • The determined at least two touch positions of the touch operation may include the end touch position and at least one touch position before the end touch position.
  • the process of the user performing touch operations on the touch area is a dynamic process.
  • The terminal samples the virtual scene interface to obtain at least two virtual scene interfaces arranged in order of acquisition; each sampled interface includes one touch position of the touch point, so the at least two touch positions are also arranged in that order.
  • The at least two touch positions are arranged in sequence from the initial touch position to the end touch position, and the terminal may select the preset number of touch positions arranged at the end from them. For example, if two touch positions are selected, the end touch position and the touch position immediately before it are selected.
  • the terminal may select the preset number of touch positions arranged in the front from the at least two touch positions.
  • the virtual scene interface may be collected at a fixed time interval.
  • the time interval between any two adjacent touch positions in the determined at least two touch positions is the same.
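  • One way to keep only the preset number of most recently sampled touch positions, assuming fixed-interval sampling as described above, is a bounded buffer (the constant and names are placeholders):

```python
from collections import deque

PRESET_COUNT = 3  # assumed preset number of touch positions to retain

positions = deque(maxlen=PRESET_COUNT)  # oldest samples are dropped automatically

def on_sample(x, y):
    """Record the touch position observed in the current sampled frame."""
    positions.append((x, y))
```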
  • the touch position may be represented by coordinates, and each touch position has a corresponding coordinate.
  • the coordinates of the target touch position can be determined, thereby determining the target touch position.
  • a coordinate system is established with the center of the touch area as the coordinate origin, and each position in the touch area has a corresponding coordinate.
  • the virtual scene interface may also include another touch area of the first virtual object.
  • The other touch area is used to control the first virtual object to move forward, backward, and so on.
  • The user can operate the touch area corresponding to the skill release button and the other touch area at the same time, so that the first virtual object releases the skill while moving.
  • the virtual scene interface may also include other touch areas, which are not limited in the embodiment of the present application.
  • step 203 is similar to the foregoing step 101, and will not be repeated here.
  • 204. The terminal determines the target touch position of the touch operation according to the at least two touch positions.
  • When coordinates are used to represent the touch positions, the terminal determines a weight for each of the at least two touch positions according to their order, and then weights and combines the coordinates of the at least two touch positions according to these weights to obtain the coordinates of the target touch position.
  • the weights of the at least two touch positions may be preset, and the sum of the weights of the at least two touch positions is 1, and the number of the at least two touch positions may also be preset.
  • the coordinates of the touch position include an abscissa and an ordinate
  • The terminal performs a weighted summation of the abscissas of the at least two touch positions according to their weights to obtain the abscissa of the target touch position, and performs a weighted summation of the ordinates according to the same weights to obtain the ordinate of the target touch position.
  • For example, when the terminal needs to determine the target touch position according to the last three touch positions, the terminal presets the weights q1, q2, and q3 of the three touch positions, where q1 is the weight of the end touch position, q2 is the weight of the second-to-last touch position, and q3 is the weight of the third-to-last touch position. If the coordinates of the end touch position are (x1, y1), the coordinates of the second-to-last touch position are (x2, y2), and the coordinates of the third-to-last touch position are (x3, y3), the coordinates of the target touch position are (q1·x1 + q2·x2 + q3·x3, q1·y1 + q2·y2 + q3·y3).
  • The weight of the end touch position may be set to 0, and the weight of the touch position immediately before the end touch position may be set to 1.
  • When the user's finger presses the touch area, the contact area between the finger and the touch area is large; when the user lifts the finger, the touch position of the touch point may shift, generating a new touch position after the touch position the user intended. The end touch position is therefore likely to be a touch position caused by misoperation, while the touch position immediately before it is likely to be the one the user used to determine the aiming direction, so that previous touch position is determined as the target touch position.
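  • A minimal sketch of the weighted merging described above for the last three touch positions, with preset weights q1, q2, and q3; the default values reproduce the special case in which the end touch position is given weight 0 and the position before it weight 1 (concrete values are illustrative only):

```python
def merge_touch_positions(p1, p2, p3, q1=0.0, q2=1.0, q3=0.0):
    """Weighted combination of the last three touch positions.

    p1 is the end touch position, p2 the second-to-last, p3 the third-to-last.
    The default weights ignore the end touch position (weight 0) and use the
    position immediately before it as the target touch position.
    """
    x = q1 * p1[0] + q2 * p2[0] + q3 * p3[0]
    y = q1 * p1[1] + q2 * p2[1] + q3 * p3[1]
    return (x, y)
```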
  • step 204 is similar to the foregoing step 102, and will not be repeated here.
  • 205. The terminal determines the first aiming direction indicated by the target touch position.
  • Using the center point of the touch area as the origin, the terminal determines, from the coordinates of the target touch position, in which direction of the center point the target touch position lies, and takes this direction as the first aiming direction.
  • the first aiming direction is the aiming direction of the first virtual object.
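  • A sketch of deriving the aiming direction from the target touch position, taking the center of the touch area as the origin and returning a unit direction vector (the vector representation is an assumption for illustration):

```python
import math

def aiming_direction(target_x, target_y, center_x, center_y):
    """Unit vector pointing from the touch-area center toward the target touch position."""
    dx, dy = target_x - center_x, target_y - center_y
    length = math.hypot(dx, dy)
    if length == 0:
        return (0.0, 0.0)  # no offset from the center: direction is undefined
    return (dx / length, dy / length)
```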
  • step 205 is similar to the foregoing step 103, and will not be repeated here.
  • 206. The terminal controls the first virtual object to perform the skill release operation according to the first aiming direction.
  • After the terminal determines the first aiming direction, it controls the first virtual object to perform the skill release operation in the first aiming direction.
  • the terminal determines the aiming position in the first aiming direction according to the first aiming direction and the first preset distance; and controls the first virtual object to perform the skill release operation to the aiming position.
  • the distance between the aiming position and the first virtual object is the first preset distance
  • the first preset distance is the skill release distance set with the first virtual object as the origin.
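  • Continuing the sketch above, the aiming position can be obtained by moving the first preset distance from the first virtual object along the aiming direction (function and parameter names are illustrative):

```python
def aiming_position(object_x, object_y, direction, preset_distance):
    """Position at `preset_distance` from the first virtual object along the aiming direction."""
    dx, dy = direction  # unit direction vector, e.g. from aiming_direction()
    return (object_x + dx * preset_distance, object_y + dy * preset_distance)
```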
  • the terminal determines the second virtual object closest to the first virtual object in the first aiming direction; controls the first virtual object to perform the skill release operation to the second virtual object.
  • After the terminal determines the first aiming direction, it can automatically acquire the virtual objects in the first aiming direction and, according to their positions, select the one closest to the first virtual object as the second virtual object.
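  • One possible way to pick the second virtual object is to keep only candidates lying roughly in the first aiming direction, here approximated with a hypothetical angular tolerance, and take the nearest one; this is a sketch, not the application's exact rule:

```python
import math

def closest_in_direction(origin, direction, candidates, max_angle_deg=15.0):
    """Nearest candidate whose bearing from `origin` is within `max_angle_deg` of `direction`.

    `candidates` is a list of (x, y) positions; returns None if no candidate qualifies.
    """
    aim_angle = math.atan2(direction[1], direction[0])
    best, best_dist = None, float("inf")
    for cx, cy in candidates:
        dx, dy = cx - origin[0], cy - origin[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            continue
        angle_diff = abs(math.atan2(dy, dx) - aim_angle)
        angle_diff = min(angle_diff, 2 * math.pi - angle_diff)  # wrap around the circle
        if math.degrees(angle_diff) <= max_angle_deg and dist < best_dist:
            best, best_dist = (cx, cy), dist
    return best
```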
  • the second virtual object when the second virtual object is a virtual object controlled by another user, according to different skill types, the second virtual object may belong to the same camp as the first virtual object, or may belong to a rival camp with the first virtual object.
  • If the skill of the first virtual object is an attack-type skill, the second virtual object may belong to the camp hostile to the first virtual object, and the first virtual object performs an attack operation on the second virtual object; if the skill of the first virtual object is a healing-type skill, the second virtual object may belong to the same camp as the first virtual object, and the first virtual object performs a healing operation on the second virtual object.
  • the embodiments of this application can be applied to a battle scene.
  • The virtual scene of the virtual scene interface includes a first virtual object and other virtual objects. If the other virtual objects belong to the same camp as the first virtual object, the user can control the first virtual object to perform a healing operation on them; if the other virtual objects belong to the camp hostile to the first virtual object, the user can control the first virtual object to perform an attack operation on them.
  • When the terminal controls the first virtual object to perform the skill release operation and the second virtual object is attacked, the virtual scene interface changes as shown in FIG. 5 to FIG. 12:
  • a circular touch area is displayed in the virtual scene interface 500, and the dot displayed in the touch area indicates the current touch position 1.
  • the aiming direction 1 of the first virtual object 501 is displayed in the virtual scene, and the user can preview the current aiming direction 1 through the virtual scene interface 500.
  • The virtual scene interface 600 shown in FIG. 6 is displayed after the user's finger moves in the touch area in the direction indicated by the arrow, on the basis of the virtual scene interface 500. After the finger moves, the touch position in the touch area changes from touch position 1 to touch position 2, and the aiming direction of the first virtual object 501 changes correspondingly from aiming direction 1 to aiming direction 2.
  • The virtual scene interface 700 shown in FIG. 7 is displayed after the user's finger moves again in the touch area, on the basis of the virtual scene interface 600. The touch position in the touch area changes again, from touch position 2 to touch position 3, and the aiming direction of the first virtual object 501 also changes again, from aiming direction 2 to aiming direction 3.
  • The virtual scene interface 800 shown in FIG. 8 is displayed after the user's finger moves again in the touch area, on the basis of the virtual scene interface 700. The touch position changes from touch position 3 to touch position 4, and the aiming direction of the first virtual object 501 changes from aiming direction 3 to aiming direction 4.
  • The virtual scene interface 900 shown in FIG. 9 follows the virtual scene interface 800. The user lifts the finger and leaves the touch area, the virtual scene interface 900 no longer displays the touch area, and the terminal determines the final aiming direction 5 of the first virtual object 501 according to the determined target touch position.
  • The virtual scene interface 1000 shown in FIG. 10 follows the virtual scene interface 900. The first virtual object 501 starts to release the skill in aiming direction 5. A preset duration is displayed on the skill release button; it represents the cooldown of the skill, and the user cannot trigger the skill release button again until the duration has counted down to 0.
  • The virtual scene interface 1100 shown in FIG. 11 follows the virtual scene interface 1000. The first virtual object 501 has released the corresponding skill, and the duration displayed on the skill release button has decreased.
  • The virtual scene interface 1200 shown in FIG. 12 follows the virtual scene interface 1100. The skill released by the first virtual object 501 has hit the second virtual object, and the duration displayed on the skill release button decreases again.
  • The foregoing takes the case in which the target touch position is located in the second touch sub-area as an example. In another embodiment, if the touch point is always located in the first touch sub-area, the terminal determines a second aiming direction according to a preset rule and controls the first virtual object to perform the skill release operation according to the second aiming direction.
  • The preset rule may be set in advance. The user can learn the preset rule from the description of the skill, and decide whether to control the first virtual object to release the skill according to the preset rule or to manually adjust the aiming direction to control the first virtual object to release the skill.
  • For example, if the preset rule is to release the skill on a virtual object whose distance is less than the second preset distance, the terminal determines the position of a third virtual object whose distance from the first virtual object is less than the second preset distance, determines the second aiming direction according to the position of the first virtual object and the position of the third virtual object, and performs the skill release operation according to the second aiming direction, thereby releasing the skill on the third virtual object.
  • the first preset distance may be the farthest distance that the skill corresponding to the release skill button can be released.
  • If there are multiple virtual objects whose distance from the first virtual object is less than the second preset distance, any one of them may be selected as the third virtual object, or the virtual object with the smallest life value may be selected from the multiple virtual objects as the third virtual object.
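  • A sketch of the quick-release fallback described above: among virtual objects within the second preset distance, one is selected, for example the one with the smallest life value (the data structure is assumed):

```python
import math

def pick_quick_release_target(origin, objects, second_preset_distance):
    """Among objects within the second preset distance, return the one with the lowest life value.

    `objects` is assumed to be a list of dicts like {"x": ..., "y": ..., "hp": ...}.
    Returns None if no object is in range.
    """
    in_range = [o for o in objects
                if math.hypot(o["x"] - origin[0], o["y"] - origin[1]) < second_preset_distance]
    return min(in_range, key=lambda o: o["hp"]) if in_range else None
```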
  • FIG. 13 is a flowchart of a skill release provided by an embodiment of the present application.
  • the skill release process includes:
  • the user triggers the release of the skill button, the terminal displays the touch area, and detects the pressing operation of the touch point.
  • the terminal is ready to release skills.
  • the terminal detects whether the user performs a lifting operation on the touch point, if yes, execute step 1304, if not, execute step 1305.
  • the terminal judges whether the user performs a sliding operation on the touch point, if yes, execute step 1303, if not, execute step 1302.
  • this embodiment of the present application only takes the touch area corresponding to one skill release button of the first virtual object as an example for description.
  • The touch areas corresponding to other skill release buttons can be operated in a manner similar to the foregoing embodiment to control the first virtual object to perform skill release operations.
  • In another embodiment, step 204 and step 205 may be executed by a server connected to the terminal. That is, the terminal, in response to the touch operation on the touch area, determines at least two touch positions of the touch operation and sends them to the server; the server determines the target touch position of the touch operation according to the at least two touch positions, determines the first aiming direction indicated by the target touch position, and sends the first aiming direction to the terminal; the terminal then controls the first virtual object to perform the skill release operation according to the first aiming direction.
  • As shown in FIG. 14, the solid dot in the touch area 1400 represents the end touch position 1401 expected by the user, and the hollow dot represents the actual end touch position 1402. It can be seen from FIG. 14 that the end touch position 1401 expected by the user differs considerably from the actual end touch position 1402. If the aiming direction were determined directly from the actual end touch position 1402, it would differ considerably from the aiming direction expected by the user and fail to achieve the effect the user desires.
  • As shown in FIG. 15, the trajectory shown in the touch area 1501 is the trajectory generated by the touch operation; in the touch area 1502, the solid dot represents the end touch position 1511 desired by the user, the hollow dot represents the actual end touch position 1521, and the target touch position 1512 is determined using the method provided in this embodiment of the application. It can be seen from FIG. 15 that the target touch position 1512 differs little from the end touch position 1511 expected by the user, so the aiming direction determined according to the target touch position differs little from the aiming direction expected by the user and achieves the effect the user desires.
  • The method provided by the embodiments of the present application no longer determines the aiming direction only according to the last touch position of the touch operation in the touch area; instead, at least two touch positions of the touch operation are determined and considered together to determine the target touch position, which avoids a mismatch between the last touch position, caused by user misoperation, and the touch position expected by the user.
  • Because the obtained target touch position reflects the touch position desired by the user, the aiming direction indicated by the target touch position better meets the user's needs and the accuracy of the aiming direction is improved.
  • The first virtual object is then controlled to perform the skill release operation according to this aiming direction, so the control of the first virtual object's skill release operation is also more accurate.
  • In addition, in this embodiment of the present application, the touch area is divided into a first touch sub-area and a second touch sub-area, so that the user can either quickly perform the skill release operation or manually determine the aiming direction before performing the skill release operation. The user can choose flexibly according to the skill released by the first virtual object in the virtual scene, which improves flexibility.
  • In addition, in this embodiment of the present application, a touch identifier is assigned to the touch point, and at least two touch positions matching the touch identifier are determined, which ensures that the determined touch positions belong to the same touch operation, avoids interference from touch operations of other fingers on the virtual scene interface, and improves the accuracy of the operation.
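  • For illustration, a sketch of filtering touch positions by the identifier assigned on press is given below; the event fields (finger_id, position) and the contains() helper are assumptions, since actual touch APIs differ by platform:

```python
# Track only the positions produced by the finger that pressed inside the touch area.
class TouchOperationTracker:
    def __init__(self, touch_area):
        self.touch_area = touch_area
        self.finger_id = None          # touch identifier assigned on the pressing operation
        self.positions = []

    def on_press(self, event):
        if self.touch_area.contains(event.position):
            self.finger_id = event.finger_id
            self.positions = [event.position]

    def on_move(self, event):
        if event.finger_id == self.finger_id:       # ignore touches from other fingers
            self.positions.append(event.position)

    def on_lift(self, event):
        if event.finger_id == self.finger_id:
            self.positions.append(event.position)   # end touch position
```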
  • In a possible implementation, FIG. 16 is a flowchart for determining a target touch position provided by an embodiment of the present application, and the process includes:
  • 1601: The terminal detects a pressing operation on a touch point, obtains an initial touch position, and assigns a touch identifier to the touch point.
  • 1602: The terminal determines whether the touch point is located in the touch area.
  • 1603: If the touch point is located in the touch area, the terminal assigns the coordinates of the initial touch position to (xn, yn), (xn-1, yn-1), ... down to (x1, y1), where n represents the number of touch positions that need to be determined and n is a positive integer. If the touch point is not located in the touch area, the process ends.
  • 1604: The terminal detects a movement operation of the touch point and obtains the touch position after the movement.
  • 1605: The terminal determines whether the touch identifier of the moved touch position matches the touch identifier of the aforementioned touch point.
  • 1606: If they match, the terminal assigns the coordinates of the moved touch position to (xn, yn). If they do not match, the process ends.
  • When the user moves the touch point again, step 1604 is repeated: (x2, y2) is assigned to (x1, y1), (x3, y3) is assigned to (x2, y2), and so on, until (xn-1, yn-1) is assigned to (xn-2, yn-2) and (xn, yn) is assigned to (xn-1, yn-1), and the coordinates of the touch position after the new movement are assigned to (xn, yn).
  • 1607: The terminal detects a lifting operation on the touch point and obtains the end touch position.
  • 1608: The terminal determines whether the touch identifier of the end touch position matches the touch identifier of the aforementioned touch point.
  • 1609: If they match, the terminal assigns (x2, y2) to (x1, y1), (x3, y3) to (x2, y2), and so on, assigns (xn-1, yn-1) to (xn-2, yn-2), assigns (xn, yn) to (xn-1, yn-1), and assigns the coordinates of the end touch position to (xn, yn).
  • 1610: The terminal calculates the coordinates of the target touch position according to the coordinates of the n touch positions and their corresponding weights, thereby obtaining the target touch position, where n is a positive integer greater than 1.
  • For example, n is 3, that is, three touch positions need to be determined, and the coordinates (x1, y1), (x2, y2) and (x3, y3) of the three touch positions are obtained. In the process of determining the coordinates of the touch positions, assume that a total of ten touch positions are obtained. After the initial touch position is obtained, its coordinates are assigned to (x1, y1), (x2, y2) and (x3, y3). After the second touch position is obtained, its coordinates are assigned to (x3, y3). After the third touch position is obtained, (x3, y3) (the coordinates of the second touch position) are assigned to (x2, y2), and the coordinates of the third touch position are assigned to (x3, y3). After the coordinates of the fourth touch position are obtained, (x2, y2) (the coordinates of the second touch position) are assigned to (x1, y1), (x3, y3) (the coordinates of the third touch position) are assigned to (x2, y2), and the coordinates of the fourth touch position are assigned to (x3, y3). This continues until the end touch position is obtained, at which point (x2, y2) (the coordinates of the eighth touch position) are assigned to (x1, y1), (x3, y3) (the coordinates of the ninth touch position) are assigned to (x2, y2), and the coordinates of the end touch position are assigned to (x3, y3), yielding the coordinates of the last three touch positions.
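  • A minimal sketch of steps 1601 to 1610 is given below, assuming that n and the weights are configured in advance and that each coordinate is a simple (x, y) pair; the class name, the example weights, and the sample positions are illustrative assumptions, not values prescribed by this application:

```python
# Keep the coordinates of the last n touch positions and merge them into a target position.
class TargetTouchResolver:
    def __init__(self, weights):
        self.weights = weights              # weights of the last n positions, oldest first, summing to 1
        self.n = len(weights)
        self.buffer = []                    # buffer[-1] is always the newest (end) touch position

    def on_press(self, x, y):
        # step 1603: the initial touch position fills every slot (x1, y1) ... (xn, yn)
        self.buffer = [(x, y)] * self.n

    def on_move_or_lift(self, x, y):
        # steps 1606 and 1609: shift the buffer by one slot and store the newest position
        self.buffer = self.buffer[1:] + [(x, y)]

    def target_position(self):
        # step 1610: weighted merge of the buffered coordinates
        tx = sum(w * px for w, (px, py) in zip(self.weights, self.buffer))
        ty = sum(w * py for w, (px, py) in zip(self.weights, self.buffer))
        return tx, ty

# Example with n = 3: the end touch position is given the smallest weight (0.2),
# so a small slip of the finger on lifting barely moves the target position.
resolver = TargetTouchResolver(weights=(0.3, 0.5, 0.2))
resolver.on_press(0.0, 0.0)
for x, y in [(1.0, 0.5), (2.0, 1.0), (2.1, 1.1), (2.4, 2.0)]:
    resolver.on_move_or_lift(x, y)
print(resolver.target_position())   # roughly (2.13, 1.25)
```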
  • FIG. 17 is a schematic structural diagram of a virtual object control device provided by an embodiment of the present application. Referring to FIG. 17, the device includes:
  • the touch position determining module 1701 is configured to determine, in response to a touch operation on the touch area, at least two touch positions passed by the touch operation, the at least two touch positions being selected from a preset number of touch positions last passed by the touch operation;
  • the target position determining module 1702 is configured to merge at least two touch positions according to a preset strategy, so as to determine the target touch position of the touch operation;
  • the first direction determining module 1703 is configured to determine the first aiming direction indicated by the target touch position
  • the first control module 1704 is configured to control the first virtual object to perform the release skill operation according to the first aiming direction.
  • In the device provided by the embodiment of the present application, the aiming direction is no longer determined only according to the last touch position of the touch operation in the touch area. Instead, at least two touch positions of the touch operation are determined, and the at least two touch positions are considered comprehensively to determine the target touch position, which avoids the situation in which the last touch position, produced by the user's misoperation, is inconsistent with the touch position expected by the user. The obtained target touch position can reflect the touch position expected by the user, so the aiming direction indicated by the target touch position better meets the user's requirements, improving the accuracy of the aiming direction. The first virtual object is then controlled to perform the skill release operation according to the determined aiming direction, so control of the first virtual object's skill release operation is also more accurate.
  • the target position determining module 1702 includes:
  • the weight determining unit 1712 is configured to determine the weight of the at least two touch positions according to the sequence of the at least two touch positions;
  • the coordinate determining unit 1722 is configured to perform weighting processing on the coordinates of the at least two touch positions according to the weights of the at least two touch positions to obtain the coordinates of the target touch position.
  • the coordinates of the touch position include abscissa and ordinate, and the coordinate determination unit 1722 is configured to:
  • perform weighted summation on the abscissas of the at least two touch positions according to the weights of the at least two touch positions to obtain the abscissa of the target touch position; and perform weighted summation on the ordinates of the at least two touch positions according to the weights of the at least two touch positions to obtain the ordinate of the target touch position.
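  • In other words, denoting the weights of the selected touch positions by q1, q2, ..., qn (with q1 + q2 + ... + qn = 1), the target touch position is computed as x = q1*x1 + q2*x2 + ... + qn*xn and y = q1*y1 + q2*y2 + ... + qn*yn, which restates the weighted merging described above for the general case.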
  • the touch position determination module 1701 includes:
  • the first position determining unit 1711 is configured to determine the initial touch position corresponding to the pressing operation in response to the pressing operation corresponding to the touch point in the touch area;
  • the second position determining unit 1721 is configured to determine at least one touch position in the process of the touch point sliding in the touch area
  • the third position determining unit 1731 is configured to determine the end touch position corresponding to the lifting operation in response to the lifting operation corresponding to the touch point in the touch area.
  • the apparatus further includes:
  • the identifier assignment module 1705 is configured to assign touch identifiers to the touch points in response to pressing operations corresponding to the touch points in the touch area;
  • the second position determining unit 1721 is configured to determine at least two touch positions matching the touch identifier detected in the touch area.
  • the first control module 1704 includes:
  • the object determining unit 1714 is configured to determine the second virtual object with the closest distance to the first virtual object in the first aiming direction;
  • the first control unit 1724 is configured to control the first virtual object to perform the skill release operation to the second virtual object.
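  • A rough sketch of selecting the second virtual object is given below; the embodiment only states that the virtual objects in the first aiming direction are obtained and the one closest to the first virtual object is chosen, so the angular-tolerance test and the pos attribute of the candidates used here are illustrative assumptions:

```python
import math

def pick_second_virtual_object(first_pos, aim_dir, candidates, max_angle_deg=15.0):
    """Among candidates roughly along aim_dir, return the one nearest to first_pos."""
    ax, ay = aim_dir
    aim_len = math.hypot(ax, ay)
    best, best_dist = None, float("inf")
    for obj in candidates:
        dx, dy = obj.pos[0] - first_pos[0], obj.pos[1] - first_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0 or aim_len == 0.0:
            continue
        # angle between the aiming direction and the direction towards the candidate
        cos_a = (ax * dx + ay * dy) / (aim_len * dist)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle <= max_angle_deg and dist < best_dist:
            best, best_dist = obj, dist
    return best
```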
  • the first control module 1704 includes:
  • the aiming position determining unit 1734 is configured to determine the aiming position in the first aiming direction according to the first aiming direction and the first preset distance, and the distance between the aiming position and the first virtual object is the first preset distance;
  • the second control unit 1744 is used to control the first virtual object to perform a release skill operation to the aiming position.
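  • For the position-based variant, a one-function sketch is given below, assuming the first aiming direction is represented as a unit vector; the names are illustrative:

```python
def aiming_position(first_pos, aim_dir_unit, first_preset_distance):
    # The aiming position lies along the first aiming direction at the first preset distance.
    return (first_pos[0] + aim_dir_unit[0] * first_preset_distance,
            first_pos[1] + aim_dir_unit[1] * first_preset_distance)
```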
  • the touch area includes a first touch subarea and a second touch subarea, and the second touch subarea is located outside the first touch subarea;
  • the target position determining module 1702 is configured to determine the target touch position of the touch operation according to the at least two touch positions when the end touch position is located in the second touch sub-area.
  • the apparatus further includes:
  • the second direction determining module 1706 is configured to determine the second aiming direction according to a preset rule when at least two touch positions are both located in the first touch sub-area;
  • the second control module 1707 is used to control the first virtual object to perform the release skill operation according to the second aiming direction.
  • the second direction determining module 1706 includes:
  • the object position determining unit 1716 is configured to determine the position of the third virtual object whose distance from the first virtual object is less than the second preset distance;
  • the second direction determining unit 1726 is configured to determine the second aiming direction according to the position of the first virtual object and the position of the third virtual object.
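  • For illustration, the second aiming direction can be derived from the two positions as sketched below; normalizing the vector is an implementation choice, not something specified by this application:

```python
import math

def second_aiming_direction(first_pos, third_pos):
    # Direction pointing from the first virtual object towards the third virtual object.
    dx, dy = third_pos[0] - first_pos[0], third_pos[1] - first_pos[1]
    length = math.hypot(dx, dy)
    return (dx / length, dy / length) if length > 0.0 else (0.0, 0.0)
```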
  • the apparatus further includes:
  • the button display module 1708 is configured to display the release skill button of the first virtual object through the virtual scene interface corresponding to the first virtual object;
  • the touch area display module 1709 is configured to display the touch area through the virtual scene interface in response to the triggering operation of the skill release button.
  • It should be noted that when the virtual object control device provided in the above embodiment controls a virtual object, the division into the above functional modules is only used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the terminal is divided into different functional modules to complete all or part of the functions described above.
  • the virtual object control device provided in the foregoing embodiment and the virtual object adjustment method embodiment belong to the same concept. For the specific implementation process, please refer to the method embodiment, which will not be repeated here.
  • FIG. 19 shows a schematic structural diagram of a terminal 1900 provided by an exemplary embodiment of the present application.
  • the terminal 1900 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • the terminal 1900 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and other names.
  • the terminal 1900 includes a processor 1901 and a memory 1902.
  • the processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1901 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 1901 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 1901 may be integrated with a GPU (Graphics Processing Unit), and the GPU is used to render and draw the content that needs to be displayed on the display screen.
  • the processor 1901 may also include an AI (Artificial Intelligence) processor, and the AI processor is used to process computing operations related to machine learning.
  • the memory 1902 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 1902 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1902 is used to store at least one instruction, and the at least one instruction is executed by the processor 1901 to implement the virtual object control method provided in the method embodiments of the present application.
  • the terminal 1900 may optionally further include: a peripheral device interface 1903 and at least one peripheral device.
  • the processor 1901, the memory 1902, and the peripheral device interface 1903 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1903 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1904, a display screen 1905, a camera component 1906, an audio circuit 1907, a positioning component 1908, and a power supply 1909.
  • the peripheral device interface 1903 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1901 and the memory 1902.
  • in some embodiments, the processor 1901, the memory 1902, and the peripheral device interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral device interface 1903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the terminal 1900 further includes one or more sensors 1910.
  • the one or more sensors 1910 include, but are not limited to: an acceleration sensor 1911, a gyroscope sensor 1912, a pressure sensor 1919, a fingerprint sensor 1914, an optical sensor 1915, and a proximity sensor 1916.
  • those skilled in the art can understand that the structure shown in FIG. 19 does not constitute a limitation on the terminal 1900, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • FIG. 20 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • the server 2000 may vary considerably due to differences in configuration or performance, and may include one or more processors (Central Processing Units, CPU) 2001 and one or more memories 2002, where at least one instruction is stored in the memory 2002, and the at least one instruction is loaded and executed by the processor 2001 to implement the methods provided in the foregoing method embodiments.
  • of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and the server may also include other components for implementing device functions, which will not be described in detail here.
  • the server 2000 may be used to execute the steps executed by the server in the above virtual object control method.
  • An embodiment of the present application also provides a computer device. The computer device includes a processor and a memory, at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the operations performed in the virtual object control method of the foregoing embodiments.
  • An embodiment of the present application also provides a computer-readable storage medium. The computer-readable storage medium stores at least one instruction, and the at least one instruction is loaded and executed by a processor to implement the operations performed in the virtual object control method of the foregoing embodiments.
  • An embodiment of the present application also provides a computer program, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed in the virtual object control method of the foregoing embodiment.

Abstract

一种虚拟对象控制方法、装置、计算机设备及存储介质,包括:响应于对触控区域的触摸操作,确定触摸操作经过的至少两个触摸位置,根据预设策略对至少两个触摸位置进行合并,从而确定触摸操作的目标触摸位置(1512);确定目标触摸位置(1512)指示的第一瞄准方向;控制第一虚拟对象按照第一瞄准方向执行释放技能操作。其中,至少两个触摸位置包括选自触摸操作的终止最后经过的预设数目个触摸位置。

Description

虚拟对象控制方法、装置、计算机设备及存储介质
本申请要求于2020年06月05日提交中国专利局、申请号为202010507467.0、发明名称为“虚拟对象控制方法、装置、计算机设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及计算机技术领域,特别涉及一种虚拟对象控制方法、装置、计算机设备及存储介质。
发明背景
随着计算机技术的发展及电子游戏的广泛普及,在电子游戏中可以控制虚拟对象执行多种多样的操作,极大地提升了游戏效果。其中,释放技能操作是一种常用的操作,用户可以控制虚拟对象按照瞄准方向释放技能,而在释放技能之前需要确定瞄准方向。
用户通常使用手指在触控区域中执行触摸操作,从而根据触摸操作的触摸位置确定瞄准方向。但是由于手指与触控区域的接触面积较大,触摸位置难以控制,很容易导致实际触摸位置与用户期望的触摸位置不一致,从而导致瞄准方向不准确。
发明内容
本申请实施例提供了一种虚拟对象控制方法、装置、计算机设备及存储介质,提高了瞄准方向的准确率。
各实施例提供了一种虚拟对象控制方法,所述方法包括:
响应于对触控区域的触摸操作,确定所述触摸操作经过的至少两个触摸位置,所述至少两个触摸位置选自所述触摸操作最后经过的预设数目个触摸位置;
根据预设策略对所述至少两个触摸位置进行合并,从而确定所述触摸操作的目标触摸位置;
确定所述目标触摸位置指示的第一瞄准方向;
控制第一虚拟对象按照所述第一瞄准方向执行释放技能操作。
各实施例还提供了一种瞄准信息获取装置,所述装置包括:
触摸位置确定模块,用于响应于对触控区域的触摸操作,确定所述触摸操作经过的至少两个触摸位置,所述至少两个触摸位置选自所述触摸操作最后经过的预设数目个触摸位置;
目标位置确定模块,用于根据预设策略对所述至少两个触摸位置进行合并,从而确定所述触摸操作的目标触摸位置;
第一方向确定模块,用于确定所述目标触摸位置指示的第一瞄准方向;
第一控制模块,用于控制第一虚拟对象按照所述第一瞄准方向执行释放技能操作。
各实施例还提供了一种计算机设备,所述计算机设备包括处理器和存储器,所述存储器中存储有至少一条指令,所述至少一条指令由所述处理器加载并执行,以实现如所述虚拟对象控制方法中所执行的操作。
各实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条指令,所述至少一条指令由处理器加载并执行,以实现如所述虚拟对象控制方法中所执行的操作。
本申请实施例提供的方法、装置、计算机设备及存储介质,不再仅根据触控区域中触摸操作的最后一个触摸位置确定瞄准方向,而是确定触摸操作的至少两个触摸位置,综合考虑该至少两个触摸位置确定目标触摸位置,避免了由于用户误操作 产生的最后一个触摸位置与用户期望的触摸位置不一致的情况,得到的目标触摸位置能够体现用户期望的触摸位置,则目标触摸位置指示的瞄准方向更能满足用户的需求,提高了瞄准方向的准确率,之后按照确定的瞄准方向,控制第一虚拟对象执行释放技能操作,对第一虚拟对象释放技能操作的控制也更加准确。。
附图简要说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请实施例的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A是本申请实施例可适用的一个实施环境的架构图;
图1B是本申请实施例提供的一种虚拟对象控制方法的流程图;
图2是本申请实施例提供的另一种虚拟对象控制方法的流程图;
图3是本申请实施例提供的一种虚拟场景界面的示意图;
图4是本申请实施例提供的一种触控区域的示意图;
图5是本申请实施例提供的另一种虚拟场景界面的示意图;
图6是本申请实施例提供的另一种虚拟场景界面的示意图;
图7是本申请实施例提供的另一种虚拟场景界面的示意图;
图8是本申请实施例提供的另一种虚拟场景界面的示意图;
图9是本申请实施例提供的另一种虚拟场景界面的示意图;
图10是本申请实施例提供的另一种虚拟场景界面的示意图;
图11是本申请实施例提供的另一种虚拟场景界面的示意图;
图12是本申请实施例提供的另一种虚拟场景界面的示意图;
图13是本申请实施例提供的一种控制虚拟对象释放技能的流程图;
图14是本申请实施例提供的另一种触摸区域的示意图;
图15是本申请实施例提供的另一种触摸区域的示意图;
图16是本申请实施例提供的一种确定目标触摸位置的流程图;
图17是本申请实施例提供的一种虚拟对象控制装置的结构示意图;
图18是本申请实施例提供的另一种虚拟对象控制装置的结构示意图;
图19是本申请实施例提供的一种终端的结构示意图;
图20是本申请实施例提供的一种服务器的结构示意图。
实施本发明的方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
可以理解,本申请所使用的术语“第一”、“第二”等可在本文中用于描述各种概念,但除非特别说明,这些概念不受这些术语限制。这些术语仅用于将一个概念与另一个概念区分。举例来说,在不脱离本申请的范围的情况下,可以将第一虚拟对象称为第二虚拟对象,将第二虚拟对象称为第一虚拟对象。
为了便于理解本申请实施例,对本申请实施例涉及到的名词进行解释:
多人在线战术竞技:在虚拟场景中,至少两个敌对阵营分别占据各自的地图区域,以某一种胜利条件作为目标进行竞技。该胜利条件包括但不限于:占领据点或摧毁敌对阵营据点、击杀敌对阵营的虚拟对象、在指定场景和时间内保证自身的存活、抢夺到某种资源、在指定时间内比分超过对方中的至少一种。战术竞技可以以局为单位来进行,每局战术竞技的地图可以相同,也可以不同。每个虚拟队伍包括 一个或多个虚拟对象,比如1个、2个、3个或5个。
MOBA(Multiplayer Online Battle Arena,多人在线战术竞技)游戏:是一种在虚拟场景中提供多个据点,处于不同阵营的用户控制虚拟对象在虚拟场景中对战,占领据点或摧毁敌对阵营据点的游戏。例如,MOBA游戏可将多个用户的虚拟对象分成两个敌对阵营,将虚拟对象分散在虚拟场景中互相竞争,以摧毁或占领敌方的全部据点作为胜利条件。MOBA游戏以局为单位,一局MOBA游戏的持续时间是从游戏开始的时刻至某一阵营达成胜利条件的时刻。图1A是本申请实施例可适用的一个实施环境的架构图。如图1A所示,该实施环境可包括通过网络20进行通信的服务器10和多个终端,例如终端30、40、50。服务器10可以是提供在线游戏服务的游戏服务器。终端30、40、50可以是能够运行在线游戏的计算设备,例如智能手机、PC、平板电脑、游戏主机,等。多个终端的用户可以通过网络20接入服务器10提供的同一局在线游戏,进行在线对战。
虚拟对象:是指在虚拟场景中的可活动对象,该可活动对象可以是任一种形态,例如虚拟人物、虚拟动物、动漫人物等。当虚拟场景为三维虚拟场景时,虚拟对象可以是三维立体模型,每个虚拟对象在三维虚拟场景中具有自身的形状和体积,占据三维虚拟场景中的一部分空间。虚拟对象是基于三维人体骨骼技术构建的三维角色,该虚拟对象通过穿戴不同的皮肤来实现不同的外在形象。在一些实现方式中,虚拟对象也可以采用2.5维或2维模型来实现,本申请实施例对此不加以限定。
虚拟场景:虚拟场景是应用程序在终端上运行时显示(或提供)的虚拟场景。虚拟场景可以用于模拟一个三维虚拟空间,该三维虚拟空间可以是一个开放空间,该虚拟场景可以是对现实中真实环境进行仿真的虚拟场景,也可以是半仿真半虚构的虚拟场景,还可以是完全虚构的虚拟场景,虚拟场景可以是二维虚拟场景、2.5维虚拟场景和三维虚拟场景中的任意一种。例如,该虚拟场景中可以包括河流、草丛、陆地、建筑物等。该虚拟场景用于至少两个虚拟对象之间进行对战,虚拟场景中还包括供至少两个虚拟对象使用的虚拟资源,如虚拟对象用于武装自己或与其他虚拟对象进行战斗所需的兵器等道具。
例如,虚拟场景可以为任一电子游戏中的虚拟场景,以电子游戏为MOBA为例,该虚拟场景提供有正方形地图,该正方形地图包括对称的左下角区域和右上角区域,属于两个敌对阵营的虚拟对象分别占据其中一个区域,并以摧毁对方区域的目标建筑物作为游戏胜利的目标。
本申请实施例提供的虚拟对象控制方法,可应用于多种场景下。
例如,应用于游戏中的对战场景下。在对战场景中,终端显示第一虚拟对象,用户控制第一虚拟对象执行释放技能操作,可以采用本申请实施例提供的虚拟对象控制方法,确定第一虚拟对象执行释放技能操作时的瞄准方向。
图1B是本申请实施例提供的一种虚拟对象控制方法的流程图。本申请实施例的执行主体为终端,该终端可以为便携式、袖珍式、手持式等多种类型的终端,如手机、计算机、平板电脑等。参见图1B,该方法包括:
101、终端响应于对触控区域的触摸操作,确定触摸操作经过的至少两个触摸位置。所述至少两个触摸位置包括选自所述触摸操作最后经过的预设数目个触摸位置。
本申请实施例中,终端显示虚拟场景界面,虚拟场景界面包括虚拟场景,虚拟场景中可以包括第一虚拟对象,还可以包括河流、草丛、陆地、建筑物、虚拟对象所使用的虚拟资源等,另外虚拟场景界面中还可以包括触控按钮、触控区域等,以 便用户通过触控按钮或触控区域来控制第一虚拟对象执行操作。例如,可以控制虚拟对象执行调整姿势、爬行、步行、奔跑、骑行、飞行、跳跃、驾驶、拾取等操作,还可以控制虚拟对象执行释放技能操作或者其他操作。
其中,第一虚拟对象是由用户控制的虚拟对象,虚拟场景中还可以包括除第一虚拟对象之外的其他虚拟对象,其他虚拟对象可以是由其他用户控制的虚拟对象,也可以是由终端自动控制的虚拟对象,例如,虚拟场景中的野怪、士兵、中立生物等。
第一虚拟对象在执行释放技能操作时,需要向另一个虚拟对象释放技能,或者向某一个方向释放技能,或者向某一个位置释放技能,但是无论是哪一种情况,首先需要确定释放技能时的瞄准方向。
本申请实施例中的触控区域用于触发释放技能操作,具有调整瞄准方向的作用,用户手指触摸触控区域,在触控区域中进行触摸操作,从而产生触摸位置。触摸位置指示释放技能时的瞄准方向,用户通过手指在触控区域中进行移动,可以选择自己想要的瞄准方向,一旦用户手指执行抬起动作,终端即可按照抬起手指时的触摸位置确定瞄准方向,控制第一虚拟对象按照该瞄准方向来执行释放技能操作。
相关技术中,用户手指触摸触控区域,手指在触控区域中进行移动,之后用户抬起手指,此过程中终端确定至少两个触摸位置,会按照最后一个触摸位置指示的瞄准方向,控制第一虚拟对象执行释放技能操作,但是实际应用中,在用户抬起手指时,手指可能会发生轻微移动,导致触摸位置产生了位移,在用户期望的触摸位置之后会产生新的触摸位置,导致实际的最后一个触摸位置指示的瞄准方向不是用户期望的瞄准方向。本申请实施例为了提高瞄准方向的准确率,满足用户的需求,可以确定触摸操作的至少两个触摸位置,后续综合考虑这些触摸位置来确定更为准确的瞄准方向。
102、终端根据预设策略对至少两个触摸位置进行合并,从而确定触摸操作的目标触摸位置。
103、终端确定目标触摸位置指示的第一瞄准方向。
本申请实施例中,终端根据至少两个触摸位置确定一个目标触摸位置,将目标触摸位置指示的瞄准方向作为第一瞄准方向。由于至少两个触摸位置中包括用户期望的触摸位置的可能性较大,因此与最后一个触摸位置相比较,目标触摸位置更能够体现用户的需求,提高了瞄准方向的准确率。
其中,目标触摸位置用于指示第一虚拟对象的第一瞄准方向,该第一瞄准方向可以为虚拟场景中的任一方向,例如,以第一虚拟对象作为原点,第一瞄准方向可以为第一虚拟对象的左侧、右上方、右下方等,或者采用更加精确的方式进行表示,第一瞄准方向可以为第一虚拟对象的30度方向、90度方向等。
104、终端控制第一虚拟对象按照第一瞄准方向执行释放技能操作。
终端确定第一瞄准方向之后,控制第一虚拟对象向第一瞄准方向执行释放技能操作。
第一虚拟对象可以具有不同类型的技能,例如,可以包括方向类技能、对象类技能及位置类技能。对于不同类型的技能,控制第一虚拟对象执行释放技能操作时,针对的对象不同,例如,对象类技能,控制第一虚拟对象向虚拟场景中位于瞄准方向上的虚拟对象执行释放技能操作;位置类技能,控制第一虚拟对象向虚拟场景中位于瞄准方向上的某一位置执行释放技能操作;方向类技能,控制第一虚拟对象向虚拟场景中的瞄准方向执行释放技能操作。
本申请实施例提供的方法,不再仅根据触控区域中触摸操作的最后一个触摸位置确定瞄准方向,而是确定触摸操作的至少两个触摸位置,综合考虑该至少两个触摸位置确定目标触摸位置,避免了由于用户误操作产生的最后一个触摸位置与用户期望的触摸位置不一致的情况,得到的目标触摸位置能够体现用户期望的触摸位置,则目标触摸位置指示的瞄准方向更能满足用户的需求,提高了瞄准方向的准确率,之后按照确定的瞄准方向,控制第一虚拟对象执行释放技能操作,对第一虚拟对象释放技能操作的控制也更加准确。
图2是本申请实施例提供的另一种虚拟对象控制方法的流程图。参见图2,该实施例的执行主体为终端,参见图2,该方法包括:
201、终端通过第一虚拟对象对应的虚拟场景界面,显示第一虚拟对象的释放技能按钮。
本申请实施例中,虚拟场景界面用于显示第一虚拟对象的视野范围内的虚拟场景,该虚拟场景界面中可以包括第一虚拟对象的释放技能按钮,还可以包括第一虚拟对象及其他虚拟对象,还可以包括河流、草丛、陆地、建筑物、虚拟对象所使用的虚拟资源等。
其中,虚拟对象可以分为多个类型的虚拟对象。例如,可以根据虚拟对象的外形、虚拟对象的技能或者根据其他划分标准将虚拟对象分为多个类型。例如,根据虚拟对象的技能将虚拟对象分为多个类型,则虚拟对象可以包括战士型虚拟对象、法师型虚拟对象、辅助型虚拟对象、射手型虚拟对象和刺客型虚拟对象。本申请实施例中的第一虚拟对象可以为任一类型的虚拟对象。
第一虚拟对象的释放技能按钮可以为一个或多个,不同的释放技能按钮对应不同的技能。可选地,释放技能按钮中包括文本或图像,该文本或图像用于描述该释放技能按钮对应的技能。本申请实施例仅是以第一虚拟对象的任一个技能释放按钮为例进行说明。
例如,参见图3所示的虚拟场景界面300,该虚拟场景界面300中包括第一虚拟对象301、第二虚拟对象302,第一虚拟对象301与第二虚拟对象302属于不同的阵营。该虚拟场景界面300还包括多个释放技能按钮303,该多个释放技能按钮位于虚拟场景界面的右下角。且该虚拟场景界面的左上角显示有完整的虚拟场景的地图。
202、终端响应于对释放技能按钮的触发操作,通过虚拟场景界面显示触控区域。
终端通过虚拟场景界面显示第一虚拟对象的释放技能按钮,用户对释放技能按钮执行触发操作,终端检测到用户对释放技能按钮的触发操作,通过虚拟场景界面显示该释放技能按钮对应的触控区域。其中,触发操作可以为点击操作、滑动操作或其他操作。
如果第一虚拟对象有多个释放技能按钮,用户对任一释放技能按钮进行触发操作,终端通过虚拟场景界面显示该释放技能按钮对应的触控区域。
其中,该触控区域可以为圆形、方形或其他形状,该触控区域可以位于虚拟场景的任意位置,例如,位于虚拟场景的右下角、左下角等。
在一种可能实现方式中,触控区域包括第一触控子区域和第二触控子区域,第二触控子区域位于第一触控子区域的外侧。用户手指触摸触控区域,如果用户抬起手指时,终止触摸位置在第一触控子区域,终端控制第一虚拟对象快速释放技能;如果用户抬起手指时,终止触摸位置在第二触控子区域,终端控制第一虚拟对象进 行主动瞄准得到瞄准方向。
例如,参见图4所示的触控区域400,该触控区域400为圆形区域,阴影部分为第一触控子区域,空白部分为第二触控子区域,圆点表示手指按下时的触摸位置。
203、终端响应于对触控区域的触摸操作,确定触摸操作的至少两个触摸位置。
终端显示触控区域,用户手指触摸该触控区域,终端检测到用户手指对应的触摸点,用户手指在触摸区域中移动,触摸点的位置也发生相应的变化,直至用户手指抬起,完成了对触控区域的触摸操作,而终端通过检测该触摸操作,可以确定触摸操作的至少两个触摸位置。
在一种可能实现方式中,用户手指抬起时,触摸点的终止触摸位置位于第二触控子区域,则终端根据至少两个触摸位置,确定触摸操作的目标触摸位置。也就是说,如果终止触摸位置位于第二触控子区域,则认为用户期望控制第一虚拟对象根据用户的操作确定瞄准方向。
可选地,终止触摸位置位于第二触控子区域时,终端确定的目标触摸位置可能位于第二触控子区域,也可能位于第一触控子区域,如果目标触摸位置位于第二触控子区域,则执行下述步骤204,如果目标触摸位置位于第一触控子区域,则终端控制第一虚拟对象快速释放技能。
在一种可能实现方式中,触摸操作包括触摸点的按下操作、触摸点的滑动操作以及触摸点的抬起操作。其中,用户手指与触摸区域接触时,将用户手指识别为触摸点,终端响应于触控区域中的触摸点对应的按下操作,确定按下操作对应的初始触摸位置;确定触摸点在触控区域中滑动的过程中的至少一个中间触摸位置;响应于触控区域中的触摸点对应的抬起操作,确定抬起操作对应的终止触摸位置。可以从所述至少一个中间触摸位置和所述终止触摸位置中确定所述预设数目个触摸位置。终端检测到触控区域中的触摸点的按下操作时,表示准备执行释放技能操作,对触摸点的滑动过程即是对瞄准方向的调整,检测到触摸点的抬起操作时,表示已经对瞄准方向调整完成。
可选地,终端响应于触控区域中的触控点的按下操作之后,判断触摸点是否属于该触控区域,如果是,确定按下操作对应的初始触摸位置,如果否,则不执行释放技能操作。
在一种可能实现方式中,为了避免用户多个手指的触摸操作发生冲突,终端响应于触控区域中的触摸点对应的按下操作,为触摸点分配触摸标识。之后触摸点在触控区域中的位置可以发生变化,终端可以检测到触摸位置及其对应的触摸标识,则终端确定触控区域中检测到的与该触摸标识匹配的至少两个触摸位置,以保证确定的至少两个触摸位置属于同一个用户手指,即属于同一触摸操作。其中,触摸点用于指代用户手指与显示屏幕接触而产生的位置点,该触摸标识可以为fingerID或其他类型的标识。
在执行触摸操作的过程中,用户手指按下时的触摸点所在的位置为初始触摸位置,触摸点在触控区域中滑动的过程中,触摸点的触摸标识不会发生变化,而触摸位置可以发生变化,进而产生更多的触摸位置,且该用户手指所产生的触摸位置与触摸点的触摸标识相匹配,用户手指抬起时的触摸点所在的位置为终止触摸位置,终止触摸位置为触摸操作过程中的最后一个触摸位置。
在一种可能实现方式中,对于触摸操作的全部触摸位置来说,由于在确定瞄准方向时,初始的几个触摸位置对瞄准方向的影响较小,最终的几个触摸位置对瞄准方向的影响较大。因此,确定触摸操作的至少两个触摸位置可以包括触摸操作的终 止触摸位置及终止触摸位置之前的至少一个触摸位置。
在一种可能实现方式中,用户对触控区域执行触摸操作的过程为动态过程,在用户执行触摸操作的过程中,终端对虚拟场景界面进行采集,得到按照排列顺序排列的至少两个虚拟场景界面,每个虚拟场景界面中包括触摸点的一个触摸位置,则至少两个触摸位置也按照排列顺序排列。例如,至少两个触摸位置按照排列顺序从初始触摸位置到终止触摸位置进行排列,则终端可以从至少两个触摸位置中,选取排列在最后的预设数量的触摸位置。例如,选取两个触摸位置,则选取终止触摸位置及终止触摸位置的上一个触摸位置。例如,至少两个触摸位置按照排列顺序从终止触摸位置到初始触摸位置进行排列,则终端可以从至少两个触摸位置中,选取排列在最前的预设数量的触摸位置。
可选地,虚拟场景界面可以是按固定的时间间隔来采集的,相应的,确定的至少两个触摸位置中任两个相邻触摸位置之间的时间间隔相同。
在一种可能实现方式中,触摸位置可以采用坐标表示,每个触摸位置有对应的坐标,对至少两个触摸位置的坐标进行处理,可以确定目标触摸位置的坐标,从而确定了目标触摸位置。可选地,以触控区域的中心为坐标原点,建立坐标系,该触控区域中的每个位置具有一个对应的坐标。
在一种可能实现方式中,虚拟场景界面中还可以包括第一虚拟对象的另一个触控区域,另一个触控区域用于控制第一虚拟对象的前进、后退、移动等动作,可以同时在释放技能按钮对应的触控区域与该另一触控区域上进行操作,使第一虚拟对象在行动的同时释放技能。当然,该虚拟场景界面中还可以包括其他的触控区域,本申请实施例对此不做限定。
步骤203的其他实施方式与上述步骤101类似,在此不再赘述。
204、终端根据至少两个触摸位置,确定触摸操作的目标触摸位置。
在一种可能实现方式中,采用坐标表示触摸位置,终端按照至少两个触摸位置的排列顺序,确定至少两个触摸位置的权重;按照至少两个触摸位置的权重,对至少两个触摸位置的坐标进行加权合并,得到目标触摸位置的坐标。其中,至少两个触摸位置的权重可以是预先设置的,且至少两个触摸位置的权重之和为1,至少两个触摸位置的数量也可以是预先设置的。
可选地,触摸位置的坐标包括横坐标和纵坐标,终端按照至少两个触摸位置的权重,对至少两个触摸位置的横坐标进行加权求和,得到目标触摸位置的横坐标;按照至少两个触摸位置的权重,对至少两个触摸位置的纵坐标进行加权求和,得到目标触摸位置的纵坐标。
例如,终端需要根据排列在最后的3个触摸位置,来确定目标触摸位置,则终端预先设置三个触摸位置的权重q1、q2和q3,q1是终止触摸位置的权重,q2是倒数第二个触摸位置的权重,q3是倒数第三个触摸位置的权重,终止触摸位置的坐标为(x1,y1),倒数第二个触摸位置的坐标为(x2,y2),倒数第三个触摸位置的坐标为(x3,y3),则目标触摸位置的横坐标为:x=x1*q1+x2*q2+x3*q3;目标触摸位置的纵坐标为:y=y1*q1+y2*q2+y3*q3,以得到目标触摸位置的坐标(x,y)。
可选地,至少两个触摸位置包括终止触摸位置和终止触摸位置的上一个触摸位置,则可以设置终止触摸位置的权重为0,终止触摸位置的上一个触摸位置的权重为1。用户手指按压触摸区域,由于用户手指与触控区域的接触面积较大,在用户抬起手指时,触摸点的触摸位置会产生位移,在用户期望的触摸位置之后会产生新的触摸位置,即终止触摸位置很可能是用户误操作产生的触摸位置,而终止触摸位 置的上一个触摸位置则很可能是用户用来确定瞄准方向的触摸位置,因此将终止触摸位置的上一个触摸位置确定为目标触摸位置。
步骤204的其他实施方式与上述步骤102类似,在此不再赘述。
205、终端确定目标触摸位置指示的第一瞄准方向。
在一种可能实现方式中,以触控区域的中心点为原点,根据目标触摸位置的坐标,确定目标触摸位置位于中心点的哪一个方向,将该方向作为第一瞄准方向,相应的,对于虚拟场景界面中的第一虚拟对象来说,第一瞄准方向即是该第一虚拟对象的瞄准方向,后续第一虚拟对象在执行释放技能操作时,以第一虚拟对象为原点,向该第一虚拟对象的第一瞄准方向释放技能。
步骤205的其他实施方式与上述步骤103类似,在此不再赘述。
206、终端控制第一虚拟对象按照第一瞄准方向执行释放技能操作。
终端确定第一瞄准方向之后,终端控制第一虚拟对象向第一瞄准方向执行释放技能操作。
在一种可能实现方式中,终端根据第一瞄准方向及第一预设距离,确定第一瞄准方向上的瞄准位置;控制第一虚拟对象向该瞄准位置执行释放技能操作。其中,瞄准位置与第一虚拟对象之间的距离为第一预设距离,该第一预设距离是以第一虚拟对象为原点而设置的技能释放距离。
在另一种可能实现方式中,终端确定第一瞄准方向上,与第一虚拟对象的距离最近的第二虚拟对象;控制第一虚拟对象向第二虚拟对象执行释放技能操作。其中,终端确定第一瞄准方向之后,可以自动获取该第一瞄准方向上的虚拟对象,根据获取到的虚拟对象的位置,选取其中距离第一虚拟对象最近的虚拟对象作为第二虚拟对象。
可选地,当第二虚拟对象为其他用户控制的虚拟对象时,根据技能类型的不同,该第二虚拟对象可以与第一虚拟对象属于同一阵营,也可以与第一虚拟对象属于敌对阵营。例如,第一虚拟对象的技能为攻击类型的技能,则第二虚拟对象可以与第一虚拟对象属于敌对阵营,第一虚拟对象向第二虚拟对象执行攻击操作;第一虚拟对象的技能为治疗类型的技能,则第二虚拟对象可以与第一虚拟对象属于同一阵营,第一虚拟对象向第二虚拟对象执行治疗操作。
本申请实施例可以应用于对战场景下,虚拟场景界面的虚拟场景中包括第一虚拟对象和第二虚拟对象,如果其他虚拟对象与第一虚拟对象属于同一阵营,用户可以控制第一虚拟对象执行治疗操作,对其他虚拟对象进行治疗;如果其他虚拟对象与第一虚拟对象属于敌对阵营,用户可以控制第一虚拟对象执行攻击操作,对其他虚拟对象进行攻击。
例如,终端控制第一虚拟对象执行释放技能操作,攻击第二虚拟对象时,虚拟场景界面的变化如图5-12所示:
参见图5所示的虚拟场景界面500,用户对释放技能按钮进行触发操作之后,在虚拟场景界面500中显示一个圆形的触控区域,该触控区域中显示的圆点表示当前的触摸位置1,此时在虚拟场景中显示第一虚拟对象501的瞄准方向1,用户可以通过虚拟场景界面500预览当前的瞄准方向1。
图6所示的虚拟场景界面600是在虚拟场景界面500的基础上,用户手指在触控区域中沿箭头所示的方向移动后显示的虚拟场景界面,手指移动后触控区域中的触摸位置发生了变化,从触摸位置1变为触摸位置2,第一虚拟对象501的瞄准方向也发生了对应的变化,由瞄准方向1变为瞄准方向2。图7所示的虚拟场景界面 700是在虚拟场景界面600的基础上,用户手指在触控区域中再次移动之后显示的虚拟场景界面,图7的虚拟场景界面700的触控区域中的触摸位置再次发生了变化,从触摸位置2变为触摸位置3,第一虚拟对象501的瞄准方向也再次发生变化,由瞄准方向2变为瞄准方向3。图8所示的虚拟场景界面800是在虚拟场景界面700的基础上,用户手指在触控区域中再次移动之后显示的虚拟场景界面,图8的虚拟场景界面800的触控区域中的触摸位置再次发生了变化,从触摸位置3变为触摸位置4,第一虚拟对象501的瞄准方向也再次发生变化,由瞄准方向3变为瞄准方向4。
图9所示的虚拟场景界面900是在虚拟场景界面800的基础上,用户手指抬起离开触控区域,此时虚拟场景界面900中不再显示触控区域,终端根据确定的目标触摸位置,确定第一虚拟对象501最终的瞄准方向5。
图10所示的虚拟场景界面1000是在虚拟场景界面900的基础上,第一虚拟对象501按照瞄准方向5开始释放技能,此时释放技能按钮的中显示预设时长,该预设时长表示技能冷却时长,在该预设时长内,用户无法再次触发该释放技能按钮,直至时长减小至0。图11所示的虚拟场景界面1100是在虚拟场景界面1000的基础上,第一虚拟对象501已经释放出对应的技能,此时释放技能按钮的中显示的时长减小。图12所示的虚拟场景界面1200是在虚拟场景界面1100的基础上,第一虚拟对象501释放出的技能已经攻击到第二虚拟对象,此时释放技能按钮的上层显示的时长再次减小。
另外,上述实施例中,均是以目标触摸位置位于第二触摸区域为例进行说明,在一种可能实现方式中,如果用户从按下手指到抬起手指,触摸点一直位于第一触控子区域,即至少两个触摸位置均位于第一触控子区域,则终端按照预设规则确定第二瞄准方向;控制第一虚拟对象按照第二瞄准方向执行释放技能操作。其中,预设规则可以是预先设置的规则,用户可以根据技能的相关描述了解该预设规则,以确定是根据该预设规则控制第一虚拟对象释放技能,还是手动调整瞄准方向控制第一虚拟对象释放技能。
可选地,该预设规则为对距离小于第二预设距离的虚拟对象释放技能,则终端确定与第一虚拟对象之间的距离小于第二预设距离的第三虚拟对象的位置;根据第一虚拟对象的位置和第三虚拟对象的位置,确定第二瞄准方向,按照第二瞄准方向执行释放技能操作,从而对第三虚拟对象释放技能。其中,第一预设距离可以为释放技能按钮对应的技能可以释放的最远距离。
可选地,如果在与第一虚拟对象之间的距离小于第一预设距离的范围内,存在除第一虚拟对象之外的多个虚拟对象,可以从多个虚拟对象中选取任一个虚拟对象作为第三虚拟对象,或者可以从多个虚拟对象中选取生命值最小的虚拟对象作为第三虚拟对象。
图13是本申请实施例提供的一种释放技能的流程图,参见图13,释放技能的流程包括:
1301、用户触发释放技能按钮,终端显示触控区域,检测到对触摸点的按下操作。
1302、终端准备释放技能。
1303、终端检测用户是否对触摸点进行抬起操作,如果是,执行步骤1304,如果否,执行步骤1305。
1304、根据确定的至少两个触摸位置确定目标触摸位置,确定目标触摸位置指示的第一瞄准方向,执行释放技能操作。
1305、终端判断用户是否对触摸点进行滑动操作,如果是,执行步骤1303,如果否,执行步骤1302。
需要说明的一点是,本申请实施例仅是以第一虚拟对象的一个技能释放按钮对应的触控区域为例进行说明,在另一实施例中,对于第一虚拟对象的其他技能释放按钮的触控区域,可以采用与上述实施例类似的方式,实现控制第一虚拟对象执行释放技能操作。
需要说明的另一点是,本申请实施例仅是以执行主体为终端为例进行说明,在另一实施例中,步骤204和步骤205可以由与终端连接的服务器来执行。即终端响应于对触控区域的触摸操作,确定触摸操作的至少两个触摸位置,将至少两个触摸位置发送给服务器,服务器根据至少两个触摸位置,确定触摸操作的目标触摸位置,根据目标触摸位置,确定目标触摸位置指示的第一瞄准方向,将第一瞄准方向发送给终端,终端控制第一虚拟对象按照第一瞄准方向执行释放技能操作。
相关技术中,参见图14所示的触控区域的示意图,触控区域1400中的实心圆点表示用户期望的终止触摸位置1401,空心圆点表示实际的终止触摸位置1402,从图14中可以看出,用户期望的终止触摸位置1401与实际的终止触摸位置1402相差较大,如果直接根据实际的终止触摸位置1402确定瞄准方向,则确定的瞄准方向与用户期望的瞄准方向相差较大,导致无法达到用户期望的效果。
参见图15所示的触控区域的示意图,触控区域1501中所示的轨迹为触控操作产生的轨迹,触控区域1502中实心圆点表示用户期望的终止触摸位置1511,空心圆点表示实际的终止触摸位置1521,采用本申请实施例提供的方法确定目标触摸位置1512,从图15中可以看出,虽然用户期望的终止触摸位置1511与实际的终止触摸位置1521相差较大,但是确定的目标触摸位置1512与用户期望的终止触摸位置1511相差较小,根据目标触摸位置确定的瞄准方向与用户期望的瞄准方向相差较小,可以达到用户期望的效果。
本申请实施例提供的方法,不再仅根据触控区域中触摸操作的最后一个触摸位置确定瞄准方向,而是确定触摸操作的至少两个触摸位置,综合考虑该至少两个触摸位置确定目标触摸位置,避免了由于用户误操作产生的最后一个触摸位置与用户期望的触摸位置不一致的情况,得到的目标触摸位置能够体现用户期望的触摸位置,则目标触摸位置指示的瞄准方向更能满足用户的需求,提高了瞄准方向的准确率,之后按照确定的瞄准方向,控制第一虚拟对象执行释放技能操作,对第一虚拟对象释放技能操作的控制也更加准确。
并且,本申请实施例中,将触控区域划分为第一触控子区域和第二触控子区域,使用户既可以快速执行释放技能操作,又可以手动确定瞄准方向再执行释放技能操作,用户可以根据虚拟场景中第一虚拟对象释放的技能,灵活选择,提高了灵活性。
并且,本申请实施例中,为触摸点分配触摸标识,确定与触摸标识匹配的至少两个触摸位置,保证确定的至少两个触摸位置为同一触摸操作的触摸位置,避免虚拟场景界面中其他手指的触摸操作的干扰,提高了操作准确性。
在一种可能实现方式中,图16是本申请实施例提供的一种确定目标触摸位置的流程图,该流程包括:
1601、终端检测对触摸点的按下操作,得到初始触摸位置,为该触摸点分配触摸标识。
1602、终端判断触摸点是否位于触控区域中。
1603、如果触摸点位于触控区域中,终端将初始触摸位置的坐标赋值给(xn, yn)、(xn-1,yn-1)…直至(x1,y1)。其中,n表示需要确定的触摸位置的数量,n为正整数。如果触摸点不位于触控区域中,则结束流程。
1604、终端检测对触摸点的移动操作,得到移动后的触摸位置。
1605、终端判断移动后的触摸位置的触摸标识与上述触摸点的触摸标识是否匹配。
1606、如果匹配,终端将移动后的触摸位置的坐标赋值给(xn,yn)。如果不匹配,则结束流程。
当用户再次移动触摸点时,重复执行步骤1604,将(x2,y2)赋值给(x1,y1),将(x3,y3)赋值给(x2,y2),依次类推,将(xn-1,yn-1)赋值给(xn-2,yn-2),将(xn,yn)赋值给(xn-1,yn-1),将再次移动后的触摸位置的坐标赋值给(xn,yn)。
1607、终端检测对触摸点的抬起操作,得到终止触摸位置。
1608、终端判断终止触摸位置的触摸标识与上述触摸点的触摸标识是否匹配。
1609、如果匹配,则终端将(x2,y2)赋值给(x1,y1),将(x3,y3)赋值给(x2,y2),依次类推,将(xn-1,yn-1)赋值给(xn-2,yn-2),将(xn,yn)赋值给(xn-1,yn-1),将终止触摸位置的坐标赋值给(xn,yn)。
1610、终端根据n个触摸位置的坐标和对应的权重,计算目标触摸位置的坐标,得到目标触摸位置。其中,n为大于1的正整数。
例如,n为3,即需要确定3个触摸位置,得到该3个触摸位置的坐标(x1,y1)、(x2,y2)和(x3,y3)。在确定触摸位置的坐标的过程中,假设共得到了10个触摸位置,得到初始触摸位置后,将初始触摸位置的坐标赋值(x1,y1)、(x2,y2)和(x3,y3);得到第二个触摸位置后,将第二个触摸位置的坐标赋值给(x3,y3),得到第三个触摸位置后,将(x3,y3)(第二个触摸位置的坐标)赋值给(x2,y2),第三个触摸位置的坐标赋值给(x3,y3);得到第四个触摸位置的坐标后,将(x2,y2)(第二个触摸位置的坐标)赋值给(x1,y1),将(x3,y3)(第三个触摸位置的坐标)赋值给(x2,y2),第四个触摸位置的坐标赋值给(x3,y3),直至得到终止触摸位置,将(x2,y2)(第八个触摸位置的坐标)赋值给(x1,y1),将(x3,y3)(第九个触摸位置的坐标)赋值给(x2,y2),终止触摸位置的坐标赋值给(x3,y3),得到最后三个触摸位置的坐标。
图17是本申请实施例提供的一种虚拟对象控制装置的结构示意图。参见图17,该装置包括:
触摸位置确定模块1701,用于响应于对触控区域的触摸操作,确定触摸操作经过的至少两个触摸位置,至少两个触摸位置选自触摸操作最后经过的预设数目个触摸位置;
目标位置确定模块1702,用于根据预设策略对至少两个触摸位置进行合并,从而确定触摸操作的目标触摸位置;
第一方向确定模块1703,用于确定目标触摸位置指示的第一瞄准方向;
第一控制模块1704,用于控制第一虚拟对象按照第一瞄准方向执行释放技能操作。
本申请实施例提供的装置,不再仅根据触控区域中触摸操作的最后一个触摸位置确定瞄准方向,而是确定触摸操作的至少两个触摸位置,综合考虑该至少两个触摸位置确定目标触摸位置,避免了由于用户误操作产生的最后一个触摸位置与用户期望的触摸位置不一致的情况,得到的目标触摸位置能够体现用户期望的触摸位置, 则目标触摸位置指示的瞄准方向更能满足用户的需求,提高了瞄准方向的准确率,之后按照确定的瞄准方向,控制第一虚拟对象执行释放技能操作,对第一虚拟对象释放技能操作的控制也更加准确。
在一种可能实现方式中,参见图18,目标位置确定模块1702,包括:
权重确定单元1712,用于按照至少两个触摸位置的排列顺序,确定至少两个触摸位置的权重;
坐标确定单元1722,用于按照至少两个触摸位置的权重,对至少两个触摸位置的坐标进行加权处理,得到目标触摸位置的坐标。
在另一种可能实现方式中,参见图18,触摸位置的坐标包括横坐标和纵坐标,坐标确定单元1722,用于:
按照至少两个触摸位置的权重,对至少两个触摸位置的横坐标进行加权求和,得到目标触摸位置的横坐标;
按照至少两个触摸位置的权重,对至少两个触摸位置的纵坐标进行加权求和,得到目标触摸位置的纵坐标。
在另一种可能实现方式中,参见图18,触摸位置确定模块1701,包括:
第一位置确定单元1711,用于响应于触控区域中的触摸点对应的按下操作,确定按下操作对应的初始触摸位置;
第二位置确定单元1721,用于确定触摸点在触控区域中滑动的过程中的至少一个触摸位置;
第三位置确定单元1731,用于响应于触控区域中的触摸点对应的抬起操作,确定抬起操作对应的终止触摸位置。
在另一种可能实现方式中,参见图18,装置还包括:
标识分配模块1705,用于响应于触控区域中的触摸点对应的按下操作,为触摸点分配触摸标识;
第二位置确定单元1721,用于确定触控区域中检测到的与触摸标识匹配的至少两个触摸位置。
在另一种可能实现方式中,参见图18,第一控制模块1704,包括:
对象确定单元1714,用于确定第一瞄准方向上,与第一虚拟对象之间的距离最近的第二虚拟对象;
第一控制单元1724,用于控制第一虚拟对象向第二虚拟对象执行释放技能操作。
在另一种可能实现方式中,参见图18,第一控制模块1704,包括:
瞄准位置确定单元1734,用于根据第一瞄准方向及第一预设距离,确定第一瞄准方向上的瞄准位置,瞄准位置与第一虚拟对象之间的距离为第一预设距离;
第二控制单元1744,用于控制第一虚拟对象向瞄准位置执行释放技能操作。
在另一种可能实现方式中,参见图18,触控区域包括第一触控子区域及第二触控子区域,第二触控子区域位于第一触控子区域的外侧;
目标位置确定模块1702,用于终止触摸位置位于第二触控子区域,则根据至少两个触摸位置,确定触摸操作的目标触摸位置。
在另一种可能实现方式中,参见图18,装置还包括:
第二方向确定模块1706,用于至少两个触摸位置均位于第一触控子区域,则按照预设规则确定第二瞄准方向;
第二控制模块1707,用于控制第一虚拟对象按照第二瞄准方向执行释放技能操作。
在另一种可能实现方式中,参见图18,第二方向确定模块1706,包括:
对象位置确定单元1716,用于确定与第一虚拟对象之间的距离小于第二预设距离的第三虚拟对象的位置;
第二方向确定单元1726,用于根据第一虚拟对象的位置和第三虚拟对象的位置,确定第二瞄准方向。
在另一种可能实现方式中,参见图18,装置还包括:
按钮显示模块1708,用于通过第一虚拟对象对应的虚拟场景界面,显示第一虚拟对象的释放技能按钮;
触控区域显示模块1709,用于响应于对释放技能按钮的触发操作,通过虚拟场景界面显示触控区域。
上述所有可选技术方案,可以采用任意结合形成本申请的可选实施例,在此不再一一赘述。
需要说明的是:上述实施例提供的虚拟对象控制装置在控制虚拟对象时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将终端的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的虚拟对象控制装置与虚拟对象调整方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图19示出了本申请一个示例性实施例提供的终端1900的结构示意图。该终端1900可以是便携式移动终端,比如:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。终端1900还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,终端1900包括有:处理器1901和存储器1902。
处理器1901可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1901可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1901也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1901可以集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1901还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1902可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1902还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1902中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1901所执行以实现本申请中方法实施例提供的基于人机对话的来电代接方法。
在一些实施例中,终端1900还可选包括有:外围设备接口1903和至少一个外围设备。处理器1901、存储器1902和外围设备接口1903之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1903相连。 具体地,外围设备包括:射频电路1904、显示屏1905、摄像头组件1906、音频电路1907、定位组件1908和电源1909中的至少一种。
外围设备接口1903可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1901和存储器1902。在一些实施例中,处理器1901、存储器1902和外围设备接口1903被集成在同一芯片或电路板上;在一些其他实施例中,处理器1901、存储器1902和外围设备接口1903中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
在一些实施例中,终端1900还包括有一个或多个传感器1910。该一个或多个传感器1910包括但不限于:加速度传感器1911、陀螺仪传感器1912、压力传感器1919、指纹传感器1914、光学传感器1915以及接近传感器1916。
本领域技术人员可以理解,图19中示出的结构并不构成对终端1900的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
图20是本申请实施例提供的一种服务器的结构示意图,该服务器2000可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上处理器(Central Processing Units,CPU)2001和一个或一个以上的存储器2002,其中,存储器2002中存储有至少一条指令,该至少一条指令由处理器2001加载并执行以实现上述各个方法实施例提供的方法。当然,该服务器还可以具有有线或无线网络接口、键盘以及输入输出接口等部件,以便进行输入输出,该服务器还可以包括其他用于实现设备功能的部件,在此不做赘述。
服务器2000可以用于执行上述虚拟对象控制方法中服务器所执行的步骤。
本申请实施例还提供了一种计算机设备,该计算机设备包括处理器和存储器,存储器中存储有至少一条指令,该至少一条指令由处理器加载并执行,以实现上述实施例的虚拟对象控制方法中所执行的操作。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有至少一条指令,该至少一条指令由处理器加载并执行,以实现上述实施例的虚拟对象控制方法中所执行的操作。
本申请实施例还提供了一种计算机程序,该计算机程序中存储有至少一条指令,该至少一条指令由处理器加载并执行,以实现上述实施例的虚拟对象控制方法中所执行的操作。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,该程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上仅为本申请实施例的可选实施例,并不用以限制本申请实施例,凡在本申请实施例的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种虚拟对象控制方法,由一终端设备执行,所述方法包括:
    响应于对触控区域的触摸操作,确定所述触摸操作经过的至少两个触摸位置,所述至少两个触摸位置选自所述触摸操作最后经过的预设数目个触摸位置;
    根据预设策略对所述至少两个触摸位置进行合并,从而确定所述触摸操作的目标触摸位置;
    确定所述目标触摸位置指示的第一瞄准方向;
    控制第一虚拟对象按照所述第一瞄准方向执行释放技能操作。
  2. 根据权利要求1所述的方法,确定所述触摸操作的目标触摸位置,包括:
    按照所述至少两个触摸位置的排列顺序,确定所述至少两个触摸位置的权重;
    按照所述至少两个触摸位置的权重,对所述至少两个触摸位置的坐标进行加权合并,得到所述目标触摸位置的坐标。
  3. 根据权利要求2所述的方法,得到所述目标触摸位置的坐标,包括:
    按照所述至少两个触摸位置的权重,对所述至少两个触摸位置的横坐标进行加权求和,得到所述目标触摸位置的横坐标;
    按照所述至少两个触摸位置的权重,对所述至少两个触摸位置的纵坐标进行加权求和,得到所述目标触摸位置的纵坐标。
  4. 根据权利要求1所述的方法,确定所述触摸操作经过的至少两个触摸位置,包括:
    响应于所述触控区域中的触摸点对应的按下操作,
    确定所述触摸点在所述触控区域中滑动的过程中的至少一个中间触摸位置;
    响应于所述触控区域中的所述触摸点对应的抬起操作,确定所述抬起操作对应的终止触摸位置;
    从所述至少一个中间触摸位置和所述终止触摸位置中确定所述预设数目个触摸位置。
  5. 根据权利要求4所述的方法,进一步包括:
    响应于所述触控区域中的触摸点对应的按下操作,为所述触摸点分配触摸标识;
    所述确定所述触摸点在所述触控区域中滑动的过程中的至少一个触摸位置,包括:
    确定所述触控区域中检测到的与所述触摸标识匹配的至少两个触摸位置。
  6. 根据权利要求1所述的方法,所述控制第一虚拟对象按照所述第一瞄准方向执行释放技能操作,包括:
    确定所述第一瞄准方向上,与所述第一虚拟对象之间的距离最近的第二虚拟对象;
    控制所述第一虚拟对象向所述第二虚拟对象执行所述释放技能操作。
  7. 根据权利要求1所述的方法,所述控制第一虚拟对象按照所述第一瞄准方向执行释放技能操作,包括:
    根据所述第一瞄准方向及第一预设距离,确定所述第一瞄准方向上的瞄准位置,所述瞄准位置与所述第一虚拟对象之间的距离为所述第一预设距离;
    控制所述第一虚拟对象向所述瞄准位置执行所述释放技能操作。
  8. 根据权利要求1所述的方法,所述触控区域包括第一触控子区域及第二触控子区域,所述第二触控子区域位于所述第一触控子区域的外侧;所述根据所述至少两个触摸位置,确定所述触摸操作的目标触摸位置,包括:
    所述终止触摸位置位于所述第二触控子区域,则根据所述至少两个触摸位置,确定所述触摸操作的目标触摸位置。
  9. 根据权利要求8所述的方法,所述响应于对触控区域的触摸操作,确定所述触摸操作的至少两个触摸位置之后,所述方法还包括:
    所述至少两个触摸位置均位于所述第一触控子区域,则按照预设规则确定第二瞄准方向;
    控制所述第一虚拟对象按照所述第二瞄准方向执行所述释放技能操作。
  10. 根据权利要求9所述的方法,所述按照预设规则确定第二瞄准方向,包括:
    确定与所述第一虚拟对象之间的距离小于第二预设距离的第三虚拟对象的位置;
    根据所述第一虚拟对象的位置和所述第三虚拟对象的位置,确定所述第二瞄准方向。
  11. 根据权利要求1所述的方法,所述响应于对触控区域的触摸操作,确定所述触摸操作的至少两个触摸位置之前,所述方法还包括:
    通过所述第一虚拟对象对应的虚拟场景界面,显示所述第一虚拟对象的释放技能按钮;
    响应于对所述释放技能按钮的触发操作,通过所述虚拟场景界面显示所述触控区域。
  12. 一种虚拟对象控制装置,包括:
    触摸位置确定模块,用于响应于对触控区域的触摸操作,确定所述触摸操作经过的至少两个触摸位置,所述至少两个触摸位置选自所述触摸操作的最后经过的预设数目个触摸位置;
    目标位置确定模块,用于根据预设策略对所述至少两个触摸位置进行合并,从而确定所述触摸操作的目标触摸位置;
    第一方向确定模块,用于确定所述目标触摸位置指示的第一瞄准方向;
    第一控制模块,用于控制第一虚拟对象按照所述第一瞄准方向执行释放技能操作。
  13. 根据权利要求12所述的装置,所述目标触摸位置确定模块,包括:
    权重确定单元,用于按照所述至少两个触摸位置的排列顺序,确定所述至少两个触摸位置的权重;
    坐标确定单元,用于按照所述至少两个触摸位置的权重,对所述至少两个触摸位置的坐标进行加权合并,得到所述目标触摸位置的坐标。
  14. 根据权利要求13所述的装置,所述目标位置确定模块,包括:
    权重确定单元,用于按照所述至少两个触摸位置的排列顺序,确定所述至少两个触摸位置的权重;
    坐标确定单元,用于按照所述至少两个触摸位置的权重,对所述至少两个触摸位置的坐标进行加权合并,得到所述目标触摸位置的坐标。
  15. 根据权利要求12所述的装置,所述触摸位置确定模块,包括:
    第一位置确定单元,用于响应于所述触控区域中的触摸点对应的按下操作,确定所述按下操作对应的初始触摸位置;
    第二位置确定单元,用于确定所述触摸点在所述触控区域中滑动的过程中的至少一个触摸位置;
    第三位置确定单元,用于响应于所述触控区域中的所述触摸点对应的抬起操作,确定所述抬起操作对应的终止触摸位置。
  16. 根据权利要求12所述的装置,所述第一控制模块,包括:
    对象确定单元,用于确定所述第一瞄准方向上,与所述第一虚拟对象之间的距离最近的第二虚拟对象;
    第一控制单元,用于控制所述第一虚拟对象向所述第二虚拟对象执行所述释放技能操作。
  17. 根据权利要求12所述的装置,所述第一控制模块,包括:
    瞄准位置确定单元,用于根据所述第一瞄准方向及第一预设距离,确定所述第一瞄准方向上的瞄准位置,所述瞄准位置与所述第一虚拟对象之间的距离为所述第一预设距离;
    第二控制单元,用于控制所述第一虚拟对象向所述瞄准位置执行所述释放技能操作。
  18. 根据权利要求12所述的装置,所述触控区域包括第一触控子区域及第二触控子区域,所述第二触控子区域位于所述第一触控子区域的外侧;
    目标位置确定模块,用于所述终止触摸位置位于所述第二触控子区域,则根据所述至少两个触摸位置,确定所述触摸操作的目标触摸位置。
  19. 一种计算机设备,其特征在于,所述计算机设备包括处理器和存储器,所述存储器中存储有至少一条指令,所述至少一条指令由所述处理器加载并执行,以实现如权利要求1至11中任一权利要求所述的虚拟对象控制方法中所执行的操作。
  20. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有至少一条指令,所述至少一条指令由处理器加载并执行,以实现如权利要求1至11中任一权利要求所述的虚拟对象控制方法中所执行的操作。
PCT/CN2021/093061 2020-06-05 2021-05-11 虚拟对象控制方法、装置、计算机设备及存储介质 WO2021244237A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021563357A JP7384521B2 (ja) 2020-06-05 2021-05-11 仮想オブジェクトの制御方法、装置、コンピュータ機器及びコンピュータプログラム
EP21782630.4A EP3939679A4 (en) 2020-06-05 2021-05-11 METHOD AND APPARATUS FOR CONTROLLING VIRTUAL OBJECT, COMPUTER DEVICE AND STORAGE MEDIA
KR1020217034213A KR102648249B1 (ko) 2020-06-05 2021-05-11 가상 객체 제어 방법 및 장치, 컴퓨터 디바이스, 및 저장 매체
US17/507,965 US20220040579A1 (en) 2020-06-05 2021-10-22 Virtual object control method and apparatus, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010507467.0 2020-06-05
CN202010507467.0A CN111672115B (zh) 2020-06-05 2020-06-05 虚拟对象控制方法、装置、计算机设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/507,965 Continuation US20220040579A1 (en) 2020-06-05 2021-10-22 Virtual object control method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021244237A1 true WO2021244237A1 (zh) 2021-12-09

Family

ID=72435176

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093061 WO2021244237A1 (zh) 2020-06-05 2021-05-11 虚拟对象控制方法、装置、计算机设备及存储介质

Country Status (6)

Country Link
US (1) US20220040579A1 (zh)
EP (1) EP3939679A4 (zh)
JP (1) JP7384521B2 (zh)
KR (1) KR102648249B1 (zh)
CN (1) CN111672115B (zh)
WO (1) WO2021244237A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111672115B (zh) * 2020-06-05 2022-09-23 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置、计算机设备及存储介质
CN112717403B (zh) * 2021-01-22 2022-11-29 腾讯科技(深圳)有限公司 虚拟对象的控制方法、装置、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180113591A1 (en) * 2015-04-23 2018-04-26 Nubia Technology Co., Ltd. Method for realizing function adjustment by using a virtual frame region and mobile terminal thereof
CN108837506A (zh) * 2018-05-25 2018-11-20 网易(杭州)网络有限公司 一种竞速游戏中虚拟道具的控制方法、装置及存储介质
CN109224439A (zh) * 2018-10-22 2019-01-18 网易(杭州)网络有限公司 游戏瞄准的方法及装置、存储介质、电子装置
US20190265882A1 (en) * 2016-11-10 2019-08-29 Cygames, Inc. Information processing program, information processing method, and information processing device
CN110613933A (zh) * 2019-09-24 2019-12-27 网易(杭州)网络有限公司 游戏中技能释放控制方法、装置、存储介质和处理器
CN111672115A (zh) * 2020-06-05 2020-09-18 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置、计算机设备及存储介质

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5044956A (en) * 1989-01-12 1991-09-03 Atari Games Corporation Control device such as a steering wheel for video vehicle simulator with realistic feedback forces
US5999168A (en) * 1995-09-27 1999-12-07 Immersion Corporation Haptic accelerator for force feedback computer peripherals
JP2004057634A (ja) * 2002-07-31 2004-02-26 Shuji Sonoda ゲーム装置及びそれを実現するプログラム
US20090143141A1 (en) * 2002-08-06 2009-06-04 Igt Intelligent Multiplayer Gaming System With Multi-Touch Display
US8210943B1 (en) * 2006-05-06 2012-07-03 Sony Computer Entertainment America Llc Target interface
US8834245B2 (en) * 2007-08-17 2014-09-16 Nintendo Co., Ltd. System and method for lock on target tracking with free targeting capability
JP4932010B2 (ja) 2010-01-06 2012-05-16 株式会社スクウェア・エニックス ユーザインタフェース処理装置、ユーザインタフェース処理方法、およびユーザインタフェース処理プログラム
US8920240B2 (en) * 2010-04-19 2014-12-30 Guillemot Corporation S.A. Directional game controller
JP5832489B2 (ja) * 2013-08-26 2015-12-16 株式会社コナミデジタルエンタテインメント 移動制御装置及びプログラム
CN104216617B (zh) * 2014-08-27 2017-05-24 小米科技有限责任公司 光标位置确定方法和装置
CN105194873B (zh) * 2015-10-10 2019-01-04 腾讯科技(成都)有限公司 一种信息处理方法、终端及计算机存储介质
CN105214309B (zh) 2015-10-10 2017-07-11 腾讯科技(深圳)有限公司 一种信息处理方法、终端及计算机存储介质
JP6661513B2 (ja) * 2016-10-31 2020-03-11 株式会社バンク・オブ・イノベーション ビデオゲーム処理装置、及びビデオゲーム処理プログラム
KR102531542B1 (ko) * 2016-12-05 2023-05-10 매직 립, 인코포레이티드 혼합 현실 환경의 가상 사용자 입력 콘트롤들
CN109529327B (zh) * 2017-09-21 2022-03-04 腾讯科技(深圳)有限公司 虚拟交互场景中目标定位方法、装置及电子设备
CN108509139B (zh) * 2018-03-30 2019-09-10 腾讯科技(深圳)有限公司 虚拟对象的移动控制方法、装置、电子装置及存储介质
CN108499104B (zh) * 2018-04-17 2022-04-15 腾讯科技(深圳)有限公司 虚拟场景中的方位显示方法、装置、电子装置及介质
CN110624240B (zh) * 2018-06-25 2023-10-13 腾讯科技(上海)有限公司 一种网络游戏的操作控制方法、装置、终端设备和介质
JP7207911B2 (ja) * 2018-09-06 2023-01-18 株式会社バンダイナムコエンターテインメント プログラム、ゲームシステム、サーバシステム及びゲーム提供方法
US11189127B2 (en) * 2019-06-21 2021-11-30 Green Jade Games Ltd Target shooting game with banking of player error
CN110275639B (zh) * 2019-06-26 2023-03-28 Oppo广东移动通信有限公司 触摸数据处理方法、装置、终端及存储介质
US11071906B2 (en) * 2019-10-08 2021-07-27 Zynga Inc. Touchscreen game user interface
CN111151002A (zh) * 2019-12-30 2020-05-15 北京金山安全软件有限公司 触控瞄准方法和装置
US11541317B2 (en) * 2020-02-06 2023-01-03 Sony Interactive Entertainment Inc. Automated weapon selection for new players using AI
CN116688502A (zh) * 2022-02-25 2023-09-05 腾讯科技(深圳)有限公司 虚拟场景中的位置标记方法、装置、设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180113591A1 (en) * 2015-04-23 2018-04-26 Nubia Technology Co., Ltd. Method for realizing function adjustment by using a virtual frame region and mobile terminal thereof
US20190265882A1 (en) * 2016-11-10 2019-08-29 Cygames, Inc. Information processing program, information processing method, and information processing device
CN108837506A (zh) * 2018-05-25 2018-11-20 网易(杭州)网络有限公司 一种竞速游戏中虚拟道具的控制方法、装置及存储介质
CN109224439A (zh) * 2018-10-22 2019-01-18 网易(杭州)网络有限公司 游戏瞄准的方法及装置、存储介质、电子装置
CN110613933A (zh) * 2019-09-24 2019-12-27 网易(杭州)网络有限公司 游戏中技能释放控制方法、装置、存储介质和处理器
CN111672115A (zh) * 2020-06-05 2020-09-18 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置、计算机设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3939679A4 *

Also Published As

Publication number Publication date
CN111672115A (zh) 2020-09-18
EP3939679A1 (en) 2022-01-19
EP3939679A4 (en) 2022-08-31
US20220040579A1 (en) 2022-02-10
KR20210151843A (ko) 2021-12-14
JP7384521B2 (ja) 2023-11-21
CN111672115B (zh) 2022-09-23
JP2022540278A (ja) 2022-09-15
KR102648249B1 (ko) 2024-03-14

Similar Documents

Publication Publication Date Title
CN111481932B (zh) 虚拟对象的控制方法、装置、设备和存储介质
JP7331124B2 (ja) 仮想オブジェクトの制御方法、装置、端末及び記憶媒体
CN111672127B (zh) 虚拟对象的控制方法、装置、设备以及存储介质
CN110465087B (zh) 虚拟物品的控制方法、装置、终端及存储介质
JP7390400B2 (ja) 仮想オブジェクトの制御方法並びにその、装置、端末及びコンピュータプログラム
KR20210143301A (ko) 가상 객체 제어 방법 및 장치, 디바이스, 및 저장 매체
TWI818343B (zh) 虛擬場景的適配顯示方法、裝置、電子設備、儲存媒體及電腦程式產品
US20230050933A1 (en) Two-dimensional figure display method and apparatus for virtual object, device, and storage medium
WO2021244237A1 (zh) 虚拟对象控制方法、装置、计算机设备及存储介质
TWI804032B (zh) 虛擬場景中的資料處理方法、裝置、設備、儲存媒體及程式產品
WO2021244310A1 (zh) 虚拟场景中的虚拟对象控制方法、装置、设备及存储介质
WO2022037529A1 (zh) 虚拟对象的控制方法、装置、终端及存储介质
TWI793838B (zh) 虛擬對象互動模式的選擇方法、裝置、設備、媒體及產品
US20220032188A1 (en) Method for selecting virtual objects, apparatus, terminal and storage medium
US20230271087A1 (en) Method and apparatus for controlling virtual character, device, and storage medium
WO2023138175A1 (zh) 卡牌施放方法、装置、设备、存储介质及程序产品
CN114225372B (zh) 虚拟对象的控制方法、装置、终端、存储介质及程序产品
JP7419400B2 (ja) 仮想オブジェクトの制御方法、装置、端末及びコンピュータプログラム
WO2024051414A1 (zh) 热区的调整方法、装置、设备、存储介质及程序产品
CN113599829B (zh) 虚拟对象的选择方法、装置、终端及存储介质
WO2023246307A1 (zh) 虚拟环境中的信息处理方法、装置、设备及程序产品
CN117764758A (zh) 用于虚拟场景的群组建立方法、装置、设备及存储介质
CN116764222A (zh) 游戏角色的控制方法、装置、计算机设备和存储介质
CN116421968A (zh) 虚拟角色控制方法、装置、电子设备和存储介质
CN113546403A (zh) 角色控制方法、装置、终端和计算机可读存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 20217034213

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021782630

Country of ref document: EP

Effective date: 20211012

ENP Entry into the national phase

Ref document number: 2021563357

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21782630

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE