WO2022242400A1 - Skill release method, apparatus, device, medium and program product for virtual objects - Google Patents

Skill release method, apparatus, device, medium and program product for virtual objects

Info

Publication number
WO2022242400A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
objects
skill
virtual object
response
Prior art date
Application number
PCT/CN2022/087836
Other languages
English (en)
French (fr)
Inventor
潘科宇
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to JP2023553103A priority Critical patent/JP2024513658A/ja
Priority to US17/990,579 priority patent/US20230078592A1/en
Publication of WO2022242400A1 publication Critical patent/WO2022242400A1/zh

Classifications

    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/426 Processing input control signals of video game devices by mapping the input signals into game commands, involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 Using indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • G06T19/006 Mixed reality (manipulating 3D models or images for computer graphics)

Definitions

  • the embodiments of the present application relate to the field of human-computer interaction, and in particular to a skill release method, apparatus, device, medium, and program product for virtual objects.
  • a user controls a virtual fighter plane to attack multiple virtual creatures, and the virtual fighter plane simultaneously launches virtual missiles along multiple fixed directions, and the virtual creatures located in the fixed directions will be attacked by the virtual missiles.
  • the present application provides a method, apparatus, device, medium, and program product for releasing skills of virtual objects, which improves the efficiency of users' human-computer interaction. The technical solution is as follows:
  • a method for releasing skills of a virtual object comprising:
  • displaying a virtual environment screen, where the virtual environment screen displays a first virtual object and at least one second virtual object, and the first virtual object has a first skill;
  • in response to a first target locking operation on the first skill, displaying a lock indicator of the first skill on the virtual environment screen, where the lock indicator is used to lock n second virtual objects located in the release area of the first skill;
  • in response to a first release operation on the first skill, controlling the m virtual flying objects released by the first virtual object to automatically track the n second virtual objects, where m and n are both integers not less than 2.
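The claimed flow above — lock the n second virtual objects inside the release area, then release m virtual flying objects that track them — can be sketched in Python. This is an illustrative model only, not part of the claims; the function names, the coordinate keys `x`/`y`, and the round-robin assignment policy are all assumptions:

```python
import math

def targets_in_release_area(center, targets, radius):
    # Lock step: keep only the second virtual objects whose position
    # falls inside a circular release area of the first skill.
    cx, cy = center
    return [t for t in targets
            if math.hypot(t["x"] - cx, t["y"] - cy) <= radius]

def assign_projectiles(m, locked):
    # Release step: distribute the m virtual flying objects over the
    # n locked targets round-robin, so each target is tracked by at
    # least one projectile whenever m >= n.
    return [locked[i % len(locked)] for i in range(m)]
```

For example, with m = 4 projectiles and 2 locked targets, each target ends up tracked by two projectiles.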
  • a device for releasing skills of a virtual object comprising:
  • a display module configured to display a virtual environment screen, the virtual environment screen displays a first virtual object and at least one second virtual object, and the first virtual object has a first skill;
  • the display module is further configured to display a lock indicator of the first skill on the virtual environment screen in response to the first target locking operation on the first skill, where the lock indicator is used to lock the n second virtual objects located in the release area of the first skill;
  • a control module configured to, in response to the first release operation of the first skill, control the m virtual flying objects released by the first virtual object to automatically track the n second virtual objects, where m and n are both integers not less than 2.
  • a computer device includes a processor and a memory, where the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the above-mentioned skill release method for a virtual object.
  • a computer-readable storage medium stores a computer program, and the computer program is loaded and executed by a processor to implement the method for releasing skills of a virtual object as described above.
  • a computer program product comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for releasing skills of virtual objects provided in the above aspects.
  • Fig. 1 shows a structural block diagram of a computer system provided by an exemplary embodiment
  • Fig. 2 shows a flowchart of a method for releasing skills of a virtual object provided by an exemplary embodiment
  • Fig. 3 shows a schematic diagram of a horizontal version of a virtual environment screen provided by an exemplary embodiment
  • Fig. 4 shows a schematic diagram of a horizontal version of a virtual environment screen provided by another exemplary embodiment
  • Fig. 5 shows a schematic diagram of a horizontal virtual environment screen provided by another exemplary embodiment
  • Fig. 6 shows a schematic diagram of a horizontal virtual environment screen provided by another exemplary embodiment
  • Fig. 7 shows a schematic diagram of a horizontal virtual environment screen provided by another exemplary embodiment
  • Fig. 8 shows a schematic diagram of a horizontal virtual environment screen provided by another exemplary embodiment
  • Fig. 9 shows a flowchart of a method for releasing skills of a virtual object provided by another exemplary embodiment
  • Fig. 10 shows a flowchart of a method for releasing skills of a virtual object provided by another exemplary embodiment
  • Fig. 11 shows a structural block diagram of a device for releasing skills of virtual objects provided by an exemplary embodiment of the present application
  • Fig. 12 shows a schematic structural diagram of a computer device provided by an exemplary embodiment of the present application.
  • Lock indicator: used to lock the virtual objects in the release area of the first skill.
  • Locking refers to the real-time detection and real-time positioning of virtual objects located in the release area of the first skill. Real-time positioning ensures that the m virtual flying objects released by the first virtual object can automatically track the n second virtual objects so that their attribute values are reduced.
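The real-time detection and positioning described here amounts to re-reading each locked target's position every frame and steering the projectile toward it. A minimal per-frame update, with hypothetical names and a simple constant-speed model (not taken from the patent):

```python
import math

def track_step(proj, target, speed, dt):
    # Move one virtual flying object toward its locked target for one
    # frame. Re-reading the target position every call is what lets a
    # moving (dynamic) target still be tracked. Returns True on hit.
    dx, dy = target["x"] - proj["x"], target["y"] - proj["y"]
    dist = math.hypot(dx, dy)
    if dist <= speed * dt:
        # Close enough to reach this frame: snap onto the target.
        proj["x"], proj["y"] = target["x"], target["y"]
        return True
    proj["x"] += dx / dist * speed * dt
    proj["y"] += dy / dist * speed * dt
    return False
```

Calling `track_step` once per frame until it returns True drives the projectile onto the target, whether the target is static or moving.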
  • the lock indicator is represented by geometric shapes such as a circle, a sector, and a rectangle.
  • the lock indicator is displayed invisibly on the horizontal virtual environment screen, that is, the lock indicator cannot be seen by the naked eye; optionally, the lock indicator is displayed visibly on the horizontal virtual environment screen, that is, the user can visually see the lock indicator.
  • the lock indicator can lock a static virtual object in the release area of the first skill, that is, the lock indicator detects and locates the static virtual object and automatically tracks it after locking, so that the attribute value of the static virtual object is reduced.
  • the lock indicator can lock a dynamic virtual object in the release area of the first skill, that is, the lock indicator detects and locates the dynamic virtual object in real time and then automatically tracks it, so that the attribute value of the dynamic virtual object is reduced.
  • the angle between the lock indicator and the horizontal viewing angle of the virtual environment is a right angle, that is, the display plane of the lock indicator is perpendicular to the viewing-angle direction; optionally, the angle between the lock indicator and the horizontal viewing angle of the virtual environment is an acute angle.
  • when the lock indicator appears as a circular area on the horizontal virtual environment screen, the circular area is perpendicular to the horizontal viewing-angle direction of the virtual environment; optionally, when the lock indicator is displayed as a circular area on the horizontal virtual environment screen, the included angle between the circular area and the horizontal viewing angle of the virtual environment is an acute angle.
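The geometric shapes listed above (circle, sector, rectangle) each imply a different point-in-region test when deciding which virtual objects fall inside the lock indicator. Two of these tests sketched in Python; the names and the angle convention (radians, measured with `atan2`) are assumptions:

```python
import math

def in_circle(p, center, radius):
    # Circular lock region: inside if within the radius of the center.
    return math.hypot(p[0] - center[0], p[1] - center[1]) <= radius

def in_sector(p, center, radius, a0, a1):
    # Sector lock region: within the radius and between the two
    # boundary angles (radians, a0 <= a1, both within [-pi, pi]).
    if not in_circle(p, center, radius):
        return False
    ang = math.atan2(p[1] - center[1], p[0] - center[0])
    return a0 <= ang <= a1
```

A rectangular region would be a simple min/max comparison on both coordinates.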
  • the attribute value of the virtual object includes, for example, the life value, the energy value for releasing skills, the defense power, the attack power, the movement speed, etc.
  • Horizontal version game refers to the game that controls the movement route of the game character on the horizontal screen. In all or most of the screens in the horizontal version of the game, the movement route of the game character is along the horizontal direction.
  • By content, horizontal version games are divided into horizontal-version clearance, adventure, competition, strategy, and other games; by technology, horizontal version games are divided into two-dimensional (2D) and three-dimensional (3D) horizontal version games.
  • Virtual Environment is the virtual environment displayed (or provided) by the application when it is run on the terminal.
  • the virtual environment can be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment.
  • the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are described with an example that the virtual environment is a three-dimensional virtual environment.
  • the virtual environment may provide a fighting environment for virtual objects.
  • Exemplarily, in a side-scrolling game, one or two virtual objects fight a single match in the virtual environment, and the virtual objects avoid attacks launched by enemy units and dangers in the virtual environment (such as poisonous gas circles, swamps, etc.) in order to survive in the virtual environment; when the life value of a virtual object in the virtual environment reaches zero, its life in the virtual environment ends, and the virtual object that successfully completes the route in the checkpoint is the winner.
  • Each client can control one or more virtual objects in the virtual environment.
  • the competitive mode of the battle may include a single player battle mode, a two-person team battle mode, or a multiplayer large group battle mode, and this embodiment does not limit the battle mode.
  • the horizontal virtual environment image is an image in which the virtual environment is observed from the horizontal-screen perspective of the avatar; for example, in a shooting game, the virtual environment is observed from a direction perpendicular to the right side of the avatar.
  • Virtual object refers to the movable object in the virtual environment.
  • the movable object may be a virtual character, a virtual animal, an animation character, etc., for example: a character or an animal displayed in a three-dimensional virtual environment.
  • the virtual object is a three-dimensional model created based on animation skeleton technology.
  • Each virtual object has its own shape and volume in the three-dimensional virtual environment, and occupies a part of the space in the three-dimensional virtual environment.
  • virtual objects can be classified into different role types based on attribute values or skills possessed. For example, if the target virtual object has ranged output skills, its corresponding role type can be a shooter; if it has auxiliary skills, its corresponding role type can be an assistant.
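The role-type classification described above is essentially a mapping from possessed skills to role types, where one object may match several types at once. A hypothetical sketch (the skill and type names are illustrative, not from the source):

```python
def role_types(skills):
    # Map the skills a virtual object possesses to role types; one
    # object may correspond to several role types at once.
    types = []
    if "ranged_output" in skills:
        types.append("shooter")
    if "support" in skills:
        types.append("assistant")
    return types
```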
  • the same virtual object can correspond to multiple character types.
  • Virtual props refer to props that virtual objects can use in the virtual environment, including virtual weapons that can change the attribute values of other virtual objects, supply props such as bullets, defensive props such as shields, armor, and armored vehicles, and virtual beams, virtual shock waves, etc.
  • the virtual flying object belongs to the special props in the virtual props.
  • the virtual flying object can be a virtual prop with flying properties, a virtual prop thrown by a virtual object, or a virtual prop launched when a virtual object shoots.
  • It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data used for analysis, stored data, displayed data, etc.), and signals involved in this application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of relevant data need to comply with the relevant laws, regulations, and standards of the relevant countries and regions.
  • Fig. 1 shows a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
  • the computer system 100 includes: a first terminal 120 , a server 140 and a second terminal 160 .
  • the first terminal 120 is installed and runs an application program supporting a virtual environment.
  • the application program can be any one of a three-dimensional map program, horizontal-version shooting, horizontal-version adventure, horizontal-version clearance, horizontal-version strategy, a virtual reality (VR) application, or an augmented reality (AR) program.
  • the first terminal 120 is the terminal used by the first user.
  • the first user uses the first terminal 120 to control the first virtual object located in the virtual environment to perform activities.
  • the activities include but are not limited to at least one of: adjusting body posture, walking, running, jumping, riding, driving, aiming, picking up, using throwing props, and attacking other virtual objects.
  • the first virtual object is a first virtual character, such as a simulated character object or an anime character object.
  • the first user controls the first virtual character to perform activities through UI controls on the virtual environment screen.
  • the first terminal 120 is connected to the server 140 through a wireless network or a wired network.
  • the server 140 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server 140 includes a processor 144 and a memory 142, and the memory 142 further includes a receiving module 1421, a control module 1422, and a sending module 1423, and the receiving module 1421 is used to receive a request sent by a client, such as a teaming request; the control module 1422 It is used to control the rendering of the virtual environment picture; the sending module 1423 is used to send a response to the client, such as sending a prompt message of team formation success to the client.
  • the server 140 is used to provide background services for applications supporting the 3D virtual environment.
  • the server 140 undertakes the main calculation work, and the first terminal 120 and the second terminal 160 undertake the secondary calculation work; or, the server 140 undertakes the secondary calculation work, and the first terminal 120 and the second terminal 160 undertake the main calculation work; Alternatively, the server 140, the first terminal 120, and the second terminal 160 use a distributed computing architecture to perform collaborative computing.
  • the second terminal 160 is installed and runs an application program supporting a virtual environment.
  • the application program can be any one of a three-dimensional map program, horizontal-version shooting, horizontal-version adventure, horizontal-version clearance, horizontal-version strategy, a virtual reality application, or an augmented reality program.
  • the second terminal 160 is a terminal used by the second user.
  • the second user uses the second terminal 160 to control the second virtual object located in the virtual environment to perform activities.
  • the activities include but are not limited to at least one of: adjusting body posture, walking, running, jumping, riding, driving, aiming, picking up, using throwing props, and attacking other virtual objects.
  • the second virtual object is a second virtual character, such as a simulated character object or an anime character object.
  • first virtual object and the second virtual object are in the same virtual environment.
  • first virtual object and the second virtual object may belong to the same team, the same organization, the same camp, have friendship or have temporary communication rights.
  • first virtual object and the second virtual object may also belong to different camps, different teams, different organizations or have a hostile relationship.
  • the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS).
  • the first terminal 120 may generally refer to one of the multiple terminals
  • the second terminal 160 may generally refer to one of the multiple terminals.
  • This embodiment only uses the first terminal 120 and the second terminal 160 as an example for illustration.
  • the device types of the first terminal 120 and the second terminal 160 are the same or different, and the device types include at least one of: smart phones, tablet computers, e-book readers, MP3 players, MP4 players, laptop portable computers, and desktop computers.
  • the following embodiments are described by taking a terminal including a smart phone as an example.
  • the number of the foregoing terminals may be more or less. For example, there may be only one terminal, or there may be dozens or hundreds of terminals, or more.
  • the embodiment of the present application does not limit the number of terminals and device types.
  • Fig. 2 shows a flowchart of a method for releasing skills of a virtual object provided by an exemplary embodiment of the present application.
  • For illustration, the method is described as being executed by the first terminal 120 (or a client in the first terminal 120) shown in FIG. 1.
  • the method includes:
  • Step 220: display a virtual environment screen, where the virtual environment screen displays a first virtual object and at least one second virtual object, and the first virtual object has a first skill;
  • the virtual environment picture is at least one of a horizontal virtual environment picture, a vertical virtual environment picture, a three-dimensional virtual environment picture, and a two-dimensional virtual environment picture.
  • the first virtual object refers to a movable object in the virtual environment, and the movable object may be a virtual character, a virtual animal, an animation character, and the like.
  • the second virtual object refers to another movable object in the virtual environment.
  • the first virtual object and the second virtual object are in the same virtual environment.
  • the first virtual object and the second virtual object may belong to the same team, the same organization, the same camp, have friendship or have temporary communication rights.
  • the first virtual object and the second virtual object may also belong to different camps, different teams, different organizations or have a hostile relationship. In this application, an example is taken in which the first virtual object and the second virtual object have a hostile relationship.
  • the release of the first skill by the first virtual object causes the attribute value of the second virtual object to decrease.
  • the first skill refers to the release ability possessed by the first virtual object in the virtual environment.
  • optionally, the first skill means that the first virtual object has the ability to release skills; optionally, the first skill means that the first virtual object has the ability to release virtual props; optionally, the first skill refers to the ability of the first virtual object to release virtual flying objects through virtual props; optionally, the first skill refers to the ability of the first virtual object to release virtual flying objects through the skill.
  • the first skill refers to the ability of the first virtual object to release the virtual flying object through the skill.
  • the skill is a basic skill of the first virtual object, where a basic skill is a skill that the first virtual object can master without learning (such as a preset ability like a normal attack); optionally, the skill is a learned skill of the first virtual object, where a learned skill is a skill that the first virtual object can master only after learning or picking it up.
  • the first virtual object releases m virtual flying objects by releasing skills.
  • the first skill refers to the ability of the first virtual object to release virtual flying objects through virtual props, for example, the first virtual object tears up the virtual scroll to release m virtual flying objects.
  • the first virtual object releases m virtual flying objects at one time; optionally, the first virtual object releases virtual flying objects in batches, and the number of virtual flying objects released in each batch is at least two.
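Batch release with at least two flying objects per batch can be sketched as follows; folding the remainder into the last batch (rather than ever emitting a batch of one) is an assumed policy, not stated in the source:

```python
def release_in_batches(m, batch_size=2):
    # Split the m virtual flying objects into batches; every batch
    # holds at least two projectiles, with the last batch absorbing
    # any remainder rather than falling below two.
    batches = []
    while m > 0:
        take = batch_size if m >= batch_size * 2 else m
        batches.append(take)
        m -= take
    return batches
```

For example, 5 projectiles with a batch size of 2 yield batches of 2 and 3, never a trailing batch of one.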
  • the first skill of the first virtual object is manifested as the first virtual object simultaneously releasing m virtual flying objects to reduce the attribute values of the second virtual objects in an automatic tracking manner.
  • the horizontal version of the virtual environment picture is obtained by collecting the field of view acquired by the first virtual object based on the horizontal version of the viewing angle in the virtual environment, and displaying the field of view on the terminal.
  • a virtual environment is a virtual environment that is displayed (or provided) by an application when it is run on a terminal.
  • the virtual environment can be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment.
  • FIG. 3 shows a horizontal virtual environment screen of an exemplary embodiment of the present application.
  • the horizontal virtual environment screen includes a first virtual object 301 and one second virtual object 302 of the at least one second virtual object.
  • the second virtual object 302 and the first virtual object 301 in FIG. 3 move along the horizontal direction until entering the level of the horizontal virtual environment picture shown in FIG. 3 .
  • Step 240: in response to the first target locking operation on the first skill, display the lock indicator of the first skill on the virtual environment screen, where the lock indicator is used to lock the n second virtual objects located in the release area of the first skill;
  • optionally, the angle between the lock indicator and the horizontal viewing angle of the virtual environment is a right angle, that is, the display plane of the lock indicator is perpendicular to the viewing-angle direction; optionally, the angle between the lock indicator and the horizontal viewing angle of the virtual environment is an acute angle.
  • when the lock indicator appears as a circular area in the horizontal virtual environment screen, the circular area is perpendicular to the horizontal viewing angle of the virtual environment; optionally, when the lock indicator appears as a circular area in the horizontal virtual environment screen, the included angle between the circular area and the horizontal viewing angle of the virtual environment is an acute angle.
  • the first target locking operation is used to lock n second virtual objects located in the release area of the first skill.
  • the first target locking operation is a user's touch-and-release operation on the horizontal virtual environment screen: the terminal determines the first anchor point of the lock indicator of the first skill on the horizontal virtual environment screen, and generates the lock indicator based on the first anchor point; optionally, the first target locking operation is a locking operation performed by the user on a peripheral component connected to the terminal.
  • a first anchor point is determined on the virtual environment screen, and a lock indicator is generated based on the first anchor point of the lock indicator.
  • the release area of the first skill refers to the action area of the first skill in the virtual environment interface.
  • the release area of the first skill is displayed as a closed geometric figure, and the closed geometric figure encloses the n second virtual objects; optionally, the release area of the first skill is displayed as several closed geometric figures, each of which encloses several of the n second virtual objects.
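Displaying the release area as a closed figure that encloses the n locked targets can be modeled, for instance, as the bounding box of their positions plus a margin. This construction is an illustrative choice; the patent does not prescribe how the enclosing figure is computed:

```python
def enclosing_box(targets, pad=0.0):
    # One closed geometric figure enclosing all n locked targets:
    # the padded axis-aligned bounding box of their positions,
    # returned as (xmin, ymin, xmax, ymax).
    xs = [t["x"] for t in targets]
    ys = [t["y"] for t in targets]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)
```

The variant with several figures would partition the targets (e.g. by clustering) and compute one such box per group.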
  • FIG. 4 is a schematic diagram of a horizontal version of a virtual environment screen according to an exemplary embodiment of the present application.
  • the horizontal version of the virtual environment screen includes a first virtual object 301 and a second virtual object 302 in at least one second virtual object; the lock indicator 401 in FIG. 4 is displayed as a circle.
  • the second virtual object may be a three-dimensional model created based on animation skeleton technology. Each second virtual object has its own shape and volume in the three-dimensional virtual environment, and occupies a part of the space in the three-dimensional virtual environment.
  • optionally, the three-dimensional virtual environment picture is collected from the three-dimensional virtual environment through the camera model, and there is an angle between the plane where the release area of the first skill is located and the plane where the three-dimensional virtual environment picture is located. According to the area where the lock indicator of the first skill lies on the three-dimensional virtual environment screen, the release area of the first skill in the three-dimensional virtual environment is determined.
  • Step 260 in response to the first release operation of the first skill, control the first virtual object to release m virtual flying objects to automatically track n second virtual objects.
  • both m and n are integers not less than 2.
  • the m virtual flying objects adjust the attribute values of the n second virtual objects, and the adjustment methods include decreasing attribute values and increasing attribute values.
  • the attribute value includes but is not limited to at least one of the life value of the second virtual object, the energy value for releasing skills, defense power, attack power, and moving speed.
  • in response to the first release operation of the first skill, the first virtual object is controlled to simultaneously release m virtual flying objects that reduce the attribute values of the n second virtual objects in an automatic tracking manner; or, in response to the first release operation of the first skill, the first virtual object is controlled to release m virtual flying objects that increase the attribute values of the n second virtual objects in an automatic tracking manner.
  • the m virtual flying objects are simultaneously released by the first virtual object, or the m virtual flying objects are released sequentially by the first virtual object.
  • for example, the m virtual flying objects are m virtual flying swords, and the first virtual object releases one virtual flying sword every 0.3 seconds.
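The sequential release described above can be sketched as a simple timing schedule. This is a minimal illustration, not the patent's implementation; the function name and the fixed start time are assumptions, and only the 0.3-second interval comes from the example.

```python
RELEASE_INTERVAL = 0.3  # seconds between consecutive releases (from the example)

def release_times(m: int, start: float = 0.0) -> list[float]:
    """Return the timestamp at which each of the m virtual flying swords
    is released, one every RELEASE_INTERVAL seconds."""
    return [start + i * RELEASE_INTERVAL for i in range(m)]
```

For simultaneous release, the same schedule degenerates to every entry equaling `start`.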
  • virtual flying objects refer to the virtual props of the first skill released by the virtual object in the virtual environment, including props that can change the attribute value of the second virtual object, such as bullets, virtual flying shields, virtual beams, and virtual shock waves.
  • optionally, the virtual flying object is a virtual prop thrown by hand when the first virtual object releases a skill, such as a virtual flying dagger, virtual flying knife, virtual flying sword, virtual flying axe, grenade, flash bomb, or smoke bomb.
  • optionally, the m virtual flying objects are displayed with identical appearance characteristics, for example, as m identical virtual flying swords; optionally, the m virtual flying objects are displayed with entirely different appearance characteristics, for example, as m different virtual flying swords; optionally, the m virtual flying objects are displayed with partly identical appearance characteristics, for example, as m/2 first virtual flying swords and m/2 second virtual flying swords.
  • the m virtual flying objects have the ability to automatically track the n second virtual objects, that is, after the first virtual object releases the m virtual flying objects, the m virtual flying objects automatically track the n second virtual objects without the user manipulating them again.
  • controlling the m virtual flying objects released by the first virtual object to automatically track n second virtual objects includes at least the following two methods:
  • each of the n second virtual objects corresponds to at least one virtual flying object among the m virtual flying objects.
  • in a case where i is not greater than n, the i-th virtual flying object among the m virtual flying objects is controlled to automatically track the i-th virtual object among the n second virtual objects.
  • in a case where m is not less than n, i is greater than n, and i is not greater than m, the i-th virtual flying object among the m virtual flying objects is controlled to automatically track the (i-n)-th virtual object among the n second virtual objects.
  • the m virtual flying objects adjust the attribute values of the n second virtual objects.
  • for example, when m is not less than n, the m virtual flying objects simultaneously released by the first virtual object are controlled to respectively reduce, in an automatic tracking manner, the attribute value of one second virtual object among the n second virtual objects.
  • in a case where i is not greater than n, the i-th virtual flying object among the m virtual flying objects is controlled to reduce, in an automatic tracking manner, the attribute value of the i-th virtual object among the n second virtual objects; in a case where m is not less than n, i is greater than n, and i is not greater than m, the i-th virtual flying object among the m virtual flying objects is controlled to reduce, in an automatic tracking manner, the attribute value of the (i-n)-th virtual object among the n second virtual objects.
  • each of the n virtual objects corresponds to at least one virtual flying object among the m virtual flying objects.
  • the m virtual flying objects released by the first virtual object are controlled to automatically track m second virtual objects among the n second virtual objects.
  • the m second virtual objects are in one-to-one correspondence with the m virtual flying objects.
  • the m virtual flying objects are sequentially assigned to m second virtual objects among the n second virtual objects until the assignment of the m virtual flying objects is completed. It is worth noting that, in this embodiment, the m second virtual objects are in one-to-one correspondence with the m virtual flying objects.
  • the m virtual flying objects adjust the attribute values of the second virtual objects; for example, when m is less than n, the m virtual flying objects simultaneously released by the first virtual object are controlled to reduce, in an automatic tracking manner, the attribute values of m second virtual objects among the n second virtual objects.
  • FIG. 5 is a schematic diagram of a horizontal virtual environment screen in an exemplary embodiment of the present application.
  • FIG. 5 shows that five virtual flying objects reduce the attribute values of three second virtual objects in an automatic tracking manner.
  • the horizontal version of the virtual environment screen in FIG. 5 includes a first virtual object 301 and a second virtual object 302 in at least one second virtual object; five virtual flying objects 501 reduce the attribute values of three locked second virtual objects 402 in an automatic tracking manner. The distribution is shown in FIG. 5: from left to right, two virtual flying objects 501 are assigned to the first locked second virtual object 402, two virtual flying objects 501 are assigned to the second locked second virtual object 402, and one virtual flying object 501 is assigned to the third locked second virtual object 402.
  • each virtual object in the n second virtual objects corresponds to at least one virtual flying object in the m virtual flying objects; when m is smaller than n, m in the n second virtual objects The second virtual objects are in one-to-one correspondence with the m virtual flying objects.
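The two allocation cases above can be sketched in a few lines. The round-robin generalization for i greater than 2n, the function name, and zero-based indexing are assumptions of this sketch, not claims of the patent.

```python
def assign_flying_objects(m: int, n: int) -> list[int]:
    """For each of the m virtual flying objects (index 0..m-1), return the
    index of the second virtual object it automatically tracks.

    When m >= n, flying object i tracks target i for i < n and target
    i - n for n <= i < 2n, generalized here to round-robin (i % n);
    when m < n, only the first m targets each receive one flying object."""
    if m >= n:
        return [i % n for i in range(m)]
    return list(range(m))  # one-to-one with m of the n targets
```

With five flying objects and three targets, the resulting per-target counts are 2, 2, and 1, matching the distribution shown in FIG. 5.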
  • the above method improves the aiming speed of multiple virtual props attacking multiple virtual objects at the same time, greatly reduces the difficulty of the player's operation, and improves the efficiency of the user's human-computer interaction.
  • step 270 and step 280 are also included.
  • in this embodiment, the method is executed by the first terminal 120 (or the client in the first terminal 120) shown in FIG. 1, which is taken as an example for illustration.
  • Step 270 in response to the second target locking operation of the first skill, lock the target virtual object in the at least one second virtual object on the virtual environment screen;
  • the second target locking operation is used for the first skill to lock one of the second virtual objects in the release area of the first skill.
  • optionally, the second target locking operation is a user's touch operation on the horizontal version of the virtual environment screen;
  • the terminal determines the second positioning point of the first skill on the horizontal version of the virtual environment screen, and based on the second positioning point, the terminal determines the action object of the first skill, that is, the target virtual object; optionally, the second target locking operation is the user's locking operation on a peripheral component connected to the terminal;
  • the second anchor point is determined on the screen, and based on the second anchor point, the terminal determines the action object of the first skill, that is, the target virtual object.
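Determining the action object from the second anchor point amounts to a hit test against the on-screen objects. The following is an illustrative sketch; the dictionary keys (`x`, `y`, `hit_radius`) and the circular hit area are assumptions, not part of the patent.

```python
import math

def object_under_point(px: float, py: float, objects: list[dict]):
    """Return the first second virtual object whose hit circle contains
    the second anchor point (px, py), or None when nothing is pressed.
    Each object is a dict with assumed keys 'x', 'y', 'hit_radius'."""
    for obj in objects:
        if math.hypot(obj["x"] - px, obj["y"] - py) <= obj["hit_radius"]:
            return obj
    return None
```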
  • the target virtual object is a virtual object selected by the user from at least one second virtual object on the horizontal virtual environment screen, and the target virtual object is an action object for the user to release the first skill.
  • optionally, a second anchor point of the first skill is determined, and the second anchor point is used to lock the target virtual object in the at least one second virtual object on the horizontal version of the virtual environment screen.
  • Step 280 in response to the second release operation of the first skill, control the m virtual flying objects simultaneously released by the first virtual object to automatically track the target virtual object.
  • the second release operation refers to an operation for releasing the first skill.
  • the attribute value includes but is not limited to at least one of the life value of the second virtual object, the energy value for releasing skills, defense power, attack power, and moving speed.
  • the m virtual flying objects adjust the attribute values of the second virtual objects; for example, in response to the second release operation of the first skill, the m virtual flying objects simultaneously released by the first virtual object are controlled to reduce the attribute value of the target virtual object in an automatic tracking manner.
  • FIG. 6 is a schematic diagram of a horizontal version of a virtual environment screen in an exemplary embodiment of the present application.
  • FIG. 6 shows that five virtual flying objects reduce the attribute value of the target virtual object in an automatic tracking manner.
  • the horizontal version of the virtual environment screen includes a first virtual object 301 and a second virtual object 302 in at least one second virtual object.
  • five virtual flying objects 501 reduce the attribute value of the locked target virtual object 602 in an automatic tracking manner.
  • the distribution is shown in FIG. 6: all five virtual flying objects 501 are assigned to the locked target virtual object 602.
  • optionally, the virtual environment screen further includes at least two candidate lock indicators of the first skill, the shapes of the candidate lock indicators being different from each other; in response to a selection operation on a target lock indicator among the candidate lock indicators, the lock indicator of the first skill is determined.
  • the shape of the candidate lock indicator may be at least one of a circle, a rectangle, a regular hexagon, a regular pentagon, and an ellipse.
  • the present application does not specifically limit the shape of the candidate lock indicator.
  • for example, a candidate lock indicator 701 of the first skill is displayed on the upper right of the virtual environment screen, and the shapes of the candidate lock indicator 701 include a circle, a rectangle, and a regular hexagon. When the circle is clicked, the release area of the first skill is the circle; when the rectangle is clicked, the release area of the first skill is the rectangle.
  • the rotation angle of the target lock indicator can be set by the user.
  • optionally, in response to detecting a touch-down operation on the virtual environment screen, a first positioning point of the lock indicator of the first skill on the horizontal version of the virtual environment screen is determined; based on the first positioning point of the lock indicator, the lock indicator is displayed on the virtual environment screen; in response to detecting a sliding operation on the virtual environment screen, the target second virtual object among the n second virtual objects is determined according to the sliding end point of the sliding operation; in response to detecting a touch-out operation on the virtual environment screen, the m virtual flying objects released by the first virtual object are controlled to automatically track the n second virtual objects of the same type as the target second virtual object.
  • exemplarily, as shown in FIG. 8, when the touch-down operation on the virtual environment screen is detected, the first positioning point 801 of the lock indicator of the first skill on the horizontal virtual environment screen is determined, and a circular lock indicator is displayed with the first positioning point 801 as the center of the circle.
  • a sliding operation on the virtual environment screen is detected, and a sliding end point 802 of the sliding operation is obtained.
  • the sliding end point 802 points to the target second virtual object 803, and the m virtual flying objects released by the first virtual object are controlled to automatically track the n second virtual objects of the same type as the target second virtual object 803.
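Selecting every second virtual object of the same type as the one under the sliding end point is a simple filter. This is an illustrative sketch; the dictionary-based object shape and the `kind` key are assumptions, not the patent's data model.

```python
def targets_of_same_type(target: dict, candidates: list[dict]) -> list[dict]:
    """Return every candidate second virtual object whose type matches the
    target selected by the sliding end point (the target itself included),
    i.e. the n objects the flying objects will automatically track."""
    return [c for c in candidates if c["kind"] == target["kind"]]
```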
  • the terminal realizes the locking of the target virtual object and the allocation of the virtual flying objects; the user can quickly lock the target virtual object by directly touching it, which improves the user's locking efficiency for the target virtual object and improves the user's human-computer interaction experience.
  • FIG. 9 is a flowchart of a method for releasing the skill of a virtual object in an exemplary embodiment of the present application. In this embodiment, the method is executed by the first terminal 120 (or the client in the first terminal 120) shown in FIG. 1, which is taken as an example for illustration. The method includes:
  • Step 910 displaying a virtual environment screen, where a first virtual object and at least one second virtual object are displayed on the virtual environment screen, and the first virtual object has a first skill
  • the first skill refers to the skill released by the first virtual object in the virtual environment.
  • the first skill of the first virtual object is displayed as the first virtual object releasing m virtual flying objects that reduce the attribute values of the second virtual objects in an automatic tracking manner.
  • the horizontal virtual environment screen is a user screen that displays the virtual environment on the terminal.
  • a virtual environment is a virtual environment that is displayed (or provided) by an application when it is run on a terminal.
  • the virtual environment can be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment.
  • Step 920, in response to detecting the touch-down operation on the virtual environment screen, determine the first positioning point of the lock indicator of the first skill on the virtual environment screen, where the lock indicator is used to lock the n second virtual objects located in the release area of the first skill;
  • optionally, the terminal detects the electric potential of the screen; when the terminal detects a potential change caused by the user's touch-down operation on the screen, that is, when a touch-down operation on the horizontal version of the virtual environment screen is detected, the terminal determines the first anchor point of the lock indicator of the first skill on the virtual environment screen. The first positioning point is used for positioning the lock indicator of the first skill.
  • the first positioning point is inside the locking indicator, that is, the first positioning point is inside or on the edge of the closed geometric figure of the locking indicator.
  • the locking indicator is a closed circle shape
  • the first anchor point is the center of the circle.
  • optionally, the first anchor point is outside the lock indicator, that is, the first anchor point is outside the closed geometric figure of the lock indicator; based on a preset positional relationship between the first anchor point and the lock indicator, the terminal can determine the position of the lock indicator according to the position of the first positioning point.
  • Step 930 based on the first positioning point of the lock indicator, display the lock indicator on the virtual environment screen;
  • optionally, the lock indicator is a circle, and the first positioning point serves as the center of the lock indicator, which is displayed on the horizontal version of the virtual environment screen; optionally, the lock indicator is an ellipse, and the first positioning point serves as a special position point of the lock indicator; optionally, the lock indicator is a square, and the first positioning point serves as the center of the lock indicator; optionally, the lock indicator is fan-shaped, and the first anchor point serves as a special position point of the lock indicator; optionally, the lock indicator is a rectangle, and the first anchor point serves as a vertex of the lock indicator.
  • the relationship between the first positioning point and the locking indicator is not limited, as long as there is a corresponding relationship between the first positioning point and the locking indicator.
  • the lock indicator is a closed circle
  • the first positioning point is the center of the circle
  • the lock indicator 401 locks three second virtual objects 302 .
  • for example, the lock indicator is circular, and the lock indicator is displayed on the horizontal virtual environment screen by using the first positioning point as the center of the lock indicator and a preset radius as the radius of the lock indicator.
  • optionally, the preset radius is the radius of the lock indicator preset by the application, that is, the user cannot change the radius of the lock indicator; optionally, the preset radius is a radius of the lock indicator preset by the user on the client, and the user can change the radius of the lock indicator.
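Deciding which second virtual objects the circular lock indicator locks reduces to a point-in-circle test against its center and preset radius. A minimal sketch, assuming objects are given as (x, y) screen positions:

```python
import math

def locked_objects(cx: float, cy: float, radius: float, objects):
    """Return the (x, y) positions of the second virtual objects lying
    inside or on the edge of the circular lock indicator centered at the
    first positioning point (cx, cy) with the preset radius."""
    return [(x, y) for (x, y) in objects
            if math.hypot(x - cx, y - cy) <= radius]
```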
  • Step 940, in response to detecting a touch-and-slide operation on the virtual environment screen, control the position of the lock indicator to change
  • after the terminal displays the lock indicator, the terminal detects a touch-and-slide operation on the horizontal virtual environment screen and controls the position of the lock indicator to change. For example, in response to the terminal detecting a leftward sliding operation on the horizontal virtual environment screen, the terminal controls the lock indicator to move leftward along with the sliding operation.
  • the terminal may perform step 950 after performing step 940, or directly perform step 950 without performing step 940, which is not limited in this application.
  • Step 950 in response to detecting the touch-out operation on the virtual environment screen, controlling the m virtual flying objects released by the first virtual object to automatically track n second virtual objects.
  • both m and n are integers not less than 2.
  • the m virtual flying objects adjust the attribute values of the n second virtual objects; for example, in response to detecting the touch-out operation on the horizontal version of the virtual environment screen, the m virtual flying objects simultaneously released by the first virtual object are controlled to reduce the attribute values of the n second virtual objects in an automatic tracking manner.
  • in response to detecting the touch-out operation on the horizontal version of the virtual environment screen, the terminal controls the m virtual flying objects released by the first virtual object to reduce the attribute values of the n second virtual objects in an automatic tracking manner.
  • the radius of the lock indicator is changed in response to a change in touch pressure on the first anchor point.
  • the terminal detects the user's touch pressure on the first positioning point, and changes the radius of the lock indicator based on the pressure change.
  • the terminal detects that the user's touch pressure on the first positioning point increases, so that the radius of the lock indicator increases until the radius of the lock indicator reaches a preset maximum radius.
  • the terminal detects that the user's touch pressure on the first positioning point decreases, so that the radius of the lock indicator decreases until the radius of the lock indicator reaches a preset minimum radius.
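The pressure-driven radius described above can be sketched as a clamped mapping from touch pressure to radius. All numeric constants and the linear mapping are illustrative assumptions; the patent only specifies that the radius grows and shrinks with pressure between a preset minimum and maximum.

```python
def indicator_radius(pressure: float,
                     base_radius: float = 60.0,
                     gain: float = 40.0,
                     min_radius: float = 30.0,
                     max_radius: float = 150.0) -> float:
    """Map the touch pressure on the first positioning point to the lock
    indicator radius, clamped to the preset minimum and maximum radii."""
    return max(min_radius, min(max_radius, base_radius + gain * pressure))
```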
  • the locking indicator appears as a closed geometric figure set based on the first anchor point of the horizontal version of the virtual environment picture.
  • the terminal determines the position of the lock indicator through the first positioning point and further generates the range of the lock indicator, which simplifies the generation of the lock indicator, so that the obtained range of the lock indicator includes the second virtual objects that the user wants to attack.
  • the above method improves the aiming speed of multiple virtual props attacking multiple virtual objects at the same time, greatly reduces the difficulty of the player's operation, and improves the efficiency of the user's human-computer interaction.
  • FIG. 10 is a flowchart of a method for releasing skills of a virtual object in an exemplary embodiment of the present application.
  • in this embodiment, the method is executed by the first terminal 120 (or the client in the first terminal 120) shown in FIG. 1, which is taken as an example for illustration.
  • the method includes:
  • Step 1001 performing a touch operation on the horizontal virtual environment screen
  • in response to the user performing a touch operation on the horizontal version of the virtual environment screen, the terminal generates a touch point on the horizontal version of the virtual environment screen.
  • Step 1002 whether there is an enemy within the radius set by the touch point
  • the terminal judges whether there is an enemy within the radius set based on the touch point, and if there is an enemy within the radius, execute step 1004; if there is no enemy within the radius, execute step 1003.
  • Step 1003 whether the touch point presses the enemy
  • the terminal judges whether the user presses the enemy at the touch point, if the user presses the enemy at the touch point, execute step 1005, and if the user does not press the enemy at the touch point, execute step 1001.
  • Step 1004 lock all enemies within the range
  • based on the touch point and the set radius, the terminal generates a lock range and locks all enemies within the lock range.
  • Step 1005 directly lock the current enemy
  • based on the judgment that the user presses the enemy at the touch point, the terminal directly locks on the current enemy.
  • Step 1006 whether the first virtual object holds a weapon
  • the terminal judges whether the first virtual object in the horizontal version of the virtual environment screen holds a weapon, and if the first virtual object holds a weapon, execute step 1007 , and if the first virtual object does not hold a weapon, execute step 1008 .
  • Step 1007 distribute and launch all weapons equally to locked enemies
  • in response to the first virtual object holding a weapon, the terminal evenly distributes and fires all the weapons held by the first virtual object at the locked enemies.
  • the algorithm of equal distribution is: traverse the enemies and distribute the weapons to the traversed enemies in turn; when the traversal reaches the end, traverse from the beginning again until all weapons are distributed. As a result, if the number of weapons is less than the number of enemies, some enemies are not targeted by any weapon; if the number of weapons is greater than the number of enemies, some enemies are assigned multiple weapons.
  • Step 1008 cancel firing the weapon.
  • the terminal cancels firing the weapon.
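The flow of steps 1002 through 1008 can be sketched end to end. This is an illustrative simplification: enemies are modeled as (x, y) points, "pressing an enemy" is reduced to the touch landing exactly on its position, and the function name and return shape are assumptions.

```python
def resolve_skill(touch, enemies, lock_radius, holds_weapon, weapon_count):
    """Sketch of steps 1002-1008: lock every enemy within the radius set
    around the touch point; otherwise lock the enemy pressed directly;
    then distribute all weapons evenly (round-robin) over the locked
    enemies, or cancel firing when no weapon is held."""
    tx, ty = touch
    in_range = [e for e in enemies
                if (e[0] - tx) ** 2 + (e[1] - ty) ** 2 <= lock_radius ** 2]
    if in_range:
        locked = in_range                 # step 1004: lock all in range
    elif touch in enemies:
        locked = [touch]                  # step 1005: lock the pressed enemy
    else:
        return None                       # no lock: back to step 1001
    if not holds_weapon:
        return locked, []                 # step 1008: cancel firing
    # step 1007: traverse the locked enemies, handing out weapons in turn
    return locked, [locked[i % len(locked)] for i in range(weapon_count)]
```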
  • Fig. 11 is a structural block diagram of a device for releasing skills of a virtual object according to an exemplary embodiment of the present application, and the device includes:
  • a display module 111 configured to display a virtual environment screen, where a first virtual object and at least one second virtual object are displayed on the virtual environment screen, and the first virtual object has a first skill;
  • the display module 111 is further configured to display a lock indicator of the first skill on the virtual environment screen in response to the first target locking operation of the first skill, where the lock indicator is used to lock the n second virtual objects located in the release area of the first skill;
  • the control module 112 is configured to control the m virtual flying objects released by the first virtual object to automatically track n second virtual objects in response to the first release operation of the first skill, where m and n are both integers not less than 2.
  • optionally, the control module 112 is further configured to, when m is not less than n, control the m virtual flying objects simultaneously released by the first virtual object to respectively and automatically track the n second virtual objects.
  • optionally, the control module 112 is further configured to, in a case where i is not greater than n, control the i-th virtual flying object among the m virtual flying objects to automatically track the i-th virtual object among the n second virtual objects.
  • optionally, the control module 112 is further configured to, in a case where m is not less than n, i is greater than n, and i is not greater than m, control the i-th virtual flying object among the m virtual flying objects to automatically track the (i-n)-th virtual object among the n second virtual objects.
  • optionally, the control module 112 is further configured to, when m is less than n, control the m virtual flying objects released by the first virtual object to automatically track m second virtual objects among the n second virtual objects, the m second virtual objects being in one-to-one correspondence with the m virtual flying objects.
  • the display module 111 is further configured to lock the target virtual object in the at least one second virtual object on the virtual environment screen in response to the second target locking operation of the first skill.
  • control module 112 is further configured to control the m virtual flying objects simultaneously released by the first virtual object to automatically track the target virtual object in response to the second release operation of the first skill.
  • optionally, the display module 111 is further configured to determine a second anchor point of the first skill in response to detecting a touch-down operation on the virtual environment screen, where the second anchor point is used to lock the target virtual object in the at least one second virtual object on the horizontal version of the virtual environment screen.
  • control module 112 is further configured to control the m virtual flying objects simultaneously released by the first virtual object to automatically track the target virtual object in response to detecting a touch-out operation on the virtual environment screen.
  • the display module 111 is further configured to determine a first positioning point of the lock indicator of the first skill on the virtual environment screen in response to detecting a touch-down operation on the virtual environment screen.
  • the display module 111 is further configured to display the lock indicator on the virtual environment screen based on the first positioning point of the lock indicator.
  • control module 112 is further configured to control the m virtual flying objects simultaneously released by the first virtual object to automatically track the n second virtual flying objects in response to detecting a touch-out operation on the virtual environment screen. object.
  • the locking indicator is circular.
  • the display module 111 is further configured to use the first positioning point as the center of the lock indicator, and display the lock indicator on the virtual environment screen.
  • the display module 111 is further configured to use the first positioning point as the center of the lock indicator and a preset radius as the radius of the lock indicator to display the lock indicator on the virtual environment screen.
  • the display module 111 is further configured to change the radius of the lock indicator in response to a change in the touch pressure on the first positioning point.
  • optionally, the control module 112 is further configured to, in response to the first release operation of the first skill, control the m virtual flying objects simultaneously released by the first virtual object to reduce the attribute values of the n second virtual objects in an automatic tracking manner.
  • optionally, the display module 111 is further configured to display at least two candidate lock indicators of the first skill, the shapes of the candidate lock indicators being different from each other, and to determine the lock indicator of the first skill in response to a selection operation on a target lock indicator among the candidate lock indicators.
  • optionally, the control module 112 is further configured to determine a second anchor point of the first skill in response to detecting a touch-down operation on the virtual environment screen, where the second anchor point is used to lock the target virtual object in the at least one second virtual object on the horizontal virtual environment screen; the display module 111 is further configured to display the lock indicator on the virtual environment screen based on the first anchor point of the lock indicator; the control module 112 is further configured to, in response to detecting a touch-out operation on the virtual environment screen, control the m virtual flying objects released by the first virtual object to reduce, in an automatic tracking manner, the attribute values of the n second virtual objects within the range of the lock indicator.
  • the device improves the aiming speed of multiple virtual props attacking multiple virtual objects at the same time, greatly reduces the difficulty of the player's operation, and improves the user's human-computer interaction efficiency.
  • Fig. 12 shows a structural block diagram of a computer device 1200 provided by an exemplary embodiment of the present application.
  • the computer device 1200 can be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer.
  • the computer device 1200 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, or other names.
  • a computer device 1200 includes: a processor 1201 and a memory 1202 .
  • the processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array).
  • the processor 1201 may also include a main processor and a coprocessor; the main processor is a processor for processing data in a wake-up state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state.
  • the processor 1201 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen.
  • the processor 1201 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
  • Memory 1202 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 1202 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1202 is used to store at least one instruction, and the at least one instruction is to be executed by the processor 1201 to implement the skill release method for a virtual object provided by the method embodiments of this application.
  • the computer device 1200 may optionally further include: a peripheral device interface 1203 and at least one peripheral device.
  • the processor 1201, the memory 1202, and the peripheral device interface 1203 may be connected through buses or signal lines.
  • Each peripheral device can be connected to the peripheral device interface 1203 through a bus, a signal line or a circuit board.
  • the peripheral device may include: at least one of a radio frequency circuit 1204 , a display screen 1205 , a camera component 1206 , an audio circuit 1207 , a positioning component 1208 and a power supply 1209 .
  • the computer device 1200 also includes one or more sensors 1210.
  • the one or more sensors 1210 include, but are not limited to: an acceleration sensor 1211 , a gyro sensor 1212 , a pressure sensor 1213 , an optical sensor 1214 and a proximity sensor 1215 .
  • those skilled in the art can understand that the structure shown in FIG. 12 does not constitute a limitation on the computer device 1200, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • the present application also provides a computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the skill release method for a virtual object provided by the above method embodiments.
  • the present application provides a computer program product or computer program, the computer program product or computer program including computer instructions stored in a computer-readable storage medium.
  • the processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the skill release method for a virtual object provided by the above method embodiments.

Abstract

This application discloses a skill release method, apparatus, device and storage medium for a virtual object, belonging to the field of human-computer interaction. The method includes: displaying a virtual environment picture on which a first virtual object and at least one second virtual object are displayed, the first virtual object having a first skill (220); in response to a first target-locking operation of the first skill, displaying a lock indicator of the first skill on the virtual environment picture, the lock indicator being used to lock n second virtual objects located within the release region of the first skill (240); and in response to a first release operation of the first skill, controlling m virtual flying objects released by the first virtual object to automatically track the n second virtual objects, m and n both being integers no less than 2 (260). The method improves the aiming speed when multiple virtual props attack multiple virtual objects simultaneously, and improves the user's human-computer interaction efficiency.

Description

虚拟对象的技能释放方法、装置、设备、介质及程序产品
本申请要求于2021年05月20日提交的申请号为202110553091.1、发明名称为“虚拟对象的技能释放方法、装置、设备及介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及人机交互领域,特别涉及一种虚拟对象的技能释放方法、装置、设备、介质及程序产品。
背景技术
在横版游戏中,用户经常需要控制多个虚拟道具同时攻击多个虚拟对象。相关技术中,用户控制虚拟战机对多个虚拟生物进行攻击,虚拟战机沿多个固定方向同时发射虚拟导弹,位于固定方向上的虚拟生物会遭受虚拟导弹的攻击。
相关技术中,若用户欲控制虚拟战机攻击当前固定方向之外的目标虚拟生物,则需移动虚拟战机,使得目标虚拟生物处于改变后的虚拟战机的固定方向上,相关技术存在操作复杂问题,用户控制虚拟战机瞄准目标虚拟生物速度缓慢,效率低下。
发明内容
本申请提供了一种虚拟对象的技能释放方法、装置、设备、介质及程序产品,提高了用户的人机交互效率。所述技术方案如下:
根据本申请的一方面,提供了一种虚拟对象的技能释放方法,所述方法包括:
显示虚拟环境画面,所述虚拟环境画面显示有第一虚拟对象和至少一个第二虚拟对象,所述第一虚拟对象具有第一技能;
响应于所述第一技能的第一目标锁定操作,在所述虚拟环境画面上显示所述第一技能的锁定指示器,所述锁定指示器用于对位于所述第一技能的释放区域内的n个第二虚拟对象进行锁定;
响应于所述第一技能的第一释放操作,控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象,m和n均为不小于2的整数。
根据本申请的另一方面,提供了一种虚拟对象的技能释放装置,所述装置包括:
显示模块,用于显示虚拟环境画面,所述虚拟环境画面显示有第一虚拟对象和至少一个第二虚拟对象,所述第一虚拟对象具有第一技能;
显示模块,还用于响应于所述第一技能的第一目标锁定操作,在所述虚拟环境画面上显示所述第一技能的锁定指示器,所述锁定指示器用于对位于所述第一技能的释放区域内的n个第二虚拟对象进行锁定;
控制模块,用于响应于所述第一技能的第一释放操作,控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象,m和n均为不小于2的整数。
根据本申请的一个方面,提供了一种计算机设备,所述计算机设备包括:处理器和存储器,所述存储器存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现如上所述的虚拟对象的技能释放方法。
根据本申请的另一方面,提供了一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序由处理器加载并执行以实现如上所述的虚拟对象的技能释放方法。
根据本申请的另一方面,提供了一种计算机程序产品,所述计算机程序产品包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储 介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述方面提供的虚拟对象的技能释放方法。
本申请实施例提供的技术方案带来的有益效果至少包括:
通过在横版虚拟环境画面上设置第一技能的锁定指示器,锁定处于锁定指示器范围内的第二虚拟对象,之后控制第一虚拟对象同时释放的m个虚拟飞行物自动追踪锁定指示器范围内的n个第二虚拟对象。上述方法提高了多个虚拟道具同时攻击多个虚拟对象的瞄准速度,极大地降低了玩家的操作难度,提高了用户的人机交互效率。
附图说明
图1示出了一个示例性实施例提供的计算机系统的结构框图;
图2示出了一个示例性实施例提供的虚拟对象的技能释放方法的流程图;
图3示出了一个示例性实施例提供的横版虚拟环境画面的示意图;
图4示出了另一个示例性实施例提供的横版虚拟环境画面的示意图;
图5示出了另一个示例性实施例提供的横版虚拟环境画面的示意图;
图6示出了另一个示例性实施例提供的横版虚拟环境画面的示意图;
图7示出了另一个示例性实施例提供的横版虚拟环境画面的示意图;
图8示出了另一个示例性实施例提供的横版虚拟环境画面的示意图;
图9示出了另一个示例性实施例提供的虚拟对象的技能释放方法的流程图;
图10示出了另一个示例性实施例提供的虚拟对象的技能释放方法的流程图;
图11示出了本申请一个示例性实施例提供的虚拟对象的技能释放装置的结构框图;
图12示出了本申请一个示例性实施例提供的一种计算机设备的结构示意图。
具体实施方式
首先,对本申请实施例中涉及的名词进行简单介绍:
锁定指示器:用于对位于第一技能的释放区域内的虚拟对象进行锁定。
锁定是指对位于第一技能的释放区域内的虚拟对象进行实时检测和实时定位,通过实时定位,确保最终第一虚拟对象释放的m个虚拟飞行物能以自动追踪方式对n个第二虚拟对象的属性值进行减少。
可选的,锁定指示器表现为圆形、扇形、矩形等几何形状。可选的,锁定指示器在横版虚拟环境画面采用隐形的方式显示,即锁定指示器无法由肉眼可见;可选的,锁定指示器在横版虚拟环境画面采用显形的方式显示,即用户可直观看出锁定指示器。在一个可选的实施例中,锁定指示器可对第一技能的释放区域内的静态虚拟对象进行锁定,即锁定指示器对静止虚拟对象进行检测和定位,锁定之后以自动追踪的方式对静态虚拟对象的属性值进行减少。在一个可选的实施例中,锁定指示器可对第一技能的释放区域内的动态虚拟对象进行锁定,即锁定指示器对动态虚拟对象进行实时检测和实时定位,之后以自动追踪的方式对动态虚拟对象的属性值进行减少。
在一个实施例中,锁定指示器与虚拟环境的横版视角之间存在夹角。可选的,锁定指示器与虚拟环境的横版视角之间的夹角为直角,也即锁定指示器的显示平面垂直于视角方向;可选的,锁定指示器与虚拟环境的横版视角之间的夹角为锐角。可选的,当锁定指示器在横版虚拟环境画面表现为圆形区域时,该圆形区域与虚拟环境的横版视角方向垂直;可选的,当锁定指示器在横版虚拟环境画面表现为圆形区域时,该圆形区域与虚拟环境的横版视角之间的夹角为锐角。
虚拟对象的属性值:比如,生命值、释放技能的能量值、防御力、攻击力、移动速度等。
横版游戏:是指将游戏角色的移动路线控制在水平画面上的游戏。横版游戏中的全部画面或绝大部分画面中,游戏角色的移动路线都是沿着水平方向进行的。按照内容来分,横版游戏分为横版过关、横版冒险、横版竞技、横版策略等游戏;按照技术来分,横版游戏分为 二维(2D)横版游戏和三维(3D)横版游戏。
虚拟环境:是应用程序在终端上运行时显示(或提供)的虚拟环境。该虚拟环境可以是对真实世界的仿真环境,也可以是半仿真半虚构的环境,还可以是纯虚构的环境。虚拟环境可以是二维虚拟环境、2.5维虚拟环境和三维虚拟环境中的任意一种,本申请对此不加以限定。下述实施例以虚拟环境是三维虚拟环境来举例说明。
可选的,该虚拟环境可以提供虚拟对象的对战环境。示例性的,在横版游戏中,一个或两个虚拟对象在虚拟环境中进行单局对战,虚拟对象通过躲避敌方单位发起的攻击和虚拟环境中存在的危险(比如,毒气圈、沼泽地等)来达到在虚拟环境中存活的目的,当虚拟对象在虚拟环境中的生命值为零时,虚拟对象在虚拟环境中的生命结束,最后顺利通过关卡内的路线的虚拟对象是获胜方。每一个客户端可以控制虚拟环境中的一个或多个虚拟对象。可选地,该对战的竞技模式可以包括单人对战模式、双人小组对战模式或者多人大组对战模式,本实施例对对战模式不加以限定。
示例性的,横版虚拟环境画面是以虚拟角色的横屏视角对虚拟环境进行观察的画面,比如,以虚拟角色的右侧垂直方向对虚拟角色进行观察的射击游戏。
虚拟对象:是指虚拟环境中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物等,比如:在三维虚拟环境中显示的人物、动物。可选地,虚拟对象是基于动画骨骼技术创建的三维立体模型。每个虚拟对象在三维虚拟环境中具有自身的形状和体积,占据三维虚拟环境中的一部分空间。
可选的,虚拟对象基于属性值或拥有的技能可划分为不同的角色类型。比如,目标虚拟对象具有远程输出类型技能,则其对应的角色类型可为射手;若具有辅助型技能,则其对应的角色类型可为辅助。可选的,同一虚拟对象可对应多种角色类型。
虚拟道具:是指虚拟对象在虚拟环境中能够使用的道具,包括能够改变其他虚拟对象的属性值的虚拟武器,子弹等补给道具,盾牌、盔甲、装甲车等防御道具,虚拟光束、虚拟冲击波等用于虚拟对象释放技能时通过手部展示的虚拟道具,以及虚拟对象的部分身体躯干,比如手部、腿部,以及能够改变其他虚拟对象的属性值的虚拟道具,包括手枪、步枪、狙击枪等远距离虚拟道具,匕首、刀、剑、绳索等近距离虚拟道具,飞斧、飞刀、手榴弹、闪光弹、烟雾弹等投掷类虚拟道具。在本申请中,虚拟飞行物属于虚拟道具中的特殊道具,虚拟飞行物可以是本身具有飞行属性的虚拟道具,也可以是虚拟对象投掷的虚拟道具,还可以是虚拟对象进行射击时发射的虚拟道具。
需要说明的是,本申请所涉及的信息(包括但不限于用户设备信息、用户个人信息等)、数据(包括但不限于用于分析的数据、存储的数据、展示的数据等)以及信号,均为经用户授权或者经过各方充分授权的,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
图1示出了本申请一个示例性实施例提供的计算机系统的结构框图。该计算机系统100包括:第一终端120、服务器140和第二终端160。
第一终端120安装和运行有支持虚拟环境的应用程序。该应用程序可以是三维地图程序、横版射击、横版冒险、横版过关、横版策略、虚拟现实(Virtual Reality,VR)应用程序、增强现实(Augmented Reality,AR)程序中的任意一种。第一终端120是第一用户使用的终端,第一用户使用第一终端120控制位于虚拟环境中的第一虚拟对象进行活动,该活动包括但不限于:调整身体姿态、行走、奔跑、跳跃、骑行、驾驶、瞄准、拾取、使用投掷类道具、攻 击其他虚拟对象中的至少一种。示例性的,第一虚拟对象是第一虚拟人物,比如仿真人物对象或动漫人物对象。示例性的,第一用户通过虚拟环境画面上的UI控件来控制第一虚拟角色进行活动。
第一终端120通过无线网络或有线网络与服务器140相连。
服务器140包括一台服务器、多台服务器、云计算平台和虚拟化中心中的至少一种。示例性的,服务器140包括处理器144和存储器142,存储器142又包括接收模块1421、控制模块1422和发送模块1423,接收模块1421用于接收客户端发送的请求,如组队请求;控制模块1422用于控制虚拟环境画面的渲染;发送模块1423用于向客户端发送响应,如向客户端发送组队成功的提示信息。服务器140用于为支持三维虚拟环境的应用程序提供后台服务。可选地,服务器140承担主要计算工作,第一终端120和第二终端160承担次要计算工作;或者,服务器140承担次要计算工作,第一终端120和第二终端160承担主要计算工作;或者,服务器140、第一终端120和第二终端160三者之间采用分布式计算架构进行协同计算。
第二终端160安装和运行有支持虚拟环境的应用程序。该应用程序可以是三维地图程序、横版射击、横版冒险、横版过关、横版策略、虚拟现实应用程序、增强现实程序中的任意一种。第二终端160是第二用户使用的终端,第二用户使用第二终端160控制位于虚拟环境中的第二虚拟对象进行活动,该活动包括但不限于:调整身体姿态、行走、奔跑、跳跃、骑行、驾驶、瞄准、拾取、使用投掷类道具、攻击其他虚拟对象中的至少一种。示例性的,第二虚拟对象是第二虚拟人物,比如仿真人物对象或动漫人物对象。
可选地,第一虚拟对象和第二虚拟对象处于同一虚拟环境中。可选地,第一虚拟对象和第二虚拟对象可以属于同一个队伍、同一个组织、同一个阵营、具有好友关系或具有临时性的通讯权限。可选地,第一虚拟对象和第二虚拟对象也可以属于不同阵营、不同队伍、不同的组织或具有敌对关系。
可选地,第一终端120和第二终端160上安装的应用程序是相同的,或两个终端上安装的应用程序是不同操作系统平台(安卓或IOS)上的同一类型应用程序。第一终端120可以泛指多个终端中的一个,第二终端160可以泛指多个终端中的一个,本实施例仅以第一终端120和第二终端160来举例说明。第一终端120和第二终端160的设备类型相同或不同,该设备类型包括:智能手机、平板电脑、电子书阅读器、MP3播放器、MP4播放器、膝上型便携计算机和台式计算机中的至少一种。以下实施例以终端包括智能手机来举例说明。
本领域技术人员可以知晓,上述终端的数量可以更多或更少。比如上述终端可以仅为一个,或者上述终端为几十个或几百个,或者更多数量。本申请实施例对终端的数量和设备类型不加以限定。
图2示出了本申请一个示例性实施例提供的虚拟对象的技能释放方法的流程图。本实施例以该方法由图1所示的第一终端120(或第一终端120内的客户端)来执行进行举例说明。该方法包括:
步骤220,显示虚拟环境画面,虚拟环境画面显示有第一虚拟对象和至少一个第二虚拟对象,第一虚拟对象具有第一技能;
虚拟环境画面是横版虚拟环境画面、纵版虚拟环境画面、三维虚拟环境画面、二维虚拟环境画面中的至少一种。
第一虚拟对象指在虚拟环境中的一个可活动对象,该可活动对象可以是虚拟人物、虚拟动物、动漫人物等。第二虚拟对象指虚拟环境中的另一个可活动对象。可选的,第一虚拟对象和第二虚拟对象处于同一虚拟环境中。可选的,第一虚拟对象和第二虚拟对象可以属于同一个队伍、同一个组织、同一个阵营、具有好友关系或具有临时性的通讯权限。可选的,第一虚拟对象和第二虚拟对象也可以属于不同阵营、不同队伍、不同的组织或具有敌对关系。在本申请中,以第一虚拟对象和第二虚拟对象具有敌对关系举例说明。在本申请中,第一虚拟对象释放第一技能使得第二虚拟对象的属性值进行减少。
第一技能指虚拟环境中的第一虚拟对象拥有的释放能力。可选的,第一技能指第一虚拟对象具有释放技能的能力;可选的,第一技能指第一虚拟对象具有释放虚拟道具的能力;可选的,第一技能指第一虚拟对象通过虚拟道具释放虚拟飞行物的能力;可选的,第一技能指第一虚拟对象通过技能释放虚拟飞行物的能力。
在一个实施例中,第一技能指第一虚拟对象通过技能释放虚拟飞行物的能力,可选的,该技能是第一虚拟对象的基础技能,其中,基础技能是第一虚拟对象无需学习即可掌握的技能(如,普通攻击等预设能力);可选的,该技能是第一虚拟对象的学习型技能,其中,学习型技能是第一虚拟对象经过学习或拾得才可掌握的技能。在本申请中对该技能的获取方式不加以限定。示意性的,第一虚拟对象通过释放技能释放m个虚拟飞行物。
在一个实施例中,第一技能指第一虚拟对象通过虚拟道具释放虚拟飞行物的能力,如,第一虚拟对象撕毁虚拟卷轴,释放出m个虚拟飞行物。
可选的,第一虚拟对象一次性释放出m个虚拟飞行物;可选的,第一虚拟对象分批次释放虚拟飞行物,每批次释放的虚拟飞行物的数量为至少两个。
在一个实施例中,第一虚拟对象的第一技能显示为第一虚拟对象同时释放m个虚拟飞行物以自动追踪的方式对第二虚拟对象的属性值进行减少。
横版虚拟环境画面是基于虚拟环境中的横版视角采集第一虚拟对象获取的视野,并将该视野显示在终端上得到的。虚拟环境是应用程序在终端上运行时显示(或提供)的虚拟环境。该虚拟环境可以是对真实世界的仿真环境,也可以是半仿真半虚构的环境,还可以是纯虚构的环境。
在一个实施例中,图3示出了本申请一个示例性实施例的横版虚拟环境画面,示意性的,横版虚拟环境画面包括第一虚拟对象301和至少一个第二虚拟对象中的一个第二虚拟对象302,图3中的第一虚拟对象301沿着水平方向移动,直至进入如图3所示的横版虚拟环境画面的关卡。
步骤240,响应于第一技能的第一目标锁定操作,在虚拟环境画面上显示第一技能的锁定指示器,锁定指示器用于对位于第一技能的释放区域内的n个第二虚拟对象进行锁定;
在一个实施例中,锁定指示器与虚拟环境的横版视角之间存在夹角。可选的,锁定指示器与虚拟环境的横版视角之间的夹角为直角,也即锁定指示器的显示平面垂直于视角方向;可选的,锁定指示器与虚拟环境的横版视角之间的夹角为锐角。可选的,当锁定指示器在横版虚拟环境画面表现为圆形区域时,该圆形区域与虚拟环境的横版视角垂直;可选的,当锁定指示器在横版虚拟环境画面表现为圆形区域时,该圆形区域与虚拟环境的横版视角之间的夹角为锐角。
第一目标锁定操作用于对位于第一技能的释放区域内的n个第二虚拟对象进行锁定。可选的,第一目标锁定操作为用户在横版虚拟环境画面上的触摸落下操作,终端确定第一技能的锁定指示器在横版虚拟环境画面上的第一定位点,基于锁定指示器的第一定位点,生成锁定指示器;可选的,第一目标锁定操作为用户在与终端连接的外设部件上的锁定操作,示意性的,用户通过与终端连接的外设手柄在横版虚拟环境画面上确定第一定位点,基于锁定指示器的第一定位点,生成锁定指示器。
第一技能的释放区域指虚拟环境界面中第一技能的作用区域。可选的,第一技能的释放区域显示为一个封闭的几何图形,封闭的几何图形包围n个第二虚拟对象;可选的,第一技能的释放区域显示为若干个封闭的几何图形,每个封闭的几何图形包围n个第二虚拟对象中的若干个第二虚拟对象。
示意性的,结合参考图4,图4是本申请一个示例性实施例的横版虚拟环境画面的示意图,横版虚拟环境画面包括第一虚拟对象301和至少一个第二虚拟对象中的一个第二虚拟对象302,图4中锁定指示器401内包括3个锁定的第二虚拟对象402,其中,锁定的第二虚拟对象402表现出被锁定的表现特征,锁定的第二虚拟对象402的骨骼中心呈现出圆圈。值得说明的一点是,第二虚拟对象可以是基于动画骨骼技术创建的三维立体模型。每个第二虚拟对象在三维虚拟环境中具有自身的形状和体积,占据三维虚拟环境中的一部分空间。
可选的,在虚拟环境画面是三维虚拟环境画面的情况下,三维虚拟环境画面是通过摄像机模型从三维虚拟环境中采集的,第一技能的释放区域所在平面与三维虚拟环境画面所在平面之间存在夹角。根据第一技能在三维虚拟环境画面上锁定指示器的区域,确定第一技能在三维虚拟环境画面内的释放区域。
步骤260,响应于第一技能的第一释放操作,控制第一虚拟对象释放出m个虚拟飞行物自动追踪n个第二虚拟对象。
其中,m和n均为不小于2的整数。
可选的,m个虚拟飞行物对n个第二虚拟对象的属性值进行调整,调整的方法包括属性值的减少和属性值的增加。属性值包括但不限于第二虚拟对象的生命值、释放技能的能量值、防御力、攻击力、移动速度中的至少一个。示例性的,响应于第一技能的第一释放操作,控制第一虚拟对象同时释放出m个虚拟飞行物以自动追踪方式对n个第二虚拟对象的属性值进行减少,或者,响应于第一技能的第一释放操作,控制第一虚拟对象同时释放出m个虚拟飞行物以自动追踪方式对n个第二虚拟对象的属性值进行增加。
可选的,m个虚拟飞行物由第一虚拟对象同时释放,或者,m个虚拟飞行物由第一虚拟对象依次按序释放。示例性的,m个虚拟飞行物指代m把虚拟飞剑,第一虚拟对象每隔0.3秒释放一把虚拟飞剑。
虚拟飞行物是指虚拟对象在虚拟环境中释放的第一技能中的虚拟道具,包括能够改变第二虚拟对象的属性值的子弹等补给道具,虚拟飞行盾牌、虚拟光束、虚拟冲击波等。示例性的,虚拟飞行物是第一虚拟对象释放技能时通过手部展示的虚拟道具,虚拟飞行匕首、虚拟飞刀、虚拟飞剑、虚拟飞斧等虚拟道具,手榴弹、闪光弹、烟雾弹等投掷类虚拟道具。
可选的,m个虚拟飞行物显示为相同的表现特征,如m个虚拟飞行物显示为m把相同的虚拟飞剑;可选的,m个虚拟飞行物显示为完全不同的表现特征,如m个虚拟飞行物显示为m把不同的虚拟飞剑;可选的,m个虚拟飞行物显示为部分相同的表现特征,如m个虚拟飞行物显示为m/2把第一虚拟飞剑和m/2把第二虚拟飞剑。
可选的,m个虚拟飞行物具有自动追踪n个第二虚拟对象的能力,即,响应于第一虚拟对象释放m个虚拟飞行物之后,无需用户再次操控m个虚拟飞行物,m个虚拟飞行物自动追踪n个第二虚拟对象。
在一个实施例中,响应于第一技能的第一释放操作,控制第一虚拟对象释放出的m个虚拟飞行物自动追踪n个第二虚拟对象至少包括以下两种方法:
第一,在m不小于n的情况下,控制第一虚拟对象同时释放出的m个虚拟飞行物分别自动追踪n个第二虚拟对象中的一个第二虚拟对象;
其中,n个第二虚拟对象中的每个第二虚拟对象至少与m个虚拟飞行物中的一个虚拟飞行物相对应。
在一个可选的实施例中,在m不小于n、i不大于n且i大于0的情况下,控制m个虚拟飞行物中的第i个虚拟飞行物自动追踪n个第二虚拟对象中的第i个虚拟对象;在m不小于n、i大于n且i不大于m的情况下,控制m个虚拟飞行物中的第i+1个虚拟飞行物自动追踪n个第二虚拟对象中的第i-n个虚拟对象。
在m个虚拟飞行物对n个第二虚拟对象的属性值进行调整的情况下,可选的,在m不小于n的情况下,控制第一虚拟对象同时释放出的m个虚拟飞行物以自动追踪方式分别对n个第二虚拟对象中的一个第二虚拟对象的属性值进行减少。
在一个可选的实施例中,在m不小于n、i不大于n且i大于0的情况下,控制m个虚拟飞行物中的第i个虚拟飞行物以自动追踪方式对n个第二虚拟对象中的第i个虚拟对象的属性值进行减少;在m不小于n、i大于n且i不大于m的情况下,控制m个虚拟飞行物中的第i+1个虚拟飞行物以自动追踪方式对n个第二虚拟对象中的第i-n个虚拟对象的属性值进行减少。
即,首先,m个虚拟飞行物遍历n个第二虚拟对象,直至遍历至最后一个虚拟对象,之后,第n+1个虚拟飞行物从头开始遍历,直至m个虚拟飞行物遍历完成。值得说明的一点是,在当前实施例下,n个虚拟对象中的每个虚拟对象至少与m个虚拟飞行物中的一个虚拟飞行物相对应。
第二,在m小于n的情况下,控制第一虚拟对象释放出的m个虚拟飞行物自动追踪n个第二虚拟对象中的m个第二虚拟对象。
其中,m个第二虚拟对象与m个虚拟飞行物一一对应。
即,m个虚拟飞行物依次分配至n个虚拟对象中的m个第二虚拟对象,直至m个虚拟飞行物分配完成。值得说明的一点是,在当前实施例下,m个第二虚拟对象与m个虚拟飞行物一一对应。
在m个虚拟飞行物对n个第二虚拟对象的属性值进行调整的情况下,示例性的,在m小于n的情况下,控制第一虚拟对象同时释放出的m个虚拟飞行物以自动追踪方式对n个第二虚拟对象中的m个第二虚拟对象的属性值进行减少。
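上述分配规则(m不小于n时,n个锁定对象中的每个对象至少分配到一个虚拟飞行物;m小于n时,仅前m个对象各分配到一个虚拟飞行物)本质上是一种循环遍历分配。下面给出一个最小的 Python 示意(a minimal sketch);函数名与0起始的下标均为示意性假设,并非专利中的实现:

```python
def assign_flying_objects(m: int, n: int) -> list[int]:
    """Return a list of length m; entry i is the index (0-based) of the
    locked second virtual object that flying object i will track.

    Flying objects are dealt out to targets in order, wrapping around to
    the first target once every target has received one (round-robin).
    """
    if m < 2 or n < 2:
        raise ValueError("m and n must both be integers no less than 2")
    return [i % n for i in range(m)]

# m = 5 flying objects, n = 3 locked targets (as in Fig. 5):
# targets 0 and 1 each receive 2 flying objects, target 2 receives 1.
print(assign_flying_objects(5, 3))  # [0, 1, 2, 0, 1]
```

当m小于n时,该映射对前m个目标恰好是一一对应,与上文第二种情况一致。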
示例性的,图5是本申请一个示例性实施例的横版虚拟环境画面的示意图,图5示出了5个虚拟飞行物以自动追踪方式对3个第二虚拟对象的属性值进行减少,图5中横版虚拟环境画面包括第一虚拟对象301和至少一个第二虚拟对象中的一个第二虚拟对象302,5个虚拟飞行物501以自动追踪方式对3个锁定的第二虚拟对象402的属性值进行减少,其分配方式如图5所示,从左往右,2个虚拟飞行物501分配至第一个锁定的第二虚拟对象402,2个虚拟飞行物501分配至第二个锁定的第二虚拟对象402,1个虚拟飞行物501分配至第三个锁定的第二虚拟对象402。
综上所述,通过在横版虚拟环境画面上设置第一技能的锁定指示器,锁定处于锁定指示器范围内的第二虚拟对象,之后控制第一虚拟对象同时释放的m个虚拟飞行物自动追踪锁定指示器范围内的n个第二虚拟对象的属性值。当m不小于n时,n个第二虚拟对象中的每个虚拟对象至少与m个虚拟飞行物中的一个虚拟飞行物相对应;当m小于n时,n个第二虚拟对象中的m个第二虚拟对象与m个虚拟飞行物一一对应。
上述方法提高了多个虚拟道具同时攻击多个虚拟对象的瞄准速度,极大地降低了玩家的操作难度,提高了用户的人机交互效率。
为实现对单个第二虚拟对象进行目标锁定和技能释放,基于图2所示的实施例还包括步骤270和步骤280,本实施例以该方法由图1所示的第一终端120(或第一终端120内的客户端)来执行进行举例说明。
步骤270,响应于第一技能的第二目标锁定操作,在虚拟环境画面上对至少一个第二虚拟对象中的目标虚拟对象进行锁定;
第二目标锁定操作用于第一技能对第一技能释放区域内的其中一个第二虚拟对象进行锁定。可选的,第二目标锁定操作为用户在横版虚拟环境画面上的触摸操作,终端确定第一技能在横版虚拟环境画面的第二定位点,基于第二定位点,终端确定第一技能的作用对象即目标虚拟对象;可选的,第二目标锁定操作为用户在与终端连接的外设部件上的锁定操作,示意性的,用户通过与终端连接的外设手柄在横版虚拟环境画面上确定第二定位点,基于第二定位点,终端确定第一技能的作用对象即目标虚拟对象。
目标虚拟对象是用户在横版虚拟环境画面上至少一个第二虚拟对象中选中的虚拟对象,目标虚拟对象为用户释放第一技能的作用对象。
在一个实施例中,响应于检测到横版虚拟环境画面上的触摸落下操作,确定第一技能的第二定位点,第二定位点用于在横版虚拟环境画面上对至少一个第二虚拟对象中的目标虚拟对象进行锁定。
步骤280,响应于第一技能的第二释放操作,控制第一虚拟对象同时释放出的m个虚拟飞行物自动追踪目标虚拟对象。
第二释放操作指用于释放第一技能的操作。
属性值包括但不限于第二虚拟对象的生命值、释放技能的能量值、防御力、攻击力,移动速度中的至少一个。
在m个虚拟飞行物对n个第二虚拟对象的属性值进行调整的情况下,示例性的,响应于第一技能的第二释放操作,控制第一虚拟对象同时释放出的m个虚拟飞行物以自动追踪方式对目标虚拟对象的属性值进行减少。
在一个实施例中,响应于检测到横版虚拟环境画面上的触摸离开操作,控制第一虚拟对象同时释放出的m个虚拟飞行物以自动追踪方式对目标虚拟对象的属性值进行减少。
示意性的,图6是本申请一个示例性实施例的横版虚拟环境画面示意图,图6示出了5个虚拟飞行物以自动追踪方式对目标虚拟对象进行减少,图6中横版虚拟环境画面包括第一虚拟对象301和至少一个第二虚拟对象中的一个第二虚拟对象302,5个虚拟飞行物501以自动追踪方式对锁定的目标虚拟对象602的属性值进行减少,其分配方式如图6所示,5个虚拟飞行物501全都分配至锁定的目标虚拟对象602。
在一个实施例中,虚拟环境画面还包括至少两种第一技能的候选锁定指示器,候选锁定指示器的形状互不相同,响应于对候选锁定指示器中的目标锁定指示器的选择操作,确定第一技能的锁定指示器。候选锁定指示器的形状可以是圆、矩形、正六边形、正五边形、椭圆中的至少一种。本申请对候选锁定指示器的形状不做具体限定。示例性的,如图7所示,在虚拟环境画面的右上方显示第一技能的候选锁定指示器701,候选锁定指示器701的形状包括圆、矩形和正六边形。在点击其中的圆时,第一技能的释放区域即为圆。在点击其中的矩形时,第一技能的释放区域即为矩形。
可选地,目标锁定指示器的转角可由用户自行设置。
在一个实施例中,响应于检测到虚拟环境画面上的触摸落下操作,确定第一技能的锁定指示器在横版虚拟环境画面上的第一定位点;基于锁定指示器的第一定位点,在虚拟环境画面上显示锁定指示器;响应于检测到虚拟环境画面上的滑动操作,根据滑动操作的滑动终点确定n个第二虚拟对象中的目标第二虚拟对象;响应于检测到虚拟环境画面上的触摸离开操作,控制第一虚拟对象释放出的m个虚拟飞行物自动追踪与目标第二虚拟对象类型相同的n个第二虚拟对象。示例性的,如图8所示,检测到虚拟环境画面上的触摸落下操作,确定第一技能的锁定指示器在横版虚拟环境画面上的第一定位点801,并以第一定位点801为圆心显示圆形的锁定指示器。检测到虚拟环境画面上的滑动操作,得到滑动操作的滑动终点802。该滑动终点802指向目标第二虚拟对象803,则控制第一虚拟对象释放出的m个虚拟飞行物自动追踪与目标第二虚拟对象803类型相同的n个第二虚拟对象。
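上述"按滑动终点选中的目标筛选同类型对象"这一步可以示意如下;其中锁定对象的字典表示与函数名均为说明用的假设,并非专利原文:

```python
def targets_of_same_type(locked, slide_end_index):
    """Keep only the locked second virtual objects whose type matches the
    target picked by the slide operation's end point.

    locked: list of dicts with "id" and "type" keys (assumed shape).
    slide_end_index: index into `locked` of the object the slide ends on.
    """
    picked_type = locked[slide_end_index]["type"]
    return [obj["id"] for obj in locked if obj["type"] == picked_type]

locked = [{"id": 1, "type": "drone"}, {"id": 2, "type": "tank"},
          {"id": 3, "type": "drone"}]
# The slide ends on object 0 (a drone), so both drones are tracked.
print(targets_of_same_type(locked, 0))  # [1, 3]
```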
综上所述,通过第二目标锁定操作和第二释放操作,终端实现对目标虚拟对象的锁定和虚拟飞行物的分配,用户通过直接触碰目标虚拟对象即可快速实现对目标虚拟对象的锁定,提高了用户对目标虚拟对象的锁定效率,以及提高了用户的人机交互体验。
为进一步实现对技能释放区域内的n个虚拟对象进行目标锁定和技能释放,图9是本申请一个示例性实施例的虚拟对象的能力释放方法的流程图,本实施例以该方法由图1所示的第一终端120(或第一终端120内的客户端)来执行进行举例说明。该方法包括:
步骤910,显示虚拟环境画面,虚拟环境画面显示有第一虚拟对象和至少一个第二虚拟对象,第一虚拟对象具有第一技能;
第一技能指虚拟环境中的第一虚拟对象所释放的技能,在一个实施例中,第一虚拟对象的第一技能显示为第一虚拟对象同时释放m个虚拟飞行物以自动追踪的方式对第二虚拟对象的属性值进行减少。
横版虚拟环境画面为在终端上显示虚拟环境的用户画面。虚拟环境是应用程序在终端上运行时显示(或提供)的虚拟环境。该虚拟环境可以是对真实世界的仿真环境,也可以是半仿真半虚构的环境,还可以是纯虚构的环境。
步骤920,响应于检测到虚拟环境画面上的触摸落下操作,确定第一技能的锁定指示器在虚拟环境画面上的第一定位点,锁定指示器用于对位于第一技能的释放区域内的n个第二虚拟对象进行锁定;
在一个实施例中,终端对屏幕进行电位检测,响应于终端检测到基于用户在屏幕上的触摸落下操作引起的电位变化时,即终端检测到横版虚拟环境画面上的触摸落下操作,终端确定第一技能的锁定指示器在虚拟环境画面上的第一定位点。其中,第一定位点用于对第一技能的锁定指示器进行定位。
在一个实施例中,第一定位点处于锁定指示器内,即第一定位点处于锁定指示器的封闭几何图形的内部或边缘,示意性的,结合参考图4,锁定指示器为封闭的圆形,第一定位点为该圆形的圆心。
在一个实施例中,第一定位点处于锁定指示器外,即第一定位点处于锁定指示器的封闭几何图形的外部,基于预设的第一定位点的位置和锁定指示器的位置之间的映射关系,终端通过第一定位点的位置可确定锁定指示器的位置。
步骤930,基于锁定指示器的第一定位点,在虚拟环境画面上显示锁定指示器;
可选的,锁定指示器为圆形,将第一定位点作为锁定指示器的圆心,在横版虚拟环境画面上显示锁定指示器;可选的,锁定指示器为椭圆,将第一定位点作为锁定指示器的特殊位置点,在横版虚拟环境画面上显示锁定指示器;可选的,锁定指示器为正方形,将第一定位点作为锁定指示器的中心,在横版虚拟环境画面上显示锁定指示器;可选的,锁定指示器为扇形,将第一定位点作为锁定指示器的特殊位置点;可选的,锁定指示器为矩形,将第一定位点作为锁定指示器的顶点。在本申请中,对第一定位点与锁定指示器的关系并不加以限定,满足第一定位点和锁定指示器之间存在对应关系即可。
示意性的,结合参考图4,锁定指示器为封闭的圆形,第一定位点为该圆形的圆心,锁定指示器401锁定3个第二虚拟对象302。
在一个实施例中,锁定指示器为圆形,响应于将第一定位点作为锁定指示器的圆心,以及将预先设置的半径作为锁定指示器的半径,在横版虚拟环境画面上显示锁定指示器。
可选的,预先设置的半径为应用程序预先设置的锁定指示器的半径,即用户无法改变锁定指示器的半径;可选的,预先设置的半径为用户在客户端上预先设置的锁定指示器的半径,即用户可改变锁定指示器的半径。
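圆形锁定指示器如何选中目标,可以用下面的最小示意说明:以第一定位点为圆心、以预先设置的半径为半径,锁定落入圆内的第二虚拟对象。其中2D坐标表示与各名称均为假设,并非专利中的实现:

```python
import math

def lock_targets(anchor, radius, enemy_positions):
    """Return indices of the second virtual objects inside the circular
    lock indicator.

    anchor: (x, y) first anchor point, used as the circle's center.
    radius: preset radius of the lock indicator.
    enemy_positions: list of (x, y) positions of second virtual objects.
    """
    ax, ay = anchor
    return [i for i, (ex, ey) in enumerate(enemy_positions)
            if math.hypot(ex - ax, ey - ay) <= radius]

# Anchor at the touch-down point; two of the three enemies fall inside.
print(lock_targets((0, 0), 5.0, [(3, 4), (6, 0), (0, 5)]))  # [0, 2]
```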
步骤940,响应于检测到虚拟环境画面上的触摸滑动操作,控制锁定指示器的位置发生改变;
在一个可选的实施例中,在终端显示锁定指示器之后,终端检测到横版虚拟环境画面上的触摸滑动操作,终端控制锁定指示器的位置发生改变。如,响应于终端检测到横版虚拟环境画面上的向左滑动操作,终端控制锁定指示器随着向左滑动操作而向左移动。
值得说明的一点是,终端可执行步骤940之后执行步骤950,也可不执行步骤940直接执行步骤950,本申请对此不加以限定。
步骤950,响应于检测到虚拟环境画面上的触摸离开操作,控制第一虚拟对象释放出的m个虚拟飞行物自动追踪n个第二虚拟对象。
其中,m和n均为不小于2的整数。
在m个虚拟飞行物对n个第二虚拟对象的属性值进行调整的情况下,示例性的,响应于检测到横版虚拟环境画面上的触摸离开操作,控制第一虚拟对象同时释放出的m个虚拟飞行物以自动追踪方式对n个第二虚拟对象的属性值进行减少。
在一个实施例中,响应于检测到横版虚拟环境画面上的触摸离开操作,终端控制第一虚拟对象同时释放出的m个虚拟飞行物以自动追踪方式对n个第二虚拟对象的属性值进行减少。
在一个实施例中,响应于第一定位点上的触控压力发生改变,改变锁定指示器的半径。
具体的,终端检测到用户在第一定位点上的触控压力,并基于压力的变化改变锁定指示器的半径。示意性的,终端检测到用户在第一定位点上的触控压力增大,使得锁定指示器的半径增大,直至锁定指示器的半径达到预设的半径最大值。示意性的,终端检测到用户在第一定位点上的触控压力减小,使得锁定指示器的半径减小,直至锁定指示器的半径达到预设的半径最小值。
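带有预设最大值与最小值的压力驱动半径调整,可以示意如下;其中灵敏度系数与具体的上下限取值均为说明用的假设:

```python
def adjust_radius(current: float, pressure_delta: float,
                  r_min: float = 1.0, r_max: float = 10.0,
                  sensitivity: float = 2.0) -> float:
    """Grow or shrink the lock indicator radius with touch pressure,
    clamped to the preset minimum and maximum radius."""
    return max(r_min, min(r_max, current + sensitivity * pressure_delta))

print(adjust_radius(4.0, +1.0))  # 6.0  (pressure increased)
print(adjust_radius(4.0, -5.0))  # 1.0  (clamped to the preset minimum)
```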
综上所述,通过在虚拟环境画面上设置第一技能的锁定指示器,锁定处于锁定指示器范围内的第二虚拟对象,之后控制第一虚拟对象释放的m个虚拟飞行物自动追踪锁定指示器范围内的n个第二虚拟对象。其中,锁定指示器表现为基于横版虚拟环境画面的第一定位点设置的封闭几何图形。
上述方法中,终端通过第一定位点确定锁定指示器的位置,进一步生成锁定指示器的范围,简化了锁定指示器的生成方式,且因此得到的锁定指示器范围内包含了用户欲攻击的第二虚拟对象。
上述方法提高了多个虚拟道具同时攻击多个虚拟对象的瞄准速度,极大地降低了玩家的操作难度,提高了用户的人机交互效率。
图10是本申请一个示例性实施例的虚拟对象的技能释放方法的流程图,本实施例以该方法由图1所示的第一终端120(或第一终端120内的客户端)来执行进行举例说明。该方法包括:
步骤1001,在横版虚拟环境画面进行触碰操作;
响应于用户在横版虚拟环境画面进行触碰操作,终端在横版虚拟环境画面生成触碰点。
步骤1002,是否有敌人在触碰点设定的半径内;
终端判断基于触碰点设置的半径内是否存在敌人,若半径内存在敌人,执行步骤1004;若半径内不存在敌人,执行步骤1003。
步骤1003,触碰点是否按压敌人;
终端判断用户是否在触碰点处按压敌人,若用户在触碰点处按压敌人,执行步骤1005,若用户未在触碰点处按压敌人,执行步骤1001。
步骤1004,锁定所有范围内的敌人;
基于触碰点和设置的半径,终端生成锁定范围,终端锁定所有锁定范围内的敌人。
步骤1005,直接锁定当前敌人;
基于终端判断用户在触碰点处按压敌人,终端直接锁定当前敌人。
步骤1006,第一虚拟对象是否持有武器;
终端判断横版虚拟环境画面的第一虚拟对象是否持有武器,若第一虚拟对象持有武器,执行步骤1007,若第一虚拟对象未持有武器,执行步骤1008。
步骤1007,把所有武器平均分配发射向锁定的敌人;
响应于第一虚拟对象持有武器,终端把第一虚拟对象持有的所有武器平均分配发射向锁定的敌人。
平均分配的算法是:遍历敌人,依次把武器分配到遍历的敌人身上,如果遍历到尾部,则又从头开始遍历,直到武器分配完。这样的结果是:如果武器数量小于敌人数,则会存在一些敌人没有被武器瞄准射击;如果武器数大于敌人数,则会有部分敌人被分配到多个武器。
步骤1008,取消发射武器。
响应于第一虚拟对象未持有武器,终端取消发射武器。
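图10的流程(步骤1001至步骤1008)可以浓缩为一个假设性的处理函数作为示意;其中各名称、2D坐标表示与返回约定均为说明用的假设,并非专利中的实现:

```python
import math

def on_touch(touch_point, radius, enemies, pressed_enemy, weapon_count):
    """Sketch of the flow in Fig. 10: decide which enemies are locked and
    how the held weapons are dealt out to them.

    enemies: list of (x, y) enemy positions.
    pressed_enemy: index of the enemy directly pressed, or None.
    Returns a list of length weapon_count mapping each weapon to an
    enemy index, or None when nothing is locked or no weapon is held.
    """
    tx, ty = touch_point
    in_range = [i for i, (ex, ey) in enumerate(enemies)
                if math.hypot(ex - tx, ey - ty) <= radius]
    if in_range:                      # step 1004: lock all enemies in range
        locked = in_range
    elif pressed_enemy is not None:   # step 1005: lock the pressed enemy
        locked = [pressed_enemy]
    else:                             # back to step 1001: nothing locked
        return None
    if weapon_count == 0:             # step 1008: cancel firing
        return None
    # step 1007: deal all weapons evenly (round-robin) to locked enemies
    return [locked[k % len(locked)] for k in range(weapon_count)]

# One enemy in range, one far away, five weapons held:
print(on_touch((0, 0), 3.0, [(1, 1), (9, 9)], None, 5))  # [0, 0, 0, 0, 0]
```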
图11是本申请一个示例性实施例的虚拟对象的技能释放装置的结构框图,该装置包括:
显示模块111,用于显示虚拟环境画面,虚拟环境画面显示有第一虚拟对象和至少一个第二虚拟对象,第一虚拟对象具有第一技能;
显示模块111,还用于响应于第一技能的第一目标锁定操作,在虚拟环境画面上显示第一技能的锁定指示器,锁定指示器用于对位于第一技能的释放区域内的n个第二虚拟对象进行锁定;
控制模块112,用于响应于第一技能的第一释放操作,控制第一虚拟对象释放出的m个虚拟飞行物自动追踪n个第二虚拟对象,m和n均为不小于2的整数。
在一个可选的实施例中,控制模块112还用于在m不小于n的情况下,控制所述第一虚拟对象同时释放出的m个虚拟飞行物分别自动追踪所述n个第二虚拟对象中的一个第二虚拟对象;其中,n个第二虚拟对象中的每个第二虚拟对象至少与m个虚拟飞行物中的一个虚拟飞行物相对应。
在一个可选的实施例中,控制模块112还用于在m不小于n、i不大于n且i大于0的情况下,控制所述m个虚拟飞行物中的第i个虚拟飞行物自动追踪所述n个第二虚拟对象中的第i个虚拟对象。
在一个可选的实施例中,控制模块112还用于在m不小于n、i大于n且i不大于m的情况下,控制所述m个虚拟飞行物中的第i+1个虚拟飞行物自动追踪所述n个第二虚拟对象中的第i-n个虚拟对象。
在一个可选的实施例中,控制模块112还用于在m小于n的情况下,控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象中的m个第二虚拟对象;其中,m个第二虚拟对象与m个虚拟飞行物一一对应。
在一个可选的实施例中,显示模块111还用于响应于第一技能的第二目标锁定操作,在虚拟环境画面上对至少一个第二虚拟对象中的目标虚拟对象进行锁定。
在一个可选的实施例中,控制模块112还用于响应于第一技能的第二释放操作,控制第一虚拟对象同时释放出的m个虚拟飞行物自动追踪目标虚拟对象。
在一个可选的实施例中,显示模块111还用于响应于检测到虚拟环境画面上的触摸落下操作,确定第一技能的第二定位点,第二定位点用于在横版虚拟环境画面上对至少一个第二虚拟对象中的目标虚拟对象进行锁定。
在一个可选的实施例中,控制模块112还用于响应于检测到虚拟环境画面上的触摸离开操作,控制第一虚拟对象同时释放出的m个虚拟飞行物自动追踪目标虚拟对象。
在一个可选的实施例中,显示模块111还用于响应于检测到虚拟环境画面上的触摸落下操作,确定第一技能的锁定指示器在虚拟环境画面上的第一定位点。
在一个可选的实施例中,显示模块111还用于基于锁定指示器的第一定位点,在虚拟环境画面上显示锁定指示器。
在一个可选的实施例中,控制模块112还用于响应于检测到虚拟环境画面上的触摸离开操作,控制第一虚拟对象同时释放出的m个虚拟飞行物自动追踪n个第二虚拟对象。
在一个可选的实施例中,锁定指示器为圆形。
在一个可选的实施例中,显示模块111还用于将第一定位点作为锁定指示器的圆心,在虚拟环境画面上显示锁定指示器。
在一个可选的实施例中,显示模块111还用于将第一定位点作为锁定指示器的圆心,以及将预先设置的半径作为锁定指示器的半径,在虚拟环境画面上显示锁定指示器。
在一个可选的实施例中,显示模块111还用于响应于第一定位点上的触控压力发生改变,改变锁定指示器的半径。
在一个可选的实施例中,控制模块112还用于响应于所述第一技能的所述第一释放操作,控制所述第一虚拟对象同时释放出的m个虚拟飞行物以自动追踪方式对所述n个第二虚拟对象的属性值进行减少。
在一个可选的实施例中,显示模块111还用于显示至少两种第一技能的候选锁定指示器,候选锁定指示器的形状互不相同;控制模块112还用于响应于对候选锁定指示器中的目标锁定指示器的选择操作,确定第一技能的锁定指示器。
在一个可选的实施例中,控制模块112还用于响应于检测到虚拟环境画面上的触摸落下操作,确定第一技能的第二定位点,第二定位点用于在横版虚拟环境画面上对至少一个第二虚拟对象中的目标虚拟对象进行锁定;显示模块111还用于基于锁定指示器的第一定位点,在虚拟环境画面上显示锁定指示器;控制模块112还用于响应于检测到虚拟环境画面上的触摸离开操作,控制第一虚拟对象释放出的m个虚拟飞行物自动追踪n个第二虚拟对象。
综上所述,通过在横版虚拟环境画面上设置第一技能的锁定指示器,锁定处于锁定指示器范围内的第二虚拟对象,之后控制第一虚拟对象同时释放的m个虚拟飞行物以自动追踪的方式对锁定指示器范围内的n个第二虚拟对象的属性值进行减少。上述装置提高了多个虚拟道具同时攻击多个虚拟对象的瞄准速度,极大地降低了玩家的操作难度,提高了用户的人机交互效率。
图12示出了本申请一个示例性实施例提供的计算机设备1200的结构框图。该计算机设备1200可以是便携式移动终端,比如:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。计算机设备1200还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,计算机设备1200包括有:处理器1201和存储器1202。
处理器1201可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1201可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1201也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1201可以集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1201还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1202可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1202还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1202中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1201所执行以实现本申请中方法实施例提供的虚拟对象的技能释放方法。
在一些实施例中,计算机设备1200还可选包括有:外围设备接口1203和至少一个外围设备。处理器1201、存储器1202和外围设备接口1203之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1203相连。示例地,外围设备可以包括:射频电路1204、显示屏1205、摄像头组件1206、音频电路1207、定位组件1208和电源1209中的至少一种。
在一些实施例中,计算机设备1200还包括有一个或多个传感器1210。该一个或多个传感器1210包括但不限于:加速度传感器1211、陀螺仪传感器1212、压力传感器1213、光学传感器1214以及接近传感器1215。
本领域技术人员可以理解,图12中示出的结构并不构成对计算机设备1200的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
本申请还提供一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现上述方法实施例提供的虚拟对象的技能释放方法。
本申请提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述方法实施例提供的虚拟对象的技能释放方法。

Claims (20)

  1. 一种虚拟对象的技能释放方法,所述方法由计算机设备执行,所述方法包括:
    显示虚拟环境画面,所述虚拟环境画面显示有第一虚拟对象和至少一个第二虚拟对象,所述第一虚拟对象具有第一技能;
    响应于所述第一技能的第一目标锁定操作,在所述虚拟环境画面上显示所述第一技能的锁定指示器,所述锁定指示器用于对位于所述第一技能的释放区域内的n个第二虚拟对象进行锁定;
    响应于所述第一技能的第一释放操作,控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象,m和n均为不小于2的整数。
  2. 根据权利要求1所述的方法,其中,所述控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象,包括:
    在m不小于n的情况下,控制所述第一虚拟对象同时释放出的m个虚拟飞行物分别自动追踪所述n个第二虚拟对象中的一个第二虚拟对象;
    其中,所述n个第二虚拟对象中的每个第二虚拟对象至少与所述m个虚拟飞行物中的一个虚拟飞行物相对应。
  3. 根据权利要求2所述的方法,其中,所述在m不小于n的情况下,控制所述第一虚拟对象同时释放出的m个虚拟飞行物分别自动追踪所述n个第二虚拟对象中的一个第二虚拟对象,包括:
    在m不小于n、i不大于n且i大于0的情况下,控制所述m个虚拟飞行物中的第i个虚拟飞行物自动追踪所述n个第二虚拟对象中的第i个虚拟对象;
    在m不小于n、i大于n且i不大于m的情况下,控制所述m个虚拟飞行物中的第i+1个虚拟飞行物自动追踪所述n个第二虚拟对象中的第i-n个虚拟对象。
  4. 根据权利要求1所述的方法,其中,所述控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象,包括:
    在m小于n的情况下,控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象中的m个第二虚拟对象;
    其中,所述m个第二虚拟对象与所述m个虚拟飞行物一一对应。
  5. 根据权利要求1至4任一所述的方法,其中,所述方法还包括:
    响应于所述第一技能的第二目标锁定操作,在所述虚拟环境画面上对所述至少一个第二虚拟对象中的目标虚拟对象进行锁定;
    响应于所述第一技能的第二释放操作,控制所述第一虚拟对象同时释放出的m个虚拟飞行物自动追踪所述目标虚拟对象。
  6. 根据权利要求5所述的方法,其中,
    所述响应于所述第一技能的第二目标锁定操作,在所述虚拟环境画面上对所述至少一个第二虚拟对象中的目标虚拟对象进行锁定,包括:
    响应于检测到所述虚拟环境画面上的触摸落下操作,确定所述第一技能的第二定位点,所述第二定位点用于在所述横版虚拟环境画面上对所述至少一个第二虚拟对象中的目标虚拟对象进行锁定;
    所述响应于所述第一技能的第二释放操作,控制所述第一虚拟对象同时释放出的m个虚拟飞行物自动追踪所述目标虚拟对象,包括:
    响应于检测到所述横版虚拟环境画面上的触摸离开操作,控制所述第一虚拟对象同时释放出的m个虚拟飞行物自动追踪所述目标虚拟对象。
  7. 根据权利要求1至4任一所述的方法,其中,所述响应于所述第一技能的第一目标锁定操作,在所述虚拟环境画面上显示所述第一技能的锁定指示器,包括:
    响应于检测到所述虚拟环境画面上的触摸落下操作,确定所述第一技能的锁定指示器在所述横版虚拟环境画面上的第一定位点;
    基于所述锁定指示器的第一定位点,在所述虚拟环境画面上显示所述锁定指示器;
    所述响应于所述第一技能的第一释放操作,控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象,包括:
    响应于检测到所述虚拟环境画面上的触摸离开操作,控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象。
  8. 根据权利要求7所述的方法,其中,所述锁定指示器为圆形;
    所述基于所述锁定指示器的第一定位点,在所述虚拟环境画面上显示所述锁定指示器,包括:
    将所述第一定位点作为所述锁定指示器的圆心,在所述虚拟环境画面上显示所述锁定指示器。
  9. 根据权利要求8所述的方法,其中,所述将所述第一定位点作为所述锁定指示器的圆心,在所述虚拟环境画面上显示所述锁定指示器,包括:
    将所述第一定位点作为所述锁定指示器的圆心,以及将预先设置的半径作为所述锁定指示器的半径,在所述虚拟环境画面上显示所述锁定指示器。
  10. 根据权利要求9所述的方法,其中,所述方法还包括:
    响应于所述第一定位点上的触控压力发生改变,改变所述锁定指示器的半径。
  11. 根据权利要求1至4任一项所述的方法,其特征在于,所述方法还包括:
    响应于所述第一技能的所述第一释放操作,控制所述第一虚拟对象同时释放出的m个虚拟飞行物以自动追踪方式对所述n个第二虚拟对象的属性值进行减少。
  12. 根据权利要求1至4任一所述的方法,其中,所述虚拟环境画面还包括至少两种所述第一技能的候选锁定指示器,所述候选锁定指示器的形状互不相同;
    所述方法还包括:
    响应于对所述候选锁定指示器中的目标锁定指示器的选择操作,确定所述第一技能的锁定指示器。
  13. 根据权利要求1至4任一所述的方法,其中,所述响应于所述第一技能的第一目标锁定操作,在所述虚拟环境画面上显示所述第一技能的锁定指示器,包括:
    响应于检测到所述虚拟环境画面上的触摸落下操作,确定所述第一技能的锁定指示器在所述横版虚拟环境画面上的第一定位点;
    基于所述锁定指示器的第一定位点,在所述虚拟环境画面上显示所述锁定指示器;
    所述响应于所述第一技能的第一释放操作,控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象,包括:
    响应于检测到所述虚拟环境画面上的滑动操作,根据所述滑动操作的滑动终点确定所述 n个第二虚拟对象中的目标第二虚拟对象;
    响应于检测到所述虚拟环境画面上的触摸离开操作,控制所述第一虚拟对象释放出的所述m个虚拟飞行物自动追踪与所述目标第二虚拟对象类型相同的所述n个第二虚拟对象。
  14. 一种虚拟对象的技能释放装置,其中,所述装置包括:
    显示模块,用于显示虚拟环境画面,所述虚拟环境画面显示有第一虚拟对象和至少一个第二虚拟对象,所述第一虚拟对象具有第一技能;
    显示模块,还用于响应于所述第一技能的第一目标锁定操作,在所述虚拟环境画面上显示所述第一技能的锁定指示器,所述锁定指示器用于对位于所述第一技能的释放区域内的n个第二虚拟对象进行锁定;
    控制模块,用于响应于所述第一技能的第一释放操作,控制所述第一虚拟对象释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象,m和n均为不小于2的整数。
  15. 根据权利要求14所述的装置,其中,
    所述控制模块,还用于在m不小于n的情况下,控制所述第一虚拟对象同时释放出的m个虚拟飞行物分别自动追踪所述n个第二虚拟对象中的一个第二虚拟对象;
    其中,所述n个第二虚拟对象中的每个第二虚拟对象至少与所述m个虚拟飞行物中的一个虚拟飞行物相对应。
  16. 根据权利要求15所述的装置,其中,
    所述控制模块,还用于在m不小于n、i不大于n且i大于0的情况下,控制所述m个虚拟飞行物中的第i个虚拟飞行物自动追踪所述n个第二虚拟对象中的第i个虚拟对象;
    在m不小于n、i大于n且i不大于m的情况下,控制所述m个虚拟飞行物中的第i+1个虚拟飞行物自动追踪所述n个第二虚拟对象中的第i-n个虚拟对象。
  17. 根据权利要求15所述的装置,其中,
    所述控制模块,还用于在m小于n的情况下,控制所述第一虚拟对象同时释放出的m个虚拟飞行物自动追踪所述n个第二虚拟对象中的m个第二虚拟对象;
    其中,所述m个第二虚拟对象与所述m个虚拟飞行物一一对应。
  18. 一种计算机设备,其中,所述计算机设备包括:处理器和存储器,所述存储器存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现如权利要求1至11任一所述的虚拟对象的技能释放方法。
  19. 一种计算机可读存储介质,其中,所述计算机可读存储介质存储有计算机程序,所述计算机程序由处理器加载并执行以实现如权利要求1至11任一所述的虚拟对象的技能释放方法。
  20. 一种计算机程序产品,包括计算机程序或指令,其中,所述计算机程序或指令被处理器执行时实现权利要求1至11中任一项所述的虚拟对象的技能释放方法。
PCT/CN2022/087836 2021-05-20 2022-04-20 虚拟对象的技能释放方法、装置、设备、介质及程序产品 WO2022242400A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023553103A JP2024513658A (ja) 2021-05-20 2022-04-20 仮想オブジェクトのスキルリリース方法および装置、デバイス、媒体並びにプログラム
US17/990,579 US20230078592A1 (en) 2021-05-20 2022-11-18 Ability casting method and apparatus for virtual object, device, medium and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110553091.1A CN113117330B (zh) 2021-05-20 2021-05-20 虚拟对象的技能释放方法、装置、设备及介质
CN202110553091.1 2021-05-20

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/990,579 Continuation US20230078592A1 (en) 2021-05-20 2022-11-18 Ability casting method and apparatus for virtual object, device, medium and program product

Publications (1)

Publication Number Publication Date
WO2022242400A1 true WO2022242400A1 (zh) 2022-11-24

Family

ID=76782292

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/087836 WO2022242400A1 (zh) 2021-05-20 2022-04-20 虚拟对象的技能释放方法、装置、设备、介质及程序产品

Country Status (4)

Country Link
US (1) US20230078592A1 (zh)
JP (1) JP2024513658A (zh)
CN (1) CN113117330B (zh)
WO (1) WO2022242400A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113117330B (zh) * 2021-05-20 2022-09-23 腾讯科技(深圳)有限公司 虚拟对象的技能释放方法、装置、设备及介质
CN113633972B (zh) * 2021-08-31 2023-07-21 腾讯科技(深圳)有限公司 虚拟道具的使用方法、装置、终端及存储介质
CN114425161A (zh) * 2022-01-25 2022-05-03 网易(杭州)网络有限公司 目标锁定方法、装置、电子设备及存储介质
CN114949842A (zh) * 2022-06-15 2022-08-30 网易(杭州)网络有限公司 虚拟对象的切换方法及装置、存储介质、电子设备
CN115350473A (zh) * 2022-09-13 2022-11-18 北京字跳网络技术有限公司 虚拟对象的技能控制方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010017395A (ja) * 2008-07-11 2010-01-28 Namco Bandai Games Inc プログラム、情報記憶媒体及びゲーム装置
US20100273544A1 (en) * 2009-04-22 2010-10-28 Namco Bandai Games Inc. Information storage medium, game device, and method of controlling game device
CN110448891A (zh) * 2019-08-08 2019-11-15 腾讯科技(深圳)有限公司 控制虚拟对象操作远程虚拟道具的方法、装置及存储介质
CN111659118A (zh) * 2020-07-10 2020-09-15 腾讯科技(深圳)有限公司 道具控制方法和装置、存储介质及电子设备
CN113117330A (zh) * 2021-05-20 2021-07-16 腾讯科技(深圳)有限公司 虚拟对象的技能释放方法、装置、设备及介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5042743A (en) * 1990-02-20 1991-08-27 Electronics And Space Corporation Apparatus and method for multiple target engagement
US7388605B2 (en) * 2002-11-12 2008-06-17 Hewlett-Packard Development Company, L.P. Still image capturing of user-selected portions of image frames
US8650507B2 (en) * 2008-03-04 2014-02-11 Apple Inc. Selecting of text using gestures
KR101705872B1 (ko) * 2010-09-08 2017-02-10 삼성전자주식회사 모바일 디바이스의 화면상의 영역 선택 방법 및 장치
KR101377010B1 (ko) * 2013-05-15 2014-09-03 김신우 다중조준점 적용 방법 및 이를 구현하기 위한 프로그램이 저장된 기록 매체
CN104133595A (zh) * 2014-06-06 2014-11-05 蓝信工场(北京)科技有限公司 一种在电子设备的触摸屏上选中多个对象的方法和装置
WO2018103634A1 (zh) * 2016-12-06 2018-06-14 腾讯科技(深圳)有限公司 一种数据处理的方法及移动终端

Also Published As

Publication number Publication date
CN113117330B (zh) 2022-09-23
US20230078592A1 (en) 2023-03-16
JP2024513658A (ja) 2024-03-27
CN113117330A (zh) 2021-07-16

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22803725

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023553103

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE