CN112494942A - Information processing method, information processing device, computer equipment and storage medium - Google Patents

Information processing method, information processing device, computer equipment and storage medium

Info

Publication number
CN112494942A
CN112494942A (application CN202011488851.7A)
Authority
CN
China
Prior art keywords
virtual object
data layer
skill
position data
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011488851.7A
Other languages
Chinese (zh)
Inventor
陈伟杰
曾珊
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority application: CN202011488851.7A
Publication: CN112494942A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/63 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, by the player, e.g. authoring using a level editor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides an information processing method, an information processing device, computer equipment and a storage medium. A user interface is displayed, wherein the user interface comprises a virtual scene and a first virtual object and a second virtual object located in the virtual scene; a specified skill is released to the second virtual object based on a current position of the first virtual object in the virtual scene; a skill parameter corresponding to the specified skill and the position distribution of the blocking area of the second virtual object in the virtual scene are acquired; and the position distribution of the blocking area is adjusted based on the current position and the skill parameter. According to the embodiment of the application, by creating position data layers, the position distribution of the blocking area of each virtual object can be independently adjusted, improving the accuracy of interaction between virtual objects and between virtual objects and the virtual scene in a game.

Description

Information processing method, information processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of game technologies, and in particular, to an information processing method and apparatus, a computer device, and a storage medium.
Background
With the continuous development of computer communication technology, terminals such as smart phones, tablet computers and notebook computers have been widely popularized and applied. These terminals are developing in increasingly diversified and personalized directions and have become indispensable in people's daily life and work, and entertainment games that can run on such terminals have emerged to meet people's pursuit of spiritual life.
In MOBA and MMO games, a user can, during play, dynamically change the communication relation between game units and between game units and certain areas of a game scene by triggering a skill control in the user interface, thereby enhancing interaction between game units and between game units and the game scene. However, current game interaction logic easily causes the accuracy of such interaction to be poor. For example, all game units within a game scene share the same blocking data, and all game units are affected by that blocking data.
Disclosure of Invention
Embodiments of the present application provide an information processing method, an information processing apparatus, a computer device, and a storage medium, which can independently adjust the position distribution of blocking areas of virtual objects, and improve the accuracy of interaction between virtual objects in a game and between a virtual object and a virtual scene.
The embodiment of the application provides an information processing method, which comprises the following steps:
displaying a user interface, wherein the user interface comprises a virtual scene, and a first virtual object and a second virtual object which are positioned in the virtual scene;
releasing a specified skill to a second virtual object based on a current location of the first virtual object in the virtual scene;
acquiring skill parameters corresponding to the specified skills and the position distribution of the blocking area of the second virtual object in the virtual scene;
adjusting the position distribution of the blocking area based on the current position and the skill parameter.
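The four claimed steps can be sketched as follows. This is a minimal illustrative sketch, not an implementation from the application: `VirtualObject`, `release_skill`, and the modeling of a blocking area as a set of grid cells are all assumptions introduced for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    name: str
    position: tuple                                 # (x, y) in the virtual scene
    blocking_cells: set = field(default_factory=set)  # position distribution of its blocking area

def release_skill(caster, target, skill_range):
    """Release a specified skill from the caster's current position and adjust
    the target's blocking-area distribution based on that position and the
    skill's range parameter (here: clear blocked cells inside a circle)."""
    cx, cy = caster.position
    cleared = {c for c in target.blocking_cells
               if (c[0] - cx) ** 2 + (c[1] - cy) ** 2 <= skill_range ** 2}
    target.blocking_cells -= cleared                # adjust only this object's layer
    return cleared
```

Only the target's own blocking cells are touched, which mirrors the claimed per-object adjustment rather than a scene-wide shared blocking map.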
An embodiment of the present application provides an information processing apparatus, the apparatus including:
the display unit is used for displaying a user interface, and the user interface comprises a virtual scene, a first virtual object and a second virtual object which are positioned in the virtual scene;
a processing unit configured to release a specified skill to a second virtual object based on a current position of the first virtual object in the virtual scene;
the acquiring unit is used for acquiring a skill parameter corresponding to the specified skill and the position distribution of the blocking area of the second virtual object in the virtual scene;
and the adjusting unit is used for adjusting the position distribution of the blocking area based on the current position and the skill parameter.
Optionally, the adjusting unit is further configured to:
and adjusting the position distribution of the blocking area based on the current position and the range parameter.
Optionally, the apparatus further includes a receiving unit, where the receiving unit is configured to:
receiving an operation instruction triggered by a user through the skill control;
optionally, the apparatus further includes a determining unit, where the determining unit is configured to:
determining a target position indicated by the operation instruction in the virtual scene;
determining a candidate region in the virtual scene based on a current position of the first virtual object in the virtual scene;
Optionally, the determining unit is further configured to:
determining whether the target location is within the candidate region;
and if so, adjusting the position distribution of the blocking area according to the target position and the range parameter.
Optionally, the determining unit is configured to:
determining a first location data layer associated with the second virtual object based on the location distribution of the blocked regions;
optionally, the adjusting unit is configured to:
adjusting initial position data in the first position data layer according to the range parameter to obtain an adjusted first position data layer;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the adjusted first position data layer.
Optionally, the processing unit is further configured to:
when the duration of the skill release reaches the effective duration, restoring the adjusted position data in the first position data layer into the initial position data;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the first position data layer after the initial position data is restored.
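The in-place scheme above can be sketched like this: the layer's initial position data is snapshotted before adjustment and restored when the skill's effective duration elapses. The class and method names are assumptions for illustration only.

```python
import copy

class PositionDataLayer:
    """Hedged sketch of a first position data layer adjusted in place."""

    def __init__(self, blocked):
        self.blocked = set(blocked)   # current position data of the blocking area
        self._initial = None          # snapshot of the initial position data

    def apply_skill(self, cells_to_clear):
        """Adjust the initial position data according to the range parameter."""
        self._initial = copy.deepcopy(self.blocked)
        self.blocked -= set(cells_to_clear)

    def restore(self):
        """When the skill release reaches the effective duration, restore the
        adjusted position data back into the initial position data."""
        if self._initial is not None:
            self.blocked = self._initial
            self._initial = None
```

A timer driving `restore()` is left out; the patent only specifies that restoration happens once the effective duration is reached.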
Optionally, the processing unit is further configured to:
determining a first location data layer associated with the second virtual object based on the location distribution of the blocked regions;
creating a second location data layer based on the first location data layer, the location data in the second location data layer being the same as the location data in the first location data layer;
adjusting the position data in the second position data layer according to the range parameter to obtain an adjusted second position data layer;
the association between the second virtual object and the first position data layer is released, and the association between the second virtual object and the adjusted second position data layer is established;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the adjusted second position data layer.
Optionally, the processing unit is further configured to:
when the duration of the skill release reaches the effective duration, removing the association between the second virtual object and the adjusted second position data layer, and establishing the association between the second virtual object and the first position data layer;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the first position data layer.
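The second scheme above is a copy-then-swap: a second layer with the same position data is created and adjusted, the object is re-associated with it, and on expiry the association simply reverts to the untouched first layer. The sketch below uses assumed names; it is not the application's code.

```python
class LayerSwapExample:
    """Sketch of the second-layer scheme: adjust a copy, never the original."""

    def __init__(self, first_layer_blocked):
        self.first_layer = set(first_layer_blocked)
        self.active_layer = self.first_layer   # object initially associated with layer 1

    def apply_skill(self, cells_to_clear):
        # Create a second layer whose position data equals the first layer's,
        # adjust it per the range parameter, then re-associate the object.
        second_layer = set(self.first_layer) - set(cells_to_clear)
        self.active_layer = second_layer

    def expire_skill(self):
        # Release the association with the adjusted second layer and
        # re-establish the association with the unmodified first layer:
        # no restore step is needed because the first layer was never changed.
        self.active_layer = self.first_layer
```

Compared with the in-place scheme, this trades extra memory for a trivial rollback, which is the design advantage the patent's swap-back step implies.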
Optionally, the receiving unit is further configured to:
receiving a movement instruction for the second virtual object;
Optionally, the determining unit is further configured to:
determining the end point position of the second virtual object to be moved in the virtual scene according to the moving instruction;
optionally, the obtaining unit is further configured to:
acquiring the position distribution of the second virtual object in the active area of the virtual scene;
optionally, the determining unit is further configured to:
determining a plurality of movable paths of the second virtual object based on the position distribution of the blocking areas and the position distribution of the active areas;
determining, from the plurality of movable paths, a target path along which the second virtual object can move from the current position to the end position;
optionally, the processing unit is further configured to:
and controlling the second virtual object to move to the end position according to the target path in the virtual scene.
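The movement steps above can be sketched with a breadth-first search over a grid. The patent does not name a pathfinding algorithm, so BFS and the grid model are assumptions chosen for brevity; "movable paths" are implicitly all paths through cells that are in the active area and not in the blocking area.

```python
from collections import deque

def find_target_path(start, end, active_cells, blocked_cells):
    """Return a target path (list of grid cells) from start to end that stays
    inside the active area and outside the blocking area, or None."""
    walkable = set(active_cells) - set(blocked_cells)
    if start not in walkable or end not in walkable:
        return None
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == end:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None   # end position unreachable given the current blocking layout
```

Because the blocking distribution is per-object (via its position data layer), two objects in the same scene can receive different paths for the same movement instruction.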
Optionally, the determining unit is further configured to:
determining an action area of the designated skill in the virtual scene according to the adjustment result of the position distribution of the blocking area;
Optionally, the display unit is further configured to:
and displaying the skill effect of the specified skill on the action area.
Correspondingly, an embodiment of the present application further provides a computer device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of any one of the information processing methods described above.
In addition, an embodiment of the present application further provides a storage medium, where a computer program is stored on the storage medium, and the computer program, when executed by a processor, implements the steps of any one of the information processing methods.
The embodiment of the application provides an information processing method, an information processing device, computer equipment and a storage medium. A user interface is displayed, where the user interface includes a virtual scene and a first virtual object and a second virtual object located in the virtual scene. Then, a designated skill is released to the second virtual object based on the current position of the first virtual object in the virtual scene; next, a skill parameter corresponding to the designated skill and the current position distribution of the blocking area of the second virtual object in the virtual scene are obtained; finally, the position distribution of the blocking area is adjusted based on the current position and the skill parameter. According to the embodiment of the application, by creating position data layers, the position distribution of the blocking area of each virtual object can be independently adjusted, improving the accuracy of interaction between virtual objects and between virtual objects and the virtual scene in a game.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic architecture diagram of an information processing system according to an embodiment of the present application.
Fig. 2 is a flowchart of an information processing method according to an embodiment of the present application.
Fig. 3 is a schematic view of a first application scenario of an information processing method according to an embodiment of the present application.
Fig. 4 is a schematic view of a second application scenario of the information processing method according to the embodiment of the present application.
Fig. 5 is a schematic diagram of a third application scenario of the information processing method according to the embodiment of the present application.
Fig. 6 is a schematic diagram of a fourth application scenario of the information processing method according to the embodiment of the present application.
Fig. 7 is a schematic view of a fifth application scenario of the information processing method according to the embodiment of the present application.
Fig. 8 is a schematic diagram of a sixth application scenario of the information processing method according to the embodiment of the present application.
Fig. 9 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, in games such as MOBA and MMO games, conventional game interaction logic easily causes the accuracy of the interaction between game units and between game units and game scenes to be poor. For example, all game units within a game scene share the same blocking data, and all game units are affected by that blocking data.
Based on this, the embodiments of the present application provide an information processing method, apparatus, computer device and storage medium. Specifically, the information processing method of the embodiment of the present application may be executed by a computer device, where the computer device may be a terminal or a server or other devices. The terminal may be a terminal device such as a smart phone, a tablet Computer, a notebook Computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), and the like, and may further include a client, which may be a game application client, a browser client carrying a game program, or an instant messaging client, and the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN, and a big data and artificial intelligence platform.
For example, when the information processing method runs on a terminal, the terminal device stores a game application and is used to present a virtual scene in a game screen. The terminal device interacts with the user through a graphical user interface, for example by downloading, installing and running a game application program. The terminal device may provide the graphical user interface to the user in a variety of ways; for example, the graphical user interface may be rendered for display on a display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including a game screen and receiving operation instructions generated by the user acting on the graphical user interface, and a processor for executing the game, generating the graphical user interface, responding to the operation instructions, and controlling display of the graphical user interface on the touch display screen.
Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of an information processing system according to an embodiment of the present disclosure. The system may include at least one terminal 1000, at least one server 2000, at least one database 3000, and a network 4000. The terminal 1000 held by the user can be connected to servers of different games through the network 4000. Terminal 1000 can be any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, terminal 1000 can have one or more multi-touch sensitive screens for sensing and obtaining user input through touch or slide operations performed at multiple points on one or more touch sensitive display screens. In addition, when the system includes a plurality of terminals 1000, a plurality of servers 2000, and a plurality of networks 4000, different terminals 1000 may be connected to each other through different networks 4000 and through different servers 2000. The network 4000 may be a wireless network or a wired network, such as a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, and so on. In addition, different terminals 1000 may be connected to other terminals or a server using their own bluetooth network or hotspot network. For example, a plurality of users may be online through different terminals 1000 to be connected and synchronized with each other through a suitable network to support multiplayer games. In addition, the system may include a plurality of databases 3000, the plurality of databases 3000 being coupled to different servers 2000, and information related to the game environment may be continuously stored in the databases 3000 when different users play the multiplayer game online.
The embodiment of the application provides an information processing method, which can be executed by a terminal or a server. The embodiment of the present application is described as an example in which the information processing method is executed by a terminal. The terminal comprises a touch display screen and a processor, wherein the touch display screen is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface. When a user operates the graphical user interface through the touch display screen, the graphical user interface can control the local content of the terminal through responding to the received operation instruction, and can also control the content of the opposite-end server through responding to the received operation instruction. For example, the operation instruction generated by the user acting on the graphical user interface comprises an instruction for starting a game application, and the processor is configured to start the game application after receiving the instruction provided by the user for starting the game application. Further, the processor is configured to render and draw a graphical user interface associated with the game on the touch display screen. A touch display screen is a multi-touch sensitive screen capable of sensing a touch or slide operation performed at a plurality of points on the screen at the same time. The user uses a finger to perform touch operation on the graphical user interface, and when the graphical user interface detects the touch operation, different virtual objects in the graphical user interface of the game are controlled to perform actions corresponding to the touch operation. For example, the game may be any one of a leisure game, an action game, a role-playing game, a strategy game, a sports game, a game of chance, and the like. 
Wherein the game may include a virtual scene of the game drawn on a graphical user interface. Further, one or more virtual objects, such as virtual characters, controlled by the user (or player) may be included in the virtual scene of the game. Additionally, one or more obstacles, such as railings, ravines, walls, etc., may also be included in the virtual scene of the game to limit movement of the virtual objects, e.g., to limit movement of one or more objects to a particular area within the virtual scene. Optionally, the virtual scene of the game also includes one or more elements, such as skills, points, character health, energy, etc., to provide assistance to the player, provide virtual services, increase points related to player performance, etc. In addition, the graphical user interface may also present one or more indicators to provide instructional information to the player. For example, a game may include a player-controlled virtual object and one or more other virtual objects (such as enemy characters). In one embodiment, one or more other virtual objects are controlled by other players of the game. For example, one or more other virtual objects may be computer controlled, such as a robot using Artificial Intelligence (AI) algorithms, to implement a human-machine fight mode. For example, the virtual objects possess various skills or capabilities that the game player uses to achieve the goal. For example, the virtual object possesses one or more weapons, props, tools, etc. that may be used to eliminate other objects from the game. Such skills or capabilities may be activated by a player of the game using one of a plurality of preset touch operations with a touch display screen of the terminal. The processor may be configured to present a corresponding game screen in response to an operation instruction generated by a touch operation of a user.
As shown in fig. 2, a specific flow of the information processing method provided in the embodiment of the present application may be as follows:
step 101, displaying a user interface.
In the embodiment of the present application, the user interface includes a virtual scene, and a first virtual object and a second virtual object located in the virtual scene, where the second virtual object may be displayed in the current virtual scene or not displayed in the current virtual scene, for example, please refer to fig. 3 and 4 together, in fig. 3, the first virtual object and the second virtual object are both displayed in the current virtual scene; in fig. 4, a first virtual object is displayed in the current virtual scene, and a second virtual object is not displayed in the current virtual scene because it is manipulated by a computer or other user to move to a designated area.
The first virtual object is a virtual role played or controlled by the user in the virtual scene, such as a role representing the game account currently logged in to the application program on the terminal. The second virtual object may be any other virtual object existing in the virtual scene, and there may be one or more such objects; for example, the virtual scene may further include a third virtual object and a fourth virtual object, each of which may belong to the same camp as, or a camp hostile to, the first virtual object or the second virtual object. Another virtual object may be selected as the target of the released specified skill; such a target is a virtual role played or controlled by a computer or another user in the virtual scene, such as a role representing an enemy or a teammate, whereas the first virtual object in this embodiment is a virtual role played or controlled by the user.
In a specific implementation process, the computer device may render and generate a graphical user interface on a touch display screen of the computer device with the touch display screen by executing a game application program, so as to display the virtual scene and at least one virtual object in the virtual scene on the graphical user interface.
Step 102, releasing the specified skill to the second virtual object based on the current position of the first virtual object in the virtual scene.
In an embodiment, the virtual scene on the graphical user interface may further include at least one skill control area, and the skill control area includes at least one skill control. When the specified skill is released to the second virtual object based on the current position of the first virtual object in the virtual scene, specifically, an operation instruction triggered by the user through the skill control may be received, a target position indicated by the operation instruction in the virtual scene is determined, then, a candidate region is determined in the virtual scene based on the current position of the first virtual object in the virtual scene, and finally, it is determined whether the target position is within the candidate region. If so, adjusting the position distribution of the blocking area according to the target position and the range parameter, and if not, not adjusting the position distribution of the blocking area.
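The release check described above can be condensed into a single predicate: the target position indicated by the operation instruction must fall inside the candidate region determined from the caster's current position. A circular candidate region is assumed here for illustration; the patent does not fix its shape.

```python
def should_adjust(current_pos, target_pos, candidate_radius):
    """Return True if the target position lies within the (assumed circular)
    candidate region around the first virtual object's current position,
    i.e. whether the blocking-area distribution should be adjusted."""
    dx = target_pos[0] - current_pos[0]
    dy = target_pos[1] - current_pos[1]
    return dx * dx + dy * dy <= candidate_radius * candidate_radius
```

If the predicate is false, the skill release has no effect on the blocking distribution, matching the "if not, do not adjust" branch above.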
In an embodiment, the first virtual object may select the second virtual object and release the specified skill corresponding to the skill control to the second virtual object by dragging the skill control, sliding the skill control, or clicking the second virtual object. The second virtual object may be the first virtual object itself or may be another virtual object.
In one embodiment, the skill controls are at a different level than the virtual scene. For example, the display hierarchy of the skill control is higher than the display hierarchy of the game scenario. For example, taking a terminal with a touch display screen as an example, a graphical user interface may be generated by executing a game application program to render on the touch display screen, where a virtual scene on the graphical user interface includes at least one skill control area, the skill control area includes at least one skill control, and the virtual scene on the graphical user interface may further include at least one virtual object. For example, the specified skill may include passing the virtual object over a blocking object in the virtual scene, all obstructions or other virtual objects in the virtual scene, and so on.
For example, as shown in fig. 3, the graphical user interface 100 may be generated by rendering on a touch display screen of the terminal 1000 through execution of a game application, and a virtual scene on the graphical user interface 100 includes at least one skill control area, and the skill control area includes at least one skill control 12 and at least one first virtual object 1 a. Each skill control corresponds to a designated skill, and the corresponding relation between the skill control and the designated skill can be set by default of a system or can be set by user definition. The skill control area shown in fig. 3 may be a skill layout of a normal state in a virtual scene, and the user may control the virtual object 1a to release the corresponding specified skill in the virtual scene by clicking on a single skill control.
The first virtual object that releases the designated skill is a virtual role played or controlled by the user in the virtual scene, such as a role representing the user. The target object selected for skill release is a virtual role played or controlled by a computer or another user in the virtual scene, such as a role representing an enemy or a teammate. In the embodiment of the present application, the first virtual object is the virtual role played or controlled by the user, the second virtual object is the target object selected for skill release, and the second virtual object may be a virtual role played or controlled by a computer or another user.
Step 103, acquiring a skill parameter corresponding to the specified skill and the position distribution of the blocking area of the current second virtual object in the virtual scene.
Specifically, the skill parameter corresponding to the specified skill can be obtained in response to the specified skill for the skill control. For example, the skill control in the virtual scene may be clicked by the user, so as to trigger the skill control to correspond to the specified skill, and thus obtain the skill parameter corresponding to the specified skill.
The skill parameter may include a range parameter, among others. The range parameter may be used to determine the action area of the specified skill in the virtual scene. The action area may have any shape, such as a circle, a rectangle, a polygon, or an irregular shape. For example, when the action area is circular, the radius of the circle and the position of its center are obtained, and the boundary of the circular action area is determined from the radius parameter and the center position of the circular action area.
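For the circular case, a hypothetical Python sketch (all names are invented for illustration, not taken from the patent) of deriving the action area and its boundary from a radius parameter and a center position on a discrete grid:

```python
import math

def circular_action_region(center, radius):
    """Return the set of grid cells inside a circular action area.

    `center` is an (x, y) grid coordinate and `radius` is the range
    parameter of the specified skill; both names are hypothetical.
    """
    cx, cy = center
    cells = set()
    r = int(math.ceil(radius))
    for x in range(cx - r, cx + r + 1):
        for y in range(cy - r, cy + r + 1):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                cells.add((x, y))
    return cells

def region_boundary(cells):
    """Cells of the region with at least one 4-neighbour outside it."""
    return {
        (x, y) for (x, y) in cells
        if any(n not in cells
               for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))
    }
```

The boundary returned here is the set of cells whose attribute would be set to not walkable when the skill is applied.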
Optionally, the skill parameter may further include an effective duration, which is the period from when the specified skill is triggered to the end of the skill. For example, the effective duration may run from the moment the user clicks the skill control in the virtual scene, thereby triggering the corresponding specified skill, to the end of the skill.
In one embodiment, each virtual object in the virtual scene belongs to a corresponding position data layer, and the position data layer is constructed in advance and then associated with the virtual object. The construction proceeds as follows: first, the computer device creates at least one preset position data layer and assigns it a position data layer identifier; second, the preset position data layer is associated with a specified file path; the position data corresponding to that path is then determined; finally, the position data is stored in the preset position data layer, yielding the position data layer. A pre-built position data layer can be associated with one or more virtual objects; for example, virtual objects belonging to the same game faction in a game scene are associated with the same position data layer, and a virtual object can switch between at least two position data layers. To support modifying a specified layer and querying its blocking data, the pre-built position data layers are stored in a mapping table, with the position data layer identifier as the key and the position data layer as the value. At run time, the computer device uses the mapping table to find the position data layer corresponding to a virtual object's layer identifier; the layer stores the corresponding position data, from which the blocking area and active area of the virtual object in the virtual scene can be determined, so both can be looked up quickly.
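A minimal sketch of such a mapping table in Python, with the layer identifier as key and the layer as value (all class and field names here are hypothetical, not the patent's actual data structures):

```python
class PositionDataLayer:
    """A pre-built layer holding blocking data for a set of grid cells."""
    def __init__(self, layer_id, blocked):
        self.layer_id = layer_id     # position data layer identifier (the key)
        self.blocked = set(blocked)  # grid cells marked as not walkable

# Mapping table: layer identifier -> layer, so the layer for a virtual
# object can be looked up and modified at run time by its identifier.
layer_table = {}

def register_layer(layer):
    layer_table[layer.layer_id] = layer

def layer_for(virtual_object):
    """Resolve a virtual object's layer through the mapping table."""
    return layer_table[virtual_object["layer_id"]]
```

Switching a virtual object between layers then amounts to changing the identifier it carries, which is why one layer can serve a whole faction.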
When obtaining the current position distribution of the second virtual object's blocking area in the virtual scene, the position data layer identifier corresponding to the second virtual object may first be obtained; the position data layer corresponding to that identifier is then found, and the blocking area and active area of the second virtual object in the virtual scene are determined from the position data layer, thereby determining the current position distribution of the second virtual object's blocking area in the virtual scene.
The position data layer contains position data organized as arrays: each array subscript corresponds to a grid coordinate, and each entry stores the blocking/walkable information for that coordinate. In this embodiment, grid coordinates are discrete coordinates obtained by discretizing the scene coordinates; the scene coordinates, which cover every coordinate point in the virtual scene, are continuous.
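As an illustrative sketch (the cell size, grid dimensions, and all names are assumptions, not values from the patent), the relation between continuous scene coordinates, discrete grid coordinates, and the flat position-data array could look like:

```python
CELL_SIZE = 1.0      # width of one grid cell in scene units (hypothetical)
GRID_W, GRID_H = 64, 64

def to_grid(scene_x, scene_y):
    """Discretize continuous scene coordinates into grid coordinates."""
    return int(scene_x // CELL_SIZE), int(scene_y // CELL_SIZE)

def cell_index(gx, gy):
    """Array subscript of a grid coordinate in the flat position array."""
    return gy * GRID_W + gx

# One flat array per layer; each entry stores the blocking/walkable flag.
position_data = [False] * (GRID_W * GRID_H)  # False = walkable

def is_blocked(scene_x, scene_y):
    return position_data[cell_index(*to_grid(scene_x, scene_y))]
```

Adjusting the blocking area then reduces to flipping entries of `position_data` for the cells covered by the skill's action area.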
Step 104, adjusting the position distribution of the blocking area based on the current position and the skill parameter.
In this embodiment of the application, when adjusting the position distribution of the blocking area based on the current position and the skill parameter, taking a skill parameter that includes a range parameter and an effective duration as an example, the following specific cases may be considered:
optionally, the first position data layer associated with the second virtual object may be determined based on the position distribution of the blocking area; the initial position data in the first position data layer is adjusted according to the range parameter to obtain an adjusted first position data layer; and finally, the position distribution of the blocking area corresponding to the second virtual object is updated based on the adjusted first position data layer. Specifically, the method comprises the following steps:
(1) when the first position data layer associated with the second virtual object is determined based on the position distribution of the blocking area, the position data layer corresponding to the position data identifier may be found according to the position data identifier corresponding to the second virtual object, where the position data layer is the first position data layer.
(2) When adjusting the position data in the first position data layer according to the skill parameter, the size of the action area can be determined from the range parameter, the position data in the first position data layer is modified according to the range parameter, and the attribute of the position data on the blocking boundary determined by the action area is set to not walkable, yielding the adjusted first position data layer.
(3) The position distribution of the blocking area corresponding to the second virtual object is updated based on the adjusted first position data layer: the adjusted layer's position data is applied to the second virtual object, and the blocking area and active area of the second virtual object in the virtual scene are updated, thereby determining the modified position distribution of the second virtual object's blocking area in the virtual scene.
In an embodiment, when the duration of the skill release reaches the effective duration, the adjusted position data in the first position data layer may be restored to the initial position data, and the position distribution of the blocking area corresponding to the second virtual object is updated based on the restored first position data layer.
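A minimal Python sketch of adjusting the first position data layer in place and restoring it once the effective duration elapses (the names and data structures are hypothetical; a layer is reduced here to a set of blocked cells):

```python
def apply_skill_in_place(blocked, action_boundary):
    """Mark the action area's blocking boundary as not walkable in the
    first position data layer, remembering each cell's initial state."""
    snapshot = {cell: cell in blocked for cell in action_boundary}
    blocked.update(action_boundary)
    return snapshot

def restore_initial_data(blocked, snapshot):
    """Restore the adjusted position data to the initial position data
    once the skill has lasted for its effective duration."""
    for cell, was_blocked in snapshot.items():
        if not was_blocked:
            blocked.discard(cell)
```

The snapshot keeps only the touched cells, so restoring does not disturb blocking data elsewhere in the layer.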
Optionally, the first position data layer associated with the second virtual object may be determined based on the position distribution of the blocking area. A second position data layer is then created based on the first position data layer, with the same position data as the first position data layer. The position data in the second position data layer is adjusted according to the range parameter to obtain an adjusted second position data layer. The association between the second virtual object and the first position data layer is then released, and an association between the second virtual object and the adjusted second position data layer is established. Finally, the position distribution of the blocking area corresponding to the second virtual object is updated based on the adjusted second position data layer. Specifically, the method comprises the following steps:
(1) when the first position data layer associated with the second virtual object is determined based on the position distribution of the blocking area, the position data layer corresponding to the position data identifier may be found according to the position data identifier corresponding to the second virtual object, where the position data layer is the first position data layer.
(2) A second location data layer may be created based on the first location data layer, the location data in the second location data layer being the same as the location data in the first location data layer.
(3) When adjusting the position data in the second position data layer according to the skill parameter, the size of the action area can be determined from the range parameter and the position data in the second position data layer modified accordingly; the attribute of the position data on the blocking boundary determined by the action area is then set to not walkable and/or the attribute of the position data of the blocking areas inside the action area is set to walkable, yielding the adjusted second position data layer.
(4) The association between the second virtual object and the first position data layer is released, and an association between the second virtual object and the adjusted second position data layer is established.
(5) The position distribution of the blocking area corresponding to the second virtual object may be updated based on the adjusted second position data layer: the adjusted layer's position data is applied to the second virtual object, and the blocking area and active area of the second virtual object in the virtual scene are updated, thereby determining the modified position distribution of the second virtual object's blocking area in the virtual scene.
In an embodiment, when the duration of the skill release reaches the effective duration, the association between the second virtual object and the adjusted second position data layer is released, and the association between the second virtual object and the first position data layer is re-established; the position distribution of the blocking area corresponding to the second virtual object is then updated based on the first position data layer.
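The copy-adjust-swap sequence in steps (2) to (5) above might be sketched as follows in Python; every name here is hypothetical, and a layer is simplified to a set of blocked grid cells:

```python
import copy

# Mapping table of layers (hypothetical data): the second virtual object
# is initially associated with the first position data layer "base".
layers = {"base": {"blocked": {(2, 2)}}}
hero = {"layer_id": "base"}

def release_skill(obj, boundary, interior):
    """Copy the object's first layer into a second layer, adjust the copy
    (boundary -> not walkable, interior blocks -> walkable), and
    re-associate the object with the adjusted copy."""
    first = layers[obj["layer_id"]]
    second = copy.deepcopy(first)
    second["blocked"] |= boundary      # blocking boundary: not walkable
    second["blocked"] -= interior      # blocks inside the area: walkable
    layers["skill_copy"] = second
    obj["prev_layer_id"], obj["layer_id"] = obj["layer_id"], "skill_copy"

def skill_expired(obj):
    """Re-associate the object with its first layer when the effective
    duration is reached, and discard the adjusted copy."""
    obj["layer_id"] = obj.pop("prev_layer_id")
    layers.pop("skill_copy", None)
```

Because the first layer is never modified, objects still associated with it are unaffected while the skill is active.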
Optionally, in a further embodiment, the operations are specifically as follows:
(1) The position data layer corresponding to the position data layer identifier of the second virtual object can be found; this layer is the first position data layer. Meanwhile, the position data layer corresponding to the position data layer identifier of the first virtual object is found.
(2) A second location data layer is created based on the first location data layer, the location data in the second location data layer being the same as the location data in the first location data layer.
(3) The position data in the second position data layer is adjusted according to the skill parameter to obtain the adjusted second position data layer: first, the size of the action area is determined from the range parameter and the position data in the second position data layer is modified accordingly; then the attribute of the position data on the blocking boundary determined by the action area is set to not walkable, and the attribute of the position data of the blocking areas inside the action area is set to walkable.
(4) First, the association between the second virtual object and the first position data layer is released, and the association between the first virtual object and its corresponding position data layer is released; then, associations between the first virtual object, the second virtual object, and the adjusted second position data layer are established.
(5) The position distribution of the blocking areas corresponding to the first virtual object and the second virtual object may be updated based on the adjusted second position data layer: the adjusted layer's position data is applied to the first and second virtual objects, and their blocking areas and active areas in the virtual scene are updated, thereby determining the modified position distribution of their blocking areas in the virtual scene.
In an embodiment, when the duration of the skill release reaches the effective duration: first, the association between the first virtual object and the adjusted second position data layer is released, and the association between the second virtual object and the adjusted second position data layer is released; meanwhile, the first virtual object is re-associated with its initially corresponding position data layer, and the second virtual object is re-associated with the first position data layer; then, the position distribution of the blocking area corresponding to the second virtual object is updated based on the first position data layer, and the position distribution of the blocking area corresponding to the first virtual object is updated based on its initially corresponding position data layer.
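Under the assumptions above, a minimal Python sketch (all names hypothetical) of placing two virtual objects on one shared, adjusted layer and separating them again afterwards:

```python
# Hypothetical mapping table: hero 1a and hero 1b start on separate layers.
layers = {
    "L1a": {"blocked": {(0, 1)}},   # layer initially associated with hero 1a
    "L1b": {"blocked": {(4, 4)}},   # first layer, associated with hero 1b
}
hero_1a = {"layer_id": "L1a"}
hero_1b = {"layer_id": "L1b"}

def share_adjusted_layer(a, b, boundary, interior):
    """Copy b's first layer, adjust it, and associate BOTH objects with
    the copy so they sit inside the same blocking range."""
    base = layers[b["layer_id"]]["blocked"]
    layers["shared"] = {"blocked": (set(base) | boundary) - interior}
    for obj in (a, b):
        obj["prev"], obj["layer_id"] = obj["layer_id"], "shared"

def unshare(a, b):
    """When the effective duration is reached, restore each object's
    original layer association."""
    for obj in (a, b):
        obj["layer_id"] = obj.pop("prev")
```

Sharing one adjusted layer keeps the two objects' walkable space identical for the skill's duration without touching anyone else's layer.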
Optionally, in this embodiment of the application, when a movement instruction for the second virtual object is received: first, the end position to which the second virtual object needs to move is determined from the instruction; next, the current position distribution of the second virtual object's active area in the virtual scene is obtained; then, based on the position distributions of the blocking area and the active area, a plurality of movable paths of the second virtual object are determined; a target path along which the second virtual object can move from its current position to the end position is selected from these paths; and finally, the second virtual object is controlled to move to the end position in the virtual scene along the target path.
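A movable-path search of this kind is commonly implemented as a breadth-first search over the walkable grid cells. The sketch below is a hypothetical illustration of that idea, not the patent's actual algorithm:

```python
from collections import deque

def find_target_path(start, end, blocked, width, height):
    """Breadth-first search over walkable cells: among the movable paths,
    return one shortest target path from the current position to the end
    position, or None if the end position cannot be reached."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == end:                 # reconstruct the target path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None
```

Because the search consults only the object's own layer's blocked set, adjusting that layer immediately changes which paths are movable for that object.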
To sum up, the embodiments of the present application provide an information processing method, an information processing apparatus, a computer device, and a storage medium. A user interface is displayed, where the user interface includes a virtual scene and a first virtual object and a second virtual object located in the virtual scene; a specified skill is released to the second virtual object based on the current position of the first virtual object in the virtual scene; the skill parameter corresponding to the specified skill and the current position distribution of the second virtual object's blocking area in the virtual scene are obtained; and finally the position distribution of the blocking area is adjusted based on the current position and the skill parameter. By creating position data layers, the embodiments of the application can adjust the position distribution of each virtual object's blocking area independently, improving the interaction accuracy between virtual objects and the virtual scene in the game and refining the adjustment granularity.
Referring to fig. 3 to fig. 8, in particular, the embodiment of the present application provides the following possible application scenarios of the information processing method:
optionally, in a first case, please refer to fig. 3 and fig. 4 together. In this embodiment, the first virtual object is a virtual character played or controlled by the user in the virtual scene, namely hero 1a, and the second virtual object selected by hero 1a is hero 1a itself. The first position data layer associated with hero 1a can be determined based on the position distribution of the blocking area; that is, the position data layer corresponding to the position data layer identifier of hero 1a is found, and this layer is the first position data layer. Hero 1a operates the third skill control 123 to release the specified skill on itself, in response to the specified skill of the third skill control 123, which enables hero 1a to cross any blockage other than the map boundary. The following operations are performed: a second position data layer is created based on the first position data layer, with the same position data as the first position data layer; the position data in the second position data layer is then adjusted according to the skill parameter, that is, the size of the action area is determined from the range parameter, the position data in the second position data layer is modified accordingly, the attribute of the position data on the virtual scene boundary determined by the action area is set to not walkable, and the attribute of the position data of all blocking areas inside the action area is set to walkable, yielding the adjusted second position data layer. Then, hero 1a is disassociated from the first position data layer and associated with the adjusted second position data layer.
Finally, the position distribution of the blocking area corresponding to hero 1a is updated based on the adjusted second position data layer: the adjusted layer's position data is applied to hero 1a, and the blocking area and active area of hero 1a in the virtual scene are updated, thereby determining the modified position distribution of hero 1a's blocking area in the virtual scene and allowing hero 1a to cross any blockage except the map boundary of the virtual scene.
In some embodiments, when the duration of the skill released through the third skill control 123 reaches the effective duration, hero 1a is disassociated from the adjusted second position data layer and re-associated with the first position data layer; the position distribution of the blocking area corresponding to hero 1a is then updated based on the first position data layer.
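As a minimal, hypothetical Python sketch of this first case (all names and sample data are invented for illustration), the adjusted second layer keeps only the virtual-scene boundary as not walkable, so hero 1a can cross every other block without leaving the map:

```python
def adjusted_layer_for_crossing(first_layer_blocked, map_boundary):
    """Build the adjusted second layer's blocked set for the 'cross any
    blockage' skill: blocking areas inside the scene become walkable,
    while the scene boundary stays not walkable."""
    interior_blocks = first_layer_blocked - map_boundary
    return (first_layer_blocked - interior_blocks) | map_boundary

# Hypothetical data: a wall inside the map plus part of the scene edge.
map_boundary = {(0, y) for y in range(4)}
first_layer_blocked = {(2, 1), (2, 2)} | map_boundary

adjusted = adjusted_layer_for_crossing(first_layer_blocked, map_boundary)
```

The first layer is left untouched, so restoring hero 1a's normal movement is just a matter of re-associating the original layer.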
Alternatively, in a second case, please refer to fig. 5 and fig. 6 together. The first virtual object in this embodiment is a virtual character played or controlled by the user in the virtual scene, namely hero 1a, and the second virtual objects selected by hero 1a are hero 1a itself and hero 1b. The first position data layer associated with hero 1b can be determined based on the position distribution of the blocking area: the position data layer corresponding to the position data layer identifier of hero 1b is found, and this layer is the first position data layer; the position data layer corresponding to the position data layer identifier of hero 1a is found as well. Hero 1a operates the fourth skill control 124 to release the specified skill on hero 1b, in response to the specified skill of the fourth skill control 124, which places hero 1a and hero 1b within the same blocking range. The following operations are performed: a second position data layer is created based on the first position data layer, with the same position data as the first position data layer. The position data in the second position data layer is then adjusted according to the skill parameter: the position data is modified according to the range parameter, the attribute of the position data on the blocking boundary determined by the action area is set to not walkable, and the attribute of the position data of the blocking areas inside the action area is set to walkable, yielding the adjusted second position data layer.
Then, hero 1b is disassociated from the first position data layer, hero 1a is disassociated from its corresponding position data layer, and hero 1a and hero 1b are both associated with the adjusted second position data layer. Finally, the position distribution of the blocking areas corresponding to hero 1a and hero 1b is updated based on the adjusted second position data layer: the adjusted layer's position data is applied to hero 1a and hero 1b, and the blocking areas and active areas of hero 1a and hero 1b in the virtual scene are updated, thereby determining the modified position distribution of their blocking areas in the virtual scene.
In some embodiments, when the duration of the skill release reaches the effective duration, hero 1a is disassociated from the adjusted second position data layer, hero 1b is disassociated from the adjusted second position data layer, hero 1a is re-associated with its initially corresponding position data layer, and hero 1b is re-associated with the first position data layer. The position distribution of the blocking area corresponding to hero 1b is then updated based on the first position data layer, and the position distribution of the blocking area corresponding to hero 1a is updated based on its initially corresponding position data layer.
Optionally, in a third case, please refer to fig. 7 and fig. 8 together. For example, a first virtual object 1a and a second virtual object 1b exist in the virtual interface and belong to different, opposing factions in the game scene. The first virtual object 1a operates the third skill control 123 to release the specified skill on the second virtual object 1b; the first position data layer associated with the second virtual object 1b is determined based on the position distribution of the blocking area; and, in response to the specified skill of the third skill control 123, a blockage is formed around the second virtual object 1b. The following operations are performed: first, the position data in the first position data layer is adjusted according to the skill parameter, that is, the size of the action area is determined from the range parameter, the position data in the first position data layer is modified accordingly, and the attribute of the position data on the blocking boundary determined by the action area is set to not walkable, yielding the adjusted first position data layer. Then, the position distribution of the blocking area corresponding to the second virtual object 1b is updated based on the adjusted first position data layer. Finally, the adjusted layer's position data is applied to the second virtual object 1b, and its blocking area and active area in the virtual scene are updated, thereby determining the modified position distribution of its blocking area in the virtual scene.
In an embodiment, when the duration of the skill release reaches the effective duration, the adjusted position data in the first position data layer is restored to the initial position data, and the position distribution of the blocking area corresponding to the second virtual object 1b is updated based on the restored first position data layer.
In order to better implement the information processing method according to the embodiment of the present application, an embodiment of the present application further provides an information processing apparatus. Referring to fig. 9, fig. 9 is a schematic structural diagram of an information processing apparatus 300 according to an embodiment of the present disclosure. The information processing apparatus 300 may include a display unit 301, a processing unit 302, an acquisition unit 303, and an adjustment unit 304.
The display unit 301 is configured to display a user interface, where the user interface includes a virtual scene, and a first virtual object and a second virtual object located in the virtual scene.
A processing unit 302, configured to release a specified skill to a second virtual object based on a current position of the first virtual object in the virtual scene.
An obtaining unit 303, configured to obtain a skill parameter corresponding to the specified skill, and a current position distribution of the blocking area of the second virtual object in the virtual scene.
An adjusting unit 304, configured to adjust the position distribution of the blocking area based on the current position and the skill parameter.
Optionally, the adjusting unit 304 is configured to:
and adjusting the position distribution of the blocking area based on the current position and the range parameter.
Optionally, the apparatus further includes a receiving unit, where the receiving unit is configured to:
and receiving an operation instruction triggered by the skill control by the user.
Optionally, the apparatus further includes a determining unit, where the determining unit is configured to:
determining a target position indicated by the operation instruction in the virtual scene;
determining a candidate region in the virtual scene based on a current position of the first virtual object in the virtual scene.
Optionally, the determining unit is further configured to:
determining whether the target location is within the candidate region;
and if so, adjusting the position distribution of the blocking area according to the target position and the range parameter.
Optionally, the determining unit is configured to:
determining a first position data layer associated with the second virtual object based on the position distribution of the blocking regions.
Optionally, the adjusting unit 304 is configured to:
adjusting initial position data in the first position data layer according to the range parameter to obtain an adjusted first position data layer;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the adjusted first position data layer.
Optionally, the processing unit is further configured to:
when the duration of the skill release reaches the effective duration, restoring the adjusted position data in the first position data layer into the initial position data;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the first position data layer after the original position data is restored.
Optionally, the processing unit 302 is further configured to:
determining a first location data layer associated with the second virtual object based on the location distribution of the blocked regions;
creating a second location data layer based on the first location data layer, the location data in the second location data layer being the same as the location data in the first location data layer;
adjusting the position data in the second position data layer according to the range parameter to obtain an adjusted second position data layer;
releasing the association between the second virtual object and the first position data layer, and establishing an association between the second virtual object and the adjusted second position data layer;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the adjusted second position data layer.
Optionally, the processing unit 302 is further configured to:
when the duration of the skill release reaches the effective duration, removing the association between the second virtual object and the adjusted second position data layer, and establishing the association between the second virtual object and the first position data layer;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the first position data layer.
Optionally, the receiving unit is further configured to:
and receiving a movement instruction for the second virtual object.
Optionally, the determining unit is further configured to:
and determining the end point position of the second virtual object to be moved in the virtual scene according to the moving instruction.
Optionally, the obtaining unit 303 is further configured to:
and acquiring the position distribution of the active area of the second virtual object in the virtual scene.
Optionally, the determining unit is further configured to:
determining a plurality of movable paths of the second virtual object based on the position distribution of the blocking areas and the position distribution of the active areas;
determining a target path from the plurality of movable paths that the second virtual object can move from a current position to the end position.
Optionally, the processing unit 302 is further configured to:
and controlling the second virtual object to move to the end position according to the target path in the virtual scene.
Optionally, the determining unit is further configured to:
and determining the action area of the specified skill in the virtual scene according to the adjustment result of the position distribution of the blocking area.
Optionally, the display unit 301 is further configured to:
and displaying the skill effect of the specified skill on the action area.
In the information processing apparatus 300 provided in the embodiment of the present application, the display unit 301 displays a user interface, where the user interface includes a virtual scene, and a first virtual object and a second virtual object located in the virtual scene; the processing unit 302 releases the specified skill to the second virtual object based on the current position of the first virtual object in the virtual scene; the obtaining unit 303 obtains a skill parameter corresponding to the specified skill and a position distribution of a blocking area of the second virtual object in the virtual scene; the adjusting unit 304 adjusts the position distribution of the blocking area based on the current position and the skill parameter. According to the embodiment of the application, the position distribution of the blocking area of the virtual object can be independently adjusted, the accuracy of adjusting the blocking area of the virtual object is improved, and the adjustment granularity is improved.
Correspondingly, an embodiment of the present application further provides a computer device, which may be a terminal or a server. The terminal may be a device such as a smartphone, a tablet computer, a notebook computer, a touch-screen device, a game console, a personal computer (PC), or a personal digital assistant (PDA). As shown in fig. 10, fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored in the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the configuration illustrated in the figure does not limit the computer device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The processor 401 is the control center of the computer device 400. It connects the various parts of the entire computer device 400 through various interfaces and lines, and performs the various functions of the computer device 400 and processes its data by running or loading software programs and/or modules stored in the memory 402 and invoking data stored in the memory 402, thereby monitoring the computer device 400 as a whole.
In the embodiment of the present application, the processor 401 in the computer device 400 loads instructions corresponding to the processes of one or more application programs into the memory 402, and runs the application programs stored in the memory 402, thereby implementing the following functions:
displaying a graphical user interface, where the graphical user interface includes a virtual scene, a first virtual object, a second virtual object, and at least one skill control area, all located in the virtual scene; the skill control area includes at least one skill control, and each skill control corresponds to a specified skill of a virtual object in the virtual scene; releasing a specified skill to the second virtual object based on the current position of the first virtual object in the virtual scene; acquiring a skill parameter corresponding to the specified skill and the position distribution of the blocking area of the second virtual object in the virtual scene; and adjusting the position distribution of the blocking area based on the current position and the skill parameter.
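As an illustration of the final adjustment step, the sketch below treats the blocking area as a 2D occupancy grid and clears cells within the skill's range parameter of the first virtual object's current position, returning a separate adjusted layer so the original can be restored after an effective duration. The grid representation, the Euclidean range test, and all names are assumptions for illustration, not the patent's actual data structures.

```python
import copy
import math

def adjust_blocking_layer(first_layer, current_pos, range_param):
    """Illustrative sketch: copy the position data layer, clear blocking
    cells within `range_param` of the first virtual object's current
    position, and return the adjusted copy. Keeping the original layer
    untouched makes it trivial to restore the initial position data when
    the skill's effective duration elapses."""
    second_layer = copy.deepcopy(first_layer)  # adjusted layer, original preserved
    cx, cy = current_pos
    for r, row in enumerate(second_layer):
        for c, blocked in enumerate(row):
            if blocked and math.hypot(r - cx, c - cy) <= range_param:
                row[c] = 0  # the released skill removes blocking here
    return second_layer
```

Restoring the blocking area after the effective duration then amounts to re-associating the second virtual object with the untouched original layer, rather than recomputing anything.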
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 10, the computer device 400 further includes a touch display screen 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406, and a power supply 407. The processor 401 is electrically connected to the touch display screen 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407. Those skilled in the art will appreciate that the computer device architecture illustrated in FIG. 10 does not limit the computer device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The touch display screen 403 may be used to display a graphical user interface and to receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect the user's touch operations on or near it (for example, operations performed on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions, which in turn trigger the corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 401; it can also receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 401 to determine the type of the touch event, and the processor 401 then provides the corresponding visual output on the display panel according to that type.
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display screen 403 may also serve as part of the input unit 406 to implement an input function.
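The touch pipeline described above — the detection device captures the signal, the touch controller converts it into coordinates, and the processor determines the touch event type — can be illustrated with a toy classifier. The event types and the movement threshold below are assumptions for illustration only; a real driver stack distinguishes many more event types.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    kind: str   # illustrative event types: "tap" or "drag"
    x: float
    y: float

def classify_touch(down, up, move_threshold=8.0):
    """Toy stand-in for the processor's event-type decision. By this
    point the touch controller has already converted the raw signal
    into touch point coordinates (`down` and `up`)."""
    (x0, y0), (x1, y1) = down, up
    if abs(x1 - x0) <= move_threshold and abs(y1 - y0) <= move_threshold:
        return TouchEvent("tap", x1, y1)
    return TouchEvent("drag", x1, y1)
```

The returned event type is what the processor would use to select the corresponding visual output on the display panel.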
In the embodiment of the present application, a game application is executed by the processor 401 to generate a graphical user interface on the touch display screen 403, and a virtual scene on the graphical user interface includes at least one skill control. The touch display screen 403 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.
The radio frequency circuit 404 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device, and thereby exchange signals with that device.
The audio circuit 405 may provide an audio interface between the user and the computer device through a speaker and a microphone. In one direction, the audio circuit 405 converts received audio data into an electrical signal and transmits it to the speaker, which converts it into a sound signal for output; in the other direction, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 405 receives and converts into audio data. The audio data is then output to the processor 401 for processing, after which it may be sent, for example, to another computer device via the radio frequency circuit 404, or output to the memory 402 for further processing. The audio circuit 405 may also include an earphone jack that allows a peripheral headset to communicate with the computer device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 supplies power to the various components of the computer device 400. Optionally, the power supply 407 may be logically connected to the processor 401 through a power management system, so that charging, discharging, and power consumption management are handled by the power management system. The power supply 407 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.
Although not shown in fig. 10, the computer device 400 may further include a camera, sensors, a wireless fidelity (Wi-Fi) module, a Bluetooth module, and the like, which are not described in detail here.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment displays a user interface that includes a virtual scene and a first virtual object and a second virtual object located in the virtual scene; releases a specified skill to the second virtual object based on the current position of the first virtual object in the virtual scene; acquires a skill parameter corresponding to the specified skill and the position distribution of the blocking area of the second virtual object in the virtual scene; and adjusts the position distribution of the blocking area based on the current position and the skill parameter. In this way, the position distribution of the blocking area of a virtual object can be adjusted independently, which improves both the accuracy and the granularity of the adjustment.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be completed by instructions, or by related hardware controlled by instructions, where the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any one of the information processing methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
displaying a user interface, wherein the user interface comprises a virtual scene, a first virtual object and a second virtual object which are positioned in the virtual scene; releasing a specified skill to a second virtual object based on a current location of the first virtual object in the virtual scene; acquiring skill parameters corresponding to the specified skills and the position distribution of the blocking area of the second virtual object in the virtual scene; adjusting the position distribution of the blocking area based on the current position and the skill parameter.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Since the computer programs stored in the storage medium can execute the steps of any information processing method provided in the embodiments of the present application, they can achieve the beneficial effects of any such method; for details, refer to the foregoing embodiments, which are not repeated here.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The information processing method, information processing apparatus, computer device, and storage medium provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is intended only to help understand the technical solutions and core ideas of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (12)

1. An information processing method characterized by comprising:
displaying a user interface, wherein the user interface comprises a virtual scene, and a first virtual object and a second virtual object which are positioned in the virtual scene;
releasing a specified skill to the second virtual object based on the current location of the first virtual object in the virtual scene;
acquiring skill parameters corresponding to the specified skills and the position distribution of the blocking area of the second virtual object in the virtual scene;
adjusting the position distribution of the blocking area based on the current position and the skill parameter.
2. The method of claim 1, wherein the skill parameters comprise: a range parameter; the adjusting the position distribution of the blocking area based on the current position and the skill parameter comprises:
adjusting the position distribution of the blocking area based on the current position and the range parameter.
3. The method of claim 2, wherein the user interface displays a skill control, and releasing the specified skill to the second virtual object based on the current position of the first virtual object in the virtual scene comprises:
receiving an operation instruction triggered by a user through the skill control;
determining a target position indicated by the operation instruction in the virtual scene;
the adjusting the position distribution of the blocking area based on the current position and the skill parameter comprises:
determining a candidate region in the virtual scene based on a current position of the first virtual object in the virtual scene;
determining whether the target location is within the candidate region;
and if so, adjusting the position distribution of the blocking area according to the target position and the range parameter.
4. The method of claim 3, wherein the adjusting the position distribution of the blocking area according to the target position and the range parameter comprises:
determining a first position data layer associated with the second virtual object based on the position distribution of the blocking area;
adjusting initial position data in the first position data layer according to the range parameter to obtain an adjusted first position data layer;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the adjusted first position data layer.
5. The method of claim 4, wherein the skill parameters further comprise: an effective duration; after the updating the position distribution of the blocking area corresponding to the second virtual object based on the adjusted first position data layer, the method further comprises:
when the duration of the skill release reaches the effective duration, restoring the adjusted position data in the first position data layer to the initial position data;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the first position data layer in which the initial position data has been restored.
6. The method of claim 2, wherein the adjusting the location distribution of the blocked area based on the current location and the range parameter comprises:
determining a first position data layer associated with the second virtual object based on the position distribution of the blocking area;
creating a second location data layer based on the first location data layer, the location data in the second location data layer being the same as the location data in the first location data layer;
adjusting the position data in the second position data layer according to the range parameter to obtain an adjusted second position data layer;
the association between the second virtual object and the first position data layer is released, and the association between the second virtual object and the adjusted second position data layer is established;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the adjusted second position data layer.
7. The method of claim 6, wherein the skill parameters further comprise: an effective duration; after the updating the position distribution of the blocking area corresponding to the second virtual object based on the adjusted second position data layer, the method further comprises:
when the duration of the skill release reaches the effective duration, removing the association between the second virtual object and the adjusted second position data layer, and establishing the association between the second virtual object and the first position data layer;
and updating the position distribution of the blocking area corresponding to the second virtual object based on the first position data layer.
8. The method of any one of claims 1 to 7, further comprising:
when a movement instruction for the second virtual object is received, determining, according to the movement instruction, an end position to which the second virtual object needs to move in the virtual scene;
acquiring the position distribution of the active area of the second virtual object in the virtual scene;
determining a plurality of movable paths of the second virtual object based on the position distribution of the blocking areas and the position distribution of the active areas;
determining, from the plurality of movable paths, a target path along which the second virtual object can move from a current position to the end position;
and controlling the second virtual object to move to the end position according to the target path in the virtual scene.
9. The method according to any one of claims 1 to 7, wherein after the adjusting the position distribution of the blocking area based on the current position and the skill parameter, the method further comprises:
determining an action area of the specified skill in the virtual scene according to the adjustment result of the position distribution of the blocking area;
and displaying the skill effect of the specified skill in the action area.
10. An information processing apparatus characterized in that the apparatus comprises:
the display unit is used for displaying a user interface, and the user interface comprises a virtual scene, a first virtual object and a second virtual object which are positioned in the virtual scene;
the processing unit is used for releasing a specified skill to the second virtual object based on the current position of the first virtual object in the virtual scene;
the acquiring unit is used for acquiring a skill parameter corresponding to the specified skill and the position distribution of the blocking area of the second virtual object in the virtual scene;
and the adjusting unit is used for adjusting the position distribution of the blocking area based on the current position and the skill parameter.
11. A computer device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the information processing method according to any one of claims 1 to 9.
12. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the information processing method according to any one of claims 1 to 9.
CN202011488851.7A 2020-12-16 2020-12-16 Information processing method, information processing device, computer equipment and storage medium Pending CN112494942A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011488851.7A CN112494942A (en) 2020-12-16 2020-12-16 Information processing method, information processing device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112494942A (en) 2021-03-16

Family

ID=74972774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011488851.7A Pending CN112494942A (en) 2020-12-16 2020-12-16 Information processing method, information processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112494942A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682466A (en) * 2011-03-17 2012-09-19 腾讯科技(深圳)有限公司 Method, device and system for realizing dynamic blocking in three-dimensional role playing game
US20130109468A1 (en) * 2011-10-28 2013-05-02 Nintendo Co., Ltd. Game processing system, game processing method, game processing apparatus, and computer-readable storage medium having game processing program stored therein
JP2017012559A (en) * 2015-07-02 2017-01-19 株式会社スクウェア・エニックス Video game processing program, video game processing system, and user terminal
CN108159692A (en) * 2017-12-01 2018-06-15 网易(杭州)网络有限公司 Information processing method, device, electronic equipment and storage medium
CN111249735A (en) * 2020-02-14 2020-06-09 网易(杭州)网络有限公司 Path planning method and device of control object, processor and electronic device
CN111672103A (en) * 2020-06-05 2020-09-18 腾讯科技(深圳)有限公司 Virtual object control method in virtual scene, computer device and storage medium
CN111672127A (en) * 2020-06-06 2020-09-18 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination