WO2022142543A1 - Method and apparatus for controlling a prop, electronic device, and storage medium - Google Patents

Method and apparatus for controlling a prop, electronic device, and storage medium

Info

Publication number
WO2022142543A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
target virtual
prop
scene
scene object
Prior art date
Application number
PCT/CN2021/121356
Other languages
English (en)
French (fr)
Inventor
王文斌
陈辉辉
汤杰
Original Assignee
苏州幻塔网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州幻塔网络科技有限公司
Publication of WO2022142543A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/807 Role playing or strategy games
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present invention relates to the field of the Internet, and in particular, to a method and device for controlling a prop, an electronic device and a storage medium.
  • in some games, functions that rely on scene props (for example, ropes) with a traction function can be added to the game.
  • ropes can be used to move characters, adding scene-related interactions beyond regular character movement; rope-like functions can also be configured for individual characters in the game, such as using a rope to hook another character and pull it to one's side.
  • the above-mentioned games with the added rope function are generally single-player games or console games, not multi-player online games, and there is no problem of network synchronization.
  • some online games have added gameplay based on this type of scene prop, but only a certain character can use the special function, and the control method of the rope is relatively simple: the rope function is realized only in a specific level or scene, or only under specific conditions, so it interacts poorly with the scene and cannot be applied to an entire open game world.
  • in the related art, the above problems likewise exist.
  • the present invention provides a prop control method and apparatus, an electronic device, and a storage medium, so as to at least solve the problem in the related art of poor interactivity with the scene caused by the rope's single control method.
  • a method for controlling a prop, including: acquiring a target operation performed on a target client, wherein the target operation is used to trigger a target virtual character in a target game scene to launch a target virtual prop; in response to the target operation, displaying, through the target client, a picture of the target virtual character launching the target end of the target virtual prop, wherein the other end of the target virtual prop, other than the target end, is bound to a target part of the target virtual character; and, when the target end collides with a first scene object, pulling at least one of the target virtual character and the first scene object to move through the target virtual prop according to an object attribute of the first scene object.
  • an apparatus for controlling a prop, including: an acquisition unit configured to acquire a target operation performed on a target client, wherein the target operation is used to trigger a target virtual character in a target game scene to launch a target virtual prop, and one end of the target virtual prop is bound to the target virtual character; and a display unit configured to display, through the target client in response to the target operation, a picture of the target virtual character launching the target end of the target virtual prop.
  • an electronic device including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is configured to store a computer program, and the processor is configured to execute the method steps in any of the foregoing embodiments by running the computer program stored in the memory.
  • a computer-readable storage medium, where a computer program is stored in the storage medium, and the computer program is configured to execute the method steps in any of the foregoing embodiments when run.
  • in the embodiments of the present invention, interaction with the scene is controlled according to the object attributes of scene objects: the target operation performed on the target client is acquired, where the target operation is used to trigger the target virtual character in the target game scene to launch the target virtual prop, and one end of the target virtual prop is bound to the target virtual character; in response to the target operation, a picture of the target virtual character launching the target end of the target virtual prop is displayed through the target client, the target end not being bound to the target virtual character; and when the target end collides with the first scene object, at least one of the target virtual character and the first scene object is pulled by the target virtual prop to move according to the object attributes of the first scene object.
  • because at least one of the two is moved without the movement of the scene prop being restricted to particular levels or conditions, interaction with the scene and with other characters becomes convenient, achieving the technical effect of improving the interactivity between scene props and the scene and thereby solving the above problem.
  • FIG. 1 is a schematic diagram of a hardware environment of an optional prop control method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of an optional method for controlling a prop according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an optional prop control method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of another optional prop control method according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of another optional prop control method according to an embodiment of the present invention.
  • FIG. 6 is a structural block diagram of an optional prop control apparatus according to an embodiment of the present invention.
  • FIG. 7 is a structural block diagram of an optional electronic device according to an embodiment of the present invention.
  • Figure 8 schematically shows a block diagram of a computer program product implementing the method according to the invention.
  • in open-world games, players usually have a very large degree of freedom. Although the game has a main storyline, it is generally not prominent. In this type of game, players can accept a variety of side quests and experience many different play styles.
  • a method for controlling a prop is provided.
  • the above prop control method can be applied to the hardware environment including the terminal 102 and the server 104 as shown in FIG. 1 .
  • the server 104 is connected with the terminal 102 through the network and can be used to provide services (such as game services and application services) for the terminal or the client installed on the terminal; a database can be set up on the server, or independently of the server, to provide data storage services for the server 104.
  • the above-mentioned network includes, but is not limited to, at least one of the following: a wired network or a wireless network. The wired network may include, but is not limited to, at least one of the following: a wide area network, a metropolitan area network, or a local area network; the wireless network may include, but is not limited to, at least one of the following: Bluetooth, WiFi (Wireless Fidelity), and other networks that realize wireless communication.
  • the above-mentioned terminal 102 may be a terminal for computing data, such as a mobile terminal (eg, a mobile phone, a tablet computer), a notebook computer, a PC and other terminals.
  • the above-mentioned server may include, but is not limited to, any hardware device that can perform computing.
  • the method for controlling a prop in this embodiment of the present invention may be executed by the server 104 , may also be executed by the terminal 102 , or may be executed jointly by the server 104 and the terminal 102 .
  • the method for controlling a prop executed by the terminal 102 in the embodiment of the present invention may also be executed by a client installed on the terminal 102.
  • FIG. 2 is a schematic flowchart of an optional prop control method according to an embodiment of the present invention. As shown in FIG. 2, the flow of the method may include the following steps: step S202, step S204, and step S206.
  • step S202 the target operation performed on the target client is acquired, wherein the target operation is used to trigger the target virtual character in the target game scene to launch the target virtual prop, and one end of the target virtual prop is bound to the target virtual character.
  • the method for controlling props in this embodiment can be applied to controlling scene props in a game scene.
  • the above-mentioned game scene may be a game scene of a target game, for example, a game scene of a target network game, the target game may be a big world game, or other network games including scene props with traction function.
  • the target game can be executed by the client of the target game alone, that is, the target game is run by the client by itself (a single-player game); it can be executed by the background server of the target game, that is, the target game is executed by the server alone, while the client is only used to display the game screen of the target game, obtain operations on the game screen (for example, the operation position on the game screen), and synchronize them to the background server; or it can be executed jointly by the client of the target game and the background server of the target game, that is, the client and the background server each execute part of the logic of the target game, which is not limited in this embodiment.
  • in this embodiment, the target game is a target online game, and the case where the processing logic of the target online game is executed jointly by the background server and the client is taken as an example: the client obtains the user's operation information and synchronizes it to the background server, and the background server executes the processing logic of the game operation and synchronizes the processing results to the relevant clients.
  • for the case where the client executes the processing logic of the game operation and synchronizes the processing result to the background server, which in turn synchronizes the processing results to other clients, the prop control method in this embodiment is also applicable.
  • a client of the target online game may run on a terminal device of a user (an object, or a player), and the client may communicate with the background server of the target online game. Users can log in to the above-mentioned client running on their terminal devices by using an account and password, a dynamic password, an associated application login, and so on.
  • the target object (corresponding to the target user, which can be identified by the target account) can log in to the target client of the target network game by using the target account.
  • in a target game scene (e.g., an open game world or a big-world scene) of the target network game (e.g., an open-world game or a big-world game), the target object can control its corresponding avatar (i.e., the target avatar) to perform game operations, such as moving around the game map, using scene props, performing game tasks, and interacting with other players.
  • other objects can control their virtual characters to perform the same or similar game operations in the target game scene in the same or similar manner as the target object, and the target client can display a screen in which the avatars controlled by other objects perform game operations.
  • the scene props belonging to the target virtual character may include the target virtual prop, which is a scene prop with a traction function, for example, a target rope prop.
  • the target virtual prop may be placed in the target virtual character's storage space (e.g., backpack, warehouse).
  • the target virtual prop can be placed or fixed on the target part of the target virtual character, for example, a hand, an arm, a waist, and the like.
  • the target virtual prop can be placed in the target carrying prop, eg, a prop bag, and the target carrying prop can be fixed on the target part of the target virtual character.
  • One end of the target virtual prop can be bound to the target virtual character, for example, fixed on the target part of the target virtual character, and the other end can be launched, so that the target virtual character can be connected with other scene objects through the target virtual prop, to interact with the scene and other characters.
  • the target virtual item may be an exclusive item of the target virtual character, that is, the virtual item cannot be dropped, cannot be destroyed, and will not become unusable because its durability reaches 0 or for other reasons.
  • the target virtual item can also be a general-purpose item, can be discarded, can be picked up, can be dropped due to the death of the target virtual character, etc.
  • other attributes of the target virtual item except the pulling function are not limited.
  • the target object can control the target virtual character to perform different operations on the target game scene by operating the target client, for example, launch the target virtual prop.
  • the target client can obtain the target operation performed by the target object on the target client through the touch screen of the terminal device or other input devices.
  • the target operation is used to trigger the target virtual character to launch the target virtual prop, and it may be a click operation, a sliding operation, or a combination of multiple operations, which is not limited in this embodiment.
  • step S204 in response to the target operation, a picture of the target virtual character launching the target end of the target virtual prop is displayed through the target client, wherein the target end is not bound to the target virtual character.
  • the target client can control the target virtual character to launch the target virtual prop, for example, to launch the target end of the target virtual prop (that is, the end not bound to the target virtual character) along the launch direction.
  • the above-mentioned control operation may be performed by the target client alone, or may be performed by the target client uploading the detected target operation to the background server and executed by the background server.
  • the execution manner of the control operation can be configured as required, which is not limited in this embodiment.
  • the target client can display the target end screen of the target virtual character launching the target virtual prop (prop launch screen).
  • the other end of the target virtual prop, other than the target end, is bound to the target part of the target virtual character (for example, a hand or the waist).
  • the above-mentioned launching operation may be performed for a specific target, for example, a target scene object, and the target client may display a picture of the target end where the target virtual character launches the target virtual prop toward the target scene object.
  • the above-mentioned launching operation may not be performed for a specific target, and the target client may display a picture of the target end where the target virtual character launches the target virtual prop along the launching direction.
  • taking a rope prop as an example, a virtual character controlled by the user can shoot a rope: one end of the rope is bound to the character's designated model position (the target part), and the other end (the target end) is displaced toward the target (for example, the target scene object) in the manner of a projectile.
  • step S206 when the target end collides with the first scene object, at least one of the target virtual character and the first scene object is pulled by the target virtual prop to move according to the object attribute of the first scene object.
  • the above-mentioned collision may be: a collision occurs between the collider of the target virtual prop and the collider of the first scene object.
  • the first scene object may be the intended target of the target virtual prop, that is, the target scene object, or it may be an unintended target of the target virtual prop, that is, a scene object other than the target scene object, for example, a scene object on the moving path of the target virtual prop, which is not limited in this embodiment.
  • the first scene object may be any scene object in the target game scene that allows interaction, and may be, but is not limited to, one of the following: terrain objects, buildings, trees, wooden boxes, scene objects, movable creatures, etc.
  • the movable creature may be a player character or a non-player creature, etc.
  • the target client can display that the target end is connected to the first scene object (that is, the target virtual character is connected with the first scene object through the target virtual prop).
  • the connection method can be at least one of the following: the target end is attached to the first scene object (for example, via a component with adsorption properties on the target end, such as a suction cup or a magnet), the target end hooks the first scene object with a grappling hook, and so on.
  • the object type of the first scene object (ie, the type of the hit target) may be represented by the object attribute of the scene object.
  • the object attribute of the scene object is used to describe the scene object; different scene objects may have different object attributes or the same object attributes.
  • the above-mentioned object attribute may be a movement attribute, for example, an attribute describing whether the object is movable or interactable; for another example, the above-mentioned object attribute may be volume, weight, center of gravity, and other attributes related to the movement of the object, which is not limited in this embodiment.
  • according to the object attribute of the first scene object, different traction results of the target virtual prop can be triggered, and the traction results can include, but are not limited to, one of the following:
  • pulling the target virtual character to move, for example, toward the first scene object (along the launch direction);
  • pulling the first scene object to move, for example, toward the target virtual character (opposite to the launch direction);
  • pulling both the target virtual character and the first scene object to move, for example, the target virtual character moves toward the first scene object (along the launch direction) while the first scene object moves toward the target virtual character (opposite to the launch direction).
  • the above-mentioned pulling function may be executed by controlling the target virtual item through the background server or the target client.
  • a picture in which at least one of the target virtual character and the first scene object is pulled and moved by the target virtual prop may be displayed on the target client.
  • the target client can display all the operations performed by the target avatar in the target game scene and the results of each operation, such as the launch process of the target end of the target virtual prop and the process of the target virtual prop pulling the target avatar and/or the first scene object.
  • through this embodiment, the target operation performed on the target client is obtained, wherein the target operation is used to trigger the target virtual character in the target game scene to launch the target virtual prop, and one end of the target virtual prop is bound to the target virtual character; interaction with the scene is then controlled according to the object attributes of scene objects.
  • the above method may further include: step S11 and step S12.
  • step S11 a detection ray is emitted from the target end along the emission direction of the target end.
  • the detection ray is used for collision detection.
  • step S12 when the detected ray intersects with the first scene object, it is determined that the target end collides with the first scene object.
  • the background server can perform collision detection on the target end to determine whether the target end will collide with other scene objects (for example, expected targets or unintended targets); the detection may be collider detection or ray detection.
  • the background server may emit rays from the target end to perform collision detection.
  • detection rays may be emitted from the target end along the emission direction of the target end to perform collision detection.
  • the ray detection can be used to perform collision detection on scene objects with colliders added in the target game scene.
  • when the detection ray intersects with the first scene object, the backend server may determine that the target end collides with the first scene object.
  • the time consumed in computing positions over multiple collision detections can be monitored; if the computed time exceeds a predetermined time threshold, the collision detection can be performed again after a certain time interval.
  • the collision detection efficiency can be ensured by emitting rays from the target end to perform collision detection.
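  • as an illustration of the ray-based detection described above, the following is a minimal sketch assuming a world of simple sphere colliders; the names SphereCollider and raycast, and the example scene, are illustrative assumptions rather than part of the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class SphereCollider:
    """Hypothetical stand-in for a scene object's collider."""
    center: tuple   # (x, y, z)
    radius: float
    name: str

def raycast(origin, direction, colliders):
    """Return the nearest collider hit by a detection ray emitted
    from `origin` along the unit vector `direction`, or None."""
    best, best_t = None, float("inf")
    for c in colliders:
        # solve |origin + t*direction - center|^2 = radius^2 for t
        oc = tuple(o - k for o, k in zip(origin, c.center))
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        cc = sum(o * o for o in oc) - c.radius ** 2
        disc = b * b - 4.0 * cc        # a == 1 for a unit direction
        if disc < 0:
            continue                   # ray misses this collider
        t = (-b - math.sqrt(disc)) / 2.0
        if 0 <= t < best_t:
            best, best_t = c, t
    return best

# usage: fire the rope's target end along +x and test for a hit
scene = [SphereCollider((10.0, 0.0, 0.0), 1.0, "wooden box")]
hit = raycast((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), scene)
print(hit.name if hit else "no collision")   # -> wooden box
```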
  • pulling at least one of the target virtual character and the first scene object to move through the target virtual prop includes: step S21.
  • step S21 in the case that the first scene object is a static object, the target virtual character is pulled by the target virtual prop to move toward the first scene object.
  • optionally, the target virtual character can be pulled to move through the target virtual prop, for example, pulled along the launch direction toward the first scene object.
  • the result of the movement of the target virtual character may be: the target virtual character moves to the position of the first scene object.
  • the target client can display a picture of the target virtual prop pulling the target virtual character to move toward the first scene object.
  • the target virtual character can be controlled to swing through the target virtual prop using the target end as a fulcrum.
  • the condition for the swing to stop can be: the target virtual prop collides with another scene object, or the displacement speed is reduced to 0 under gravity, friction, and so on, or the target end detaches from the first scene object (either actively or passively).
  • in this embodiment, when the target virtual prop collides with a static object, the target virtual character is pulled to move toward the static object, which can improve the flexibility of the virtual character's movement control.
  • pulling the target virtual character to move toward the first scene object through the target virtual prop includes steps S31 and S32.
  • step S31 a first movement parameter of the target virtual character and a target movement action are determined, wherein the target movement action is an action used by the target virtual character to move towards the first scene object.
  • step S32 the target virtual character is pulled along the movement track corresponding to the first movement parameter by the target virtual prop, and the target movement action is used to move toward the first scene object.
  • the background server may determine the first movement parameter of the target virtual character; the first movement parameter is the movement parameter of the target virtual character moving toward the first scene object, through which the target virtual character can be controlled to simulate physical movement.
  • the background server can also determine the target movement action of the target avatar; the target movement action can be the action used by the target avatar while moving toward the first scene object, for example, a flying action, a fast sliding action, or an action of retracting the target virtual prop from the other end while moving, and it can also be another movement action.
  • the background server may control the target virtual character to move toward the first scene object using the target movement action along a movement trajectory corresponding to the first movement parameter.
  • the target client can display a picture in which the target virtual prop pulls the target virtual character along the movement trajectory corresponding to the first movement parameter, using the target movement action to move toward the first scene object, so as to present the traction effect of the target virtual prop on the avatar.
  • during the movement, the length of the target virtual prop can be shortened: for example, an elastic prop can contract, the target virtual prop can be retracted from the other end, or the target virtual prop can be put back into the target carrying prop, so as to simulate different forms of the target virtual prop.
  • in this way, the character can fly over to the hit position, as shown in the sketch below.
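  • a minimal sketch of the pull just described, moving the character toward the hit point at a configured speed per frame; the parameter names speed and dt are illustrative assumptions:

```python
import math

def pull_step(pos, anchor, speed=12.0, dt=1 / 60):
    """Advance a pulled character one frame toward `anchor` (the
    rope's hit point), returning the new position and whether the
    character has arrived at the first scene object."""
    delta = [a - p for p, a in zip(pos, anchor)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist <= speed * dt:            # close enough: snap to target
        return list(anchor), True
    k = speed * dt / dist             # normalized step along the rope
    return [p + d * k for p, d in zip(pos, delta)], False

# usage: step the character until arrival, e.g. while a flying
# animation (the "target movement action") is playing
pos, arrived = [0.0, 0.0, 0.0], False
while not arrived:
    pos, arrived = pull_step(pos, (3.0, 4.0, 0.0))
print(pos)   # -> [3.0, 4.0, 0.0]
```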
  • pulling at least one of the target virtual character and the first scene object to move through the target virtual prop includes: step S41.
  • step S41 when the first scene object is a movable object, the first scene object is pulled by the target virtual prop to move toward the target virtual character.
  • the first scene object is an interactive object or a movable object, for example, a large world object such as a wooden box, a scene object, a movable creature, etc.
  • the first scene object can be pulled to move by the target virtual prop, for example, pulled toward the target avatar in the direction opposite to the launch direction.
  • the result of the movement of the first scene object may be: the first scene object moves to the position of the target virtual character, or it separates from the target end and its displacement speed is reduced to 0 by friction, collision, and other force fields.
  • the target client can display a picture of the target virtual prop pulling the first scene object to move toward the target virtual character.
  • in this embodiment, when the target virtual prop collides with an interactive or movable object, the object is pulled to move toward the target virtual character, which can improve the flexibility of scene-object movement control and the diversity of scene interaction.
  • which of the target virtual character and the first scene object moves may also be determined according to the character attribute of the target virtual character and the object attribute of the first scene object.
  • for example, taking the weight ratio (e.g., the ratio of the weight of the first scene object to the weight of the target virtual character) as a basis: if the weight ratio is less than a first proportional threshold, the first scene object is controlled to move toward the target virtual character; if the weight ratio is greater than or equal to the first proportional threshold and less than a second proportional threshold, the first scene object is controlled to move toward the target virtual character while the target virtual character is controlled to move toward the first scene object, and the moving speeds of the two can be determined based on their weights and the force fields they experience; if the weight ratio is greater than or equal to the second proportional threshold, the target virtual character is controlled to move toward the first scene object. A decision sketch follows below.
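  • a minimal sketch of that decision rule; the threshold values and the definition of the ratio (first scene object's weight over the character's weight) are illustrative assumptions, not values from the patent:

```python
def traction_result(object_weight, character_weight, t1=0.5, t2=2.0):
    """Decide who moves when the rope connects, based on the weight
    ratio described above (thresholds t1/t2 are assumed values)."""
    ratio = object_weight / character_weight
    if ratio < t1:
        return "pull the first scene object toward the character"
    if ratio < t2:
        return "pull both toward each other"
    return "pull the character toward the first scene object"

print(traction_result(10, 60))    # light wooden box
print(traction_result(80, 60))    # similar weights
print(traction_result(500, 60))   # heavy scene object
```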
  • in this embodiment, the target virtual prop is triggered by the collision to interact with other scene objects (e.g., other characters), for example, pulling or being pulled, and the interaction with other scene objects can be controlled in various ways.
  • pulling the first scene object to move toward the target virtual character through the target virtual prop includes: step S51 .
  • step S51 the target end is controlled to move toward the target avatar at a speed opposite to the speed before the collision, and the first scene object is controlled to move toward the target avatar at the same speed as the target end.
  • the backend server may determine the movement parameters of the target end of the target virtual prop and the movement parameters of the first scene object respectively, so as to present the effect of the target end pulling the first scene object toward the target avatar.
  • the background server can configure the speed of the target end to be the reverse of its speed before the collision, and set the first scene object to move toward the target virtual character at the same speed as the target end; in this way, the target end is controlled to move toward the target virtual character at the reverse of its pre-collision speed, and the first scene object is controlled to move toward the target virtual character at the same speed as the target end.
  • correspondingly, the target client may display a picture in which the target end moves toward the target avatar at the reverse of its pre-collision speed and the first scene object moves toward the target avatar at the same speed as the target end; that is, a picture in which the first scene object follows the target end toward the target virtual character at a speed opposite to the target end's pre-collision speed.
  • pulling the first scene object to move toward the target virtual character through the target virtual prop includes: step S52.
  • step S52 the first scene object is bound to the target end, so as to control the first scene object to move with the target end, and control the target end to move toward the target avatar according to the reverse speed of the speed before the collision.
  • the backend server can bind the first scene object to the target end of the target virtual prop to control the first scene object to move with the target end.
  • the background server may control the target end to move toward the target avatar according to the reverse speed (eg, -v1) of the speed before the collision (eg, v1) in a similar manner as described above. Since the first scene object moves with the target end, the target client may display a picture in which the first scene object moves toward the target virtual character at a speed opposite to the speed before the collision with the target end.
  • either of the above control methods can be used to control the first scene object to follow the target end of the target virtual prop toward the target virtual character, which can ensure the continuity of the traction process and improve the fidelity of the game process; a combined sketch of both variants follows below.
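  • both variants can be sketched with a few lines of state: on collision the target end's velocity is reversed, and the hit object is bound so that it follows the end at the same speed (steps S51/S52). The class names here are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    pos: list

@dataclass
class RopeEnd:
    pos: list
    vel: list
    attached: SceneObject = None   # object bound to the target end

def on_hit(end: RopeEnd, obj: SceneObject):
    """Reverse the end's pre-collision velocity and bind the hit
    object so it follows the end back toward the character."""
    end.vel = [-v for v in end.vel]
    end.attached = obj

def step(end: RopeEnd, dt=1 / 60):
    end.pos = [p + v * dt for p, v in zip(end.pos, end.vel)]
    if end.attached is not None:
        end.attached.pos = list(end.pos)   # same speed as the end

# usage
end = RopeEnd(pos=[5.0, 0.0], vel=[10.0, 0.0])
box = SceneObject(pos=[5.0, 0.0])
on_hit(end, box)
step(end)
print(end.vel, box.pos)   # [-10.0, 0.0] and the box follows
```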
  • the above method further includes: step S61 and step S62.
  • step S61 when the target virtual prop or the first scene object collides with the second scene object, the target end is controlled to separate from the first scene object.
  • step S62 the target end is controlled to continue to move toward the target virtual character until it returns to the target virtual character, and the first scene object is controlled to simulate physical movement until it stops moving.
  • during this process, the target online game will detect in real time whether an event (for example, a collision event) is triggered midway that interrupts the current behavior.
  • the background server can control the target end to detach from the first scene object and return to the target virtual character.
  • the target avatar can stop moving, or simulate physical movement in a specific direction and stop after moving a certain distance.
  • the target end may be triggered to detach from the first scene object, that is, the target end is separated from the first scene object.
  • after the target virtual prop or the first scene object collides with the second scene object, the target client may display a picture in which the target end separates from the first scene object.
  • the background server can control the target end to continue moving toward the target virtual character until it returns to the target virtual character. This process can simulate the retraction of the target virtual item, and the target virtual item can be returned to the target virtual character (to the target part, or to another part different from the target part).
  • the target client may display a picture of the target end returning to the target virtual character after it detaches from the first scene object.
  • the background server may determine the second movement parameter and the target environment parameter of the first scene object.
  • the second movement parameter is the movement parameter after the collision of the first scene object, and is used to control the movement process after the collision of the first scene object, which may include but is not limited to at least one of the following: movement speed and movement direction.
  • the target environment parameter is an environment parameter of the environment where the first scene object is located; it may be an environment parameter that affects the movement of the first scene object, and may include, but is not limited to, at least one of the following: gravity and friction.
  • the background server can control the first scene object to simulate physical movement according to the second movement parameter and the target environment parameter, so as to improve the realistic degree of the scene object control.
  • the target client may display a picture in which the first scene object simulates physical movement until it stops moving.
  • the simulated physical movement of the first scene object may be: the first scene object moves along a direction (it may be the original direction, or the direction changed after the collision) for a certain distance and then stops.
  • the target virtual prop is controlled to return to the target virtual character after a collision occurs during the moving process, and the first scene object is controlled to simulate the physical movement process, which can improve the fidelity of the scene object control.
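  • a minimal sketch of the post-detachment physics in step S62; step S61 itself amounts to clearing the binding between the target end and the object. The friction constant and the frame time are assumed values:

```python
import math
from dataclasses import dataclass

@dataclass
class FreeObject:
    """A scene object after step S61 has cleared its binding to
    the rope's target end."""
    pos: list
    vel: list

def simulate_until_rest(obj, friction=4.0, dt=1 / 60):
    """Step S62: decay the object's speed under friction each
    frame, advancing its position, until the speed reaches 0."""
    speed = math.hypot(*obj.vel)
    while speed > 1e-3:
        scale = max(speed - friction * dt, 0.0) / speed
        obj.vel = [v * scale for v in obj.vel]
        obj.pos = [p + v * dt for p, v in zip(obj.pos, obj.vel)]
        speed = math.hypot(*obj.vel)
    obj.vel = [0.0, 0.0]

# usage: a box interrupted mid-return slides to a stop
box = FreeObject(pos=[2.0, 0.0], vel=[-10.0, 0.0])
simulate_until_rest(box)
print(box.pos)   # rest position after the friction decay
```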
  • the above method further includes: step S71.
  • step S71 after the target end collides with the first scene object, a target connection animation of the target virtual prop matching the object properties of the first scene object is played, wherein different connection animations of the target virtual prop correspond to different object properties of the collided scene objects.
  • connection animations may be configured for the target virtual prop (or the target end); a connection animation is the animation of the target virtual prop (or the target end) connecting to the collided scene object and is used to depict the process of connecting to that object.
  • different connection animations can correspond to different object properties of the collided scene objects.
  • for example, an adsorption animation and a winding animation can be configured for the target end, corresponding respectively to the adsorption property and the winding property of scene objects.
  • the target end of the target virtual prop may be configured with multiple endpoint models (eg, grappling hook model, suction cup model), and different endpoint models correspond to different object properties of the collided scene objects.
  • each endpoint model can have multiple model states, e.g., a collapsed state, an expanded state, and a connected-to-object state; different model states can correspond to different state animations.
  • the target endpoint model matching the object property of the first scene object can be obtained from the multiple endpoint models corresponding to the target end, and the target connection animation corresponding to the connected state can then be obtained from the multiple state animations corresponding to the target endpoint model.
  • each of the plurality of state animations may correspond to at least one model state of the target endpoint model.
  • a model (such as a grappling hook or another model) can be bound at the flying end of the scene prop, so that the corresponding behavior animation of the grappling hook or other model is played when the target is hooked.
  • a target endpoint model can be bound on the target end; the target endpoint model is a model bound on the target end and used to hook the touched scene objects.
  • the target endpoint model can be a grappling hook, or any other model with a grappling hook function.
  • the target endpoint model can have different model states, eg, collapsed state, expanded state, and hooked object state. Different model states can correspond to different model animations.
  • the target client can follow the movement of the target end to play the unfolding animation of the target endpoint model.
  • the target client can play a model animation in which the target end model hooks the first scene object, for example, an animation of wrapping around the first scene object.
  • in this embodiment, a connection animation matching the object attribute of the collided scene object is played, which can enrich the displayed picture information and improve the fidelity of the scene prop; a lookup sketch follows below.
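  • a minimal sketch of the lookup chain (object property → endpoint model → state animation); all keys and animation file names below are illustrative assumptions:

```python
ENDPOINT_MODELS = {
    "adsorbable": "suction_cup",
    "hookable": "grappling_hook",
}
STATE_ANIMATIONS = {
    ("grappling_hook", "expanded"): "hook_open.anim",
    ("grappling_hook", "connected"): "wrap_around_target.anim",
    ("suction_cup", "connected"): "stick_to_surface.anim",
}

def connection_animation(object_property, state="connected"):
    """Pick the endpoint model matching the hit object's property,
    then the animation matching that model's current state."""
    model = ENDPOINT_MODELS[object_property]
    return STATE_ANIMATIONS[(model, state)]

print(connection_animation("hookable"))   # -> wrap_around_target.anim
```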
  • the above method further includes steps S81 to S83.
  • S81 Solve the data location marker array of the target virtual prop at each update moment, wherein the target virtual prop contains a plurality of nodes, and each data location marker in the data location marker array contains location data for a corresponding node in the plurality of nodes.
  • S82 Construct the target prop model of the target virtual prop at each update moment by using the data location marker array corresponding to that update moment.
  • S83 Render the target prop model corresponding to the current moment at each update moment, so as to render the target virtual prop to the target client for display at each update moment.
  • the target virtual prop can have multiple nodes, and the target virtual prop can be divided into multiple segments (prop segments) through multiple nodes.
  • the multiple nodes are triggered to move along the launch direction in sequence; for example, nodes closer to the target end are triggered to move along the launch direction earlier.
  • Multiple nodes can be triggered to move along the emission direction in sequence according to the length of the props that have been launched from the target virtual prop. At some point, some nodes may have started to move, while others have not.
  • the node positions of the plurality of nodes can be stored through a data position mark array; the data position mark array contains a plurality of data position marks, and each data position mark corresponds to the node position of one of the plurality of nodes.
  • the data position marker array can be updated with time to reflect the form of the target virtual prop at different times.
  • the target client can display a certain number of game frames per second, for example, 60 frames per second.
  • the target client (it may also be a background server) can calculate the data position marker array of the target virtual prop at each update time.
  • the above update moments correspond to an update time interval, and the update time interval can be the time of one frame or multiple frames; that is, the target client updates the data position marker array every one or more frames to obtain the model shape of the target virtual prop at each update moment.
  • the data position mark array at each update time can be determined according to a configured solving rule. For example, the target end can be configured to move along the launch direction at a constant speed; the length of prop already launched at each update moment is determined, the position data of each node is then calculated, and the data position mark array of the target virtual prop at each update moment is solved.
  • the target client can use the data location tag array corresponding to the update moment to construct the target prop model of the target virtual item at the update moment.
  • the target prop model can contain multiple vertices and UV textures; there may be an association relationship between the plurality of vertex coordinates and UV texture coordinates and the node positions (node coordinates) of the plurality of nodes.
  • at each update moment, the vertex coordinates of the target prop model (for example, the positions of the vertex data in the vertex buffer) can be updated, the UV texture coordinates of the target prop model can also be updated, and the updated vertex coordinates and UV texture coordinates can be used to construct the target prop model at this update moment.
  • the target client can use the target renderer (Shader) to render the target prop model corresponding to the current moment, so as to render the target virtual prop to the target client for display at each update moment.
  • Model material maps and the like used for rendering may be pre-configured, which are not limited in this embodiment.
  • the position of the data (node position) can be solved by the above method, and the position of the vertex data in the vertex buffer can be continuously updated to achieve the purpose of controlling the movement of the rope.
  • in this embodiment, the position data of the multiple nodes at each update time is recorded through the data position tag array so as to construct the prop model of the target virtual prop, which is suitable for scene props such as ropes whose node positions vary greatly and can improve the convenience of prop model construction; a sketch of the marker array follows below.
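  • a minimal sketch of the data position marker array described above; the segment layout along one axis and the fixed first node are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class DataPositionMarker:
    """One unit of the array: new/old position data plus the
    Free flag that gates participation in position operations."""
    new_pos: tuple     # node position at the current update moment
    old_pos: tuple     # node position at the previous update moment
    free: bool = True  # False -> node is fixed

def make_rope(segments, seg_len, origin=(0.0, 0.0, 0.0)):
    """Build the initial marker array for a rope laid out along +x;
    the node bound to the character (index 0) is kept fixed."""
    markers = []
    for i in range(segments + 1):
        p = (origin[0] + i * seg_len, origin[1], origin[2])
        markers.append(DataPositionMarker(p, p, free=(i != 0)))
    return markers

rope = make_rope(segments=8, seg_len=0.5)
print(len(rope), rope[0].free)   # -> 9 False
```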
  • solving the data location tag array of the target virtual prop at each update moment includes: step S91 and step S92.
  • step S91 an array of data location markers corresponding to the target time is obtained, wherein the target time is an update time, and each data location marker includes the node position of the corresponding node at the target time, the node position of the corresponding node at the previous update time before the target time, and an operation flag indicating whether the corresponding node participates in position operations.
  • step S92 according to the data location marker array corresponding to the target time, the data location marker array corresponding to the next update time after the target time is solved based on the Verlet algorithm.
  • each unit (each data position mark) in the data position mark array can contain new position data and old position data; the new position data is the position data at the current update time, and the old position data is the position data at the previous update time.
  • each unit can also contain Free data (the operation flag); this Free data controls whether the node is free and whether it can participate in position operations. If it is false, the node is fixed and does not participate in position operations; if it is true, the node is not fixed and can participate in position operations.
  • the target client can obtain an array of data location markers corresponding to the target time.
  • the node positions of multiple nodes at each update time can be obtained.
  • algorithms used to solve the data position marker array may include, but are not limited to, the Verlet algorithm (Verlet integration method).
  • the Verlet algorithm is a numerical method for solving Newton's equations of motion; it can be used in molecular dynamics simulations as well as in video games.
  • the advantage of the Verlet algorithm is that its numerical stability is much better than that of the simple Euler method, and it maintains time reversibility and the volume-preserving property of phase-space volume elements in physical systems.
  • the target client can calculate the data position marker array corresponding to the next update moment after the target moment based on the Verlet algorithm, according to the data position marker array corresponding to the target moment.
  • there may be various forms of the Verlet algorithm, and correspondingly, there may be various ways to calculate the data position marker array at the next update time based on the Verlet algorithm.
  • by recording both current and previous node positions, the efficiency of solving the data position mark array can be improved, while solving it based on the Verlet algorithm improves numerical stability and maintains time reversibility and the volume-preserving property of phase-space volume elements in the physical system.
  • calculating the data location marker array corresponding to the next update time after the target time based on the Verlet algorithm includes steps S101 to S105.
  • step S101 according to the operation mark, the target data position mark is obtained from the data position mark array corresponding to the target time, wherein the target node corresponding to the target data position mark participates in the position calculation, and the target data position mark includes the first node position of the target node at the target time (the current time) and the second node position at the previous update time (the time immediately before the current time).
  • step S102 the target node speed of the target node at the target time is determined according to the first node position, the second node position and the update time interval.
  • step S103 the node mass of the target node and the target force field experienced by the target node at the target moment are determined.
  • step S104 based on the Verlet algorithm, the first node position, the product of the target node velocity and the update time interval, and the product of the target force field and the square of the update time interval divided by twice the node mass are added together to obtain the third node position of the target node at the next update time.
  • step S105 the target data position mark is updated using the first node position and the third node position, wherein the data position mark array corresponding to the next update moment contains the updated target data position mark.
  • the Verlet integration method can be used for the solution, with the calculation using the velocity form of the Verlet formula.
  • the target client can first obtain, according to the operation mark, the target data position mark from the data position mark array corresponding to the target moment, that is, the mark of the node participating in the position operation (the target node).
  • there may be one or more target data position marks; a given target data position mark may include the first node position (for example, r(t)) of the corresponding target node at the target time and the second node position (for example, r(t-Δt)) at the previous update time.
  • the target force field experienced by the target node at the target moment, for example, f(t), can be determined, and the node mass (m) of each node can be determined; the node mass may be pre-configured, or determined according to information such as the prop attributes of the target virtual prop (e.g., prop mass) and the position of the node on the target virtual prop.
  • the node position of the target node at the next update instant can be calculated by summing the following three parts: the first node position; the product of the target node velocity and the update interval; and the product of the target force field and the square of the update interval divided by twice the node mass. That is, r(t+Δt) = r(t) + v(t)Δt + f(t)Δt²/(2m).
  • the target data position mark can be updated using the first node position and the third node position: for example, the node position at the current moment in the target data position mark is updated to the third node position, the node position at the previous moment is updated to the first node position, and the operation flag (Free data) is updated at the same time.
  • the following is an example to illustrate the Verlet integration method.
  • the Verlet integration method records the current position of the molecule (the position at the current update moment) and the previous position (the position at the previous update moment); to obtain the velocity of the molecule, subtract the previous position from the current position and divide by the time interval.
  • the calculation steps of the Verlet algorithm are: step 1 to step 3.
  • step 1 the position r(t+ ⁇ t) is calculated by Taylor's expansion formula (1).
  • the formula (1) is as follows: r(t+Δt) = r(t) + v(t)Δt + f(t)Δt²/(2m), where:
  • r(t) is the molecular position at time t;
  • v(t) is the velocity at time t;
  • f(t) is the force field at time t;
  • m is the mass;
  • Δt is the time difference between two adjacent times.
  • step 2 the force field f(t+ ⁇ t) is obtained from r(t+ ⁇ t) and the interaction potential condition of the system (if the interaction depends only on position r).
  • step 3 a new velocity v(t+Δt) is obtained from the Verlet velocity formula: v(t+Δt) = v(t) + [f(t) + f(t+Δt)]Δt/(2m).
  • by substituting r(t+Δt), f(t+Δt), and v(t+Δt) into formula (1), r(t+2Δt) can be obtained, and so on for subsequent update times.
  • the data position marker array is solved by using the velocity form of the Verlet formula based on the Verlet integration method, which can improve the efficiency of calculating the data position marker array.
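  • a minimal sketch of the velocity-Verlet update in steps 1 to 3, applied to a single node; the constant-gravity force field and the node mass are illustrative assumptions:

```python
def verlet_step(r, v, f, m, dt, force_at):
    """One velocity-Verlet update. r, v, f are (x, y) tuples;
    force_at(r) returns the force field at a position (here the
    interaction depends only on position, as in step 2)."""
    # step 1: position from the Taylor expansion, formula (1)
    r_new = tuple(ri + vi * dt + fi * dt * dt / (2 * m)
                  for ri, vi, fi in zip(r, v, f))
    # step 2: force field at the new position
    f_new = force_at(r_new)
    # step 3: velocity from the Verlet velocity formula
    v_new = tuple(vi + (fi + fni) * dt / (2 * m)
                  for vi, fi, fni in zip(v, f, f_new))
    return r_new, v_new, f_new

# usage: a 1 kg node launched sideways under gravity
def gravity(r):
    return (0.0, -9.8)

r, v = (0.0, 10.0), (2.0, 0.0)
f = gravity(r)
for _ in range(60):               # one second at dt = 1/60
    r, v, f = verlet_step(r, v, f, 1.0, 1 / 60, gravity)
print(r)   # ~ (2.0, 5.1): 2 m sideways, about 4.9 m of fall
```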
  • constructing the target prop model of the target virtual prop at each update moment by using the data location tag array corresponding to each update moment includes steps S111 to S113.
  • step S111 a target model patch corresponding to the target prop model is acquired, wherein the target model patch is used to construct the target prop model.
  • step S112 the target vertex coordinates and the target texture coordinates of the target model patch are determined according to the data position marker array corresponding to each update time.
  • step S113 a target prop model of the target virtual prop at each update moment is constructed using the target vertex coordinates and the target texture coordinates.
  • the entire model (for example, the rope model) can be reconstructed.
  • the reconstruction can include, but is not limited to, the following aspects: construction of the positions, construction of the UVs, construction of the tangents, and construction of the index buffer (indexbuffer).
  • the target prop model may be constructed by wrapping a target model patch corresponding to the target prop model around the data location tags in the data location tag array; the target model patch is used to construct the target prop model.
  • the target client can obtain the target model patch corresponding to the target prop model.
  • the target model patch may contain vertices and UV textures (target textures), and there may be a correspondence between vertex coordinates and UV texture coordinates and node coordinates (node position coordinates) of multiple nodes.
  • based on the correspondence between the vertex coordinates and UV texture coordinates of the target model patch and the node positions of the multiple nodes, and according to the data position tag array corresponding to this update time, the target client can determine the target vertex coordinates and target texture coordinates of the target model patch; using the target vertex coordinates and the target texture coordinates, the target client can then construct the target prop model of the target virtual prop at this update moment.
  • an initial prop model of the target virtual prop may be constructed first, and constraints between nodes and vertices and UV textures may be determined through the initial prop model.
  • the device for constructing the initial prop model of the target virtual prop may be a background server, or may be other devices, for example, terminal devices of related personnel.
  • the terminal device can generate a plurality of data position markers according to the number of segments of the target virtual prop and the length of each segment.
  • a plurality of data position markers are in one-to-one correspondence with a plurality of nodes, and one data position marker is used to mark the position of one node.
  • the terminal device can construct model vertices and model texture coordinates for the multiple data position markers, for example, construct vertex and UV textures according to the set number of edges, and obtain the initial prop model of the target virtual prop.
  • the number of segments of the rope can be set first, thereby obtaining the data position marks, as shown in Figure 3.
  • the rope itself has self-defined parameters, which can include, but are not limited to: the number of segments, the number of sides, the number of iterations in the physical operation, the length of the rope, and the force on the rope when it participates in the physical operation. Some of these data provide data support for later steps, such as determining the number of vertices and the edge lengths during model construction.
  • a patch can be constructed and wrapped around the data location markers, as shown in Figure 4.
  • the way to build the patch and wrap the data position markers can be: build the vertex and UV texture according to the set number of edges.
  • the model building is performed by wrapping the data position mark with the model patch, which can improve the convenience of model building.
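  • reusing the make_rope/DataPositionMarker sketch above, the patch construction can be sketched as wrapping a ring of vertices around each data position marker, with U running around the ring and V along the rope; the circular cross-section and radius are assumptions:

```python
import math

def build_tube_mesh(markers, sides=6, radius=0.05):
    """Construct vertices and UV coordinates by wrapping a ring of
    `sides` vertices around each data position marker."""
    vertices, uvs = [], []
    for i, m in enumerate(markers):
        cx, cy, cz = m.new_pos
        for s in range(sides):
            a = 2 * math.pi * s / sides
            # ring in the y/z plane around the node position
            vertices.append((cx,
                             cy + radius * math.cos(a),
                             cz + radius * math.sin(a)))
            # u around the ring, v along the rope's length
            uvs.append((s / sides, i / (len(markers) - 1)))
    return vertices, uvs

verts, uvs = build_tube_mesh(make_rope(segments=8, seg_len=0.5))
print(len(verts))   # -> 54 vertices (9 rings of 6)
```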
  • in the following optional example, the target virtual prop is a rope (rope prop), the target online game is a big-world game, and the control logic of the game is executed by a background server.
  • the flow of the method for controlling the prop in this optional example may include the following steps: step S502 to step S512 .
  • step S502 the character launches the rope at the current position.
  • the user can control the corresponding virtual character to perform game operations, such as launching the rope, by operating the client of the big-world game running on their terminal device.
  • the user-controlled character can launch the rope at an intended target.
  • one end of the rope is bound to the character, and the other end begins to move toward the intended target.
  • step S504 the flying end of the rope collides with the target.
  • the flying end of the rope collides with and hits a target.
  • the hit target may be the expected target or an unexpected target.
  • step S506 the type of the hit target is detected.
  • the background server can detect the type of the hit target (the object attribute of the hit target), and determine whether it is a big-world static object, such as terrain, a building, or a tree, or a big-world interactive or movable object, such as a wooden box, a scene object, or a movable creature.
  • step S508 if the hit target is a big-world static object, control the character to move to the target position.
  • if the hit target is a static object in the big world, the direction, speed, and gravity of the character's movement can be set, together with the character's corresponding flight action, so that the character flies over; the flow then ends.
  • step S510 if the hit target is a big-world interactive or movable object, the rope returns while the object is displaced toward the character's position.
  • if the hit target is an interactive or movable object in the big world, the rope can be controlled to return while the object is displaced toward the character's position, either by setting the target end to a velocity that is the reverse of its previous velocity while the hit object moves at the same speed toward the character, or by binding the hit object directly to the rope end; which method to use can be chosen according to the desired presentation.
  • whether the character is displaced to the target point or the target is displaced to the character's position, the background server can detect in real time whether the current behavior is interrupted midway.
  • step S512 if a collision occurs on the way back, the rope continues to return, and the object starts physically simulated movement at the position where it was interrupted, stopping once its displacement speed reaches 0.
  • if the rope collides with something on the way back, the rope is controlled to continue returning to the character.
  • the hit object stops moving toward the character's position and starts physically simulated movement, stopping once its displacement speed reaches 0.
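  • a minimal Python sketch of the step S506 to step S512 dispatch follows; the function name, tuple-based vectors, and return values are illustrative assumptions rather than the patent's interfaces:

```python
def on_rope_hit(char_pos, rope_end_vel, target_pos, target_is_static):
    """Dispatch on the hit target's type (step S506).

    Positions and velocities are (x, y, z) tuples; the return value names
    the action the server would carry out.
    """
    if target_is_static:
        # Step S508: pull the character toward the static object; direction,
        # speed, gravity and a flight action would be configured here.
        direction = tuple(t - c for c, t in zip(char_pos, target_pos))
        return ("move_character_to_target", direction)
    # Step S510: the rope end reverses its pre-collision velocity and the
    # hit object follows it toward the character at the same speed.  If the
    # return is interrupted by a collision (step S512), the rope keeps
    # returning while the object switches to simulated physical movement.
    reverse_vel = tuple(-v for v in rope_end_vel)
    return ("return_rope_and_pull_object", reverse_vel)
```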
  • the method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
  • based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present invention.
  • FIG. 6 is a structural block diagram of an optional prop control apparatus according to an embodiment of the present invention. As shown in FIG. 6 , the apparatus may include an acquisition unit 602 , a display unit 604 and a pulling unit 606 .
  • the obtaining unit 602 is configured to obtain the target operation performed on the target client, wherein the target operation is used to trigger the target virtual character in the target game scene to launch the target virtual prop, and one end of the target virtual prop is bound to the target virtual character.
  • the display unit 604, connected to the obtaining unit 602, is configured to display, through the target client in response to the target operation, a picture of the target end of the target virtual prop being launched by the target virtual character, wherein the target end is not bound to the target virtual character.
  • the pulling unit 606, connected to the display unit 604, is configured to pull, through the target virtual prop and according to the object attribute of the first scene object, at least one of the target virtual character and the first scene object to move when the target end collides with the first scene object.
  • the acquiring unit 602 in this embodiment can be used to execute the above step S202, the display unit 604 in this embodiment can be used to execute the above step S204, and the pulling unit 606 in this embodiment can be used to execute the above step S206.
  • through the above modules, the target operation performed on the target client is acquired, where the target operation is used to trigger the target virtual character in the target game scene to launch the target virtual prop and one end of the target virtual prop is bound to the target virtual character; in response to the target operation, a picture of the target end of the target virtual prop being launched by the target virtual character is displayed through the target client, where the target end is not bound to the target virtual character; and in the case that the target end collides with the first scene object, at least one of the target virtual character and the first scene object is pulled to move by the target virtual prop according to the object attribute of the first scene object. This solves the problem in the related art of poor interactivity with the scene caused by the single control method for scene props, and improves the interactivity between scene props and the scene.
  • the above-mentioned apparatus further includes: a transmitting unit and a determining unit.
  • the transmitting unit is configured to transmit detection rays from the target end along the emission direction of the target end after acquiring the target operation performed on the target client, wherein the detection rays are used for collision detection.
  • the determining unit is configured to determine that the target end collides with the first scene object when the detection ray intersects with the first scene object.
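  • for illustration, the Python sketch below implements the detection-ray test against a spherical collider; the patent does not prescribe a collider shape, so the sphere is an assumption:

```python
def ray_intersects_sphere(origin, direction, center, radius):
    """Does a ray fired from the target end along its emission direction
    intersect a scene object's bounding sphere?  Solves
    |origin + t*direction - center|^2 = radius^2 for t >= 0."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False                    # the ray misses the collider
    t = (-b - disc ** 0.5) / (2.0 * a)  # nearest intersection parameter
    return t >= 0.0                     # only hits in front of the target end count

# Usage: a collision is registered when the detection ray intersects the
# first scene object's collider.
assert ray_intersects_sphere((0, 0, 0), (1, 0, 0), (5, 0.2, 0), 1.0)
```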
  • the traction unit 606 includes: a first traction module.
  • the first pulling module is used for pulling the target virtual character to move toward the first scene object through the target virtual prop when the first scene object is a static object.
  • the first pulling module includes: a first determining sub-module and a pulling sub-module.
  • the first determination submodule is used for determining the first movement parameter of the target virtual character and the target movement action, wherein the target movement action is the action used by the target virtual character to move towards the first scene object.
  • the pulling sub-module is used for pulling the target virtual character along the movement trajectory corresponding to the first movement parameter through the target virtual prop, and using the target movement action to move toward the first scene object.
  • the traction unit 606 includes: a second traction module.
  • the second pulling module is used for pulling the first scene object to move toward the target virtual character through the target virtual prop when the first scene object is a movable object.
  • the second traction module includes one of a first control sub-module, a second control sub-module, and a third control sub-module.
  • the first control sub-module is used to control the target end to move toward the target virtual character at the reverse speed of the speed before the collision, and to control the first scene object to move towards the target virtual character at the same speed as the target end.
  • the second control submodule is used to bind the first scene object to the target end, so as to control the first scene object to move with the target end.
  • the third control sub-module is used for controlling the target end to move toward the target virtual character according to the reverse speed of the speed before the collision.
  • the above-mentioned apparatus further includes: a first control unit and a second control unit.
  • the first control unit is configured to control the target end to separate from the first scene object when, in the process of pulling the first scene object toward the target virtual character through the target virtual prop, the target virtual prop or the first scene object collides with a second scene object.
  • the second control unit is configured to control the target end to continue moving toward the target virtual character until it returns to the target virtual character, and to control the first scene object to perform simulated physical movement until it stops moving.
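  • the Python sketch below illustrates the post-separation behavior under an assumed friction-only force field: after the target end detaches, the scene object keeps moving under simulated physics until its displacement speed decays to 0:

```python
def simulate_after_separation(obj_pos, obj_vel, friction=6.0, dt=1.0 / 60.0):
    """Advance the detached object until its displacement speed reaches 0,
    returning the trajectory.  Friction removes `friction * dt` of speed
    per step, opposing the direction of motion."""
    positions = [obj_pos]
    speed = sum(v * v for v in obj_vel) ** 0.5
    while speed > 1e-3:
        scale = max(0.0, 1.0 - friction * dt / speed)
        obj_vel = tuple(v * scale for v in obj_vel)
        obj_pos = tuple(p + v * dt for p, v in zip(obj_pos, obj_vel))
        positions.append(obj_pos)
        speed = sum(v * v for v in obj_vel) ** 0.5
    return positions

# Usage: a crate released at 3 units/s glides to a stop under friction.
trajectory = simulate_after_separation((0.0, 0.0, 0.0), (3.0, 0.0, 0.0))
```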
  • the above-mentioned apparatus further includes: a playing unit.
  • the playing unit is configured to play, after the target end collides with the first scene object, a target connection animation of the target virtual prop that matches the object attribute of the first scene object, wherein different connection animations of the target virtual prop correspond to different object attributes of collided scene objects.
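  • one plausible implementation of this selection is a lookup keyed by the collided object's attribute, as in the Python sketch below; the attribute and animation names are illustrative assumptions, not values defined by the patent:

```python
# Hypothetical mapping from a collided object's attribute to the connection
# animation played when the target end attaches to it.
CONNECTION_ANIMATIONS = {
    "attachable": "suction_cup_attach",  # e.g. terrain, walls
    "wrappable": "hook_wrap",            # e.g. branches, poles
    "movable": "grapple_grab",           # e.g. crates, creatures
}

def pick_connection_animation(object_attribute: str) -> str:
    """Return the target connection animation matching the hit object."""
    return CONNECTION_ANIMATIONS.get(object_attribute, "default_attach")
```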
  • the above-mentioned apparatus further includes: a calculation unit, a construction unit, and a rendering unit.
  • the solving unit is configured to solve, after the picture of the target end of the target virtual prop being launched by the target virtual character is displayed through the target client, the data position marker array of the target virtual prop at each update moment, wherein the target virtual prop is divided into multiple segments by multiple nodes, the multiple nodes are triggered in turn to move along the emission direction according to the prop length that the target virtual prop has already emitted, and each data position marker in the data position marker array contains the position data of the corresponding node among the multiple nodes.
  • the construction unit is configured to construct, using the data position marker array corresponding to each update moment, the target prop model of the target virtual prop at that update moment.
  • the rendering unit is used to render the target prop model corresponding to the current moment at each update moment, so as to render the target virtual prop to the target client for display at each update moment.
  • the solving unit includes: a first obtaining module and a solving module.
  • the first acquisition module is configured to acquire the data position marker array corresponding to a target moment, wherein the target moment is one update moment, and each data position marker contains the node position of the corresponding node at the target moment, the node position of the corresponding node at the update moment preceding the target moment, and an operation flag indicating whether the corresponding node participates in the position calculation.
  • the solving module is configured to solve, based on the Verlet algorithm and according to the data position marker array corresponding to the target moment, the data position marker array corresponding to the update moment following the target moment.
  • the solving module includes: an acquisition submodule, a second determination submodule, a third determination submodule, a calculation submodule, and an update submodule.
  • the acquisition sub-module is configured to acquire, according to the operation flag, the target data position marker from the data position marker array corresponding to the target moment, wherein the target node corresponding to the target data position marker participates in the position calculation, and the target data position marker contains the first node position of the target node at the target moment and the second node position at the preceding update moment.
  • the second determination submodule is configured to determine the target node speed of the target node at the target moment according to the first node position, the second node position and the update time interval.
  • the third determination submodule is used to determine the node quality of the target node and the target force field that the target node is subjected to at the target moment.
  • the calculation sub-module is configured to add together, based on the Verlet algorithm, the first node position, the value obtained by multiplying the target node velocity by the update time interval, and the value obtained by dividing the product of the target force field and the square of the update time interval by twice the node mass, so as to obtain the third node position of the target node at the next update moment; that is, r(t+Δt) = r(t) + v(t)·Δt + f(t)·Δt²/(2m).
  • the update submodule is configured to update the target data position marker using the first node position and the third node position, wherein the data position marker array corresponding to the next update moment contains the updated target data position marker.
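  • putting the sub-modules together, the velocity-form Verlet update described above can be sketched in Python as follows; the marker layout and the gravity-only force field are illustrative assumptions:

```python
def verlet_step(markers, force, mass, dt):
    """One solving pass over the data position marker array.

    markers: list of {"curr": (x, y, z), "prev": (x, y, z), "free": bool};
    force(pos): the force field acting on a node at `pos`;
    nodes whose 'free' flag is False are fixed and skip the update.
    """
    for m in markers:
        if not m["free"]:          # the operation flag: fixed nodes do not
            continue               # participate in the position calculation
        r_t, r_prev = m["curr"], m["prev"]
        v = tuple((a - b) / dt for a, b in zip(r_t, r_prev))  # v(t)
        f = force(r_t)
        # r(t+dt) = r(t) + v(t)*dt + f(t)*dt^2 / (2*m)
        r_next = tuple(r + vi * dt + fi * dt * dt / (2.0 * mass)
                       for r, vi, fi in zip(r_t, v, f))
        m["prev"], m["curr"] = r_t, r_next
    return markers

# Usage: gravity-only force field, 60 Hz update interval; node 0 is the end
# bound to the character and therefore fixed.
gravity = lambda pos: (0.0, -9.8, 0.0)
rope = [{"curr": (0.5 * i, 0.0, 0.0), "prev": (0.5 * i, 0.0, 0.0), "free": i > 0}
        for i in range(5)]
rope = verlet_step(rope, gravity, mass=0.1, dt=1.0 / 60.0)
```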
  • the building unit includes: a second obtaining module, a determining module, and a building module.
  • the second acquiring module is configured to acquire the target model patch corresponding to the target prop model, wherein the target model patch is used to construct the target prop model.
  • the determining module is used for determining the target vertex coordinates and the target texture coordinates of the target model patch according to the data position marker array corresponding to each update moment.
  • the building module is used to construct the target prop model of the target virtual prop at each update moment by using the target vertex coordinates and the target texture coordinates.
  • the above modules may run in the hardware environment as shown in FIG. 1 , and may be implemented by software or hardware, wherein the hardware environment includes a network environment.
  • an embodiment of the present invention further provides an electronic device for implementing the above-mentioned prop control method.
  • the electronic device may be a server, a terminal, or a combination thereof.
  • FIG. 7 is a structural block diagram of an optional electronic device according to an embodiment of the present invention. As shown in FIG. 7, the electronic device includes a processor 702, a communication interface 704, a memory 706, and a communication bus 708, where the processor 702, the communication interface 704, and the memory 706 communicate with one another through the communication bus 708.
  • the memory 706 is configured to store a computer program.
  • the processor 702 is configured to implement the following steps when executing the computer program stored on the memory 706: step S1 to step S3.
  • step S1 the target operation performed on the target client is acquired, wherein the target operation is used to trigger the target virtual character in the target game scene to launch the target virtual prop, and one end of the target virtual prop is bound to the target virtual character.
  • step S2 in response to the target operation, a picture of the target end of the target virtual prop being launched by the target virtual character is displayed through the target client, wherein the target end is not bound to the target virtual character.
  • step S3 when the target end collides with the first scene object, at least one of the target virtual character and the first scene object is pulled by the target virtual prop to move according to the object attribute of the first scene object.
  • the above-mentioned communication bus may be a PCI (Peripheral Component Interconnect, Peripheral Component Interconnect Standard) bus, or an EISA (Extended Industry Standard Architecture, Extended Industry Standard Architecture) bus or the like.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in FIG. 7, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above electronic device and other devices.
  • the memory may include RAM and may also include non-volatile memory, such as at least one disk memory.
  • the memory may also be at least one storage device located remotely from the aforementioned processor.
  • as an example, the above-mentioned memory 706 may include, but is not limited to, the acquisition unit 602, the display unit 604, and the pulling unit 606 of the above-mentioned prop control apparatus.
  • it may also include, but is not limited to, the other module units of the above-mentioned prop control apparatus, which will not be repeated in this example.
  • the above-mentioned processor may be a general-purpose processor, which may include but is not limited to a CPU (Central Processing Unit) or an NP (Network Processor); it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the above electronic device further includes: a display for displaying a display interface of the target client.
  • FIG. 7 is only illustrative; the device implementing the above prop control method may be a terminal device, and the terminal device may be a smartphone (such as an Android or iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (Mobile Internet Devices, MID), a PAD, or another terminal device.
  • FIG. 7 does not limit the structure of the above electronic device.
  • the electronic device may also include more or fewer components than those shown in FIG. 7 (eg, network interfaces, display devices, etc.), or have a different configuration than that shown in FIG. 7 .
  • a storage medium is also provided.
  • the above-mentioned storage medium may be used to store the program code for executing the prop control method of any one of the above embodiments of the present invention.
  • the above-mentioned storage medium may be located on at least one network device among multiple network devices in the network shown in the above-mentioned embodiments.
  • the storage medium is configured to store program codes for executing the following steps: step S1 to step S3.
  • step S1 the target operation performed on the target client is acquired, wherein the target operation is used to trigger the target virtual character in the target game scene to launch the target virtual prop, and one end of the target virtual prop is bound to the target virtual character.
  • step S2 in response to the target operation, a picture of the target end of the target virtual prop being launched by the target virtual character is displayed through the target client, wherein the target end is not bound to the target virtual character.
  • step S3 when the target end collides with the first scene object, at least one of the target virtual character and the first scene object is pulled by the target virtual prop to move according to the object attribute of the first scene object.
  • the above-mentioned storage media may include, but are not limited to, various media that can store program code, such as a USB flash drive, a ROM, a RAM, a removable hard disk, a magnetic disk, or an optical disc.
  • Figure 8 schematically shows a block diagram of a computer program product implementing the method according to the invention.
  • the computer program product includes a computer program/instructions 810 which, when executed by a processor, such as the processor 702 shown in FIG. 7, may implement the various steps in the methods described above .

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a prop control method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a target operation performed on a target client, where the target operation is used to trigger a target virtual character in a target game scene to launch a target virtual prop, and one end of the target virtual prop is bound to the target virtual character; in response to the target operation, displaying, through the target client, a picture of the target end of the target virtual prop being launched by the target virtual character, where the target end is not bound to the target virtual character; and when the target end collides with a first scene object, pulling, through the target virtual prop and according to an object attribute of the first scene object, at least one of the target virtual character and the first scene object to move. The present invention solves the problem in the related art of poor interactivity with the scene caused by the single control method for scene props.


Claims (16)

  1. A method for controlling a prop, comprising:
    acquiring a target operation performed on a target client, wherein the target operation is used to trigger a target virtual character in a target game scene to launch a target virtual prop, and one end of the target virtual prop is bound to the target virtual character;
    in response to the target operation, displaying, through the target client, a picture of a target end of the target virtual prop being launched by the target virtual character, wherein the target end is not bound to the target virtual character; and
    in a case that the target end collides with a first scene object, pulling, through the target virtual prop and according to an object attribute of the first scene object, at least one of the target virtual character and the first scene object to move.
  2. The method according to claim 1, wherein, after acquiring the target operation performed on the target client, the method further comprises:
    emitting a detection ray from the target end along an emission direction of the target end, wherein the detection ray is used for collision detection; and
    in a case that the detection ray intersects the first scene object, determining that the target end collides with the first scene object.
  3. The method according to claim 1, wherein pulling, through the target virtual prop and according to the object attribute of the first scene object, at least one of the target virtual character and the first scene object to move comprises:
    in a case that the first scene object is a static object, pulling, through the target virtual prop, the target virtual character to move toward the first scene object.
  4. The method according to claim 3, wherein pulling, through the target virtual prop, the target virtual character to move toward the first scene object comprises:
    determining a first movement parameter and a target movement action of the target virtual character, the target movement action being an action used by the target virtual character to move toward the first scene object; and
    pulling, through the target virtual prop, the target virtual character to move toward the first scene object along a movement trajectory corresponding to the first movement parameter and using the target movement action.
  5. The method according to claim 1, wherein pulling, through the target virtual prop and according to the object attribute of the first scene object, at least one of the target virtual character and the first scene object to move comprises:
    in a case that the first scene object is a movable object, pulling, through the target virtual prop, the first scene object to move toward the target virtual character.
  6. The method according to claim 5, wherein pulling, through the target virtual prop, the first scene object to move toward the target virtual character comprises one of the following:
    controlling the target end to move toward the target virtual character at a velocity that is the reverse of its velocity before the collision, and controlling the first scene object to move toward the target virtual character at the same speed as the target end; or
    binding the first scene object to the target end so as to control the first scene object to move with the target end, and controlling the target end to move toward the target virtual character at a velocity that is the reverse of its velocity before the collision.
  7. The method according to claim 5, wherein, in the process of pulling, through the target virtual prop, the first scene object to move toward the target virtual character, the method further comprises:
    in a case that the target virtual prop or the first scene object collides with a second scene object, controlling the target end to separate from the first scene object; and
    controlling the target end to continue moving toward the target virtual character until it returns to the target virtual character, and controlling the first scene object to perform simulated physical movement until it stops moving.
  8. The method according to claim 1, further comprising:
    after the target end collides with the first scene object, playing a target connection animation of the target virtual prop that matches the object attribute of the first scene object, wherein different connection animations of the target virtual prop correspond to different object attributes of collided scene objects.
  9. The method according to any one of claims 1 to 8, wherein, after displaying, through the target client, the picture of the target end of the target virtual prop being launched by the target virtual character, the method further comprises:
    solving a data position marker array of the target virtual prop at each update moment, wherein the target virtual prop is divided into multiple segments by multiple nodes, the multiple nodes are triggered in turn to move along the emission direction according to the prop length that the target virtual prop has already emitted, and each data position marker in the data position marker array contains position data of a corresponding node among the multiple nodes;
    constructing, using the data position marker array corresponding to each update moment, a target prop model of the target virtual prop at that update moment; and
    rendering, at each update moment, the target prop model corresponding to the current moment, so as to render the target virtual prop to the target client for display at each update moment.
  10. The method according to claim 9, wherein solving the data position marker array of the target virtual prop at each update moment comprises:
    acquiring the data position marker array corresponding to a target moment, wherein the target moment is one update moment, and each data position marker contains a node position of the corresponding node at the target moment, a node position of the corresponding node at the update moment preceding the target moment, and an operation flag indicating whether the corresponding node participates in position calculation; and
    solving, based on the Verlet algorithm and according to the data position marker array corresponding to the target moment, the data position marker array corresponding to the update moment following the target moment.
  11. The method according to claim 10, wherein solving, based on the Verlet algorithm and according to the data position marker array corresponding to the target moment, the data position marker array corresponding to the update moment following the target moment comprises:
    acquiring, according to the operation flag, a target data position marker from the data position marker array corresponding to the target moment, wherein a target node corresponding to the target data position marker participates in position calculation, and the target data position marker contains a first node position of the target node at the target moment and a second node position at the preceding update moment;
    determining a target node velocity of the target node at the target moment according to the first node position, the second node position, and an update time interval;
    determining a node mass of the target node and a target force field acting on the target node at the target moment;
    adding together, based on the Verlet algorithm, the first node position, a value obtained by multiplying the target node velocity by the update time interval, and a value obtained by dividing the product of the target force field and the square of the update time interval by twice the node mass, to obtain a third node position of the target node at the next update moment; and
    updating the target data position marker using the first node position and the third node position, wherein the data position marker array corresponding to the next update moment contains the updated target data position marker.
  12. The method according to claim 9, wherein constructing, using the data position marker array corresponding to each update moment, the target prop model of the target virtual prop at that update moment comprises:
    acquiring a target model patch corresponding to the target prop model, wherein the target model patch is used to construct the target prop model;
    determining target vertex coordinates and target texture coordinates of the target model patch according to the data position marker array corresponding to each update moment; and
    constructing, using the target vertex coordinates and the target texture coordinates, the target prop model of the target virtual prop at each update moment.
  13. An apparatus for controlling a prop, comprising:
    an acquiring unit, configured to acquire a target operation performed on a target client, wherein the target operation is used to trigger a target virtual character in a target game scene to launch a target virtual prop, and one end of the target virtual prop is bound to the target virtual character;
    a display unit, configured to display, through the target client in response to the target operation, a picture of a target end of the target virtual prop being launched by the target virtual character, wherein the target end is not bound to the target virtual character; and
    a pulling unit, configured to pull, through the target virtual prop and according to an object attribute of a first scene object, at least one of the target virtual character and the first scene object to move in a case that the target end collides with the first scene object.
  14. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
    the memory is configured to store a computer program; and
    the processor is configured to perform the method steps of any one of claims 1 to 12 by running the computer program stored on the memory.
  15. A computer-readable storage medium, storing a computer program, wherein the computer program is configured to perform, when run, the method steps of any one of claims 1 to 12.
  16. A computer program product, comprising a computer program which, when executed by a processor, implements the method steps of any one of claims 1 to 12.
PCT/CN2021/121356 2020-12-29 2021-09-28 Prop control method and apparatus, electronic device, and storage medium WO2022142543A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011593207.6A CN112587927B (zh) 2020-12-29 2020-12-29 Prop control method and apparatus, electronic device, and storage medium
CN202011593207.6 2020-12-29

Publications (1)

Publication Number Publication Date
WO2022142543A1 true WO2022142543A1 (zh) 2022-07-07

Family

ID=75203438

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/121356 WO2022142543A1 (zh) 2020-12-29 2021-09-28 Prop control method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (2) CN116672712A (zh)
WO (1) WO2022142543A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116672712A (zh) 2020-12-29 2023-09-01 苏州幻塔网络科技有限公司 Prop control method and apparatus, electronic device, and storage medium
CN113546400B (zh) * 2021-07-26 2024-02-02 网易(杭州)网络有限公司 Method and apparatus for controlling a virtual character in a game, and electronic device
CN114155605B (zh) * 2021-12-03 2023-09-15 北京字跳网络技术有限公司 Control method and apparatus, and computer storage medium
CN114618163A (zh) 2022-03-21 2022-06-14 北京字跳网络技术有限公司 Virtual prop driving method and apparatus, electronic device, and readable storage medium
CN117504312A (zh) 2022-07-28 2024-02-06 腾讯科技(成都)有限公司 Virtual character control method and apparatus, storage medium, and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110882545A (zh) * 2019-12-06 2020-03-17 腾讯科技(深圳)有限公司 Virtual object control method and apparatus, electronic device, and storage medium
CN111026318A (zh) * 2019-12-05 2020-04-17 腾讯科技(深圳)有限公司 Virtual-environment-based animation playing method, apparatus, device, and storage medium
CN112076467A (zh) * 2020-09-17 2020-12-15 腾讯科技(深圳)有限公司 Method, apparatus, terminal, and medium for controlling a virtual object to use a virtual prop
CN112587927A (zh) 2020-12-29 2021-04-02 苏州幻塔网络科技有限公司 Prop control method and apparatus, electronic device, and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2373882B (en) * 2001-03-27 2005-07-27 Proksim Software Inc Comparing the position of shared objects
CN109697001B (zh) * 2017-10-24 2021-07-27 腾讯科技(深圳)有限公司 Interactive interface display method and apparatus, storage medium, and electronic apparatus
CN112044073B (zh) * 2020-09-10 2022-09-20 腾讯科技(深圳)有限公司 Virtual prop use method, apparatus, device, and medium


Also Published As

Publication number Publication date
CN112587927A (zh) 2021-04-02
CN112587927B (zh) 2023-07-07
CN116672712A (zh) 2023-09-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913288

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21913288

Country of ref document: EP

Kind code of ref document: A1