WO2022252905A1 - Method, apparatus, device, storage medium, and program product for controlling a summoned object in a virtual scene - Google Patents


Info

Publication number
WO2022252905A1
WO2022252905A1 (PCT application PCT/CN2022/090972)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
target
summoned
target virtual
scene
Prior art date
Application number
PCT/CN2022/090972
Other languages
English (en)
French (fr)
Inventor
蔡奋麟 (CAI Fenlin)
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority to JP2023553739A, published as JP2024512345A (ja)
Publication of WO2022252905A1 (zh)
Priority to US18/303,851, published as US20230256338A1 (en)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/63 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2004 Aligning objects, relative positioning of parts

Definitions

  • This application is based on and claims priority to Chinese patent application No. 202110602499.3, filed on May 31, 2021, the entire content of which is hereby incorporated by reference into the embodiments of the present application.
  • The present application relates to human-computer interaction technology, and in particular to a method, apparatus, device, computer-readable storage medium, and computer program product for controlling a summoned object in a virtual scene.
  • Embodiments of the present application provide a method, apparatus, device, computer-readable storage medium, and computer program product for controlling a summoned object in a virtual scene, which can improve the efficiency of human-computer interaction.
  • An embodiment of the present application provides a method for controlling a summoned object in a virtual scene, including:
  • presenting, in an interface of the virtual scene, a target virtual object and a summoned object in a first form; when the target virtual object is in an interaction-ready state for interacting with other virtual objects in the virtual scene, controlling the form of the summoned object to change from the first form to a second form; and controlling the summoned object in the second form to be in an interaction assistance state, so as to assist the target virtual object in interacting with the other virtual objects.
  • An embodiment of the present application provides a method for controlling a summoned object in a virtual scene, including:
  • presenting, in a virtual shooting scene, a target virtual object holding a shooting prop and a summoned object in a character form; controlling the target virtual object to aim at a target position with the shooting prop, and presenting a corresponding crosshair pattern at the target position; and in response to a transform command triggered based on the crosshair pattern, controlling the summoned object to move to the target position and to change from the character form into a shield state at the target position, so as to assist the target virtual object in interacting with other virtual objects.
  • An embodiment of the present application provides an apparatus for controlling a summoned object in a virtual scene, including:
  • an object presentation module configured to present, in the interface of the virtual scene, the target virtual object and the summoned object in the first form;
  • a state control module configured to: when the target virtual object is in an interaction-ready state for interacting with other virtual objects in the virtual scene, control the form of the summoned object to change from the first form to the second form, and
  • control the summoned object in the second form to be in an interaction assistance state, so as to assist the target virtual object in interacting with the other virtual objects.
  • An embodiment of the present application provides an apparatus for controlling a summoned object in a virtual scene, including:
  • a first presentation module configured to present, in a virtual shooting scene, a target virtual object holding a shooting prop and a summoned object in a character form;
  • an aiming control module configured to, in the virtual shooting scene, control the target virtual object to aim at a target position with the shooting prop, and present a corresponding crosshair pattern at the target position;
  • a state change module configured to, in response to a transform command triggered based on the crosshair pattern, control the summoned object to move to the target position and to change from the character form into a shield state at the target position, so as to assist the target virtual object in interacting with other virtual objects.
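The aim-and-transform flow handled by these modules can be sketched in Python. All identifiers below (Form, SummonedObject, on_transform_command) are illustrative assumptions, not names from the application:

```python
from dataclasses import dataclass
from enum import Enum

class Form(Enum):
    CHARACTER = "character"  # initial character form that follows the player
    SHIELD = "shield"        # shield state assumed at the aimed-at position

@dataclass
class SummonedObject:
    form: Form = Form.CHARACTER
    position: tuple = (0.0, 0.0, 0.0)

    def on_transform_command(self, crosshair_position: tuple) -> None:
        """Transform command triggered from the crosshair pattern:
        move to the aimed-at target position, then become a shield there."""
        self.position = crosshair_position  # move to the target position
        self.form = Form.SHIELD             # character form -> shield state
```

For example, aiming at (10, 0, 5) and confirming the transform would leave the summoned object in shield form at that position.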
  • An embodiment of the present application provides an electronic device, including:
  • a memory configured to store executable instructions; and a processor configured to implement, when executing the executable instructions stored in the memory, the method for controlling a summoned object in a virtual scene provided by the embodiments of the present application.
  • An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the method for controlling a summoned object in a virtual scene provided by the embodiments of the present application.
  • An embodiment of the present application provides a computer program product, including a computer program or instructions which, when executed by a processor, implement the method for controlling a summoned object in a virtual scene provided by the embodiments of the present application.
  • With application of the embodiments of the present application, the target virtual object and the summoned object in the first form are presented in the virtual scene; when the target virtual object is in the interaction-ready state for interacting with other virtual objects in the virtual scene, the form of the summoned object is controlled to change from the first form to the second form, and the summoned object in the second form is controlled to be in the interaction assistance state to assist the target virtual object in interacting with the other virtual objects. In this way, when the target virtual object enters the interaction-ready state, the summoned object is automatically transformed from the first form into the second form and automatically enters the interaction assistance state; without any user operation, the summoned object assists the target virtual object in interacting with the other virtual objects.
  • This reduces the number of times the user must operate the terminal to control the target virtual object in order to achieve a given interaction purpose, which greatly improves the efficiency of human-computer interaction and saves computing resources.
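The automatic behavior summarized above amounts to a small state rule: the player's interaction-ready state alone drives the summoned object's form and assistance state, with no extra input. A minimal sketch, with assumed form labels "first"/"second":

```python
def update_summoned_object(interaction_ready: bool) -> tuple:
    """Derive the summoned object's (form, assisting) pair purely from
    whether the target virtual object is in the interaction-ready state,
    so no user operation is needed to trigger the assist."""
    if interaction_ready:
        return ("second", True)   # transform and enter interaction assistance
    return ("first", False)       # stay in (or revert to) the first form
```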
  • FIG. 1 is a schematic structural diagram of a system 100 for controlling summoned objects in a virtual scene provided by an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of an electronic device 500 provided by an embodiment of the present application;
  • FIG. 3A is a schematic flowchart of a method for controlling a summoned object in a virtual scene provided by an embodiment of the present application;
  • FIG. 3B is a schematic flowchart of a method for controlling a summoned object in a virtual scene provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of the following behavior of the summoned object provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the state transition of the summoned object provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of the state transition of the summoned object provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of the summoning conditions of the summoned object provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a summoning method provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of a method for controlling the summoned object to follow provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of movement-position determination provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a state change method of the summoned object provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram of the state transition of the summoned object provided by an embodiment of the present application;
  • FIG. 13 is a schematic diagram of the function and effect of the summoned object provided by an embodiment of the present application;
  • FIG. 15 is a schematic diagram of the state transition of the summoned object provided by an embodiment of the present application;
  • FIG. 16 is a schematic structural diagram of an apparatus for controlling a summoned object in a virtual scene provided by an embodiment of the present application.
  • "First/second" is merely used to distinguish similar objects and does not represent a specific ordering of the objects. Understandably, where permitted, the specific order or sequence denoted by "first/second" may be interchanged, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein.
  • Client: an application running on a terminal to provide various services, such as a video playback client or a game client.
  • In response to: used to represent the condition or state on which an executed operation depends; when the dependent condition or state is satisfied, the one or more executed operations may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
  • Virtual scene: the scene displayed (or provided) when the application program runs on the terminal.
  • the virtual scene can be a simulation environment of the real world, a semi-simulation and semi-fictional virtual environment, or a pure fiction virtual environment.
  • the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the embodiment of the present application does not limit the dimensions of the virtual scene.
  • the three-dimensional virtual space can be an open space
  • the virtual scene can be used to simulate a real environment in reality.
  • the virtual scene can include sky, land, ocean, etc.
  • the Land can include deserts, cities, and other environmental elements.
  • The virtual scene may also include virtual items, for example, buildings, vehicles, and props such as weapons that virtual objects in the virtual scene need to arm themselves or to fight other virtual objects. The virtual scene may also be used to simulate real environments under different weathers, such as sunny, rainy, foggy, or dark weather. The user can control the virtual object to move in the virtual scene.
  • the movable object may be a virtual character, a virtual animal, an animation character, etc., such as: a character, an animal, a plant, an oil drum, a wall, a stone, etc. displayed in a virtual scene.
  • the virtual object may be a virtual avatar representing the user in the virtual scene.
  • the virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
  • The virtual object can be a user character controlled through an operation on the client, an artificial intelligence (AI, Artificial Intelligence) set in a virtual scene battle through training, or a non-player character (NPC, Non-Player Character) set in a virtual scene interaction.
  • the virtual object may be a virtual character performing confrontational interaction in the virtual scene.
  • the number of virtual objects participating in the interaction in the virtual scene may be preset, or dynamically determined according to the number of clients participating in the interaction.
  • The user can control the virtual object to fall freely, glide, or open a parachute to fall in the sky of the virtual scene; to run, jump, crawl, or bend forward on the land; and can also control the virtual object to swim, float, or dive in the ocean.
  • the user can also control the virtual object to move in the virtual scene on a virtual vehicle.
  • the virtual vehicle can be a virtual car, a virtual aircraft, a virtual yacht, etc.
  • the above-mentioned scenario is used as an example for illustration, and this embodiment of the present application does not specifically limit it.
  • Users can also control virtual objects to interact with other virtual objects through virtual props.
  • the virtual props can be throwing virtual props such as grenades, cluster mines, and sticky grenades, or shooting virtual props such as machine guns, pistols, and rifles.
  • This application does not specifically limit the type of the summoned object in the virtual scene.
  • Summoned objects: the images of various people and objects in the virtual scene that can assist virtual objects in interacting with other virtual objects.
  • the images can be virtual characters, virtual animals, animation characters, virtual props, virtual vehicles, etc.
  • Scene data: data representing various characteristics of the objects in the virtual scene during the interaction process; for example, it may include the positions of the objects in the virtual scene.
  • Different types of features may be included according to the type of the virtual scene; for example, in the virtual scene of a game, the scene data may include the waiting time of various functions configured in the virtual scene (which depends on the number of times the same function can be used within a certain period of time), and may also represent attribute values of various states of the game character, for example, a life value (also known as the red value) and a mana value (also known as the blue value).
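As one possible shape for such scene data, a per-character record might look like the following sketch; the field names are assumptions for illustration, since the application does not fix a schema:

```python
from dataclasses import dataclass, field

@dataclass
class SceneData:
    """Illustrative container for the per-character scene data described above."""
    positions: dict = field(default_factory=dict)  # object id -> position in the scene
    cooldowns: dict = field(default_factory=dict)  # function name -> remaining wait time
    life: int = 100                                # life value (the "red" value)
    mana: int = 50                                 # mana value (the "blue" value)
```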
  • FIG. 1 is a schematic structural diagram of a control system 100 for calling objects in a virtual scene provided by an embodiment of the present application.
  • a terminal for example, a terminal 400-1 and a terminal 400-2
  • the server 200 is connected through the network 300
  • the network 300 may be a wide area network or a local area network, or a combination of the two, and a wireless or wired link is used to realize data transmission.
  • the terminal can be various types of user terminals such as smart phones, tablet computers, and notebook computers, and can also be a combination of any two or more of these data processing devices; the server 200 can also be a A server configured independently to support various services can also be configured as a server cluster or as a cloud server.
  • the terminal installs and runs applications that support virtual scenes
  • The applications can be any one of first-person shooting games (FPS, First-Person Shooting game), third-person shooting games, multiplayer online battle arena games (MOBA, Multiplayer Online Battle Arena games), two-dimensional (2D) game applications, three-dimensional (3D) game applications, virtual reality applications, 3D map programs, or multiplayer gun-battle survival games.
  • the application program may also be a stand-alone version of the application program, such as a stand-alone version of a 3D game program.
  • the virtual scene involved in the embodiment of the present application can be used to simulate a three-dimensional virtual space
  • the three-dimensional virtual space can be an open space
  • the virtual scene can be used to simulate a real environment in reality
  • the virtual scene can include sky, land, ocean, etc.
  • the land may include environmental elements such as deserts and cities.
  • virtual items may also be included in the virtual scene, for example, buildings, tables, vehicles, and props such as weapons needed by virtual objects in the virtual scene to arm themselves or fight with other virtual objects.
  • the virtual scene can also be used to simulate a real environment under different weathers, for example, weather such as sunny days, rainy days, foggy days or dark nights.
  • The virtual object may be a virtual avatar representing the user in the virtual scene, and the avatar may be in any form, for example, a simulated character or a simulated animal, which is not limited in the present application.
  • The user can use the terminal to control the virtual object to perform activities in the virtual scene, including but not limited to at least one of: adjusting body posture, crawling, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and slashing.
  • The user can operate on the terminal in advance; after detecting the user's operation, the terminal can download the game configuration file of the electronic game, where the game configuration file can include the application program of the electronic game, interface display data, virtual scene data, and the like, so that when the user (or player) logs into the electronic game on the terminal, the game configuration file can be invoked to render and display the electronic game interface.
  • The user can perform a touch operation on the terminal.
  • After the terminal detects the touch operation, it can send to the server an acquisition request for the game data corresponding to the touch operation; the server determines the game data corresponding to the touch operation based on the acquisition request and returns it to the terminal; the terminal then renders and displays the game data, which may include virtual scene data, behavior data of virtual objects in the virtual scene, and the like.
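The touch-operation round trip described above can be sketched as a single request/response step; the dictionary-based "server" lookup below is purely illustrative:

```python
def fetch_game_data(touch_op: str, server_table: dict) -> dict:
    """Terminal side: build the acquisition request for a touch operation,
    let the server resolve it, and return the game data to be rendered."""
    request = {"op": touch_op}                       # acquisition request sent to the server
    game_data = server_table.get(request["op"], {})  # server resolves the request
    return game_data                                 # terminal renders this data
```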
  • The terminal presents the target virtual object in the virtual scene and the summoned object in the first form;
  • when the target virtual object is in the interaction-ready state for interacting with other virtual objects in the virtual scene, the form of the summoned object is controlled to change from the first form to the second form, and the summoned object in the second form is controlled to be in an interaction assistance state, so as to assist the target virtual object in interacting with the other virtual objects.
  • FIG. 2 is a schematic structural diagram of an electronic device 500 provided in an embodiment of the present application.
  • The electronic device 500 may be the terminal 400-1, the terminal 400-2, or the server 200 in FIG. 1. Taking the electronic device being the terminal 400-1 or the terminal 400-2 shown in FIG. 1 as an example, the electronic device implementing the method for controlling a summoned object in a virtual scene according to the embodiment of the present application is described below.
  • the electronic device 500 shown in FIG. 2 includes: at least one processor 510 , a memory 550 , at least one network interface 520 and a user interface 530 .
  • Various components in the electronic device 500 are coupled together through the bus system 540 .
  • the bus system 540 is used to realize connection and communication between these components.
  • the bus system 540 also includes a power bus, a control bus and a status signal bus. However, for clarity of illustration, the various buses are labeled as bus system 540 in FIG. 2 .
  • The processor 510 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
  • User interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays.
  • the user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
  • Memory 550 may be removable, non-removable or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
  • Memory 550 optionally includes one or more storage devices located physically remote from processor 510 .
  • Memory 550 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory.
  • The non-volatile memory can be a read-only memory (ROM, Read Only Memory), and the volatile memory can be a random access memory (RAM, Random Access Memory).
  • the memory 550 described in the embodiment of the present application is intended to include any suitable type of memory.
  • In some embodiments, the apparatus for controlling a summoned object in a virtual scene provided by the embodiment of the present application may be implemented in software.
  • FIG. 2 shows the apparatus 555 for controlling a summoned object in a virtual scene stored in the memory 550, which may be software in the form of programs and plug-ins, including the following software modules: an object presentation module 5551 and a state control module 5552. These modules are logical, and thus may be arbitrarily combined or further divided according to the functions realized; the function of each module is explained below.
  • FIG. 3A is a schematic flowchart of a method for controlling a summoned object in a virtual scene provided by an embodiment of the present application, which is described with reference to the steps shown in FIG. 3A.
  • Step 101: The terminal presents a target virtual object in a virtual scene and a summoned object in a first form.
  • a client that supports virtual scenes is installed on the terminal.
  • The terminal sends a request for scene data of the virtual scene to the server; the server obtains, based on the scene identifier carried in the request, the scene data of the virtual scene indicated by the scene identifier, and returns the obtained scene data to the terminal; the terminal performs screen rendering based on the received scene data, presents a picture of the virtual scene observed from the perspective of the target virtual object, and presents the target virtual object and the summoned object in the first form in the picture of the virtual scene.
  • The picture of the virtual scene is obtained by observing the virtual scene from the first-person perspective of the target virtual object, or by observing the virtual scene from a third-person perspective.
  • The picture of the virtual scene includes the object interaction environment and virtual objects for interactive operation, such as the target virtual object to be controlled and the summoned object associated with the target virtual object.
  • the target virtual object is a virtual object in the virtual scene corresponding to the current login account.
  • the user can control the target virtual object to interact with other virtual objects (different from the virtual objects in the virtual scene corresponding to the current login account) based on the interface of the virtual scene, such as controlling the target virtual object to hold virtual shooting props (such as virtual sniper firearms, virtual submachine guns, virtual shotguns, etc.) to shoot other virtual objects.
  • the summoned object is the image of various people and objects used to assist the target virtual object to interact with other virtual objects in the virtual scene.
  • the image can be a virtual character, virtual animal, animation character, virtual prop, virtual vehicle, etc.
  • In some embodiments, the summoned object in the first form can be summoned in the following manner: when a virtual chip for summoning the summoned object exists in the virtual scene, the target virtual object is controlled to pick up the virtual chip; the energy value of the target virtual object is acquired; and when the energy value of the target virtual object reaches an energy threshold, the summoned object is summoned based on the virtual chip.
  • In practical applications, the virtual chip used to summon the above summoned object can be pre-configured in the virtual scene, and the virtual chip can exist at a specific position in the virtual scene, that is, the user can equip the virtual chip through a pick-up operation; the virtual chip can also be picked up, obtained as a reward, or purchased before the user enters the virtual scene or within the virtual scene.
  • The virtual chip can also exist in a scene setting interface, that is, the user can equip the virtual chip through a setting operation on the scene setting interface.
  • After the target virtual object is controlled to equip the virtual chip, the terminal obtains an attribute value of the target virtual object, such as the target virtual object's life value or energy value, and then judges whether the attribute value of the target virtual object satisfies the summoning condition corresponding to the summoned object.
  • For example, the summoning condition corresponding to the summoned object is that the energy value of the virtual object must reach 500 points, so whether the energy value of the target virtual object exceeds 500 points can be judged to determine whether the summoning condition corresponding to the summoned object is met; when it is determined based on the attribute value that
  • the summoning condition corresponding to the summoned object is met (that is, the energy value of the target virtual object exceeds 500 points),
  • the summoned object corresponding to the target virtual object is summoned based on the equipped virtual chip.
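Combining the two example requirements above (the virtual chip must be equipped, and the energy value must reach the example threshold of 500 points), the check might look like this sketch; the function and parameter names are assumptions:

```python
ENERGY_THRESHOLD = 500  # example threshold from the text above

def can_summon(has_chip: bool, energy: int) -> bool:
    """A summoned object can be summoned only when the virtual chip is
    equipped and the target virtual object's energy reaches the threshold."""
    return has_chip and energy >= ENERGY_THRESHOLD
```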
  • The summoning condition corresponding to the summoned object may also include interacting with a target virtual monster (such as an elite monster in a weakened state, that is, with a life value lower than a preset threshold); when the summoning condition corresponding to the summoned object is satisfied (that is, the target virtual object has interacted with the target virtual monster), the summoned object corresponding to the target virtual object is summoned based on the assembled virtual chip.
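As a non-limiting illustrative sketch (not the patent's implementation), the summoning conditions above, i.e. possession of the virtual chip, an energy value reaching a threshold, and optionally an interaction with a weakened target monster, could be combined as follows; all names and the 500-point threshold are assumptions drawn from the example:

```python
# Hedged sketch of the summoning-condition check described above.
# All names and thresholds are illustrative assumptions.

ENERGY_THRESHOLD = 500  # example value from the embodiment


def can_summon(has_virtual_chip: bool,
               energy_value: int,
               interacted_with_weak_monster: bool = True) -> bool:
    """Return True when the summoning conditions for the summoned object are met."""
    if not has_virtual_chip:             # the chip must be picked up / assembled first
        return False
    if energy_value < ENERGY_THRESHOLD:  # energy must reach the threshold
        return False
    return interacted_with_weak_monster  # e.g. interacting with a weakened elite monster


# Example: chip assembled, 600 energy, monster condition met -> summon succeeds
print(can_summon(True, 600))   # True
print(can_summon(True, 400))   # False (energy below threshold)
```

A real implementation would also account for the scene-setting-interface assembly path described above; this sketch only models the pickup path.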
  • The terminal can control the summoned object to follow the target virtual object in the following manner: acquire the relative distance between the target virtual object and the summoned object; when the relative distance exceeds a first distance threshold, control the summoned object in the first form to move to a first target position relative to the target virtual object.
  • Here, the first distance threshold is the maximum distance between the target virtual object and a position at which the summoned object can conveniently assist it. When the summoned object leaves the area convenient for assisting the target virtual object, the active follow behavior of the summoned object is triggered, that is, the summoned object is controlled to move to a position close to the target virtual object, namely the first target position convenient for assisting the target virtual object.
  • The target distance threshold is the minimum distance between the summoned object and the target virtual object at which assistance remains convenient, and is smaller than the first distance threshold. When the relative distance is smaller than the target distance threshold, the active follow behavior of the summoned object can also be triggered, that is, the summoned object is controlled to move away from its current position toward the first target position convenient for assisting the target virtual object. When the relative distance between the summoned object and the target virtual object is greater than the target distance threshold and smaller than the first distance threshold, the summoned object is considered to be in an area convenient for assisting the target virtual object, and can be controlled to remain in place.
  • In practice, to ensure that the summoned object is at the precise position most convenient for assisting the target virtual object, the summoned object can also be controlled to move to the first target position.
  • The first target position is the ideal position of the summoned object relative to the target virtual object, that is, the position most favorable for the summoned object to assist the target virtual object.
  • In practice, the first target position is related to the attributes and interaction habits of the summoned object and the target virtual object.
  • Accordingly, the corresponding first target position can differ for different summoned objects and different target virtual objects.
  • The first target position may be a position located a certain distance behind the right rear of the target virtual object, a position located a certain distance behind its left rear, or any position in a fan-shaped area with a preset angle centered on the target virtual object, and so on. This application does not limit the first target position; in practical applications, it should be determined according to the actual situation.
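The two-threshold follow logic above (a first distance threshold and a smaller target distance threshold) can be sketched as follows; the concrete threshold values and names are illustrative assumptions, not the patent's implementation:

```python
import math

# Sketch of the two-threshold follow decision described above (illustrative only).
FIRST_DISTANCE_THRESHOLD = 8.0   # assumed max distance for convenient assistance
TARGET_DISTANCE_THRESHOLD = 2.0  # assumed min distance for convenient assistance


def follow_action(player_pos, summon_pos):
    """Decide the summoned object's follow behaviour from the relative distance."""
    dist = math.dist(player_pos, summon_pos)
    if dist > FIRST_DISTANCE_THRESHOLD:
        return "move_to_first_target_position"   # too far: actively follow
    if dist < TARGET_DISTANCE_THRESHOLD:
        return "move_to_first_target_position"   # too close: back off to the ideal spot
    return "stay_in_place"                       # inside the convenient band


print(follow_action((0, 0), (10, 0)))  # move_to_first_target_position
print(follow_action((0, 0), (5, 0)))   # stay_in_place
```

The band between the two thresholds acts as hysteresis, so the summoned object does not jitter while the player makes small movements.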
  • In some embodiments, the terminal can also control the summoned object to follow the target virtual object in the following manner: control the target virtual object to move in the virtual scene; as the target virtual object moves, present, within a tracking area centered on the position of the target virtual object, a second target position of the summoned object in the first form relative to the target virtual object; and control the summoned object in the first form to move to the second target position.
  • When the second target position is unreachable, the summoned object is controlled to move to a third target position, where the orientation of the third target position relative to the target virtual object differs from that of the second target position.
  • In practice, the third target position can be determined as follows: determine at least two reachable positions of the summoned object within the tracking area, and select from them a position whose distance from the second target position is smaller than a target distance as the third target position; or, when there is no reachable position in the tracking area, enlarge the tracking area and determine a third target position relative to the target virtual object within the enlarged tracking area.
  • For example, in the process of controlling the summoned object to move to the second target position most beneficial for assisting the target virtual object (such as a certain distance behind the player's right), when the summoned object cannot reach the second target position, it can be controlled to move to another position, such as the nearest reachable point behind the target virtual object's right rear, or a certain distance behind its left rear; or the tracking area can be enlarged and a suitable reachable target point found within it, with the summoned object controlled to move to that point.
  • Fig. 4 is a schematic diagram of the following of the summoned object provided by the embodiment of the present application. Taking the reverse extension line L1 of the forward-facing direction of the target virtual object (player), two angled areas are extended to the left and right, and the size of the angle θ is configurable. Take point A on the reverse extension line L1 at distance R1 from the player's position, and draw a vertical line L2 through point A perpendicular to L1. The vertical line L2 and the angled rays form two triangular tracking areas on the left and right (area 1 and area 2), or two fan-shaped tracking areas. Within the tracking area most conducive to assisting the target virtual object, such as area 1 behind the player's right, a target point (point B) reachable by the summoned object is preferentially selected as the point where the summoned object follows the target virtual object (that is, the third target position).
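The geometry of Fig. 4, namely choosing a follow point in an angled area behind the player, could be sketched as follows; the angle, radius, and which side of the reverse extension line is used are illustrative assumptions:

```python
import math

# Illustrative sketch of picking a follow point in one of the rear tracking
# areas of Fig. 4. The values of r1 and theta_deg are assumptions; the sign
# of the half-angle offset selects area 1 or area 2.


def candidate_point(player_pos, facing_deg, r1=3.0, theta_deg=60.0):
    """Return a candidate follow point in a rear tracking area.

    The backward direction is the reverse extension line L1 of the facing
    direction; the candidate is rotated half of the configurable angle theta
    off L1, at distance r1 from the player."""
    back_deg = facing_deg + 180.0          # reverse extension line L1
    cand_deg = back_deg + theta_deg / 2.0  # offset into one angled tracking area
    rad = math.radians(cand_deg)
    return (player_pos[0] + r1 * math.cos(rad),
            player_pos[1] + r1 * math.sin(rad))


# Player at the origin facing along +x: the candidate lies behind the player.
x, y = candidate_point((0.0, 0.0), 0.0)
print(round(x, 2), round(y, 2))  # -2.6 -1.5
```

A full implementation would additionally test the candidate for reachability (as the text describes for point B) and fall back to the opposite area or an enlarged tracking area when it is blocked.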
  • In some embodiments, the terminal can also control the summoned object to follow the target virtual object in the following manner: control the target virtual object to move in the virtual scene; as the target virtual object moves, present moving route indication information, which indicates the moving route along which the summoned object follows the target virtual object; and control the summoned object to move according to the moving route indicated by the moving route indication information.
  • When the moving route indicated by the indication information is the moving route of the target virtual object, the terminal controls the summoned object to move synchronously with the target virtual object according to the indicated route, so as to keep the summoned object at the relative position most favorable for assisting the target virtual object.
  • When the moving route indication information indicates a route for adjusting the summoned object in real time, the terminal controls the summoned object to move according to the indicated route, and the position of the summoned object relative to the target virtual object can be adjusted in real time to keep the summoned object, as far as possible, at the relative position most favorable for assisting the target virtual object.
  • Step 102: When the target virtual object is in an interaction preparation state for interacting with other virtual objects in the virtual scene, control the form of the summoned object to change from the first form to the second form, and control the summoned object in the second form to be in an interaction assistance state, so as to assist the target virtual object in interacting with the other virtual objects.
  • The summoned object can have at least two different working states, such as a non-interaction-ready state and an interaction-ready state. When the summoned object satisfies a working-state change condition, the terminal can control the summoned object to change its working state; the working-state change condition can be related to the working state of the target virtual object.
  • By default, the summoned object is in the following state of following the target virtual object. When the target virtual object is in the non-interaction-ready state, the summoned object is controlled to remain in the following state; when the target virtual object enters the interaction preparation state for interacting with other virtual objects in the virtual scene, it is determined that the summoned object meets the working-state change condition, and the summoned object is controlled to switch from the following state to the interaction-ready state.
  • The terminal can control the form of the summoned object to change from the first form to the second form in the following manner: control the summoned object in the first form to move to a target position at a target distance from the target virtual object, and, at that target position, control the summoned object to transform from the first form into the second form.
  • the summoned object has at least two different forms.
  • When the summoned object satisfies a form transformation condition, it can be controlled to perform the form transformation. For example, when the summoned object is an anime character and the target virtual object is in the non-interaction-ready state in the virtual scene, it is determined that the summoned object does not meet the form transformation condition, and the form of the summoned object is controlled to remain the character form (that is, the first form); when the target virtual object changes from the non-interaction-ready state to the interaction-ready state, the summoned object in the character form is controlled to move to the target position, where it is transformed from the character form into the second form, such as a virtual shield wall or a protective cover.
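The state-dependent form change described above can be sketched as a minimal mapping from the target virtual object's state to the summoned object's form and working state; all state names here are assumptions:

```python
# Minimal sketch of the form/working-state transition described above.
# State names are illustrative assumptions; the embodiment only requires that
# the summoned object's form and working state track the target's state.

FOLLOWING = ("character_form", "following")             # first form, default
ASSISTING = ("shield_wall_form", "interaction_ready")   # second form


def summon_state(target_is_aiming: bool):
    """Return the summoned object's (form, working state) for the target's state."""
    return ASSISTING if target_is_aiming else FOLLOWING


print(summon_state(False))  # ('character_form', 'following')
print(summon_state(True))   # ('shield_wall_form', 'interaction_ready')
```

A real implementation would interpose the movement step (first reach the target position, then transform), which this mapping omits for brevity.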
  • Figure 5 and Figure 6 are schematic diagrams of the state transition of the summoned object provided by the embodiment of the present application.
  • As shown in Figure 5, when the target virtual object 501 is in the non-interaction-ready state, the form of the summoned object is the character form 502 (that is, the first form).
  • When the target virtual object 501 enters an interaction preparation state such as shoulder aiming or aiming down sights, the summoned object in the character form is controlled to move to the target position and, at the target position, to transform from the character form (that is, the first form) into the virtual shield wall form 503 (that is, the second form).
  • As shown in Figure 6, when the target virtual object 601 enters an interaction preparation state such as shoulder aiming or aiming down sights, the summoned object with an anime character image is controlled to move to the target position and to transform there from the character form (that is, the first form) into the protective cover form 603 (that is, the second form).
  • In some embodiments, the terminal can also display an interaction screen corresponding to the interaction between the target virtual object and other virtual objects, where the target virtual object and the other virtual objects are located on opposite sides of the summoned object; while the interaction screen is displayed, when another virtual object performs an interactive operation on the target virtual object through a virtual prop, the summoned object is controlled to block the interactive operation.
  • In this way, the summoned object in the second form can block other virtual objects from attacking the target virtual object. For example, when the summoned object in the second form is a virtual shield wall and another virtual object fires bullets at the target virtual object, if a bullet hits the virtual shield wall, the wall blocks the bullet's attack on the target virtual object, thereby protecting it.
  • In some embodiments, the terminal may also present attribute change indication information corresponding to the summoned object, where the indication information indicates the attribute value deducted from the summoned object for blocking the interactive operation; when the indicated attribute value of the summoned object is lower than an attribute threshold, the form of the summoned object is controlled to change from the second form back to the first form.
  • the attribute value may include at least one of the following: life value, blood volume, energy value, stamina value, ammunition amount, and defense value.
  • The life value, energy value, and stamina value are numerical values that characterize the corresponding attributes of an object in the game.
  • Although the summoned object can resist attacks from the front, it also loses its own attributes under attack, reducing its attribute value; when the attribute value falls below the attribute threshold, the form of the summoned object is controlled to change from the second form back to the first form.
  • For example, when the summoned object is a shield-type AI, although the virtual shield wall can resist attacks from the front, it continuously loses blood (the blood volume of the shield-type AI) under attack; when the blood volume falls below a set value, it exits the shield wall state and returns to the humanoid attack action.
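The blocking-and-reverting behaviour of the shield-type AI could be sketched as follows; the blood values and threshold are illustrative assumptions, not values from the embodiment:

```python
# Sketch of the blocking/attribute-deduction behaviour described above.
# Damage values and the collapse threshold are illustrative assumptions.

class ShieldAI:
    def __init__(self, blood: int = 100, threshold: int = 20):
        self.blood = blood
        self.threshold = threshold      # below this, the shield wall collapses
        self.form = "shield_wall"       # second form while blocking

    def block(self, damage: int) -> None:
        """Absorb a frontal attack, deduct blood, and revert when depleted."""
        self.blood = max(0, self.blood - damage)
        if self.blood < self.threshold:
            self.form = "character"     # back to the first (humanoid) form


ai = ShieldAI()
ai.block(30); ai.block(30)
print(ai.form)   # shield_wall (blood 40, still above threshold)
ai.block(30)
print(ai.form)   # character (blood 10 < 20)
```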
  • In some embodiments, when the target virtual object and other virtual objects are located on opposite sides of the summoned object, the terminal can also display a screen in which the target virtual object observes the other virtual objects through the summoned object in the second form, and highlight the other virtual objects in that screen.
  • In practice, a night-vision effect can be used to display the screen of observing other virtual objects through the summoned object in the second form, and the outlines of the other virtual objects can be highlighted in the screen to make them stand out.
  • For example, the summoned object in the second form is a virtual shield wall (opaque, with a blocking effect), and the target virtual object and other virtual objects are located on opposite sides of the wall. Normally, when the target virtual object observes the virtual shield wall from its own perspective, it cannot see the other virtual objects blocked by the wall. In the embodiment of this application, however, due to the night-vision or see-through effect, when the target virtual object observes the virtual shield wall from its own perspective, it can observe the other virtual objects blocked by the wall; when the other virtual objects observe the wall from their own perspectives, they still cannot see the target virtual object blocked by it. In this way, the other virtual objects are exposed in the field of view of the target virtual object while the target virtual object is not exposed in theirs, which helps the target virtual object formulate an interaction strategy that causes the greatest damage to the other virtual objects and perform corresponding interactive operations according to that strategy, thereby improving the interaction ability of the target virtual object and the efficiency of human-computer interaction.
  • In some embodiments, when the target virtual object and other virtual objects are located on opposite sides of the summoned object, during the interaction between the target virtual object and the other virtual objects, the target virtual object is controlled to project virtual props in the virtual scene; when a virtual prop passes through the summoned object, an effect enhancement prompt message is displayed, prompting that the corresponding effect of the virtual prop has been enhanced.
  • Here, projection may include throwing or launching: for example, the target virtual object is controlled to throw a first virtual prop (such as a dart, grenade, or javelin) in the virtual scene, or to launch sub-virtual props (such as bullets, arrows, or bombs) through a second virtual prop (such as a firearm, bow and arrow, or ballista); when the first virtual prop or a sub-virtual prop passes through the summoned object, a gain effect such as increased attack power is obtained.
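The pass-through gain effect could be sketched with a simple path-crossing test; the wall model (a segment on a fixed x-plane) and the 1.5x multiplier are assumptions for illustration only:

```python
# Illustrative sketch of the "pass-through gain" described above: a projectile
# from the target virtual object gains an effect boost if its straight path
# crosses the summoned object, modelled here as a wall segment on x = wall_x.
# The wall model and the 1.5x multiplier are assumptions.


def passes_through(start, end, wall_x=2.0, wall_y=(-1.0, 1.0)) -> bool:
    (x0, y0), (x1, y1) = start, end
    if (x0 - wall_x) * (x1 - wall_x) >= 0:  # both endpoints on one side: no crossing
        return False
    t = (wall_x - x0) / (x1 - x0)           # crossing parameter along the path
    y = y0 + t * (y1 - y0)
    return wall_y[0] <= y <= wall_y[1]      # crossing point within the wall span


def damage(base: float, start, end) -> float:
    """Apply the assumed gain multiplier when the path crosses the wall."""
    return base * 1.5 if passes_through(start, end) else base


print(damage(10.0, (0, 0), (5, 0)))   # 15.0 (path crosses the wall)
print(damage(10.0, (0, 5), (5, 5)))   # 10.0 (misses the wall span)
```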
  • In some embodiments, while the target virtual object is maintained in the interaction preparation state, the summoned object in the second form is controlled to follow the target virtual object as it moves.
  • For example, when the summoned object in the second form is a virtual shield wall and the target virtual object moves or turns while keeping aim, the virtual shield wall is controlled to follow the target virtual object's movement or turning in real time, ensuring that the wall always stays in front of the target virtual object; when the summoned object in the second form is a protective cover and the target virtual object moves or turns while keeping aim, the protective cover is controlled to follow the target virtual object in real time, ensuring that the cover always surrounds the target virtual object.
  • When there is an obstacle on the moving route of the summoned object, the terminal automatically adjusts the moving route of the summoned object to avoid the obstacle.
  • In practice, the terminal can continuously detect the position coordinates of the summoned object relative to the target virtual object while controlling the summoned object in the second form to follow the target virtual object. As the target virtual object moves or turns, the position coordinates are continuously corrected, and the summoned object keeps coinciding with them; when there is an obstacle at the position coordinates that prevents the summoned object from moving there, the summoned object is controlled to move to the reachable position closest to those coordinates. The moving speed of the summoned object is configurable. When the target virtual object moves or turns in the interaction-ready state, the summoned object follows its movement or turning in real time, ensuring that it always stays in a position from which it can assist the target virtual object.
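The closest-reachable-position fallback described above can be sketched as follows; the reachable-point representation is an assumption (a real engine would query its navigation mesh instead):

```python
import math

# Sketch of the position-correction step described above: the summoned object
# tracks a coordinate relative to the target virtual object; if that coordinate
# is blocked, it moves to the closest reachable position instead.
# The reachable-point set is an illustrative assumption.


def resolve_position(desired, reachable):
    """Return `desired` if reachable, else the reachable point closest to it."""
    if desired in reachable:
        return desired
    return min(reachable, key=lambda p: math.dist(p, desired))


reachable = {(1, 1), (4, 4)}
print(resolve_position((1, 1), reachable))  # (1, 1): unobstructed
print(resolve_position((2, 2), reachable))  # (1, 1): nearest reachable fallback
```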
  • For example, the summoned object in the second form may be a virtual shield wall or a protective cover.
  • After the terminal controls the summoned object in the second form to switch from the following state to the interaction assistance state, when the target virtual object exits the interaction preparation state, the terminal can also control the summoned object to change from the second form back to the first form, and control the working state of the summoned object in the first form to switch from the interaction assistance state back to the following state.
  • For example, when the first form is a character form and the second form is a virtual shield wall, once the target virtual object exits the interaction preparation state, the virtual shield wall immediately returns to the character form and to the default following position relative to the target virtual object, such as the target position at its right rear, switching from the interaction assistance state to the following state.
  • In this way, the form and working state of the summoned object are adapted to the working state of the target virtual object, which facilitates the summoned object playing an auxiliary role for the target virtual object in a timely manner, enhancing the capabilities and interaction ability of the target virtual object and thereby improving the efficiency of human-computer interaction.
  • In some embodiments, the terminal can control the form of the summoned object to change from the first form to the second form, and control the summoned object in the second form to be in the interaction assistance state, in the following manner: control the target virtual object to use a target virtual prop to aim at a target position in the virtual scene, and present a corresponding crosshair pattern at the target position; in response to a transformation instruction triggered based on the crosshair pattern, control the summoned object to move to the target position, transform there from the first form into the second form, and control the summoned object in the second form to be in the interaction assistance state.
  • The locked target corresponding to the target position may be another virtual object different from the target virtual object in the virtual scene, or a scene position in the virtual scene, such as a hillside, the sky, or trees.
  • The target virtual prop may correspond to a crosshair pattern (for example, the crosshair pattern of a virtual shooting gun), so that the crosshair pattern appears at the target position after aiming.
  • For different locked targets, the interaction assistance state corresponding to the summoned object may be different.
  • When the terminal controls the target virtual object to use the target virtual prop to aim at a target object in the virtual scene (that is, the locked target is another virtual object), the terminal controls the summoned object in the first form to be in an auxiliary attack state, that is, the summoned object in the auxiliary attack state is controlled to use its corresponding specific skills to attack the target object. When there is no target object at the position aimed at by the target virtual prop (for example, the locked target is a scene position such as a point on the ground or in the sky), the terminal controls the summoned object in the first form to move to the target position and, at the target position, to transform from the first form into the second form.
  • At the same time, the summoned object is controlled to switch from the following state (corresponding to the first form, such as the human form) to the auxiliary protection state (corresponding to the shield state), so that the summoned object is in an interaction assistance state suitable for the locked target and can assist the target virtual object in interacting in the virtual scene.
  • In some embodiments, after the terminal controls the form of the summoned object to change from the first form to the second form and controls the summoned object in the second form to be in the interaction assistance state, it can also present a recall control for recalling the summoned object; in response to a trigger operation on the recall control, the summoned object is controlled to move from the target position back to its initial position and to transform from the second form back into the first form.
  • In this way, the recall of the summoned object is implemented through the recall control, and the recalled summoned object can be controlled to be in the first form (that is, the initial form).
  • FIG. 3B is a flowchart of the control method for the summoned object in the virtual scene provided by the embodiment of the present application.
  • Step 501 The terminal presents a target virtual object holding a shooting prop in a virtual shooting scene, and a summoned object in a character form.
  • That is, while presenting the target virtual object holding the shooting prop, the terminal also presents the summoned object corresponding to the target virtual object, and the summoned object is in a humanoid character form (namely, the above-mentioned first form).
  • the summoned object is an image in the form of a character for assisting the target virtual object to interact with other virtual objects in the virtual scene, and the image may be a virtual character, an animation character, or the like.
  • The summoned object may be one randomly assigned by the system to the target virtual object when the user first enters the virtual scene; it may be summoned after the user controls the target virtual object to complete certain specific tasks, according to scene guidance information in the virtual scene, so as to satisfy the summoning condition; or it may be summoned by the user triggering a summoning control, for example, clicking the summoning control when the summoning condition is met.
  • Step 502 In the virtual shooting scene, control the target virtual object to use the shooting props to aim at the target position, and present a corresponding crosshair pattern at the target position.
  • the terminal can control the target virtual object to use the shooting prop to aim at the target position in the virtual scene for interaction.
  • the locked target corresponding to the target position may be another virtual object different from the target virtual object in the virtual scene, or a scene position in the virtual scene, such as a hillside, sky, trees, etc. in the virtual scene.
  • The shooting prop may correspond to a crosshair pattern (such as the crosshair pattern of a virtual shooting gun), so that the crosshair pattern appears at the target position after aiming.
  • Step 503 In response to the transformation instruction triggered based on the crosshair pattern, control the summoned object to move to the target position, and transform from a character form to a shield state at the target position, so as to assist the target virtual object to interact with other virtual objects.
  • That is, the summoned object is controlled to be in the auxiliary protection state (corresponding to the shield state), and the summoned object in the auxiliary protection state assists the target virtual object in interacting with other virtual objects in the virtual shooting scene; through the above method, the summoned object is controlled to be in a state suitable for the locked target.
  • For example, the first form of the shield-type AI is a character form, and the second form is a virtual shield wall (that is, the above-mentioned shield state); when the target virtual object enters the interaction preparation state, the shield-type AI is controlled to automatically transform from the character form into the virtual shield wall, so as to assist the target virtual object in interacting with other virtual objects in the virtual scene.
  • Next, taking the summoned object as a shield-type AI as an example, the flow of the control method for the summoned object in the virtual scene is described. It mainly involves the summoning of the shield-type AI, the logic by which the shield-type AI follows the target virtual object, and the state transformation of the shield-type AI, which are explained one by one below.
  • Figure 7 is a schematic diagram of the summoning conditions of the summoned object provided by the embodiment of the present application. The summoning conditions of the shield-type AI are: the target virtual object possesses a shield chip, the energy value of the target virtual object reaches the energy threshold, and the target virtual object has interacted with other virtual objects (such as any weakened elite monster); when these conditions are met, the shield-type AI can be summoned.
  • FIG. 8 is a schematic diagram of a summoning method provided by an embodiment of the present application. The method includes:
  • Step 201 The terminal controls the target virtual object to interact with other target objects in the virtual scene.
  • Step 202 Determine whether the target virtual object has a shield chip.
  • When there is a shield chip for summoning a shield-type AI in the virtual scene, the terminal can control the target virtual object to pick up the shield chip; when the target virtual object successfully picks up the shield chip, step 203 is executed; when there is no shield chip for summoning the shield-type AI in the virtual scene, or the target virtual object fails to pick up the shield chip, step 205 is executed.
  • Step 203 Determine whether the energy of the target virtual object reaches an energy threshold.
  • In the virtual scene, the target virtual object can obtain energy through interactive operations, and the terminal acquires the energy value of the target virtual object.
  • When the energy value of the target virtual object reaches the energy threshold (for example, nano energy exceeds 500 points), step 204 is executed; when the energy value does not reach the energy threshold (for example, nano energy is below 500 points), step 205 is executed.
  • Step 204 Present a prompt that the shield-type AI is successfully summoned.
  • the shield-type AI can be summoned based on the shield chip, and the summoned shield-type AI is in the character form (the first form) by default, and is in the following state of moving with the target virtual object.
  • Step 205 Present a prompt that the shield AI is not successfully summoned.
  • Next, the logic by which the shield-type AI moves with the target virtual object is described.
  • FIG. 9 is a schematic diagram of a method for the summoned object to follow the target virtual object provided by an embodiment of the present application.
  • The method includes:
  • Step 301 The terminal controls the shield AI to enter the following state.
  • the newly summoned shield-type AI is in the following state of following the target virtual object by default.
  • Step 302 Determine whether the relative distance is greater than a first distance threshold.
  • Here, the first distance threshold is the maximum distance between the target virtual object and a position at which the summoned object can conveniently assist it.
  • In practice, the relative distance between the target virtual object and the shield-type AI in the following state is acquired.
  • When the relative distance is greater than the first distance threshold, the shield-type AI is considered too far from the target virtual object to assist conveniently, and step 304 is executed; when the relative distance is less than the first distance threshold and greater than the target distance threshold (the minimum distance between the target virtual object and a position at which the summoned object can conveniently assist it, smaller than the first distance threshold), the shield-type AI is considered to be in an area convenient for assisting the target virtual object, and step 303 is executed.
  • Step 303 Control the shield AI to stay in place.
  • Step 304 Determine whether the target position is reachable.
  • Here, the target position (that is, the above-mentioned first target position or second target position) is the ideal position of the shield-type AI relative to the target virtual object, the position most favorable for the shield-type AI to assist the target virtual object.
  • For example, the target position is located a certain distance behind the right rear of the target virtual object.
  • Step 305 Control the shield AI to move to the target position.
  • Step 306 Control the shield AI to move to other accessible positions.
  • Fig. 10 is a schematic diagram of determining the movement position provided by an embodiment of the present application.
  • Two angled regions extend to the left and right of the reverse extension line of the target virtual object's (player's) forward direction, and the size of the angle α is configurable. Along the reverse extension line, at a distance R0 from the player, a vertical line 1 perpendicular to the extension line is drawn. When the shield-type AI is located in the area between the horizontal line through the target virtual object and vertical line 1, it is considered too close to the target virtual object and in a position unfavorable for assisting it.
  • In this case, the shield-type AI is controlled to move to a position A a certain distance to the right rear of the target virtual object, where A is located on the horizontal line at a distance R1 from the horizontal line through the target virtual object.
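The right-rear follow point described above can be sketched geometrically: walk along the reverse of the player's facing direction and rotate by the configurable angle α toward one side. This is a minimal sketch under assumptions — the function name, the 2D coordinate convention (counter-clockwise angles, y up), and the default values of α and R1 are all invented for illustration, not taken from the patent.

```python
import math

def right_rear_position(player_pos, facing_deg, alpha_deg=30.0, r1=2.5):
    """Illustrative right-rear follow point for the shield-type AI.

    Start from the reverse of the player's facing direction (facing + 180°)
    and rotate by alpha toward one side; which sign of alpha is the player's
    "right" depends on the engine's coordinate handedness.
    """
    angle = math.radians(facing_deg + 180.0 + alpha_deg)
    x = player_pos[0] + r1 * math.cos(angle)
    y = player_pos[1] + r1 * math.sin(angle)
    return (x, y)
```

With the player at the origin facing "up" (90°), α = 30° and R1 = 2, the point lands behind and to one side of the player, as Fig. 10 describes.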
  • Fig. 11 is a schematic diagram of a state-transformation method for the summoned object provided by an embodiment of the present application. The method includes:
  • Step 401: The terminal controls the shield-type AI to be in the following state.
  • Step 402: Determine whether the target virtual object is in the interaction-ready state.
  • If so, step 403 is performed; otherwise, step 401 is performed.
  • Step 403: Control the shield-type AI to change from a human figure into a virtual shield wall.
  • FIG. 12 is a schematic diagram of the state transformation of the summoned object provided by an embodiment of the present application.
  • When the target virtual object enters the interaction-ready state, the shield-type AI in human form quickly rushes to a position in front of the target virtual object, where the human figure changes into a virtual shield wall whose orientation is consistent with the current orientation of the target virtual object. The default effect of the virtual shield wall is to block, in one direction, all long-range attacks coming from its front.
  • The terminal can continuously detect the position coordinates of the virtual shield wall relative to the target virtual object. As the target virtual object moves or turns, these coordinates are continuously corrected, and the shield wall keeps overlapping with them, ignoring suspended (mid-air) positions. If an obstacle at the coordinates directly in front of the player prevents the virtual shield wall from moving there, the wall can only move to the reachable position closest to those coordinates; the movement speed of the virtual shield wall is configurable.
  • While the target virtual object remains in the interaction-ready state, the virtual shield wall follows its movement or turning in real time, ensuring that it is always directly in front of the target virtual object; it may be suspended in the air, and if it meets an obstacle it is pushed aside by the obstacle rather than clipping through it.
  • When the target virtual object exits the interaction-ready state, the summoned object immediately changes from the virtual shield wall back to human form and returns to its default following position, that is, the target position at the right rear of the target virtual object.
  • Figure 13 is a schematic diagram of the effect of the summoned object provided by an embodiment of the present application.
  • The target virtual object can obtain different combat gains by interacting with the virtual shield wall. For example, a night-vision style view can be used to display, through the virtual shield wall, a picture of other virtual objects, with the outlines of the other virtual objects highlighted so that they stand out; when a highlighted virtual object moves out of the area covered by the virtual shield wall in the target virtual object's field of vision, the highlighting effect is cancelled. As another example, when a bullet fired by the target virtual object passes through the virtual shield wall, gain effects such as increased attack power can be obtained, which strengthens the effect of the target virtual object observing other virtual objects on the far side through the virtual shield wall.
  • Figures 14A-14B are schematic diagrams of observation through the summoned object provided by an embodiment of the present application. Since the virtual shield wall is generated by a nano energy field, in order to distinguish the effect of the two sides of the virtual shield wall against bullets and other long-range projectiles, when the target virtual object and other virtual objects are on opposite sides of the virtual shield wall, visual effect 1 seen through the wall from the target virtual object's side (the front, Fig. 14A) differs from visual effect 2 seen through the wall from the other virtual objects' side (the reverse side, Fig. 14B).
  • Figure 15 is a schematic diagram of the state transformation of the summoned object provided by an embodiment of the present application.
  • Although the virtual shield wall can resist attacks from the front, it continuously loses HP (the shield-type AI's health) as it is attacked; when its HP falls below a certain set value, it exits the shield-wall state and enters a humanoid hit reaction.
  • The terminal can also control the shield-type AI through a trigger operation on a lock control for the shield-type AI. For example, when the terminal controls the target virtual object to use the target virtual prop to aim at a target object in the virtual scene and the lock control is triggered, the terminal responds to the trigger operation by controlling the shield-type AI to attack the target object with a specific skill; when the terminal controls the target virtual object to use the target virtual prop to aim at a target position in the virtual scene where no target object exists and the lock control is triggered, the terminal responds by controlling the shield-type AI to move to the target position and change there from a human figure into a virtual shield wall, blocking long-range attacks from directly in front of the wall.
  • The shield-type AI monitors the behavior state of the target virtual object by itself and automatically decides to execute the corresponding skills and behaviors.
  • By default, the shield-type AI follows the target virtual object as it moves. In this way, the player obtains the automatic protection of the shield-type AI without sending it any instructions, and can focus on the single character under their control (that is, the target virtual object), which improves operating efficiency.
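The HP rule for the shield wall described above (block frontal attacks, lose HP per hit, revert to humanoid below a set value) can be sketched as a tiny class. The class name, attribute names, and the default numbers are assumptions for illustration; the embodiment only says the exit value is a configurable setting.

```python
class ShieldWall:
    """Minimal sketch of the shield-wall HP rule.

    exit_threshold stands in for the configurable "set value" from the
    description; the defaults are illustrative, not from the patent.
    """

    def __init__(self, hp=100, exit_threshold=20):
        self.hp = hp
        self.exit_threshold = exit_threshold
        self.form = "shield_wall"

    def absorb_hit(self, damage):
        # The wall blocks the frontal attack but is charged HP for it.
        self.hp -= damage
        if self.hp < self.exit_threshold:
            # Below the set value: exit the wall state, back to humanoid.
            self.form = "humanoid"
        return self.form
```

This keeps the gameplay balance point made above: the shield protects, but not for free.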
  • The software modules of the control device 555 for the summoned object in the virtual scene, stored in the memory 550 of FIG. 2, may include:
  • an object presentation module 5551, configured to present the target virtual object in the virtual scene and the summoned object in the first form;
  • a state control module 5552, configured to, when the target virtual object is in an interaction-ready state for interacting with other virtual objects in the virtual scene, control the form of the summoned object to change from the first form to a second form, and
  • control the summoned object in the second form to be in an interaction-assistance state, so as to assist the target virtual object in interacting with the other virtual objects.
  • In some embodiments, before presenting the summoned object in the first form, the device further includes:
  • an object summoning module, configured to control the target virtual object to pick up a virtual chip when a virtual chip for summoning the summoned object exists in the virtual scene; and
  • summon the summoned object based on the virtual chip.
  • In some embodiments, after presenting the target virtual object and the summoned object in the first form, the device further includes:
  • a first control module, configured to obtain the relative distance between the target virtual object and the summoned object, and,
  • when the relative distance exceeds a first distance threshold, control the summoned object in the first form to move to a first target position relative to the target virtual object.
  • In some embodiments, after presenting the target virtual object and the summoned object in the first form, the device further includes:
  • a second control module, configured to control the target virtual object to move in the virtual scene;
  • as the movement proceeds, present, within a tracking area centered on the target virtual object, a second target position of the summoned object in the first form relative to the target virtual object, and
  • control the summoned object in the first form to move to the second target position.
  • In some embodiments, the device further includes:
  • a movement adjustment module, configured to, in the process of controlling the summoned object in the first form to move to the second target position, when there is an obstacle in the summoned object's moving route, or the moving route includes different geographical environments that make it impossible for the summoned object to reach the second target position, control the summoned object to move to a third target position;
  • where the third target position and the second target position have different orientations relative to the target virtual object.
  • In some embodiments, before controlling the summoned object to move to the third target position, the device further includes:
  • a position determination module, configured to determine at least two positions that the summoned object passes through within the tracking area on the way from its current position to the second target position, and select from the at least two positions a position whose distance from the second target position is smaller than a target distance as the third target position; or,
  • In some embodiments, after presenting the target virtual object and the summoned object in the first form, the device further includes:
  • a third control module, configured to control the target virtual object to move in the virtual scene and present moving-route indication information,
  • where the moving-route indication information is used to indicate the moving route along which the summoned object follows the target virtual object;
  • in some embodiments, the state control module is further configured to control the summoned object in the first form to move to a target position whose distance from the target virtual object is a target distance;
  • in some embodiments, the device further includes:
  • a fourth control module, configured to display an interaction screen corresponding to the interaction between the target virtual object and the other virtual objects, the target virtual object and the other virtual objects being located on opposite sides of the summoned object; and,
  • when the other virtual objects perform an interaction operation against the target virtual object through a virtual prop, control the summoned object to block the interaction operation.
  • In some embodiments, the device further includes:
  • a fifth control module, configured to present attribute-change indication information corresponding to the summoned object,
  • where the attribute-change indication information is used to indicate the attribute value of the summoned object deducted for blocking the interaction operation; and, when the indicated attribute value is lower than an attribute threshold,
  • control the form of the summoned object to change from the second form to the first form.
  • In some embodiments, the device further includes:
  • a highlighting module, configured to, when the target virtual object and the other virtual objects are located on opposite sides of the summoned object, display a screen in which the target virtual object observes the other virtual objects through the summoned object in the second form, and highlight the other virtual objects in the screen.
  • In some embodiments, the device further includes:
  • an enhancement prompting module, configured to, when the target virtual object and the other virtual objects are located on opposite sides of the summoned object, control the target virtual object, during the interaction between the target virtual object and the other virtual objects,
  • to project virtual props in the virtual scene; and,
  • when a virtual prop passes through the summoned object, present effect-enhancement prompt information used to prompt that the effect corresponding to the virtual prop has been improved.
  • In some embodiments, after controlling the form of the summoned object to change from the first form to the second form and controlling the summoned object in the second form to switch from the following state to the interaction-assistance state, the device further includes:
  • a sixth control module, configured to control the target virtual object to move in the virtual scene while the target virtual object remains in the interaction-ready state; and,
  • in the process of controlling the target virtual object to move, control the summoned object in the second form to follow the target virtual object.
  • In some embodiments, the device further includes:
  • a movement adjustment module, configured to, in the process of controlling the summoned object in the second form to follow the target virtual object, when the summoned object moves to a blocking area with an obstacle, automatically adjust the summoned object's moving route to avoid the obstacle.
  • In some embodiments, the device further includes:
  • a seventh control module, configured to, when the target virtual object exits the interaction-ready state, control the form of the summoned object to change from the second form to the first form, and
  • switch the working state of the summoned object in the first form from the interaction-assistance state to the following state.
  • In some embodiments, the state control module is further configured to control the target virtual object to use the target virtual prop to aim at a target position in the virtual scene, and present a corresponding crosshair pattern at the target position;
  • in response to a transformation instruction triggered based on the crosshair pattern, control the summoned object to move to the target position, change there from the first form to the second form, and control the summoned object in the second form
  • to be in the interaction-assistance state.
  • In some embodiments, the device further includes:
  • an object recall module, configured to present a recall control for recalling the summoned object; and,
  • in response to a trigger operation on the recall control, control the summoned object to move from the target position to an initial position, and control the form of the summoned object to change from the second form to the first form.
  • An embodiment of the present application further provides a control device for a summoned object in a virtual scene, including:
  • a first presentation module, configured to present, in a virtual shooting scene, a target virtual object holding a shooting prop and a summoned object in character form;
  • an aiming control module, configured to, in the virtual shooting scene, control the target virtual object to use the shooting prop to aim at a target position, and present a corresponding crosshair pattern at the target position;
  • a state transformation module, configured to, in response to a transformation instruction triggered based on the crosshair pattern, control the summoned object to move to the target position and change there from the character form to a shield state, so as to assist the target virtual object in interacting with the other virtual objects.
  • An embodiment of the present application provides a computer program product or computer program, including computer instructions stored in a computer-readable storage medium.
  • The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to execute the control method for a summoned object in a virtual scene described above in the embodiments of the present application.
  • An embodiment of the present application provides a computer-readable storage medium storing executable instructions;
  • when the executable instructions are executed by a processor, the processor is caused to execute
  • the control method for a summoned object in a virtual scene provided by the embodiments of the present application.
  • In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM, or various devices including one or any combination of the above memories.
  • In some embodiments, the executable instructions may take the form of programs, software, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • As an example, the executable instructions may, but need not, correspond to files in a file system; they may be stored as part of a file that holds other programs or data, for example in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (for example, files that store one or more modules, subroutines, or sections of code).
  • As an example, the executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.


Abstract

This application provides a method, apparatus, device, computer-readable storage medium, and computer program product for controlling a summoned object in a virtual scene. The method includes: presenting a target virtual object and a summoned object in a first form in a virtual scene; when the target virtual object is in an interaction-ready state for interacting with other virtual objects in the virtual scene, controlling the form of the summoned object to change from the first form to a second form, and controlling the summoned object in the second form to be in an interaction-assistance state, so as to assist the target virtual object in interacting with the other virtual objects.

Description

Method, apparatus, device, storage medium, and program product for controlling a summoned object in a virtual scene
Cross-reference to related applications
The embodiments of this application are based on, and claim priority to, Chinese patent application No. 202110602499.3 filed on May 31, 2021, the entire contents of which are incorporated into the embodiments of this application by reference.
Technical field
This application relates to human-computer interaction technology, and in particular to a method, apparatus, device, computer-readable storage medium, and computer program product for controlling a summoned object in a virtual scene.
Background
In most virtual-scene applications, the user controls a single virtual object through a terminal to interact with other virtual objects in the virtual scene. However, the skills of a single virtual object are limited; to achieve a given interaction goal, the user must control the single virtual object through the terminal to perform many interaction operations, so human-computer interaction efficiency is low.
Summary
The embodiments of this application provide a method, apparatus, device, computer-readable storage medium, and computer program product for controlling a summoned object in a virtual scene, which can improve human-computer interaction efficiency.
The technical solutions of the embodiments of this application are implemented as follows:
An embodiment of this application provides a method for controlling a summoned object in a virtual scene, including:
presenting a target virtual object and a summoned object in a first form in the virtual scene;
when the target virtual object is in an interaction-ready state for interacting with other virtual objects in the virtual scene, controlling the form of the summoned object to change from the first form to a second form, and
controlling the summoned object in the second form to be in an interaction-assistance state, so as to assist the target virtual object in interacting with the other virtual objects.
An embodiment of this application provides a method for controlling a summoned object in a virtual scene, including:
presenting, in a virtual shooting scene, a target virtual object holding a shooting prop and a summoned object in character form;
in the virtual shooting scene, controlling the target virtual object to use the shooting prop to aim at a target position, and presenting a corresponding crosshair pattern at the target position;
in response to a transformation instruction triggered based on the crosshair pattern, controlling the summoned object to move to the target position and change there from the character form to a shield state, so as to assist the target virtual object in interacting with the other virtual objects.
An embodiment of this application provides an apparatus for controlling a summoned object in a virtual scene, including:
an object presentation module, configured to present a target virtual object and a summoned object in a first form in an interface of the virtual scene;
a state control module, configured to, when the target virtual object is in an interaction-ready state for interacting with other virtual objects in the virtual scene, control the form of the summoned object to change from the first form to a second form, and
control the summoned object in the second form to be in an interaction-assistance state, so as to assist the target virtual object in interacting with the other virtual objects.
An embodiment of this application provides an apparatus for controlling a summoned object in a virtual scene, including:
a first presentation module, configured to present, in a virtual shooting scene, a target virtual object holding a shooting prop and a summoned object in character form;
an aiming control module, configured to, in the virtual shooting scene, control the target virtual object to use the shooting prop to aim at a target position, and present a corresponding crosshair pattern at the target position;
a state transformation module, configured to, in response to a transformation instruction triggered based on the crosshair pattern, control the summoned object to move to the target position and change there from the character form to a shield state, so as to assist the target virtual object in interacting with the other virtual objects.
An embodiment of this application provides an electronic device, including:
a memory for storing executable instructions;
a processor that, when executing the executable instructions stored in the memory, implements the method for controlling a summoned object in a virtual scene provided by the embodiments of this application.
An embodiment of this application provides a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the method for controlling a summoned object in a virtual scene provided by the embodiments of this application.
An embodiment of this application provides a computer program product including a computer program or instructions that, when executed by a processor, implement the method for controlling a summoned object in a virtual scene provided by the embodiments of this application.
The embodiments of this application have the following beneficial effects:
A target virtual object and a summoned object in a first form are presented in the virtual scene; when the target virtual object is in an interaction-ready state for interacting with other virtual objects in the virtual scene, the form of the summoned object is controlled to change from the first form to a second form, and the summoned object in the second form is controlled to be in an interaction-assistance state, so as to assist the target virtual object in interacting with the other virtual objects. In this way, when the target virtual object enters the interaction-ready state, the form of the summoned object is automatically changed from the first form to the second form and the summoned object automatically enters the interaction-assistance state; without any user operation, the summoned object is automatically controlled to assist the target virtual object in interacting with the other virtual objects. The skills of the summoned object augment those of the target virtual object, which greatly reduces the number of interaction operations the user must perform through the terminal to achieve a given interaction goal, improves human-computer interaction efficiency, and saves computing resources.
Brief description of the drawings
Fig. 1 is a schematic architecture diagram of a control system 100 for a summoned object in a virtual scene provided by an embodiment of this application;
Fig. 2 is a schematic structural diagram of an electronic device 500 provided by an embodiment of this application;
Fig. 3A is a schematic flowchart of a method for controlling a summoned object in a virtual scene provided by an embodiment of this application;
Fig. 3B is a schematic flowchart of a method for controlling a summoned object in a virtual scene provided by an embodiment of this application;
Fig. 4 is a schematic diagram of a summoned object following provided by an embodiment of this application;
Fig. 5 is a schematic diagram of a state transformation of a summoned object provided by an embodiment of this application;
Fig. 6 is a schematic diagram of a state transformation of a summoned object provided by an embodiment of this application;
Fig. 7 is a schematic diagram of summoning conditions of a summoned object provided by an embodiment of this application;
Fig. 8 is a schematic diagram of a summoning method provided by an embodiment of this application;
Fig. 9 is a schematic diagram of a following method of a summoned object provided by an embodiment of this application;
Fig. 10 is a schematic diagram of determining a movement position provided by an embodiment of this application;
Fig. 11 is a schematic diagram of a state transformation method of a summoned object provided by an embodiment of this application;
Fig. 12 is a schematic diagram of a state transformation of a summoned object provided by an embodiment of this application;
Fig. 13 is a schematic diagram of the effect of a summoned object provided by an embodiment of this application;
Figs. 14A-14B are schematic diagrams of pictures observed through a summoned object provided by an embodiment of this application;
Fig. 15 is a schematic diagram of a state transformation of a summoned object provided by an embodiment of this application;
Fig. 16 is a schematic structural diagram of an apparatus for controlling a summoned object in a virtual scene provided by an embodiment of this application.
Detailed description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings. The described embodiments should not be regarded as limiting this application; all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of this application.
In the following description, "some embodiments" describes a subset of all possible embodiments; it may be the same subset or different subsets of all possible embodiments, and the subsets may be combined with each other where no conflict arises.
In the following description, the terms "first/second/..." merely distinguish similar objects and do not denote a particular ordering of objects. It can be understood that, where permitted, "first/second/..." may be interchanged in a specific order or sequence so that the embodiments described here can be implemented in an order other than that illustrated or described.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used herein are only for describing the embodiments of this application and are not intended to limit it.
Before describing the embodiments of this application in further detail, the nouns and terms involved in the embodiments are explained; they are subject to the following interpretations.
1) Client: an application running in a terminal to provide various services, such as a video playback client or a game client.
2) In response to: indicates the condition or state on which an executed operation depends. When the condition or state is satisfied, the executed operation(s) may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which multiple operations are executed.
3) Virtual scene: the virtual scene displayed (or provided) when an application runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated, semi-fictional virtual environment, or a purely fictional virtual environment. It may be any of a two-dimensional, 2.5-dimensional, or three-dimensional virtual scene; the embodiments of this application do not limit the dimensionality of the virtual scene.
For example, when the virtual scene is a three-dimensional virtual space, the space may be open and may simulate a real environment: the scene may include sky, land, and ocean, and the land may include environmental elements such as deserts and cities. The virtual scene may also include virtual items such as buildings and vehicles, as well as props such as weapons that virtual objects in the scene use to arm themselves or fight other virtual objects, and it may simulate real environments under different weather conditions, such as sunny, rainy, foggy, or night. The user can control a virtual object to move in the virtual scene.
4) Virtual object: the image of any person or thing that can interact in the virtual scene, or a movable object in the virtual scene. The movable object may be a virtual character, virtual animal, anime character, and so on, such as a character, animal, plant, oil drum, wall, or stone displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. The virtual scene may include multiple virtual objects, each with its own shape and volume, occupying part of the space in the scene.
Optionally, the virtual object may be a user character controlled through operations on the client, an artificial intelligence (AI) set up in virtual-scene battles through training, or a non-player character (NPC) set up in virtual-scene interaction. Optionally, the virtual object may be a virtual character engaged in adversarial interaction in the virtual scene. Optionally, the number of virtual objects participating in the interaction may be preset, or dynamically determined according to the number of clients joining the interaction.
Taking shooting games as an example, the user can control the virtual object to free-fall, glide, or open a parachute to descend in the sky of the virtual scene; to run, jump, crawl, or stoop forward on land; or to swim, float, or dive in the ocean. The user can also control the virtual object to ride a virtual vehicle, such as a virtual car, aircraft, or yacht, to move in the scene; the above scenes are only examples, and the embodiments of this application do not specifically limit this. The user can also control the virtual object to interact adversarially with other virtual objects through virtual props, which may be throwing props such as grenades, cluster grenades, and sticky grenades, or shooting props such as machine guns, pistols, and rifles; this application does not specifically limit the control type of summoned objects in the virtual scene.
5) Summoned object: the image of any person or thing that can assist a virtual object in interacting with other virtual objects in the virtual scene; it may be a virtual character, virtual animal, anime character, virtual prop, virtual vehicle, and so on.
6) Scene data: represents the various characteristics that objects in the virtual scene exhibit during interaction, for example their positions in the scene. Different types of characteristics may be included depending on the type of virtual scene; for example, in a game's virtual scene, scene data may include the wait times for various functions configured in the scene (depending on how many times the same function can be used within a specific time), and may also represent the attribute values of various states of game characters, such as hit points (also called the red bar) and magic points (also called the blue bar).
Referring to Fig. 1, Fig. 1 is a schematic architecture diagram of a control system 100 for a summoned object in a virtual scene provided by an embodiment of this application. To support an exemplary application, terminals (illustratively, terminal 400-1 and terminal 400-2) are connected to a server 200 through a network 300, which may be a wide area network, a local area network, or a combination of the two, using wireless or wired links for data transmission.
The terminal may be any of various types of user terminals such as a smartphone, tablet, or laptop, or a desktop computer, game console, television, or any combination of two or more of these data-processing devices; the server 200 may be a single server supporting various services, a server cluster, a cloud server, and so on.
In practical applications, the terminal installs and runs an application supporting the virtual scene, which may be any of a first-person shooting game (FPS), a third-person shooting game, a multiplayer online battle arena game (MOBA), a two-dimensional (2D) game application, a three-dimensional (3D) game application, a virtual reality application, a three-dimensional map program, or a multiplayer gunfight survival game; the application may also be a stand-alone application, such as a stand-alone 3D game program.
The virtual scene involved in the embodiments of the present invention may simulate a three-dimensional virtual space, which may be open and may simulate a real environment: it may include sky, land, and ocean, and the land may include environmental elements such as deserts and cities. The scene may also include virtual items such as buildings, tables, and vehicles, as well as props such as weapons that virtual objects use to arm themselves or fight other virtual objects, and it may simulate real environments under different weather conditions, such as sunny, rainy, foggy, or night. The virtual object may be a virtual avatar representing the user in the virtual scene, in any form, such as a simulated person or simulated animal, which the present invention does not limit. In practice, the user can use the terminal to control the virtual object to perform activities in the virtual scene, including but not limited to at least one of: adjusting body posture, crawling, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and stabbing.
Taking an electronic game scene as an exemplary scene, the user can operate on the terminal in advance; after detecting the user's operation, the terminal can download a game configuration file of the electronic game, which may include the application, interface display data, virtual scene data, and so on, so that when the user (or player) logs into the game on the terminal, the configuration file can be invoked to render and display the game interface. The user can perform touch operations on the terminal; after detecting a touch operation, the terminal can send a request for the game data corresponding to the operation to the server, and the server determines the game data based on the request and returns it to the terminal, which renders and displays it. The game data may include virtual scene data, behavior data of virtual objects in the scene, and so on.
In practical applications, the terminal presents a target virtual object and a summoned object in a first form in the virtual scene; when the target virtual object is in an interaction-ready state for interacting with other virtual objects in the virtual scene, the terminal controls the form of the summoned object to change from the first form to a second form, and controls the summoned object in the second form to be in an interaction-assistance state, so as to assist the target virtual object in interacting with the other virtual objects.
Referring to Fig. 2, Fig. 2 is a schematic structural diagram of an electronic device 500 provided by an embodiment of this application. In practical applications, the electronic device 500 may be the terminal 400-1, the terminal 400-2, or the server 200 in Fig. 1; taking the electronic device as the terminal 400-1 or 400-2 shown in Fig. 1 as an example, the electronic device implementing the method for controlling a summoned object in a virtual scene of the embodiments of this application is described. The electronic device 500 shown in Fig. 2 includes at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The components in the electronic device 500 are coupled together through a bus system 540. It can be understood that the bus system 540 implements connection and communication among these components; in addition to a data bus, it includes a power bus, a control bus, and a status signal bus. For clarity, however, the various buses are all labeled as the bus system 540 in Fig. 2.
The processor 510 may be an integrated circuit chip with signal-processing capability, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 that enable the presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 530 also includes one or more input devices 532, including user-interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen, camera, and other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination of the two. Exemplary hardware devices include solid-state memory, hard disk drives, optical disc drives, and so on. The memory 550 optionally includes one or more storage devices physically remote from the processor 510.
The memory 550 includes volatile memory or non-volatile memory, and may include both. The non-volatile memory may be read-only memory (ROM), and the volatile memory may be random access memory (RAM). The memory 550 described in the embodiments of this application is intended to include any suitable type of memory.
In some embodiments, the apparatus for controlling a summoned object in a virtual scene provided by the embodiments of this application may be implemented in software. Fig. 2 shows an apparatus 555 for controlling a summoned object in a virtual scene stored in the memory 550, which may be software in the form of programs, plug-ins, and so on, including the following software modules: an object presentation module 5551 and a state control module 5552. These modules are logical, so they may be arbitrarily combined or further split according to the functions implemented; the functions of each module are described below.
The method for controlling a summoned object in a virtual scene provided by the embodiments of this application is described next. In practice, the method may be implemented by a server or a terminal alone, or by a server and a terminal cooperatively. Referring to Fig. 3A, Fig. 3A is a schematic flowchart of a method for controlling a summoned object in a virtual scene provided by an embodiment of this application, described with reference to the steps shown in Fig. 3A.
Step 101: The terminal presents a target virtual object and a summoned object in a first form in the virtual scene.
Here, a client supporting the virtual scene is installed on the terminal. When the user opens the client and the terminal runs it, the terminal sends a request for the scene data of the virtual scene to the server; based on the scene identifier carried in the request, the server obtains the scene data of the indicated virtual scene and returns it to the terminal. The terminal renders a picture based on the received scene data, presents the picture of the virtual scene as observed from the perspective of the target virtual object, and presents the target virtual object and the summoned object in the first form in the picture. The picture of the virtual scene is obtained by observing the scene from a first-person perspective or a third-person perspective, and includes the interacting virtual objects and the object-interaction environment, such as the target virtual object controlled by the current user and the summoned object associated with it.
The target virtual object is the virtual object in the virtual scene corresponding to the currently logged-in account. In the virtual scene, the user can, based on the interface of the virtual scene, control the target virtual object to interact with other virtual objects (virtual objects other than the one corresponding to the currently logged-in account), for example controlling the target virtual object to shoot other virtual objects with a virtual shooting prop (such as a virtual sniper rifle, submachine gun, or shotgun). The summoned object is the image of any person or thing used to assist the target virtual object in interacting with other virtual objects in the virtual scene; it may be a virtual character, virtual animal, anime character, virtual prop, virtual vehicle, and so on.
In some embodiments, before presenting the summoned object in the first form, the terminal may obtain it by summoning as follows: when a virtual chip for summoning the summoned object exists in the virtual scene, control the target virtual object to pick up the virtual chip; obtain the energy value of the target virtual object; and when the energy value reaches an energy threshold, summon the summoned object based on the virtual chip.
Here, the virtual chip for summoning the summoned object may be configured in the virtual scene in advance and may exist at a specific position in the scene, so that the user can equip it through a pickup operation. In practical applications, the virtual chip may also be obtained before the user enters the virtual scene, or picked up, earned as a reward, or purchased within the scene; the virtual chip may also exist in a scene settings interface, so that the user can equip it through a setting operation there.
After the target virtual object has been controlled to equip the virtual chip, the terminal obtains the attribute values of the target virtual object, such as its health and energy values, and then determines whether they satisfy the summoning condition corresponding to the summoned object. For example, if the summoning condition requires the virtual object's attribute value to reach 500 points, the terminal may determine whether the target virtual object's energy value exceeds 500 points; when the summoning condition is determined to be satisfied based on the attribute value (that is, the energy value exceeds 500 points), the summoned object corresponding to the target virtual object is summoned based on the equipped virtual chip.
In practical applications, the summoning condition corresponding to the summoned object may also include whether the target virtual object has interacted with a target virtual monster (for example, an elite monster in a weakened state, with health below a preset threshold). When this summoning condition is determined to be satisfied (that is, the target virtual object has interacted with the target virtual monster), the summoned object corresponding to the target virtual object is summoned based on the equipped virtual chip.
In practice, summoning the summoned object may require satisfying at least one of the above exemplary summoning conditions — for example, all of them, or only one or two — which is not limited in the embodiments of this application.
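The summoning condition described above (an attribute value reaching a threshold, or interaction with the target virtual monster, with at least one condition sufficient) can be sketched as a single predicate. This is a hedged illustration: the function and parameter names are invented, and the 500-point threshold is the example value from the text, not a fixed rule.

```python
def can_summon(energy, threshold=500, interacted_with_elite=False):
    """Sketch of the summoning check: per the embodiment, satisfying at
    least one configured condition (energy reaching the threshold, or
    having interacted with the target virtual monster) permits the summon.
    """
    return energy >= threshold or interacted_with_elite
```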
In some embodiments, after presenting the target virtual object and the summoned object in the first form, the terminal may control the summoned object to follow the target virtual object as follows: obtain the relative distance between the target virtual object and the summoned object; when the relative distance exceeds a first distance threshold, control the summoned object in the first form to move to a first target position relative to the target virtual object.
Here, in practical applications, a relative distance between the summoned object and the target virtual object that is either too large or too small is unfavorable for the summoned object to assist the target virtual object. The first distance threshold is the maximum distance between the target virtual object and a position from which the summoned object can conveniently assist it. When the relative distance exceeds the first distance threshold, the summoned object is considered too far from the target virtual object; in this case it is in an area inconvenient for assisting the target virtual object, and its active following behavior may be triggered, that is, the summoned object is controlled to move toward the target virtual object's position and to the first target position convenient for assistance. When the relative distance is below a target distance threshold (the minimum distance between the target virtual object and a position from which the summoned object can conveniently assist it, which is smaller than the first distance threshold), the summoned object is considered too close to the target virtual object; it is again in an area inconvenient for assistance, and its active following behavior may likewise be triggered, controlling it to move away from the target virtual object and to the first target position. When the relative distance is greater than the target distance threshold and smaller than the first distance threshold, the summoned object is considered to be in an area convenient for assisting the target virtual object and may be kept in place; in practical applications, however, to ensure that the summoned object is at the precise position most convenient for assistance, it may still be controlled to move to the first target position.
The first target position is the ideal position of the summoned object relative to the target virtual object, and the position most favorable for the summoned object to assist it. It is related to the attributes and interaction habits of the summoned object and the target virtual object, and may differ for different summoned objects and different target virtual objects: for example, it may be a position a certain distance to the right rear of the target virtual object, a position a certain distance to its left rear, or any position in a fan-shaped area with a preset angle centered on the target virtual object, and so on. This application does not limit the first target position; in practical applications it should be determined according to the actual situation.
In some embodiments, after presenting the target virtual object and the summoned object in the first form in the virtual scene, the terminal may also control the summoned object to follow the target virtual object as follows: control the target virtual object to move in the virtual scene; as the movement proceeds, present, within a tracking area centered on the position of the target virtual object, a second target position of the summoned object in the first form relative to the target virtual object, and control the summoned object in the first form to move to the second target position.
In some embodiments, in the process of controlling the summoned object in the first form to move to the second target position, when there is an obstacle in the summoned object's moving route, or the moving route includes different geographical environments that make it impossible for the summoned object to reach the second target position, the summoned object is controlled to move to a third target position, where the third target position and the second target position have different orientations relative to the target virtual object.
In practical applications, when there is an obstacle in the summoned object's moving route, an unreachability reminder may also be presented.
In some embodiments, before controlling the summoned object to move to the third target position, the terminal may determine the third target position as follows: determine at least two positions that the summoned object passes through within the tracking area on the way from its current position to the second target position, and select from them a position whose distance from the second target position is smaller than a target distance as the third target position; or, when no reachable position exists within the tracking area, enlarge the tracking area and determine the third target position relative to the target virtual object within the enlarged tracking area.
Here, in the process of controlling the summoned object to move to the second target position most favorable for assisting the target virtual object (such as a position a certain distance to the player's right rear), when the summoned object cannot be controlled to reach the second target position, it may be controlled to move to another position: for example, to the nearest reachable point at the target virtual object's right rear, or to a position a certain distance to its left rear; or the tracking area may be enlarged and a suitable reachable target point found in the enlarged area in the above manner, so that the summoned object moves to the suitable reachable target point found.
Referring to Fig. 4, Fig. 4 is a schematic diagram of a summoned object following provided by an embodiment of this application. Two angled regions extend to the left and right of the reverse extension line L1 of the target virtual object's (player's) forward direction, with the angle α configurable. A point A at a distance R1 along L1 from the player's position is obtained, and a vertical line L2 through A perpendicular to L1 is drawn. In this way, the reverse extension line L1, the vertical line L2, and the angle rays form two triangular tracking regions (region 1 and region 2), or two fan-shaped tracking regions, on the left and right. A target point reachable by the summoned object (point B) is preferentially selected in the tracking region most favorable for assisting the target virtual object, such as region 1 at the player's right rear, as the target point for the summoned object to follow the target virtual object (that is, the third target position). If there is no suitable target point in region 1 at the right rear, the next-best choice is a suitable target point in region 2 at the player's left rear; if no suitable target point can be found there either, the search region is enlarged and suitable target points continue to be selected in the enlarged region in the above manner, until a suitable reachable target point (that is, another reachable position) is found as the third target position.
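The fallback search of Fig. 4 — prefer the right-rear region, then the left-rear region, then an enlarged search region — can be sketched as a simple priority lookup. The function name, region labels, and input shape are invented for illustration; in a real implementation the candidate points would come from reachability tests against the scene's navigation data.

```python
def pick_follow_point(candidates_by_region):
    """Sketch of the Fig. 4 fallback: try regions in order of preference.

    candidates_by_region maps a region label to the list of reachable
    target points found in that region (empty or absent if none).
    Returns the chosen (region, point), or (None, None) if nothing is
    reachable anywhere.
    """
    for region in ("right_rear", "left_rear", "expanded"):
        points = candidates_by_region.get(region, [])
        if points:
            return region, points[0]
    return None, None
```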
In some embodiments, after presenting the target virtual object and the summoned object in the first form in the virtual scene, the terminal may also control the summoned object to follow the target virtual object as follows: control the target virtual object to move in the virtual scene; as the movement proceeds, present moving-route indication information used to indicate the route along which the summoned object follows the target virtual object; and control the summoned object to move along the route indicated by the moving-route indication information.
Here, if the summoned object is already at the relative position most favorable for assisting the target virtual object before the terminal controls the target virtual object to move in the virtual scene, then during the movement the route indicated by the moving-route indication information is the target virtual object's own moving route, and the terminal controls the summoned object to follow the target virtual object synchronously along it, ensuring that the summoned object always remains at the most favorable relative position. If the summoned object is not yet at the most favorable relative position before the movement, the indicated route is a route that adjusts the summoned object in real time; by controlling the summoned object to move along it, the summoned object's position relative to the target virtual object can be adjusted in real time, ensuring that the summoned object stays as close as possible to the relative position most favorable for assisting the target virtual object.
Step 102: When the target virtual object is in an interaction-ready state for interacting with other virtual objects in the virtual scene, control the form of the summoned object to change from the first form to a second form, and control the summoned object in the second form to be in an interaction-assistance state, so as to assist the target virtual object in interacting with the other virtual objects.
The summoned object may have at least two different working states, such as a non-interaction-ready state and an interaction-ready state. When the summoned object satisfies a working-state transformation condition, the terminal may control it to change working state; the condition may be related to the working state of the target virtual object. For example, suppose the summoned object is by default in the following state in which it follows the target virtual object. When the target virtual object is in the non-interaction-ready state with respect to other virtual objects in the scene, the summoned object is determined not to satisfy the working-state condition and is kept in the following state; when the target virtual object is in the interaction-ready state for interacting with other virtual objects in the scene, the summoned object is determined to satisfy the working-state transformation condition and is controlled to change from the following state to the interaction-ready state.
In some embodiments, the terminal may control the form of the summoned object to change from the first form to the second form as follows: control the summoned object in the first form to move to a target position whose distance from the target virtual object is a target distance; at the target position, control the summoned object to change from the first form to the second form.
In practical applications, the summoned object has at least two different forms, and when a form transformation condition (related to the working state of the target virtual object) is satisfied, the summoned object can be controlled to transform. For example, when the summoned object is an anime character and the target virtual object's working state in the virtual scene is the non-interaction-ready state, the summoned object is determined not to satisfy the form transformation condition and its form is kept as the character form (that is, the first form); when the target virtual object changes from the non-interaction-ready state to the interaction-ready state, for example by entering shoulder aiming or scope aiming, the summoned object is determined to satisfy the form transformation condition, and the summoned object in character form is controlled to move to the target position and change there from the character form to a second form such as a virtual shield wall or protective shield.
Referring to Figs. 5-6, Figs. 5-6 are schematic diagrams of state transformations of summoned objects provided by embodiments of this application. In Fig. 5, when the target virtual object 501 is in the non-interaction-ready state in the virtual scene, the form of the summoned object is the character form 502 (the first form); when the target virtual object 501 enters an interaction-ready state such as shoulder aiming or scope aiming, the summoned object in character form is controlled to move to the target position and change there from the character form (the first form) to the virtual shield wall form 503 (the second form). In Fig. 6, when the target virtual object 601 is in the non-interaction-ready state in the virtual scene, the form of the summoned object is the character form 602 (the first form); when the target virtual object 601 enters an interaction-ready state such as shoulder aiming or scope aiming, the summoned object with the anime-character image is controlled to move to the target position and change there from the character form (the first form) to the protective shield form 603 (the second form).
In some embodiments, the terminal may also display the interaction screen corresponding to the interaction between the target virtual object and the other virtual objects, with the target virtual object and the other virtual objects located on opposite sides of the summoned object; in the process of displaying the interaction screen, when the other virtual objects perform an interaction operation against the target virtual object through a virtual prop, the summoned object is controlled to block the interaction operation.
Here, the summoned object in the second form can block attacks by other virtual objects against the target virtual object. For example, when the summoned object in the second form is a virtual shield wall and another virtual object fires bullets at the target virtual object, if the bullets hit the virtual shield wall, the wall can block their attack on the target virtual object, thereby protecting it.
In some embodiments, the terminal may also present attribute-change indication information corresponding to the summoned object, which is used to indicate the attribute value of the summoned object deducted for blocking the interaction operation; when the indicated attribute value is lower than an attribute threshold, the form of the summoned object is controlled to change from the second form to the first form.
The attribute value may include at least one of the following: health, HP, energy, stamina, ammunition, and defense. To preserve game balance, although the summoned object can block attacks from directly in front, it also consumes its own attributes when attacked, lowering its attribute values; when an attribute value falls below the attribute threshold, the form of the summoned object is controlled to change from the second form to the first form.
For example, when the summoned object is a shield-type AI, although the virtual shield wall can block attacks from directly in front, it continuously loses HP (the shield-type AI's health) as it is attacked; when its HP falls below a certain set value, it exits the shield-wall state and enters a humanoid hit reaction.
In some embodiments, when the target virtual object and the other virtual objects are located on opposite sides of the summoned object, the terminal may also display a screen in which the target virtual object observes the other virtual objects through the summoned object in the second form, highlighting the other virtual objects in the screen.
Here, the screen of observing the other virtual objects through the summoned object in the second form may be displayed in a night-vision style, with the outlines of the other virtual objects highlighted so that they stand out. For example, suppose the summoned object in the second form is a virtual shield wall (opaque, with an occluding effect) and the target virtual object and the other virtual objects are on opposite sides of it. Under normal circumstances, the target virtual object, observing the wall from its own perspective, cannot see the other virtual objects occluded by it. In this embodiment, however, because the other virtual objects occluded by the wall are displayed in a night-vision or see-through manner, they are visible to the target virtual object — that is, the target virtual object can observe the other virtual objects occluded by the virtual shield wall — while the other virtual objects, observing the wall from their own perspectives, cannot see the occluded target virtual object. Thus the other virtual objects are exposed within the target virtual object's field of view while the target virtual object is not exposed within theirs, which helps the target virtual object formulate an interaction strategy that causes the greatest damage to the other virtual objects and execute the corresponding interaction operations according to that strategy, improving the target virtual object's interaction capability and thus human-computer interaction efficiency.
In some embodiments, when the target virtual object and the other virtual objects are located on opposite sides of the summoned object, during the interaction between them the target virtual object is controlled to project virtual props in the virtual scene; when a virtual prop passes through the summoned object, effect-enhancement prompt information is presented, which is used to prompt that the effect corresponding to the virtual prop has been improved.
Projecting may include throwing or firing, for example controlling the target virtual object to throw a first virtual prop (such as a dart, grenade, or javelin) in the virtual scene, or to fire a sub virtual prop (correspondingly, a bullet, arrow, bomb, and so on) through a second virtual prop (such as a firearm, bow, or ballista). When the first virtual prop or the sub virtual prop passes through the summoned object, gain effects such as increased attack power can be obtained.
在一些实施例中,终端控制召唤对象的形态由第一形态变换为第二形态,并控制第二形态的召唤对象由跟随状态切换为交互辅助状态之后,还可在目标虚拟对象维持处于交互准备状态的过程中,控制目标虚拟对象在虚拟场景中移 动;在控制目标虚拟对象移动的过程中,控制第二形态的召唤对象跟随目标虚拟对象进行移动。
例如,当第二形态的召唤对象为虚拟盾墙时,若目标虚拟对象保持瞄准状态移动或转向,则控制虚拟盾墙实时跟随目标虚拟对象进行移动或转向,保证虚拟盾墙永远处于目标虚拟对象的正前方,且可以悬空;当第二形态的召唤对象为防护罩时,若目标虚拟对象保持瞄准状态移动或转向,则控制防护罩实时跟随目标虚拟对象进行移动或转向,保证防护罩永远处于目标虚拟对象的周围。
在一些实施例中,终端在控制第二形态的召唤对象跟随目标虚拟对象进行移动的过程中,当召唤对象移动至存在阻挡物的阻挡区域时,自动调整召唤对象的移动路线以避开阻挡物。
在实际应用中,终端在控制第二形态的召唤对象跟随目标虚拟对象进行移动的过程中,可不断检测召唤对象相对目标虚拟对象的位置坐标,随着目标虚拟对象的移动或转向,位置坐标会被不断修正,召唤对象也会保持与该位置坐标重叠;当位置坐标处有阻挡物时,阻止召唤对象往该坐标位置移动,则控制召唤对象移动到距离该位置坐标最近的可到达位置;在控制召唤对象移动过程中,召唤对象的移动速度是可配置的。
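上述"理想坐标被阻挡时选取最近可达位置"的修正逻辑可示意如下(pick_position 及以二维元组表示位置的方式均为说明用的假设):

```python
import math

def pick_position(desired, reachable_positions):
    """desired 为召唤对象相对目标虚拟对象的理想位置坐标;
    若该坐标可达则直接使用,否则返回可达位置集合中距 desired 最近的一个。"""
    if desired in reachable_positions:
        return desired
    return min(reachable_positions, key=lambda p: math.dist(p, desired))
```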
当目标虚拟对象保持交互准备状态进行移动或转向时,召唤对象会实时跟随交互准备状态进行移动或转向,保证召唤对象永远处于能够辅助目标虚拟对象的位置,如第二形态的召唤对象为虚拟盾墙时,保证虚拟盾墙处于目标虚拟对象的正前方,又如第二形态的召唤对象为防护罩时,保证防护罩处于目标虚拟对象的周围。
在一些实施例中,终端在控制第二形态的召唤对象由跟随状态切换为交互辅助状态之后,当目标虚拟对象退出交互准备状态时,还可控制召唤对象的形态由第二形态变换为第一形态,并控制第一形态的召唤对象的工作状态由交互辅助状态切换为跟随状态。
例如,当召唤对象为护盾型AI时,对应的第一形态为人物形态,对应的第二形态为虚拟盾墙,当目标虚拟对象退出交互准备状态时,该召唤对象的形态将从虚拟盾墙立刻变回人物形态,并且回到跟随目标虚拟对象的默认位置,如目标虚拟对象右后方的目标位置处,由交互辅助状态切换为跟随状态,如此,召唤对象的形态和工作状态与目标虚拟对象的工作状态相适配,便于召唤对象及时发挥针对目标虚拟对象的辅助作用,借助于召唤对象的技能,能够提高目标虚拟对象的技能,进而提高目标虚拟对象的交互能力,以提高人机交互效率。
在一些实施例中,终端可通过如下方式控制召唤对象的形态由第一形态变换为第二形态,并控制第二形态的召唤对象处于交互辅助状态:控制目标虚拟对象使用目标虚拟道具瞄准虚拟场景中的目标位置,并在目标位置处呈现相应的准星图案;响应于基于准星图案触发的变换指令,控制召唤对象移动至目标位置处,在目标位置处由第一形态变换为第二形态,并控制第二形态的召唤对象处于交互辅助状态。
其中,目标位置所对应的锁定目标可以是虚拟场景中不同于目标虚拟对象的其他虚拟对象,也可以是虚拟场景中的场景位置,比如虚拟场景中的山坡、天空、树木等。在实际应用中,目标虚拟道具可以对应有相应的准星图案(比如虚拟射击枪支的准星图案),从而在瞄准目标位置后,在目标位置处呈现该准星图案。根据目标位置所对应锁定目标的不同,召唤对象对应的交互辅助状态可以是不同的,例如,当终端控制目标虚拟对象使用目标虚拟道具瞄准虚拟场景中的目标对象(即锁定目标为其他虚拟对象)时,终端控制第一形态的召唤对象处于辅助攻击状态,即控制处于辅助攻击状态的召唤对象使用对应的特定技能攻击该目标对象;当终端控制目标虚拟对象使用目标虚拟道具瞄准虚拟场景中的目标位置(不存在目标对象,如锁定目标为虚拟场景中的地面一点、天空一点等场景位置)时,终端控制第一形态的召唤对象移动至目标位置处,并在目标位置处控制召唤对象由第一形态变换为第二形态,如控制召唤对象由人物形态变换为护盾形态,控制护盾形态的召唤对象由跟随状态(与第一形态(如人形状态)对应)切换为辅助防护状态(与护盾状态对应),从而控制召唤对象处于与锁定目标相适配的交互辅助状态,以辅助目标虚拟对象在虚拟场景中进行交互。
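根据锁定目标类型选择交互辅助状态的分支逻辑,可示意为如下片段(返回的状态标签仅用于说明,并非实际实现):

```python
def assist_state(locked_is_virtual_object: bool) -> str:
    """锁定目标为其他虚拟对象时,召唤对象进入辅助攻击状态;
    锁定目标为场景位置(地面一点、天空一点等)时,变换为护盾形态进行辅助防护。"""
    return "assist_attack" if locked_is_virtual_object else "assist_defend"
```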
在一些实施例中,终端在控制召唤对象的形态由第一形态变换为第二形态,并控制第二形态的召唤对象处于交互辅助状态之后,还可呈现用于对召唤对象进行召回的召回控件;响应于针对召回控件的触发操作,控制召唤对象由目标位置移动至初始位置,并控制召唤对象的形态由第二形态变换为第一形态。
这里,通过召回控件实现召唤对象的召回,在召回召唤对象时,无论召回前召唤对象处于何种形态,均可控制召回的召唤对象处于第一形态(即初始形态)。
接下来以虚拟场景为虚拟射击场景为例,继续对本申请实施例提供的虚拟场景中召唤对象的控制方法进行说明,参见图3B,图3B为本申请实施例提供的虚拟场景中召唤对象的控制方法的流程示意图,该方法包括:
步骤501:终端呈现虚拟射击场景中持有射击道具的目标虚拟对象、以及处于人物形态的召唤对象。
这里,终端在呈现持有射击道具的目标虚拟对象的同时,还呈现目标虚拟对象对应的召唤对象,此时,召唤对象处于人物形态(即上述的第一形态)。该召唤对象为用于辅助目标虚拟对象与虚拟场景中其他虚拟对象进行交互的人物形态的形象,该形象可以是虚拟人物、动漫人物等。该召唤对象可为用户最初进入虚拟场景时,系统随机分配给目标虚拟对象的召唤对象,也可为用户根据虚拟场景中的场景指引信息,通过控制目标虚拟对象执行某些特定任务以达到召唤对象的召唤条件,从而召唤得到的召唤对象,也可以为用户通过触发召唤控件召唤得到的召唤对象,比如在满足召唤条件的情况下,点击召唤控件以召唤上述召唤对象。
步骤502:在虚拟射击场景中,控制目标虚拟对象使用射击道具瞄准目标位置,并在目标位置处呈现相应的准星图案。
这里,终端在呈现持有射击道具的目标虚拟对象、以及目标虚拟对象对应的召唤对象之后,可以控制目标虚拟对象使用射击道具在虚拟场景中瞄准目标位置以进行交互。该目标位置所对应的锁定目标可以是虚拟场景中不同于该目标虚拟对象的其他虚拟对象,也可以是虚拟场景中的场景位置,比如虚拟场景中的山坡、天空、树木等。在实际应用中,该射击道具可以对应有相应的准星图案(比如虚拟射击枪支的准星图案),从而在瞄准目标位置后,在目标位置处呈现该准星图案。
步骤503:响应于基于准星图案触发的变换指令,控制召唤对象移动至目标位置处,并在目标位置处由人物形态变换为护盾状态,以辅助目标虚拟对象与其他虚拟对象进行交互。
这里,在实际应用中,针对不同的锁定目标,在本申请实施例中,为召唤对象设置不同的交互辅助状态,比如辅助防护状态、辅助攻击状态等。当锁定目标为其他虚拟对象时,控制召唤对象处于辅助攻击状态,并可控制处于辅助攻击状态的召唤对象在虚拟射击场景中攻击其他虚拟对象;当锁定目标为场景位置时,比如锁定目标为虚拟场景中的地面一点、天空一点等场景位置时,控制召唤对象移动至目标位置处,并在目标位置处控制召唤对象由人物形态变换为护盾形态,如控制人形状态的召唤对象由跟随状态切换为辅助防护状态(与护盾状态对应),控制处于辅助防护状态的召唤对象在虚拟射击场景中辅助目标虚拟对象与其他虚拟对象进行交互;通过上述方式,控制召唤对象处于与锁定目标相适配的交互辅助状态,以辅助目标虚拟对象与其他虚拟对象进行交互,借助于召唤对象的辅助,能够提高目标虚拟对象的技能,进而提高目标虚拟对象的交互能力,以提高人机交互效率。
下面,将说明本申请实施例在一个实际的应用场景中的示例性应用。以虚拟场景为射击游戏、召唤对象为用于辅助目标虚拟对象的护盾型AI为例,护盾型AI的第一形态为人物形态,第二形态为虚拟盾墙(即上述的护盾状态),当目标虚拟对象在瞄准(即上述的交互准备状态)时,控制护盾型AI由人物形态自动变换为虚拟盾墙,以辅助目标虚拟对象在虚拟场景中与其他虚拟对象进行交互。
在实际实施时,本申请实施例提供的虚拟场景中召唤对象的控制方法流程主要涉及:护盾型AI的召唤、护盾型AI跟随目标虚拟对象进行移动的逻辑、护盾型AI状态变换,接下来将逐一进行说明。
1、护盾型AI的召唤
参见图7,图7为本申请实施例提供的召唤对象的召唤条件示意图,如图7所示,护盾型AI的召唤条件是目标虚拟对象拥有护盾芯片、目标虚拟对象的能量值达到能量阈值、与其他虚拟对象进行交互(如与任何虚弱精英怪进行交互),当满足以上条件时,即可召唤出护盾型AI。
参见图8,图8为本申请实施例提供的召唤方法示意图,该方法包括:
步骤201:终端控制目标虚拟对象在虚拟场景中与其他目标对象进行交互。
步骤202:判断目标虚拟对象是否拥有护盾芯片。
这里,在实际实施时,当虚拟场景中存在用于召唤护盾型AI的护盾芯片时,终端可控制目标虚拟对象拾取护盾芯片,当目标虚拟对象成功拾取护盾芯片时,执行步骤203;当虚拟场景中不存在用于召唤护盾型AI的护盾芯片,或目标虚拟对象没有成功拾取护盾芯片时,执行步骤205。
步骤203:判断目标虚拟对象的能量是否达到能量阈值。
这里,目标虚拟对象的能量可通过目标虚拟对象在虚拟场景中的交互操作获得,终端获取目标虚拟对象的能量值,当目标虚拟对象的能量值到达能量阈值(如纳米能量超过500点)时,执行步骤204;当目标虚拟对象的能量值未到达能量阈值(如纳米能量低于500点)时,执行步骤205。
步骤204:呈现成功召唤出护盾型AI的提示。
这里,当满足召唤条件时,即可基于护盾芯片召唤出护盾型AI,召唤出的护盾型AI默认处于人物形态(第一形态)、且处于跟随目标虚拟对象进行移动的跟随状态。
步骤205:呈现未成功召唤出护盾型AI的提示。
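上述步骤201至步骤205所示的召唤条件判定,可概括为如下示意性函数(函数名为假设;500点的能量阈值仅沿用上文示例,实际可配置):

```python
def can_summon_shield_ai(has_shield_chip: bool, energy: float,
                         energy_threshold: float = 500.0) -> bool:
    """召唤条件:目标虚拟对象拥有护盾芯片,且能量值达到能量阈值
    (如纳米能量超过 500 点)。两项条件同时满足时才可召唤出护盾型AI。"""
    return has_shield_chip and energy >= energy_threshold
```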
2、护盾型AI跟随目标虚拟对象进行移动的逻辑
参见图9,图9为本申请实施例提供的召唤对象的跟随方法示意图,该方法包括:
步骤301:终端控制护盾型AI进入跟随状态。
这里,新召唤出的护盾型AI默认处于跟随目标虚拟对象进行移动的跟随状态。
步骤302:判断相对距离是否大于第一距离阈值。
这里,召唤对象与目标虚拟对象之间的相对距离过大或过小,都将不利于召唤对象辅助目标虚拟对象,第一距离阈值为召唤对象便于辅助目标虚拟对象时所处位置与目标虚拟对象之间的最大距离。在实际应用中,获取目标虚拟对象与处于跟随状态的护盾型AI之间的相对距离,当相对距离大于第一距离阈值时,认为护盾型AI相距目标虚拟对象太远,处于不便于辅助目标虚拟对象的区域,此时执行步骤304;当相对距离小于第一距离阈值、且大于目标距离阈值(是召唤对象便于辅助目标虚拟对象时所处位置与目标虚拟对象之间的最小距离,小于第一距离阈值)时,认为护盾型AI处于便于辅助目标虚拟对象的区域,此时执行步骤303。
步骤303:控制护盾型AI维持在原地。
步骤304:判断目标位置是否可达到。
其中,目标位置(即上述的第一目标位置或第二目标位置)为护盾型AI相对目标虚拟对象的理想位置,为最有利于护盾型AI辅助目标虚拟对象的位置,如目标位置为位于目标虚拟对象右后方一定距离的位置。当目标位置可达到时,执行步骤305;当目标位置不可达时,执行步骤306。
步骤305:控制护盾型AI移动至目标位置。
步骤306:控制护盾型AI移动至其他可达位置。
其中,其他可达位置即为上述的第三目标位置。
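步骤302至步骤306的跟随决策可概括为如下示意性片段(动作标签与阈值参数均为说明用的假设):

```python
def follow_action(distance: float, target_min: float, first_max: float) -> str:
    """跟随状态下的决策:相对距离大于第一距离阈值(first_max)时移动至目标位置;
    处于 (target_min, first_max] 区间时维持原地;距离过近同样不利于辅助,
    移动至理想的目标位置。"""
    if distance > first_max:
        return "move_to_target"
    if distance > target_min:
        return "stay"
    return "move_to_target"
```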
参见图10,图10为本申请实施例提供的移动位置确定示意图,以目标虚拟对象(玩家)正前方朝向的反向延长线为基准,向左右延伸出两个夹角区域,夹角α的大小可配置;在该反向延长线上距离为R0处,作该延长线的垂直线1,当护盾型AI位于目标虚拟对象所在水平线与垂直线1之间的区域时,认为护盾型AI与目标虚拟对象距离太近,处于不利于辅助目标虚拟对象的位置,此时,控制护盾型AI移动至位于目标虚拟对象右后方一定距离的位置A处,其中,A所在水平线与目标虚拟对象所在水平线的距离为R1。
在该反向延长线上距离为R2处,作该延长线的垂直线2,当护盾型AI所在水平线与目标虚拟对象所在水平线的距离大于R2时,认为护盾型AI与目标虚拟对象距离太远,处于不利于辅助目标虚拟对象的位置,此时,控制护盾型AI移动至位于目标虚拟对象右后方一定距离的位置A处,其中,A所在水平线与目标虚拟对象所在水平线的距离为R1。
当目标虚拟对象右后方的位置A处存在阻挡物时,即玩家右后的三角形区域没有合适的目标点,则退而求其次,寻找玩家左后的三角形区域内的合适目标点;如果玩家左后的三角形区域内也找不到合适目标点,则扩展R1的大小至R2,继续上述规则选点,直到找出合适的可到达目标点(即其他可达位置)为止。
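上述"先在右后方三角形区域选点、失败后尝试左后方区域、再扩展R1至R2继续选点"的流程,可用如下示意性片段概括(候选点生成函数与可达性判定由调用方提供,均为假设接口):

```python
def select_target_point(gen_candidates, is_reachable, r1, r2, step):
    """gen_candidates(r) 依次给出半径 r 下右后方、左后方三角形区域内的候选点;
    is_reachable(point) 判定候选点是否可到达(无阻挡物)。
    从 r1 开始逐步扩展至 r2,返回找到的第一个可到达目标点,找不到返回 None。"""
    r = r1
    while r <= r2:
        for point in gen_candidates(r):
            if is_reachable(point):
                return point
        r += step
    return None
```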
3、护盾型AI状态变换
参见图11,图11为本申请实施例提供的召唤对象的状态变换方法示意图,该方法包括:
步骤401:终端控制护盾型AI处于跟随状态。
步骤402:判断目标虚拟对象是否处于交互准备状态。
这里,当目标虚拟对象进入肩膀瞄准或开镜瞄准时,认为目标虚拟对象处于交互准备状态,此时执行步骤403;否则,执行步骤401。
步骤403:控制护盾型AI由人形变换为虚拟盾墙。
参见图12,图12为本申请实施例提供的召唤对象的状态变换示意图,这里,当目标虚拟对象进入肩膀瞄准或开镜瞄准时,处于人形的护盾型AI会快速冲到距离目标虚拟对象正前方目标距离处,由人形变成虚拟盾墙,虚拟盾墙的朝向与目标虚拟对象当前的朝向一致;虚拟盾墙的默认效果是单向抵挡来自虚拟盾墙正前方的所有远程攻击。
在实际应用中,当护盾型AI变成虚拟盾墙后,终端可不断检测虚拟盾墙相对目标虚拟对象的位置坐标,随着目标虚拟对象的移动或转向,位置坐标会被不断修正,虚拟盾墙也会保持与该位置坐标重叠,无视悬空位置;如果玩家正前方位置坐标处有阻挡物,会阻止虚拟盾墙往该坐标位置移动,则虚拟盾墙仅能移动到距离该位置坐标最近的可到达位置;虚拟盾墙的移动速度可配置。
当目标虚拟对象保持交互准备状态进行移动或转向时,虚拟盾墙会实时跟随交互准备状态进行移动或转向,保证永远处于目标虚拟对象的正前方,且可以悬空,但是如果交互准备状态正前方有阻挡物,则会被阻挡物挤开,不会穿插。当目标虚拟对象退出交互准备状态时,该召唤对象的形态将从虚拟盾墙立刻变回人形,并且回到跟随目标虚拟对象的默认位置,即目标虚拟对象右后方的目标位置处。
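虚拟盾墙始终保持在目标虚拟对象正前方目标距离处、且朝向与其一致的位置计算,可在二维平面坐标下示意如下(函数名与坐标约定均为假设):

```python
import math

def shield_anchor(player_pos, facing_deg, target_distance):
    """计算虚拟盾墙的理想位置:位于目标虚拟对象当前朝向正前方
    target_distance 处,朝向与目标虚拟对象一致。返回 ((x, y), 朝向角)。"""
    rad = math.radians(facing_deg)
    x = player_pos[0] + target_distance * math.cos(rad)
    y = player_pos[1] + target_distance * math.sin(rad)
    return (x, y), facing_deg
```

目标虚拟对象移动或转向时,以新的位置与朝向重新调用该函数,即可得到需要实时修正的盾墙坐标。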
参见图13,图13为本申请实施例提供的召唤对象的作用效果示意图,目标虚拟对象可通过与虚拟盾墙进行交互获得不同的战斗增益,例如,采用夜视的方式显示透过虚拟盾墙观察其他虚拟对象的画面,并在画面中高亮显示其他虚拟对象的轮廓,以突出显示其他虚拟对象,但当被高亮显示的其他虚拟对象移出目标虚拟对象视野中的虚拟盾墙所在区域时,取消高亮显示效果;又例如,目标虚拟对象发射出去的子弹穿过虚拟盾墙时,可以获得攻击力提升等增益效果,强化了目标虚拟对象透过虚拟盾墙观察另一面其他虚拟对象的视觉效果。
参见图14A-14B,图14A-14B为本申请实施例提供的透过召唤对象进行观察的画面示意图,由于虚拟盾墙是由纳米能量场产生,为了区分虚拟盾墙两面对子弹等远程飞行物的效果,当目标虚拟对象与其他虚拟对象处于虚拟盾墙的两侧时,从目标虚拟对象侧(正面)透过虚拟盾墙所查看的视觉效果1(图14A)和其他虚拟对象侧(反面)透过虚拟盾墙所查看的视觉效果2(图14B)是不一样的。
参见图15,图15为本申请实施例提供的召唤对象的状态变换示意图,为了保证平衡,虚拟盾墙虽然能抵挡来自正前方的攻击,但是也会因为受到攻击而持续掉血(护盾型AI的血量),当血量低于某个设定值时会从盾墙状态退出进入人形受击动作。
此外,在实际应用中,终端还可通过针对护盾型AI的锁定控件的触发操作对护盾型AI进行控制,例如,当终端控制目标虚拟对象使用目标虚拟道具瞄准虚拟场景中的目标对象时,触发锁定控件,终端响应于该触发操作,控制护盾型AI使用特定技能攻击该目标对象;当终端控制目标虚拟对象使用目标虚拟道具瞄准虚拟场景中的目标位置(不存在目标对象)时,触发锁定控件,终端响应于该触发操作,控制护盾型AI移动至目标位置处,并在目标位置处控制护盾型AI由人形变换为虚拟盾墙,阻挡虚拟盾墙正前方的远程攻击。
通过上述方式,目标虚拟对象不需要对护盾型AI做出任何指令或操作,护盾型AI会自己监测目标虚拟对象的行为状态,并且自动决策执行相应的技能和行为,当目标虚拟对象的位置发生变化时,护盾型AI会跟随目标虚拟对象进行移动,如此,玩家不需要对护盾型AI发送任何指令,即可得到护盾型AI的自动保护,使得玩家能够将精力放到自己操控的唯一角色(即目标虚拟对象)上,提高操作效率。
下面继续说明本申请实施例提供的虚拟场景中召唤对象的控制装置555的实施为软件模块的示例性结构,在一些实施例中,参见图16,图16为本申请实施例提供的虚拟场景中召唤对象的控制装置的结构示意图,存储在图2中存储器550的虚拟场景中召唤对象的控制装置555中的软件模块可以包括:
对象呈现模块5551,配置为呈现虚拟场景中的目标虚拟对象以及第一形态的召唤对象;
状态控制模块5552,配置为当所述目标虚拟对象处于与所述虚拟场景中其他虚拟对象进行交互的交互准备状态时,控制所述召唤对象的形态由所述第一形态变换为第二形态,并
控制所述第二形态的召唤对象处于交互辅助状态,以辅助所述目标虚拟对象与所述其他虚拟对象进行交互。
上述方案中,所述呈现第一形态的召唤对象之前,所述装置还包括:
对象召唤模块,配置为当所述虚拟场景中存在用于召唤所述召唤对象的虚拟芯片时,控制所述目标虚拟对象拾取所述虚拟芯片;
获取所述目标虚拟对象的能量值;
当所述目标虚拟对象的能量值到达能量阈值时,基于所述虚拟芯片召唤所述召唤对象。
上述方案中,所述呈现目标虚拟对象以及具有第一形态的召唤对象之后,所述装置还包括:
第一控制模块,配置为获取所述目标虚拟对象与所述召唤对象之间的相对距离;
当所述相对距离超过第一距离阈值时,控制所述第一形态的召唤对象移动至相对所述目标虚拟对象的第一目标位置处。
上述方案中,所述呈现目标虚拟对象以及第一形态的召唤对象之后,所述装置还包括:
第二控制模块,配置为控制所述目标虚拟对象在所述虚拟场景中进行移动;
伴随所述移动的进行,在以所述目标虚拟对象所处位置为中心的跟踪区域内,呈现所述第一形态的召唤对象相对所述目标虚拟对象的第二目标位置,并控制所述第一形态的召唤对象移动至所述第二目标位置处。
上述方案中,所述装置还包括:
移动调节模块,配置为在控制所述第一形态的召唤对象移动至所述第二目标位置的过程中,当在所述召唤对象的移动路线中存在阻挡物,或者所述移动路线中包括不同的地理环境使得所述召唤对象无法到达所述第二目标位置时,控制所述召唤对象移动至第三目标位置处;
其中,所述第三目标位置与所述第二目标位置相对所述目标虚拟对象的方位不同。
上述方案中,所述控制所述召唤对象移动至第三目标位置之前,所述装置还包括:
位置确定模块,配置为确定所述召唤对象在所述跟踪区域内,从当前位置移动至所述第二目标位置所途经的至少两个位置,并从所述至少两个位置中选择与所述第二目标位置之间的距离小于目标距离的位置作为所述第三目标位置;或者,
当所述跟踪区域中不存在可达位置时,扩大所述跟踪区域,并在扩大后的跟踪区域中确定相对所述目标虚拟对象的第三目标位置。
上述方案中,所述呈现目标虚拟对象以及第一形态的召唤对象之后,所述装置还包括:
第三控制模块,配置为控制所述目标虚拟对象在所述虚拟场景中进行移动;
伴随所述移动的进行,呈现移动路线指示信息,所述移动路线指示信息,用于指示所述召唤对象跟随所述目标虚拟对象进行移动的移动路线;
控制所述召唤对象按照所述移动路线指示信息所指示的移动路线进行移动。
上述方案中,所述状态控制模块,配置为控制第一形态的所述召唤对象移动至距所述目标虚拟对象的距离为目标距离的目标位置处;
在所述目标位置处,控制所述召唤对象由所述第一形态变换为第二形态。
上述方案中,所述装置还包括:
第四控制模块,配置为展示所述目标虚拟对象与所述其他虚拟对象进行交互所对应的交互画面,所述目标虚拟对象与所述其他虚拟对象位于所述召唤对象的两侧;
在展示所述交互画面的过程中,当所述其他虚拟对象通过虚拟道具执行针对所述目标虚拟对象的交互操作时,控制所述召唤对象阻挡所述交互操作。
上述方案中,所述装置还包括:
第五控制模块,配置为呈现对应所述召唤对象的属性变换指示信息;
其中,所述属性变换指示信息,用于指示阻挡所述交互操作所扣除的所述召唤对象的属性值;
当所述属性变换指示信息指示所述召唤对象的属性值低于属性阈值时,控制所述召唤对象的形态由所述第二形态变换为所述第一形态。
上述方案中,所述装置还包括:
突出显示模块,配置为当所述目标虚拟对象与所述其他虚拟对象位于所述召唤对象的两侧时,展示所述目标虚拟对象透过所述第二形态的召唤对象观察所述其他虚拟对象的画面,并在所述画面中突出显示所述其他虚拟对象。
上述方案中,所述装置还包括:
增强提示模块,配置为当所述目标虚拟对象与所述其他虚拟对象位于所述召唤对象的两侧时,在所述目标虚拟对象与所述其他虚拟对象进行交互的过程中,控制所述目标虚拟对象在所述虚拟场景中投射虚拟道具;
当所述虚拟道具穿过所述召唤对象时,呈现效果增强提示信息,所述效果增强提示信息,用于提示所述虚拟道具所对应的作用效果得到提升。
上述方案中,所述控制所述召唤对象的形态由所述第一形态变换为第二形态,并控制所述第二形态的召唤对象由所述跟随状态切换为交互辅助状态之后,所述装置还包括:
第六控制模块,配置为在所述目标虚拟对象维持处于所述交互准备状态的过程中,控制所述目标虚拟对象在所述虚拟场景中移动;
在控制所述目标虚拟对象移动的过程中,控制所述第二形态的召唤对象跟随所述目标虚拟对象进行移动。
上述方案中,所述装置还包括:
移动调整模块,配置为在控制所述第二形态的召唤对象跟随所述目标虚拟对象进行移动的过程中,当所述召唤对象移动至存在阻挡物的阻挡区域时,自动调整所述召唤对象的移动路线以避开所述阻挡物。
上述方案中,所述控制所述第二形态的召唤对象由所述跟随状态切换为交互辅助状态之后,所述装置还包括:
第七控制模块,配置为当所述目标虚拟对象退出所述交互准备状态时,控制所述召唤对象的形态由所述第二形态变换为所述第一形态,并控制所述第一形态的召唤对象的工作状态由所述交互辅助状态切换为跟随状态。
上述方案中,所述状态控制模块,还配置为控制所述目标虚拟对象使用目标虚拟道具瞄准所述虚拟场景中的目标位置,并在所述目标位置处呈现相应的准星图案;
响应于基于所述准星图案触发的变换指令,控制所述召唤对象移动至所述目标位置处,在所述目标位置处由所述第一形态变换为第二形态,并控制所述第二形态的召唤对象处于交互辅助状态。
上述方案中,所述装置还包括:
对象召回模块,配置为呈现用于对所述召唤对象进行召回的召回控件;
响应于针对所述召回控件的触发操作,控制所述召唤对象由所述目标位置移动至初始位置,并控制所述召唤对象的形态由所述第二形态变换为所述第一形态。
在一些实施例中,本申请实施例还提供一种虚拟场景中召唤对象的控制装置,包括:
第一呈现模块,配置为呈现虚拟射击场景中持有射击道具的目标虚拟对象、以及处于人物形态的召唤对象;
瞄准控制模块,配置为在所述虚拟射击场景中,控制所述目标虚拟对象使用所述射击道具瞄准目标位置,并在所述目标位置处呈现相应的准星图案;
状态变换模块,配置为响应于基于所述准星图案触发的变换指令,控制所述召唤对象移动至所述目标位置处,并在所述目标位置处由所述人物形态变换为护盾状态,以辅助所述目标虚拟对象与其他虚拟对象进行交互。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例上述的虚拟场景中召唤对象的控制方法。
本申请实施例提供一种存储有可执行指令的计算机可读存储介质,其中存储有可执行指令,当可执行指令被处理器执行时,将引起处理器执行本申请实施例提供的虚拟场景中召唤对象的控制方法。
在一些实施例中,计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其他单元。
作为示例,可执行指令可以但不一定对应于文件系统中的文件,可以可被存储在保存其他程序或数据的文件的一部分,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个模块、子程序或代码部分的文件)中。
作为示例,可执行指令可被部署为在一个计算设备上执行,或者在位于一个地点的多个计算设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算设备上执行。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。

Claims (21)

  1. 一种虚拟场景中召唤对象的控制方法,所述方法由电子设备执行,所述方法包括:
    呈现虚拟场景中的目标虚拟对象以及第一形态的召唤对象;
    当所述目标虚拟对象处于与所述虚拟场景中其他虚拟对象进行交互的交互准备状态时,控制所述召唤对象的形态由所述第一形态变换为第二形态,并
    控制所述第二形态的召唤对象处于交互辅助状态,以辅助所述目标虚拟对象与所述其他虚拟对象进行交互。
  2. 如权利要求1所述的方法,其中,所述呈现第一形态的召唤对象之前,所述方法还包括:
    当所述虚拟场景中存在用于召唤所述召唤对象的虚拟芯片时,控制所述目标虚拟对象拾取所述虚拟芯片;
    获取所述目标虚拟对象的能量值;
    当所述目标虚拟对象的能量值到达能量阈值时,基于所述虚拟芯片召唤所述召唤对象。
  3. 如权利要求1所述的方法,其中,所述呈现虚拟场景中的目标虚拟对象以及具有第一形态的召唤对象之后,所述方法还包括:
    获取所述目标虚拟对象与所述召唤对象之间的相对距离;
    当所述相对距离超过第一距离阈值时,控制所述第一形态的召唤对象移动至相对所述目标虚拟对象的第一目标位置处。
  4. 如权利要求1所述的方法,其中,所述呈现虚拟场景中的目标虚拟对象以及第一形态的召唤对象之后,所述方法还包括:
    控制所述目标虚拟对象在所述虚拟场景中进行移动;
    伴随所述移动的进行,在以所述目标虚拟对象所处位置为中心的跟踪区域内,呈现所述第一形态的召唤对象相对所述目标虚拟对象的第二目标位置,并控制所述第一形态的召唤对象移动至所述第二目标位置处。
  5. 如权利要求4所述的方法,其中,所述方法还包括:
    在控制所述第一形态的召唤对象移动至所述第二目标位置的过程中,当在所述召唤对象的移动路线中存在阻挡物,或者所述移动路线中包括不同的地理环境使得所述召唤对象无法到达所述第二目标位置时,控制所述召唤对象移动至第三目标位置处;
    其中,所述第三目标位置与所述第二目标位置相对所述目标虚拟对象的方位不同。
  6. 如权利要求5所述的方法,其中,所述控制所述召唤对象移动至第三目标位置之前,所述方法还包括:
    确定所述召唤对象在所述跟踪区域内,从当前位置移动至所述第二目标位置所途经的至少两个位置,并从所述至少两个位置中选择与所述第二目标位置之间的距离小于目标距离的位置作为所述第三目标位置;或者,
    当所述跟踪区域中不存在可达位置时,扩大所述跟踪区域,并在扩大后的跟踪区域中确定相对所述目标虚拟对象的第三目标位置。
  7. 如权利要求1所述的方法,其中,所述呈现虚拟场景中的目标虚拟对象以及第一形态的召唤对象之后,所述方法还包括:
    控制所述目标虚拟对象在所述虚拟场景中进行移动;
    伴随所述移动的进行,呈现移动路线指示信息,所述移动路线指示信息,用于指示所述召唤对象跟随所述目标虚拟对象进行移动的移动路线;
    控制所述召唤对象按照所述移动路线指示信息所指示的移动路线进行移动。
  8. 如权利要求1所述的方法,其中,所述控制所述召唤对象的形态由所述第一形态变换为第二形态,包括:
    控制第一形态的所述召唤对象移动至距所述目标虚拟对象的距离为目标距离的目标位置处;
    在所述目标位置处,控制所述召唤对象由所述第一形态变换为第二形态。
  9. 如权利要求8所述的方法,其中,所述方法还包括:
    展示所述目标虚拟对象与所述其他虚拟对象进行交互所对应的交互画面,所述目标虚拟对象与所述其他虚拟对象位于所述召唤对象的两侧;
    在展示所述交互画面的过程中,当所述其他虚拟对象通过虚拟道具执行针对所述目标虚拟对象的交互操作时,控制所述召唤对象阻挡所述交互操作。
  10. 如权利要求9所述的方法,其中,所述方法还包括:
    呈现对应所述召唤对象的属性变换指示信息;
    其中,所述属性变换指示信息,用于指示阻挡所述交互操作所扣除的所述召唤对象的属性值;
    当所述属性变换指示信息指示所述召唤对象的属性值低于属性阈值时,控制所述召唤对象的形态由所述第二形态变换为所述第一形态。
  11. 如权利要求8所述的方法,其中,所述方法还包括:
    当所述目标虚拟对象与所述其他虚拟对象位于所述召唤对象的两侧时,展示所述目标虚拟对象透过所述第二形态的召唤对象观察所述其他虚拟对象的画面,并在所述画面中突出显示所述其他虚拟对象。
  12. 如权利要求8所述的方法,其中,所述方法还包括:
    当所述目标虚拟对象与所述其他虚拟对象位于所述召唤对象的两侧时,在所述目标虚拟对象与所述其他虚拟对象进行交互的过程中,控制所述目标虚拟对象在所述虚拟场景中投射虚拟道具;
    当所述虚拟道具穿过所述召唤对象时,呈现效果增强提示信息,所述效果增强提示信息,用于提示所述虚拟道具所对应的作用效果得到提升。
  13. 如权利要求1所述的方法,其中,所述控制所述召唤对象的形态由所述第一形态变换为第二形态,并控制所述第二形态的召唤对象处于交互辅助状态之后,所述方法还包括:
    在所述目标虚拟对象维持处于所述交互准备状态的过程中,控制所述目标虚拟对象在所述虚拟场景中移动;
    在控制所述目标虚拟对象移动的过程中,控制所述第二形态的召唤对象跟随所述目标虚拟对象进行移动。
  14. 如权利要求13所述的方法,其中,所述方法还包括:
    在控制所述第二形态的召唤对象跟随所述目标虚拟对象进行移动的过程中,当所述召唤对象移动至存在阻挡物的阻挡区域时,自动调整所述召唤对象的移动路线以避开所述阻挡物。
  15. 如权利要求1所述的方法,其中,所述控制所述第二形态的召唤对象处于交互辅助状态之后,所述方法还包括:
    当所述目标虚拟对象退出所述交互准备状态时,控制所述召唤对象的形态由所述第二形态变换为所述第一形态,并控制所述第一形态的召唤对象的工作状态由所述交互辅助状态切换为跟随状态。
  16. 如权利要求1所述的方法,其中,所述控制所述召唤对象的形态由所述第一形态变换为第二形态,并控制所述第二形态的召唤对象处于交互辅助状态,包括:
    控制所述目标虚拟对象使用目标虚拟道具瞄准所述虚拟场景中的目标位置,并在所述目标位置处呈现相应的准星图案;
    响应于基于所述准星图案触发的变换指令,控制所述召唤对象移动至所述目标位置处,在所述目标位置处由所述第一形态变换为第二形态,并控制所述第二形态的召唤对象处于交互辅助状态。
  17. 一种虚拟场景中召唤对象的控制方法,所述方法由电子设备执行,所述方法包括:
    呈现虚拟射击场景中持有射击道具的目标虚拟对象、以及处于人物形态的召唤对象;
    在所述虚拟射击场景中,控制所述目标虚拟对象使用所述射击道具瞄准目标位置,并在所述目标位置处呈现相应的准星图案;
    响应于基于所述准星图案触发的变换指令,控制所述召唤对象移动至所述目标位置处,并在所述目标位置处由所述人物形态变换为护盾状态,以辅助所述目标虚拟对象与其他虚拟对象进行交互。
  18. 一种虚拟场景中召唤对象的控制装置,所述装置包括:
    对象呈现模块,配置为呈现虚拟场景中的目标虚拟对象以及第一形态的召唤对象;
    状态控制模块,配置为当所述目标虚拟对象处于与所述虚拟场景中其他虚拟对象进行交互的交互准备状态时,控制所述召唤对象的形态由所述第一形态变换为第二形态,并
    控制所述第二形态的召唤对象处于交互辅助状态,以辅助所述目标虚拟对象与所述其他虚拟对象进行交互。
  19. 一种电子设备,包括:
    存储器,用于存储可执行指令;
    处理器,用于执行所述存储器中存储的可执行指令时,实现权利要求1至17任一项所述的虚拟场景中召唤对象的控制方法。
  20. 一种计算机可读存储介质,存储有可执行指令,用于被处理器执行时,实现权利要求1至17任一项所述的虚拟场景中召唤对象的控制方法。
  21. 一种计算机程序产品,包括计算机程序或指令,所述计算机程序或指令被处理器执行时,实现权利要求1至17任一项所述的虚拟场景中召唤对象的控制方法。
PCT/CN2022/090972 2021-05-31 2022-05-05 虚拟场景中召唤对象的控制方法、装置、设备、存储介质及程序产品 WO2022252905A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023553739A JP2024512345A (ja) 2021-05-31 2022-05-05 仮想シーンにおける召喚オブジェクトの制御方法、装置、機器、及びコンピュータプログラム
US18/303,851 US20230256338A1 (en) 2021-05-31 2023-04-20 Method for controlling call object in virtual scene, apparatus for controlling call object in virtual scene, device, storage medium, and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110602499.3A CN113181649B (zh) 2021-05-31 2021-05-31 虚拟场景中召唤对象的控制方法、装置、设备及存储介质
CN202110602499.3 2021-05-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/303,851 Continuation US20230256338A1 (en) 2021-05-31 2023-04-20 Method for controlling call object in virtual scene, apparatus for controlling call object in virtual scene, device, storage medium, and program product

Publications (1)

Publication Number Publication Date
WO2022252905A1 true WO2022252905A1 (zh) 2022-12-08

Family

ID=76985947


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12048880B2 (en) * 2020-03-17 2024-07-30 Tencent Technology (Shenzhen) Company Limited Method and apparatus for displaying interactive item, terminal, and storage medium
CN113181649B (zh) * 2021-05-31 2023-05-16 腾讯科技(深圳)有限公司 虚拟场景中召唤对象的控制方法、装置、设备及存储介质
KR20240046594A (ko) * 2022-01-11 2024-04-09 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 파트너 객체 제어 방법 및 장치, 및 디바이스, 매체 및 프로그램 제품
CN114344906B (zh) * 2022-01-11 2024-08-20 腾讯科技(深圳)有限公司 虚拟场景中伙伴对象的控制方法、装置、设备及存储介质
CN114612553B (zh) * 2022-03-07 2023-07-18 北京字跳网络技术有限公司 一种虚拟对象的控制方法、装置、计算机设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017055897A (ja) * 2015-09-15 2017-03-23 株式会社カプコン ゲームプログラムおよびゲームシステム
CN110812837A (zh) * 2019-11-12 2020-02-21 腾讯科技(深圳)有限公司 虚拟道具的放置方法和装置、存储介质及电子装置
CN111589133A (zh) * 2020-04-28 2020-08-28 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置、设备及存储介质
CN112076473A (zh) * 2020-09-11 2020-12-15 腾讯科技(深圳)有限公司 虚拟道具的控制方法、装置、电子设备及存储介质
CN112090067A (zh) * 2020-09-23 2020-12-18 腾讯科技(深圳)有限公司 虚拟载具的控制方法、装置、设备及计算机可读存储介质
CN113181649A (zh) * 2021-05-31 2021-07-30 腾讯科技(深圳)有限公司 虚拟场景中召唤对象的控制方法、装置、设备及存储介质
CN113181650A (zh) * 2021-05-31 2021-07-30 腾讯科技(深圳)有限公司 虚拟场景中召唤对象的控制方法、装置、设备及存储介质


Also Published As

Publication number Publication date
US20230256338A1 (en) 2023-08-17
CN113181649B (zh) 2023-05-16
CN113181649A (zh) 2021-07-30
JP2024512345A (ja) 2024-03-19


Legal Events

121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 22814952; Country of ref document: EP; Kind code of ref document: A1.

WWE (WIPO information: entry into national phase): Ref document number: 2023553739; Country of ref document: JP.

NENP (Non-entry into the national phase): Ref country code: DE.

32PN (EP): Public notification in the EP bulletin as address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/04/2024).