CN116920403A - Virtual object control method, device, equipment, storage medium and program product


Info

Publication number
CN116920403A
CN116920403A (application CN202210365169.1A)
Authority
CN
China
Prior art keywords
virtual
slave
scene
virtual scene
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210365169.1A
Other languages
Chinese (zh)
Inventor
顾列宾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd filed Critical Shenzhen Tencent Network Information Technology Co Ltd
Priority to CN202210365169.1A priority Critical patent/CN116920403A/en
Priority to KR1020247009494A priority patent/KR20240046594A/en
Priority to PCT/CN2023/071526 priority patent/WO2023134660A1/en
Publication of CN116920403A publication Critical patent/CN116920403A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a virtual object control method, apparatus, device, computer-readable storage medium, and computer program product. The method includes: displaying a game picture that includes at least part of a virtual scene, the virtual scene including a first master object and a first slave object in a first form, the first slave object being subordinate to the first master object; controlling the first slave object to deform from the first form into a second form in response to a first instruction for the first slave object, where the second form corresponds to the avatar of a second virtual object in the virtual scene; controlling the first slave object in the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene; and, in response to the first slave object detecting a third virtual object in the virtual scene, displaying position information of the third virtual object in a map corresponding to the virtual scene, where the second virtual object and the third virtual object each have a hostile relationship with the first master object. The application can improve human-computer interaction efficiency and reduce the occupation of hardware processing resources.

Description

Virtual object control method, device, equipment, storage medium and program product
Technical Field
The present application relates to man-machine interaction technology, and in particular, to a method, apparatus, device, computer readable storage medium and computer program product for controlling a virtual object.
Background
Display technologies based on graphics processing hardware have expanded the channels for perceiving the environment and acquiring information. In particular, virtual scene display technology can realize diversified interactions between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and it has a variety of typical application scenarios; for example, in the virtual scene of a game, a real combat process between virtual objects can be simulated.
Taking a shooting game as an example, most players aim to hit enemies with weapons, so their behavior patterns and combat strategies are relatively uniform. A player may, for example, use a slave object (also called a summoned object or virtual pet) associated with the master object to scout enemies, but a slave object used this way is easily spotted by the enemy. In the related art, to avoid discovery, the slave object often has to be hidden in a concealed place while scouting, which reduces its reconnaissance capability and makes it difficult to obtain effective information about the enemy. This limits the improvement of the player's interaction capability, so that multiple interactive operations have to be executed to achieve a given interaction purpose (such as obtaining effective information); human-computer interaction efficiency is therefore low and hardware processing resources are wasted.
Disclosure of Invention
The embodiment of the application provides a control method, a device, equipment, a computer readable storage medium and a computer program product for a virtual object, which can improve the man-machine interaction efficiency and reduce the occupation of hardware processing resources.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a control method of a virtual object, which comprises the following steps:
displaying a game picture, wherein the game picture includes at least part of a virtual scene, the virtual scene includes a first master object and a first slave object in a first form, and the first slave object is subordinate to the first master object;
controlling the first slave object to deform from the first form into a second form in response to a first instruction for the first slave object; wherein the second form corresponds to the avatar of a second virtual object in the virtual scene;
controlling the first slave object in the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene;
in response to the first slave object detecting a third virtual object in the virtual scene, displaying position information of the third virtual object in a map corresponding to the virtual scene; wherein the second virtual object and the third virtual object each have a hostile relationship with the first master object.
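Purely as an illustration of how the four claimed steps might fit together, the following minimal Python sketch models the morph-then-scout flow. Every name here (VirtualObject, SlavePet, morph, scout, "enemy_A") is a hypothetical stand-in invented for this sketch and is not part of the claimed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    name: str
    position: tuple          # (x, y) in scene coordinates
    hostile: bool = False    # hostile relationship with the first master object

@dataclass
class SlavePet:
    form: str = "first"                       # current visual form
    detected: list = field(default_factory=list)

    def morph(self, enemy_form):
        # Step 2: deform into the avatar of a hostile (second) virtual object
        self.form = enemy_form

    def scout(self, scene):
        # Steps 3-4: move through the scene; any hostile object found is
        # reported so its position can be shown on the scene map
        self.detected = [o for o in scene if o.hostile]
        return [(o.name, o.position) for o in self.detected]

scene = [VirtualObject("ally", (0, 0)),
         VirtualObject("enemy_A", (5, 3), hostile=True)]
pet = SlavePet()
pet.morph("enemy_A")                # disguise as the enemy's avatar
minimap_markers = pet.scout(scene)  # position info for the map
```

Under these assumptions, the disguised pet ends the loop holding the enemy's form while the detected positions are handed to the map display.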
The embodiment of the application provides a control device for a virtual object, which comprises the following components:
a first display module, configured to display a game picture, wherein the game picture includes at least part of a virtual scene, the virtual scene includes a first master object and a first slave object in a first form, and the first slave object is subordinate to the first master object;
a first control module, configured to control the first slave object to deform from the first form into a second form in response to a first instruction for the first slave object; wherein the second form corresponds to the avatar of a second virtual object in the virtual scene;
a second control module, configured to control the first slave object in the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene;
a second display module, configured to, in response to the first slave object detecting a third virtual object in the virtual scene, display position information of the third virtual object in a map corresponding to the virtual scene; wherein the second virtual object and the third virtual object each have a hostile relationship with the first master object.
In the above solution, the first display module is further configured to receive a second instruction for the first slave object; and, in response to the second instruction, call the first slave object in an initial form and display a game screen in which the first slave object deforms from the initial form into the first form and the first slave object in the first form is adsorbed onto the first master object to become a part of the model of the first master object. The first control module is further configured to demonstrate a process in which the first slave object in the first form detaches from the first master object, moves to a target position, and deforms into the first slave object in the second form at the target position.
In the above solution, the device further includes: a third control module, configured to, in response to the first slave object in the first form being adsorbed onto the first master object, control the first slave object in the first form to perform object reconnaissance on the virtual scene; and, when a fourth virtual object is detected in a first area centered on the first master object, display position information of the fourth virtual object in a map corresponding to the virtual scene.
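The "first area centered on the first master object" above can be sketched as a simple radius test. The coordinate representation, tuple layout, and function name are assumptions of this sketch, not part of the claimed scheme.

```python
import math

def objects_in_first_area(center, radius, scene):
    # scene entries are (name, (x, y), hostile_flag) tuples; any hostile
    # object within `radius` of the master object's position is reported
    # so its position can be displayed on the map
    return [(name, pos) for name, pos, hostile in scene
            if hostile and math.dist(center, pos) <= radius]

markers = objects_in_first_area(
    (0, 0), 10,
    [("enemy", (3, 4), True),      # distance 5: inside the first area
     ("far_enemy", (30, 0), True), # distance 30: outside
     ("ally", (1, 1), False)])     # friendly: never reported
```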
In the above solution, when the number of the second virtual objects is at least two, the apparatus further includes: a form determining module, configured to display a form selection interface and display, in the form selection interface, the selectable forms corresponding to the at least two second virtual objects; and, in response to a selection operation on the form of a target virtual object among the at least two second virtual objects, take the form of the selected target virtual object as the second form.
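As an illustrative sketch of the form selection interface described above: the option list is built from the detected second virtual objects, and the player's selection becomes the second form. The dictionary keys and sorting choice are assumptions of this sketch.

```python
def build_form_options(second_objects):
    # one selectable entry per distinct enemy form shown in the interface
    return sorted({obj["form"] for obj in second_objects})

def choose_second_form(options, selected_index):
    # the selection operation picks the target virtual object's form,
    # which is then used as the pet's second form
    return options[selected_index]

opts = build_form_options([{"form": "mech"}, {"form": "drone"}, {"form": "mech"}])
second_form = choose_second_form(opts, 1)
```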
In the above solution, before the first slave object is controlled to deform from the first form into the second form, the apparatus further includes: a form prediction module, configured to acquire scene data of a first area centered on the first slave object in the virtual scene, wherein the scene data includes the virtual objects located in the first area; and call a machine learning model to perform prediction processing based on the scene data to obtain the second form, the machine learning model being trained based on scene data in a sample area and an annotated deformed form.
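The patent does not specify the machine learning model; purely as a toy stand-in for the trained predictor, the following heuristic picks the enemy form most common in the surrounding area, so the disguise blends in. The data layout and the "initial" fallback are assumptions of this sketch.

```python
from collections import Counter

def predict_second_form(region_objects):
    # toy stand-in for the trained model: among the virtual objects in the
    # first area around the slave object, choose the most frequent hostile
    # form as the predicted second form
    forms = [o["form"] for o in region_objects if o["hostile"]]
    if not forms:
        return "initial"  # nothing to imitate: keep the initial form
    return Counter(forms).most_common(1)[0][0]
```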
In the above solution, the second control module is further configured to control the first slave object in the second form to move in the virtual scene; in response to the first slave object in the second form being attacked by a fifth virtual object while moving in the virtual scene, control the first slave object to deform from the second form back into the initial form; and control the first slave object in the initial form to perform object reconnaissance on the virtual scene.
In the above solution, the second control module is further configured to control the first slave object in the second form to move in the virtual scene; while the first slave object in the second form moves in the virtual scene, control it to release a marker wave to its surroundings and display a second area covered by the marker wave; and control the first slave object in the second form to perform object reconnaissance in the second area. The second display module is further configured to, in response to the first slave object detecting a third virtual object in the second area, highlight the third virtual object and display position information of the third virtual object in a map corresponding to the virtual scene, so that the information can be viewed by the virtual objects in the group to which the first master object belongs.
In the above solution, the device further includes: a fourth control module, configured to control the first slave object in the second form to lock a third virtual object in response to the first slave object detecting the third virtual object in the second area; and, when the third virtual object moves in the virtual scene to a target position shielded by an obstacle, display the third virtual object at the target position in a see-through manner.
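The lock-and-see-through behavior above can be sketched as follows, under the simplifying assumption (not from the patent) that obstacles are axis-aligned rectangles and a target counts as "shielded" when its position falls inside one; the function and flag names are hypothetical.

```python
def render_locked_target(target_pos, obstacles):
    # obstacles are axis-aligned rectangles (x0, y0, x1, y1); once the pet
    # has locked the target, a shielded target is flagged for x-ray
    # (see-through) rendering instead of disappearing from view
    def inside(rect, p):
        x0, y0, x1, y1 = rect
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

    occluded = any(inside(rect, target_pos) for rect in obstacles)
    return {"pos": target_pos, "xray": occluded}
```

A real engine would instead test occlusion along the camera-to-target ray; the rectangle test here only stands in for that check.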
In the above scheme, the second display module is further configured to display, in a map corresponding to the virtual scene, location information of a third virtual object in response to the first slave object detecting the third virtual object in the virtual scene and the first slave object being attacked by the third virtual object.
In the above solution, the second display module is further configured to obtain interaction parameters of each third virtual object when the number of the third virtual objects is at least two; wherein the interaction parameters include at least one of: interaction role, interaction preference, interaction capability, distance to the first master object; and displaying the position information of each third virtual object in the map corresponding to the virtual scene by adopting a display style corresponding to the interaction parameter.
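To illustrate mapping interaction parameters to display styles as described above: the thresholds, style names, and role labels below are invented for this sketch and carry no weight beyond showing the idea of parameter-dependent markers.

```python
def marker_style(params):
    # choose a map-marker style from a detected enemy's interaction
    # parameters; close threats outrank role-based styling
    if params.get("distance") is not None and params["distance"] < 10:
        return "red_flashing"       # within close range of the master object
    if params.get("role") == "sniper":
        return "orange_outline"     # high-threat interaction role
    return "default_dot"            # everything else
```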
In the above solution, the device further includes: a fifth control module, configured to receive, when the third virtual object moves in the virtual scene, a tracking instruction of the first slave object in the second form for the third virtual object; and, in response to the tracking instruction, control the first slave object in the second form to track the third virtual object along the tracking direction indicated by the tracking instruction, and update the displayed position information of the third virtual object in the map corresponding to the virtual scene.
In the above solution, the device further includes: a sixth control module, configured to, when an obstacle is detected in the virtual scene while the first slave object in the second form performs object reconnaissance, control the first slave object in the second form to release, toward the obstacle, a pulse wave for penetrating the obstacle; determine that a third virtual object is detected in the virtual scene when it is detected, based on the pulse wave, that the obstacle shields the third virtual object; and control the first slave object in the second form to apply a reconnaissance mark to the third virtual object and display the third virtual object in a see-through manner.
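A minimal sketch of the pulse-wave step above, under the assumption (ours, not the patent's) that the scene already records which obstacle each hidden object sits behind; the dictionary layout and field names are hypothetical.

```python
def pulse_scan(obstacle_id, hidden_objects):
    # the pulse wave "penetrates" the given obstacle: every hostile object
    # registered behind it is detected, given a reconnaissance mark, and
    # flagged for see-through display
    return [{"name": o["name"], "marked": True, "xray": True}
            for o in hidden_objects
            if o["behind"] == obstacle_id and o["hostile"]]

hits = pulse_scan("wall_1", [
    {"name": "enemy_1", "behind": "wall_1", "hostile": True},
    {"name": "crate",   "behind": "wall_1", "hostile": False},
    {"name": "enemy_2", "behind": "wall_2", "hostile": True}])
```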
In the above solution, the device further includes: a material detection module, configured to, in response to the first master object having a material detection skill, control the first slave object in the second form to perform material detection on the virtual scene; and, when virtual materials are detected in the virtual scene, display indication information corresponding to the virtual materials. The virtual materials are intended to be picked up by the first master object, and the picked-up virtual materials improve the interaction capability of the first master object in the virtual scene.
In the above-mentioned scheme, the material detection module is further configured to, when a virtual material is detected in a second area centered on the first slave object in the virtual scene, display, at a position where the virtual material in the second area is located, category indication information of the virtual material, and display, in a map corresponding to the virtual scene, position indication information of the virtual material; and taking at least one of the category indication information and the position indication information as indication information corresponding to the virtual material.
In the above scheme, the material detection module is further configured to display, when the number of virtual materials is at least two, indication information of a first number of virtual materials in the at least two virtual materials by using a first display style, and display indication information of a second number of virtual materials in the at least two virtual materials by using a second display style; the first display style is different from the second display style, the first display style characterizes that the first number of virtual materials are located in the visual field range of the first main object, and the second display style characterizes that the second number of virtual materials are located outside the visual field range of the first main object.
In the above solution, the material detection module is further configured to, when there are at least two types of virtual materials, display the indication information of target-type virtual materials among the at least two virtual materials in a third display style, and display the indication information of virtual materials of types other than the target type in a fourth display style; the third display style is different from the fourth display style and characterizes that the pickup priority of the target-type virtual materials is higher than that of the other types.
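Combining the two style rules above (inside vs. outside the master object's field of view, and target type vs. other types), a material's indication style could be chosen as follows; the style labels reuse the patent's "first/second/third/fourth display style" wording, while the circular view-range model is an assumption of this sketch.

```python
import math

def supply_indicator(supply, master_pos, view_range, priority_type):
    # view style: first display style inside the master object's field of
    # view (modelled here as a circle), second display style outside it;
    # priority style: third display style for the target type (higher
    # pickup priority), fourth display style for every other type
    in_view = math.dist(master_pos, supply["pos"]) <= view_range
    return {
        "view_style": "first" if in_view else "second",
        "priority_style": "third" if supply["type"] == priority_type else "fourth",
    }
```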
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the control method of the virtual object provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the virtual object control method provided by the embodiments of the application.
The embodiment of the application provides a computer program product, including a computer program or instructions which, when executed by a processor, implement the virtual object control method provided by the embodiments of the application.
The embodiment of the application has the following beneficial effects:
by applying the embodiment of the application, the first slave object is transformed from the first form into the second form consistent with the image of the second virtual object (namely the enemy) with the enemy relation with the first master object by transforming the first slave object affiliated with the first master object, and the first slave object in the second form is controlled to perform object reconnaissance in the virtual scene.
In addition, when enemy information such as a third virtual object (which may include the second virtual object) is detected, the position information of the third virtual object is displayed in the map. This helps the first master object prepare, according to the enemy's position, an interaction strategy capable of causing the greatest damage to the enemy and execute corresponding interactive operations according to that strategy, thereby improving the interaction capability (such as attack or defense capability) of the first master object. With improved interaction capability, the terminal device can reduce the number of interactive operations required to achieve a given interaction purpose (such as obtaining effective enemy information or defeating the enemy), thereby improving human-computer interaction efficiency and reducing the occupation of hardware processing resources.
Drawings
Fig. 1A is an application mode schematic diagram of a control method of a virtual object according to an embodiment of the present application;
fig. 1B is an application mode schematic diagram of a control method of a virtual object according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal device 400 according to an embodiment of the present application;
fig. 3 is a flow chart of a control method of a virtual object according to an embodiment of the present application;
FIG. 4 is a schematic diagram of morphological changes of a slave object according to an embodiment of the present application;
FIG. 5 is a schematic view of reconnaissance according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a morphological change of a slave object according to an embodiment of the present application;
FIG. 7 is a schematic diagram of reconnaissance according to an embodiment of the present application;
FIG. 8 is a schematic diagram of reconnaissance according to an embodiment of the present application;
FIG. 9 is a schematic diagram of detection according to an embodiment of the present application;
FIG. 10 is a schematic diagram of detection according to an embodiment of the present application;
fig. 11 is a flowchart illustrating a method for controlling a virtual object according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", and so on merely distinguish similar objects and do not denote a particular order. It should be understood that, where permitted, "first" and "second" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before describing the embodiments of the present application in further detail, the terms involved in the embodiments of the present application are explained as follows.
1) Client: an application program running in the terminal for providing various services, such as a video playing client or a game client.
2) In response to: indicates the condition or state on which a performed operation depends. When the condition or state is satisfied, the operation or operations may be performed in real time or with a set delay; unless otherwise specified, no limitation is placed on the order in which multiple operations are performed.
3) Virtual scene: the scene displayed (or provided) when the application program runs on the terminal. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment; it may be any one of a two-dimensional, 2.5-dimensional, or three-dimensional virtual scene, and the embodiments of the present application do not limit its dimension. For example, the virtual scene may include sky, land, and sea, the land may include environmental elements such as deserts and cities, and the user may control a virtual object to move in the virtual scene.
4) Virtual objects, images of various people and objects in a virtual scene that can interact, or movable objects in a virtual scene. The movable object may be a virtual character, a virtual animal, a cartoon character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene for representing a user. A virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene, occupying a portion of space in the virtual scene.
5) Slave object (also called a summoned object or virtual pet): an additional individual unit, subordinate to a corresponding master object and separate from the user character body controlled by the player (the master object), which can execute certain instructions under the player's control. In other words, the slave object is an image of a person or thing that can assist the virtual object in interacting with other virtual objects in the virtual scene; it may be a virtual character, a virtual animal, a cartoon character, a virtual prop, a virtual vehicle, and the like.
In the embodiments of the application, the first master object is the virtual object corresponding to the current user account in the virtual scene (it may also be referred to as the first virtual object), and the first slave object is a summoned object subordinate to the first virtual object (i.e., the first master object). The second, third, fourth, and fifth virtual objects are collective names for virtual objects (master objects) or their associated slave objects in the virtual scene other than the one corresponding to the current user account, rather than references to a single virtual object; for example, the second virtual object may include a second master object and a second slave object, and the third virtual object may include a third master object and a third slave object. For example, if virtual objects A, B, and C in the virtual scene all correspond to accounts in groups different from that of the first master object (i.e., enemies), then A, B, and C may each be referred to as the second virtual object (or the third, fourth, or fifth virtual object).
6) Scene data: various feature data of objects in the virtual scene during interaction. It may include, for example, the position of a virtual object in the virtual scene, the environment in which the virtual object is located, the waiting time required when various functions are configured in the virtual scene (depending on how many times the same function can be used within a specific time), and attribute values representing various states of the virtual object, such as a health value and a mana value.
The embodiments of the application provide a virtual object control method and apparatus, a terminal device, a computer-readable storage medium, and a computer program product, which can improve human-computer interaction efficiency and reduce the occupation of hardware processing resources. To facilitate understanding of the virtual object control method provided by the embodiments of the application, an exemplary implementation scenario is described first. The virtual scene in the method may be output entirely by the terminal device, or output cooperatively by the terminal device and a server. In some embodiments, the virtual scene may be an environment in which game characters interact; for example, game characters may fight in the virtual scene, and both parties may interact by controlling the actions of their game characters, allowing the user to relieve the pressures of life during the game.
In one implementation scenario, referring to fig. 1A, fig. 1A is a schematic diagram of an application mode of the virtual object control method according to an embodiment of the present application. This mode is suitable for applications in which the relevant data computation of the virtual scene 100 can be completed entirely by the graphics processing hardware of the terminal device 400, for example a game in stand-alone/offline mode, with the virtual scene output through various types of terminal devices 400 such as a smart phone, a tablet computer, or a virtual reality/augmented reality device. By way of example, the types of graphics processing hardware include the central processing unit (CPU, Central Processing Unit) and the graphics processor (GPU, Graphics Processing Unit).
When forming the visual perception of the virtual scene 100, the terminal device 400 calculates the data required for display through the graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs, at the graphics output hardware, video frames capable of forming visual perception of the virtual scene; for example, two-dimensional video frames are presented on the display screen of a smart phone, or video frames realizing a three-dimensional display effect are projected onto the lenses of augmented reality/virtual reality glasses. In addition, to enrich the perceived effect, the terminal device 400 may also form one or more of auditory perception, tactile perception, motion perception, and gustatory perception by means of different hardware.
As an example, the terminal device 400 has a client 410 (e.g., a stand-alone game application) running thereon, and outputs a game screen during the running of the client 410, where the game screen includes at least part of the virtual scene 100 for role playing; the virtual scene 100 may be an environment for interaction of game characters, for example, a plain, a street, or a valley for the game characters to fight in. The virtual scene 100 includes a first master object 110 and a first slave object 120 in a first form, the first slave object having a subordinate relationship with the first master object.
As an example, the terminal device controls the first slave object to be deformed from the first form to the second form in response to a first instruction for the first slave object, wherein the second form corresponds to the image of a second virtual object in the virtual scene; controls the first slave object in the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene; and in response to the first slave object detecting a third virtual object in the virtual scene, displays position information of the third virtual object in a map corresponding to the virtual scene, wherein the second virtual object and the third virtual object have hostile relationships with the first master object. In this way, during object reconnaissance the form of the first slave object is consistent with the image of the second virtual object, so the first slave object is not easily discovered by the enemy and can even approach the enemy's vicinity to scout, making it easy to obtain effective information about the enemy; the reconnaissance capability of the first slave object is thus greatly improved, and this improvement in turn improves the interaction capability of the first master object. In addition, displaying the position information of the detected third virtual object in the map helps the first master object prepare, according to the enemy's position, an interaction strategy capable of causing the greatest damage to the enemy, and execute corresponding interaction operations according to that strategy, thereby improving the interaction capability (such as attack capability or defense capability) of the first master object.
In another implementation scenario, referring to fig. 1B, fig. 1B is a schematic application mode diagram of a control method of a virtual object according to an embodiment of the present application, which is applied to a terminal device 400 and a server 200, and is suitable for application modes that complete virtual scene calculation depending on the computing capability of the server 200 and output the virtual scene at the terminal device 400. Taking the formation of the visual perception of the virtual scene 100 as an example, the server 200 calculates virtual scene related display data (such as scene data) and sends the calculated display data to the terminal device 400 through the network 300; the terminal device 400 completes the loading, parsing and rendering of the calculated display data depending on its graphics computing hardware, and outputs the virtual scene depending on its graphics output hardware to form visual perception, for example presenting two-dimensional video frames on the display screen of a smart phone, or projecting video frames realizing a three-dimensional display effect onto the lenses of augmented reality/virtual reality glasses. As for perception in forms other than visual, it is understood that auditory perception may be formed by means of the corresponding hardware output of the terminal device 400, for example using a speaker, tactile perception may be formed using a vibrator, and so on.
As an example, the terminal device 400 has a client 410 (e.g., a network game application) running thereon, and outputs a game screen during the running of the client 410, where the game screen includes at least part of the virtual scene 100 for role playing; the virtual scene 100 may be an environment for interaction of game characters, for example, a plain, a street, or a valley for the game characters to fight in. The virtual scene 100 includes a first master object 110 and a first slave object 120 in a first form, the first slave object having a subordinate relationship with the first master object.
As an example, the terminal device controls the first slave object to be deformed from the first form to the second form in response to a first instruction for the first slave object, wherein the second form corresponds to the image of a second virtual object in the virtual scene; controls the first slave object in the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene; and in response to the first slave object detecting a third virtual object in the virtual scene, displays position information of the third virtual object in a map corresponding to the virtual scene, wherein the second virtual object and the third virtual object have hostile relationships with the first master object. In this way, during object reconnaissance the form of the first slave object is consistent with the image of the second virtual object, so the first slave object is not easily discovered by the enemy and can even approach the enemy's vicinity to scout, making it easy to obtain effective information about the enemy; the reconnaissance capability of the first slave object is thus greatly improved, and this improvement in turn improves the interaction capability of the first master object. In addition, displaying the position information of the detected third virtual object in the map helps the first master object prepare, according to the enemy's position, an interaction strategy capable of causing the greatest damage to the enemy, and execute corresponding interaction operations according to that strategy, thereby improving the interaction capability (such as attack capability or defense capability) of the first master object.
In some embodiments, the terminal device 400 may implement the control method of the virtual object provided by the embodiment of the present application by running a computer program. For example, the computer program may be a native program or software module in an operating system; a native application (APP, Application), i.e. a program that needs to be installed in the operating system to run, such as a shooting game APP (i.e. the client 410 described above); an applet, i.e. a program that can run simply by being downloaded into a browser environment; or a game applet that can be embedded in any APP. In general, the computer program described above may be any form of application, module or plug-in.
Taking the computer program being an application program as an example, in actual implementation the terminal device 400 installs and runs an application program supporting a virtual scene. The application may be any one of a first-person shooter game (FPS, First-Person Shooter), a third-person shooter game, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The user uses the terminal device 400 to operate a virtual object located in the virtual scene to perform activities, which include but are not limited to: at least one of body posture adjustment, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building a virtual building. Illustratively, the virtual object may be a virtual character, such as a simulated character or a cartoon character.
In other embodiments, the embodiments of the present application may also be implemented by means of cloud technology (Cloud Technology), which refers to a hosting technology that unifies a series of resources such as hardware, software and networks in a wide area network or a local area network, so as to implement the calculation, storage, processing and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology and the like applied on the basis of the cloud computing business model; it can form a resource pool to be used on demand, flexibly and conveniently. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
For example, the server 200 in fig. 1B may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms. The terminal device 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like. The terminal device 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application. In practical application, the electronic device may be the terminal device 400 in fig. 1A, or the terminal device 400 or the server 200 in fig. 1B; here the electronic device is illustrated taking the terminal device 400 shown in fig. 1A as an example. The terminal device 400 shown in fig. 2 includes: at least one processor 420, a memory 460, at least one network interface 430, and a user interface 440. The various components in the terminal device 400 are coupled together by a bus system 450. It is understood that the bus system 450 is used to implement connection and communication between these components. In addition to a data bus, the bus system 450 includes a power bus, a control bus, and a status signal bus; however, for clarity of illustration, the various buses are all labeled as bus system 450 in fig. 2.
The processor 420 may be an integrated circuit chip with signal processing capabilities, such as a general purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The user interface 440 includes one or more output devices 441 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 440 also includes one or more input devices 442, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 460 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 460 optionally includes one or more storage devices physically remote from processor 420.
Memory 460 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a read only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory). The memory 460 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 460 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 461 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
network communication module 462 for reaching other computing devices via one or more (wired or wireless) network interfaces 430, exemplary network interfaces 430 including: Bluetooth, wireless fidelity (WiFi), universal serial bus (USB, Universal Serial Bus), etc.;
a presentation module 463 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 441 (e.g., a display screen, speakers, etc.) associated with the user interface 440;
an input processing module 464 for detecting one or more user inputs or interactions from one of the one or more input devices 442 and translating the detected inputs or interactions.
In some embodiments, the control device for a virtual object provided in the embodiments of the present application may be implemented in software. Fig. 2 shows the control device 465 for a virtual object stored in the memory 460, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: a first display module 4651, a first control module 4652, a second control module 4653 and a second display module 4654. These modules are logical, and thus may be arbitrarily combined or further split according to the functions implemented; the function of each module will be described below.
In other embodiments, the control device for a virtual object provided in the embodiments of the present application may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor programmed to perform the control method for a virtual object provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may use one or more application specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
The method for controlling the virtual object provided by the embodiment of the application is specifically described below with reference to the accompanying drawings. The control method of the virtual object provided by the embodiment of the present application may be executed by the terminal device 400 in fig. 1A alone, or executed cooperatively by the terminal device 400 and the server 200 in fig. 1B. Next, the control method of the virtual object provided by the embodiment of the present application is described taking the case where it is executed by the terminal device 400 in fig. 1A alone as an example. Referring to fig. 3, fig. 3 is a flowchart of a control method of a virtual object according to an embodiment of the present application, and the method will be described with reference to the steps shown in fig. 3.
It should be noted that the method shown in fig. 3 may be executed by various computer programs running on the terminal device 400, and is not limited to the client 410 described above; it may also be the operating system 461, or the software modules and scripts described above. Therefore, the client should not be considered as limiting the embodiments of the present application.
Step 101: the terminal device displays a game screen, where the game screen includes at least part of a virtual scene, the virtual scene includes a first master object and a first slave object in a first form, and the first slave object has a subordinate relationship with the first master object.
Here, a client supporting a virtual scene (such as a game) is installed on the terminal device. When the user opens the client on the terminal and the terminal device runs the client, the terminal device displays a game screen, where the game screen may be obtained by observing the game from a first-person perspective or from a third-person perspective. The game screen includes at least part of the virtual scene; the virtual scene includes a first master object and a first slave object in a first form, and the first slave object has a subordinate relationship with the first master object. The first master object is the virtual object corresponding to the current user account in the virtual scene, and in the virtual scene the user can control the first master object, based on the interface of the virtual scene, to interact with other virtual objects (such as a second virtual object belonging to a different group from, or having a hostile relationship with, the first master object). The first slave object is a summoned object associated with the first master object and is used to assist the first master object in interacting with other virtual objects in the virtual scene; its image can be a virtual character, a virtual animal, a cartoon character, a virtual prop, a virtual vehicle, or the like.
In some embodiments, the terminal device may display the game screen by: receiving a second instruction for the first slave object; and, in response to the second instruction, summoning the first slave object in an initial form, and displaying a game screen in which the first slave object is deformed from the initial form to the first form and the first slave object in the first form is adsorbed onto the first master object so as to become part of the first master object's model.
In practical application, the terminal device may display, in the interface of the virtual scene, a summon control for summoning the first slave object. In response to a trigger operation on the summon control, the terminal device receives the second instruction (i.e. a summon instruction), and in response to the summon instruction generates and sends to the server a summon request for the first slave object, where the summon request carries the object identifier of the first slave object to be summoned. Based on the summon request, the server determines the relevant parameters of the first slave object requested to be summoned and pushes the determined parameters to the terminal device, so that the terminal device renders a screen based on these parameters and displays the rendered summon screen (i.e. the game screen described above).
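The summon exchange described above can be sketched as a minimal client/server round trip. This is an illustrative sketch, not the patent's implementation: all names (`build_call_request`, `handle_call_request`, the parameter table) are hypothetical, and a real game would transport the request over the network 300 rather than call the handler directly.

```python
import json

# Hypothetical parameter table on the server side: object identifier -> the
# rendering-relevant parameters the client needs to display the summon screen.
SLAVE_OBJECT_PARAMS = {
    "slave_001": {"model": "cartoon_monster", "initial_form": "monster", "hp": 100},
}

def build_call_request(slave_object_id: str) -> str:
    """Client side: serialize a summon request carrying the object identifier."""
    return json.dumps({"type": "call_request", "object_id": slave_object_id})

def handle_call_request(raw: str) -> dict:
    """Server side: parse the request and look up the summoned object's parameters."""
    request = json.loads(raw)
    assert request["type"] == "call_request"
    return SLAVE_OBJECT_PARAMS[request["object_id"]]

# The client would render the summon screen from these pushed parameters.
params = handle_call_request(build_call_request("slave_001"))
```

In practice the server's response would also drive the form-change animation (initial form to first form) that the rendered summon screen displays.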
Referring to fig. 4, fig. 4 is a schematic diagram of a form change of a slave object according to an embodiment of the present application. The terminal device displays the summoned first slave object in the initial form, then displays the process in which the first slave object in the initial form changes into the first slave object in the first form and the first slave object in the first form is adsorbed onto the first master object to become part of the first master object's model; for example, first displaying the summoned monster with a cartoon image, then displaying an animation in which the cartoon monster breaks into fragments and the fragments are adsorbed onto the arm of the first master object.
Typically, the assistance that the first slave object brings to the first master object differs between forms. For example, the first slave object in the initial form may be an independent virtual image located at a certain distance around the first master object; when the first master object moves in the virtual scene, the first slave object in the initial form moves following it. After the first slave object in the initial form is summoned, and before it changes from the initial form to the first form, the terminal device can also control the first slave object in the initial form to scout a target area centered on the first slave object in the virtual scene, and display indication information of a target object (such as a second virtual object or virtual material) when one is detected. The first slave object in the first form is adsorbed on the arm of the first master object and, relative to the initial form, is less easily perceived by the second virtual object; therefore, when the first slave object in the first form is controlled to perform object reconnaissance or resource detection in the virtual scene, it is easier to scout valuable information such as the position of the second virtual object and the surrounding resource distribution, which in turn can improve the interaction capability of the first master object.
In some embodiments, the terminal device may, in response to the first slave object in the first form being adsorbed onto the first master object, control the first slave object in the first form to perform object reconnaissance on the virtual scene; and, when a fourth virtual object is detected in a first area centered on the first master object, display position information of the fourth virtual object in a map corresponding to the virtual scene.
In practical application, the terminal device may control the first slave object in the first form to assist the first master object in the virtual scene, interacting with other virtual objects belonging to different virtual object groups. Therefore, to obtain the position information of other virtual objects, the first slave object in the first form may be controlled to perform object reconnaissance on the virtual scene. In actual implementation, the other virtual objects in the virtual scene (such as virtual objects in a different group from the first master object, or non-user characters, collectively referred to as a fourth virtual object) are bound with a collider component (such as a collision box or a collision sphere). In the process of controlling the first slave object in the first form to perform object reconnaissance on the virtual scene, a detection ray is emitted from the first slave object along the direction of the first master object or first slave object through a camera component on the first slave object. When the detection ray intersects the collider component bound to the fourth virtual object, it is determined that the first slave object has scouted the fourth virtual object; when the detection ray does not intersect that collider component, it is determined that the first slave object has not scouted the fourth virtual object. When the fourth virtual object is scouted, an early warning is issued for the fourth virtual object, and position information of the fourth virtual object, such as the distance and direction of the fourth virtual object relative to the first master object, is displayed in the map of the virtual scene.
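The ray-versus-collider test above can be illustrated with a minimal two-dimensional sketch (not the patent's actual engine code): a detection ray is cast from the slave object along a facing direction, and the fourth virtual object counts as scouted when the ray intersects the collision sphere bound to it. The function name and the 2D simplification are assumptions for illustration; a game engine would provide an equivalent 3D raycast query.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t >= 0) meets the sphere.

    Models the detection ray emitted from the slave object intersecting the
    collider component (a collision sphere) bound to another virtual object.
    """
    ox, oy = origin
    dx, dy = direction
    cx, cy = center
    # Normalize the ray direction.
    length = math.hypot(dx, dy)
    dx, dy = dx / length, dy / length
    # Project the vector toward the sphere centre onto the ray direction.
    t = (cx - ox) * dx + (cy - oy) * dy
    if t < 0:  # the sphere lies behind the ray origin: not scouted
        return False
    # Closest point on the ray to the sphere centre.
    px, py = ox + t * dx, oy + t * dy
    return math.hypot(cx - px, cy - py) <= radius

# Slave object at the origin facing +x; enemy collider centred at (10, 1), radius 2.
scouted = ray_hits_sphere((0.0, 0.0), (1.0, 0.0), (10.0, 1.0), 2.0)
```

When the test returns True, the client would issue the early warning and mark the fourth virtual object's distance and direction on the map.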
Referring to fig. 5, fig. 5 is a reconnaissance schematic diagram provided by the embodiment of the present application. The first slave object in the first form is controlled to perform object reconnaissance in the virtual scene; when a fourth virtual object is scouted in the first area centered on the first master object or the first slave object, position information of the fourth virtual object is displayed in the map of the virtual scene, so as to warn the first master object, or all virtual objects in the group to which the first master object belongs (friendly relations), and let them view the position information of the fourth virtual object. After the position information of the fourth virtual object is known, the terminal device can conveniently control the corresponding virtual object to interact with the fourth virtual object using the interaction strategy best suited to the current interaction, which helps improve the interaction capability of the first master object or the group to which it belongs.
Step 102: in response to a first instruction for the first slave object, the first slave object is controlled to deform from the first form to the second form.
Here, the first instruction, also called a deformation instruction, can be triggered through a deformation control. For example, the terminal device may display, in the interface of the virtual scene, a deformation control for the first slave object; in response to a trigger operation on the deformation control, it receives the deformation instruction, and in response to the deformation instruction generates and sends to the server a deformation request for the first slave object, where the deformation request carries the object identifier of the first slave object. Based on the deformation request, the server determines the relevant parameters of the first slave object requested to be deformed (for example, according to a non-user character or other virtual object near the first slave object that is in a different group from the first master object, collectively referred to as a second virtual object), and pushes the determined parameters to the terminal device, so that the terminal device renders a screen based on these parameters and displays the rendered screen, i.e. displays the first slave object deformed from the first form to the second form, where the second form corresponds to (e.g. conforms to) the image of the second virtual object in the virtual scene. The second virtual object may be a virtual object existing in a first area centered on the first slave object in the virtual scene, and has a hostile relationship with the first master object. The display style of the second virtual object may be the same or different under the two viewing angles of the hostile relationship: for example, under the enemy's viewing angle (e.g. that of the second virtual object, or of other virtual objects having a friendly relationship with the second virtual object), the second virtual object (e.g. an enemy hero or enemy unit) is displayed in a style that conforms to its own image, while under the viewing angle of the first master object the second virtual object is displayed in a highlighted mode to give the user a noticeable prompt, where the highlighted mode includes at least one of the following display modes: target color display, overlay mask display, highlight display, outline display, and transparent display.
Referring to fig. 6, fig. 6 is a schematic diagram of a form change of a slave object according to an embodiment of the present application. When the first slave object in the first form is the fragments (obtained by the disintegration of the cartoon monster) adsorbed on the arm of the first master object, the terminal device, in response to the deformation instruction, displays the fragments adsorbed on the arm of the first master object detaching from the arm and moving to a target position, and displays at the target position an animation of the detached fragments deforming into a first slave object whose image is consistent with that of the second virtual object (i.e., the first slave object in the second form).
In some embodiments, when the number of second virtual objects is at least two, the terminal device may determine the second form to be deformed into as follows: displaying a form selection interface, and displaying in the form selection interface the selectable forms corresponding to the at least two second virtual objects; and, in response to a selection operation on the form of a target virtual object among the at least two second virtual objects, taking the form of the selected target virtual object as the second form. In this way, the user can manually select the deformation form of the first slave object, further improving the user's operation experience.
For example, the terminal device receives the deformation instruction in response to a trigger operation on the deformation control, and in response to the deformation instruction generates and sends to the server a deformation request for the first slave object, where the deformation request carries the object identifier of the first slave object. Based on the deformation request, the server detects second virtual objects in a third area centered on the first slave object in the virtual scene and returns the detection result to the terminal device. When the detection result indicates that a plurality of second virtual objects are detected, the terminal device displays a form selection interface in the interface of the virtual scene, and displays in it the selectable forms corresponding to the at least two second virtual objects, such as the form of second virtual object 1, the form of second virtual object 2, and the form of second virtual object 3. Assuming the user selects the form of second virtual object 2 in the form selection interface, the terminal device takes the form of second virtual object 2 as the second form.
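The form-selection flow above can be sketched as follows. This is a hedged illustration, not the patent's code: the function names and the enemy list are hypothetical, and in practice the candidate list comes from the server's detection of the third area while the selection comes from the user's tap in the form selection interface.

```python
def detect_candidate_forms(detected_enemies):
    """Server-side detection result -> list of selectable forms for the interface."""
    return [enemy["form"] for enemy in detected_enemies]

def choose_second_form(candidates, selected_index):
    """The form the user picks in the selection interface becomes the second form."""
    return candidates[selected_index]

# Detection found three second virtual objects in the third area around the slave object.
enemies = [{"form": "enemy_hero_1"}, {"form": "enemy_hero_2"}, {"form": "enemy_hero_3"}]
# The user selects the second entry (the form of second virtual object 2).
second_form = choose_second_form(detect_candidate_forms(enemies), 1)
```

The chosen form would then drive the deformation animation shown in fig. 6.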
In some embodiments, before controlling the first slave object to deform from the first form to the second form, the terminal device may predict the second form as follows: acquiring scene data of a first area centered on the first slave object in the virtual scene, where the scene data includes other virtual objects located in the first area (e.g., virtual objects in a different group from the first master object, or non-user characters); and calling a machine learning model to perform prediction processing based on the scene data to obtain the second form, where the machine learning model is trained based on scene data in sample areas and labeled forms (forms of the first slave object). In this way, the form of the first slave object that can maximally improve the interaction capability of the first master object or its group is predicted by calling a machine learning model, which improves prediction accuracy, causes maximum interference to the enemy, and further improves the interaction capability of the first master object or the group to which it belongs.
In practical application, the machine learning model may be a neural network model (such as a convolutional neural network, a deep convolutional neural network, or a fully-connected neural network), a decision tree model, a gradient boosting tree, a multi-layer perceptron, a support vector machine, or the like; the embodiment of the present application does not specifically limit the type of the machine learning model.
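As a deliberately simple stand-in for the machine learning model discussed above: the embodiments only require some model mapping scene data (the other virtual objects in the first area) to a predicted second form. The frequency heuristic below plays that role for illustration; a trained classifier (neural network, decision tree, gradient boosting tree, etc.) would replace `predict_second_form` in practice, and the function name and data layout are assumptions.

```python
from collections import Counter

def predict_second_form(scene_objects):
    """Pick the most common enemy image in the first area as the form to mimic.

    Intuition: mimicking the most common image around the slave object makes it
    least conspicuous to the enemy, which is the goal the prediction serves.
    """
    counts = Counter(obj["image"] for obj in scene_objects)
    return counts.most_common(1)[0][0]

# Scene data for the first area: two soldiers and one mage near the slave object.
region = [{"image": "soldier"}, {"image": "soldier"}, {"image": "mage"}]
predicted = predict_second_form(region)
```

A real model would be trained, as the text notes, on scene data from sample areas paired with labeled slave-object forms.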
Step 103: the first slave object in the second form is controlled to move in the virtual scene to perform object reconnaissance on the virtual scene.
Here, once the first slave object is deformed to correspond to the image of the second virtual object, the user may control the first slave object to move in the virtual scene to perform object reconnaissance on it. For example, a movement control for the first slave object in the second form is displayed in the interface of the virtual scene; when the user triggers the movement control, the terminal device, in response to the movement instruction triggered by the trigger operation, controls the first slave object in the second form to move in the virtual scene, and performs object reconnaissance on the virtual scene during the movement. In practical application, the virtual objects in the virtual scene are bound with collider components (such as collision boxes or collision spheres); in the process of the first slave object in the second form performing object reconnaissance on the virtual scene, a detection ray is emitted from the first slave object along its direction through the camera component on the first slave object. When the detection ray intersects the collider component bound to a virtual object, it is determined that the first slave object has scouted that virtual object; when the detection ray does not intersect the collider component bound to the virtual object, it is determined that the first slave object has not scouted it. In addition, after the first slave object is deformed to correspond to the image of the second virtual object, it can also move automatically in the virtual scene without user control so as to perform object reconnaissance, which simplifies the operation flow and can improve reconnaissance efficiency.
Step 104: in response to the first slave object detecting a third virtual object in the virtual scene, position information of the third virtual object is displayed in a map corresponding to the virtual scene.
Here, the third virtual object is a collective term for the objects detected in the target area centered on the first slave object that have a hostile relation to the first master object, namely non-user characters or virtual objects belonging to a different group; it may include the second virtual object. When a third virtual object is detected, its position information is displayed in the map of the virtual scene so that the first master object, or all virtual objects in the group to which the first master object belongs, can view it. Knowing the position of the third virtual object makes it convenient for the terminal device to control the corresponding virtual object to interact with the third virtual object using the most suitable interaction strategy, thereby improving the interaction capability of the first master object or of its group.
In some embodiments, the terminal device may control the first slave object in the second form to move in the virtual scene and perform object reconnaissance as follows: control the first slave object in the second form to move in the virtual scene; while it moves, control it to release a marker wave to its surroundings and display the second area reached by the marker wave; and control the first slave object in the second form to perform object reconnaissance in the second area. Accordingly, in response to the first slave object detecting a third virtual object in the virtual scene, the terminal device may display the position information of the third virtual object in the map corresponding to the virtual scene as follows: when the third virtual object is detected in the second area, highlight the third virtual object and display its position information in the map, so that the first master object or the other virtual objects in its group can view it.
Referring to fig. 7, fig. 7 is a schematic diagram of reconnaissance provided in an embodiment of the present application. While the first slave object in the second form is controlled to release a marker wave to its surroundings, the second area reached by the wave is displayed; the second area may be a circular area centered on the first slave object with a target distance as its radius, or an area of another shape, and the embodiment of the present application does not limit its shape. The first slave object may then be controlled to perform object reconnaissance in the second area, for example by emitting a detection ray from the first slave object through its camera component: when the detection ray intersects the collider component bound to a third virtual object, it is determined that the third virtual object has been detected in the second area; when it does not, it is determined that no third virtual object has been detected there.
When a third virtual object is detected, a special effect element may be displayed in an area associated with it, for example around its periphery; the special effect element may change the skin material, color, and so on of the third virtual object to highlight it. The position information of the third virtual object is also displayed in the map of the virtual scene, so that the first master object, or all virtual objects in its group, can view it. With the position information available, the first master object or the virtual objects in its group can form the interaction strategy that causes the greatest damage to the third virtual object and execute the corresponding interaction operations, thereby improving their interaction capability.
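The marker-wave check above reduces to a radius test on the circular second area. A minimal sketch, assuming a 2-D scene and illustrative names (`scout_second_area` is not from the application):

```python
import math

def in_marker_wave_area(slave_pos, target_pos, wave_radius):
    """True if `target_pos` lies inside the circular second area
    centered on the first slave object."""
    dx = target_pos[0] - slave_pos[0]
    dy = target_pos[1] - slave_pos[1]
    return math.hypot(dx, dy) <= wave_radius

def scout_second_area(slave_pos, objects, wave_radius):
    """Return the objects reached by the marker wave; in the game these
    would then be highlighted and pinned on the map."""
    return [o for o in objects
            if in_marker_wave_area(slave_pos, o["pos"], wave_radius)]

enemies = [{"id": "A", "pos": (3, 4)}, {"id": "B", "pos": (30, 40)}]
hits = scout_second_area((0, 0), enemies, wave_radius=10)
# hits contains only enemy "A" (distance 5); "B" (distance 50) is outside.
```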
In some embodiments, when the third virtual object is detected in the second area, the terminal device may further control the first slave object in the second form to lock onto the third virtual object; when the third virtual object moves in the virtual scene to a target position blocked by an obstacle, the third virtual object at the target position is displayed in perspective, i.e., rendered visible through the obstacle.
Referring to fig. 8, fig. 8 is a schematic diagram of reconnaissance provided in an embodiment of the present application. After the first slave object detects and highlights a third virtual object, if the third virtual object moves in the virtual scene, the first slave object is controlled to lock onto it and to continuously release the marker wave toward it, so that it remains highlighted. If the third virtual object moves to a place blocked by an obstacle (such as a wall), it is displayed in perspective: even while blocked, it remains highlighted and therefore visible to the first master object and to all virtual objects in its group, and its position information can still be displayed in the map of the virtual scene. The third virtual object is thus kept exposed within the field of view of the first master object and its group, which helps them formulate an interaction strategy and improves their interaction capability.
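The lock-and-perspective rule above amounts to a small state decision: a locked target is never hidden, only switched between a normal highlight and a through-wall outline. A hedged sketch (the function and state names are illustrative):

```python
def render_state(locked, occluded_by_obstacle):
    """Decide how a scouted third virtual object is drawn.

    An unlocked target disappears behind obstacles as usual; a locked
    target stays highlighted, and when it moves behind an obstacle it is
    drawn in 'perspective' (outline visible through the wall)."""
    if not locked:
        return "hidden" if occluded_by_obstacle else "normal"
    return "perspective" if occluded_by_obstacle else "highlighted"

in_open = render_state(locked=True, occluded_by_obstacle=False)   # "highlighted"
behind_wall = render_state(locked=True, occluded_by_obstacle=True)  # "perspective"
unlocked = render_state(locked=False, occluded_by_obstacle=True)    # "hidden"
```

In a renderer, the "perspective" state would typically be realized by a second draw pass with depth testing disabled, which is why the effect survives occlusion.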
In some embodiments, the terminal device displays the position information of a third virtual object in the map corresponding to the virtual scene in response to the first slave object detecting the third virtual object in the virtual scene and being attacked by it. Here, when the first slave object detects the third virtual object and is attacked by it, the position information of the third virtual object may be displayed immediately in the map of the virtual scene, so that the first master object or the other virtual objects in its group can view it.
In some embodiments, the terminal device may display the location information of the third virtual object in the map corresponding to the virtual scene by: when the number of the third virtual objects is at least two, acquiring interaction parameters of each third virtual object; wherein the interaction parameters include at least one of: interaction role, interaction preference, interaction capability, distance to the first master object; and displaying the position information of each third virtual object in the map corresponding to the virtual scene by adopting a display style corresponding to the interaction parameter.
Here, when multiple third virtual objects are detected, the degree of threat each poses to the first master object may be determined from its interaction parameters, a display priority derived from that threat degree, and each third virtual object displayed differently according to its priority. For example, the more hostile a third virtual object's interaction role is to that of the first master object, the stronger its interaction capability, the harder its interaction preference is for the first master object to cope with, or the closer it is to the first master object, the greater its threat degree and the higher its display priority. The position information of up to a target number of third virtual objects whose display priority exceeds a target priority may then be selected for display, or the position information of every third virtual object may be displayed, with a more prominent display style for higher priorities. This helps the first master object, or the virtual objects in its group, select a suitable third virtual object to attack, thereby improving interaction capability.
In some embodiments, the terminal device may control the first slave object in the second form to move in the virtual scene and perform object reconnaissance as follows: control the first slave object in the second form to move in the virtual scene; in response to the first slave object in the second form being attacked by a fifth virtual object during the movement, control it to deform from the second form back to the initial form; and control the first slave object in the initial form to perform object reconnaissance on the virtual scene, for example by releasing the marker wave to its surroundings. Accordingly, when a third virtual object is detected in the virtual scene, the terminal device may highlight the third virtual object and display its position information in the map corresponding to the virtual scene.
In some embodiments, in the case that the first slave object in the second form has detected the third virtual object, the terminal device may further receive a tracking instruction of the first slave object in the second form for the third virtual object when the third virtual object moves in the virtual scene; in response to the tracking instruction, the terminal device controls the first slave object in the second form to track the third virtual object along the tracking direction indicated by the instruction, and updates the displayed position information of the third virtual object in the map corresponding to the virtual scene.
Here, after the first slave object detects the third virtual object and its position information is displayed, if the third virtual object moves in the virtual scene, the terminal device may control the first slave object to track it, that is, to move along with it, and to update the position information displayed in the map. The third virtual object is thus always exposed within the field of view of the first master object or of the virtual objects in its group, which helps them form the interaction strategy that causes the greatest damage to the third virtual object and execute the corresponding interaction operations, thereby improving the interaction capability of the first master object or its group.
In some embodiments, the terminal device may determine that the third virtual object has been detected as follows: during object reconnaissance of the virtual scene by the first slave object in the second form, when an obstacle is detected in the virtual scene, control the first slave object in the second form to release toward the obstacle a pulse wave capable of penetrating it; when it is determined, based on the pulse wave, that a third virtual object is blocked by the obstacle, the third virtual object is deemed detected in the virtual scene, and the first slave object in the second form is controlled to apply a detection mark to the third virtual object so that it is displayed in perspective.
In practical application, the obstacle is bound to a collider component (such as a collision box or collision sphere). While the first slave object is controlled to perform object reconnaissance in the virtual scene, it may first be determined whether an obstacle exists: for example, a detection ray is emitted from the first slave object along its facing direction through its camera component, and when the detection ray intersects the collider component bound to an obstacle, the first slave object is determined to have detected the obstacle. At this point it is further determined whether a third virtual object is hidden behind the obstacle. When one is, the first slave object is controlled to apply a detection mark to the third virtual object, which is then displayed in perspective and highlighted: even though it is blocked by the obstacle, it remains visible to the first master object and to all virtual objects in its group, and stays exposed within their field of view. This helps them formulate an interaction strategy and improves their interaction efficiency and capability.
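The two-stage check above (ray stops at the obstacle, pulse wave continues through it) can be sketched in one dimension along the detection ray. Assumptions are labeled; the function name is illustrative:

```python
def targets_behind_obstacle(obstacle_dist, targets, pulse_range):
    """The detection ray hit an obstacle at `obstacle_dist` along the ray.
    The pulse wave penetrates it and reports targets hidden behind it,
    up to `pulse_range` from the first slave object.

    `targets` is a list of {"id", "dist"} with `dist` measured along the
    same ray; a 1-D sketch of the real 3-D query."""
    hidden = []
    for t in targets:
        # Strictly behind the obstacle, but still within pulse reach.
        if obstacle_dist < t["dist"] <= pulse_range:
            hidden.append(t["id"])  # mark for perspective display
    return hidden

# Obstacle (a wall) at 5 m; the pulse wave reaches 20 m.
found = targets_behind_obstacle(5.0,
                                [{"id": "X", "dist": 8.0},
                                 {"id": "Y", "dist": 25.0},
                                 {"id": "Z", "dist": 3.0}],
                                pulse_range=20.0)
# found == ["X"]: Y is beyond pulse range, Z is in front of the wall
# (and would have been seen by the ordinary detection ray already).
```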
In some embodiments, in response to the first master object having a material detection skill, the terminal device may further control the first slave object in the second form to perform material detection on the virtual scene, and display indication information of the corresponding virtual materials when virtual materials are detected in the virtual scene. The virtual materials are available for the first master object to pick up, and the picked-up virtual materials are used to improve the interaction capability of the first master object in the virtual scene.
Here, the material detection skill is a skill for performing material detection. When the first master object is equipped with the material detection skill, or the first slave object has it, the terminal device can control the first slave object in the second form to perform material detection on the virtual scene through this skill. In practical application, each virtual material is bound to a collider component (such as a collision box or collision sphere). During material detection, a detection ray is emitted from the first slave object along its facing direction through the camera component on the first slave object; when the detection ray intersects the collider component bound to a virtual material, the first slave object is determined to have detected that material, and when it does not, the material is determined not to have been detected.
Virtual materials that can be detected by the material detection skill include, but are not limited to: gold coins, construction materials (e.g., ore), food materials, weaponry, equipment, and character upgrade materials. When the first slave object detects a virtual material in the virtual scene, indication information of the material is displayed. Based on this indication information, the terminal device can control the first master object, or other virtual objects in its group, to pick up or mine the detected material, and then to upgrade their own equipment or build virtual buildings with it, thereby improving their interaction capability in the virtual scene, such as attack or defense capability.
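The effect of a pick-up on the master object's capabilities can be sketched as a mapping from material category to the stat it improves. The categories, stat names, and amounts below are illustrative assumptions, not from the application:

```python
# Hypothetical material categories and the capability each one improves.
MATERIAL_EFFECTS = {
    "weapon": "attack",
    "ore": "construction",
    "armor": "defense",
}

def pick_up(master_stats, material_type, amount=1):
    """Apply a picked-up virtual material to the first master object's
    stats; unknown materials have no effect. Returns new stats."""
    stat = MATERIAL_EFFECTS.get(material_type)
    if stat is None:
        return dict(master_stats)  # unknown material: no change
    updated = dict(master_stats)
    updated[stat] = updated.get(stat, 0) + amount
    return updated

stats = pick_up({"attack": 10, "defense": 5}, "weapon")
# stats == {"attack": 11, "defense": 5}
```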
In some embodiments, the terminal device may display the indication information of the corresponding virtual material as follows: when the virtual material is detected in a second area of the virtual scene centered on the first slave object, display category indication information of the material at its position in the second area, and display position indication information of the material in the map corresponding to the virtual scene; at least one of the category indication information and the position indication information serves as the indication information of the corresponding virtual material.
Referring to fig. 9, fig. 9 is a schematic diagram of detection provided in an embodiment of the present application. When the first slave object in the second form detects a virtual material in the second area centered on itself, the category indication information of the material, such as equipment class, construction class, or defense class, is displayed at the material's position, and its position information, such as its distance and direction relative to the first master object or other virtual objects in its group, is displayed in the map of the virtual scene. Based on this indication information, the terminal device may control the first master object or other virtual objects in its group to pick up the virtual material, improving their interaction capability.
In some embodiments, the terminal device may display the indication information of the corresponding virtual materials as follows: when there are at least two virtual materials, display the indication information of a first number of them in a first display style, and the indication information of a second number of them in a second display style. The first display style differs from the second: the first indicates that the corresponding materials lie within the field of view of the first master object, while the second indicates that the corresponding materials lie outside it.
Referring to fig. 10, fig. 10 is a schematic diagram of detection provided in an embodiment of the present application. When the first slave object in the second form detects multiple virtual materials in the virtual scene, the indication information of the materials inside and outside the field of view of the first master object may be displayed in different display styles (such as different colors or different brightnesses). It will be appreciated that as the field of view of the first master object changes, the display style of each virtual material may change accordingly. Displaying the materials within the field of view in a distinct style gives the player a clear prompt and helps the first master object select and pick up suitable materials, improving its interaction capability.
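The in-view/out-of-view split above can be modelled as a cone test around the master object's facing direction. A 2-D sketch under assumed names and a 90° field of view (both illustrative):

```python
import math

def in_field_of_view(master_pos, facing_deg, target_pos, fov_deg=90.0):
    """True if `target_pos` falls within the master object's field of view,
    modelled as a cone of `fov_deg` degrees around the facing direction."""
    dx = target_pos[0] - master_pos[0]
    dy = target_pos[1] - master_pos[1]
    angle = math.degrees(math.atan2(dy, dx))
    # Signed angular difference, wrapped to (-180, 180].
    diff = (angle - facing_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

def display_style(master_pos, facing_deg, material_pos):
    """First display style for materials in view, second style otherwise."""
    if in_field_of_view(master_pos, facing_deg, material_pos):
        return "style_in_view"
    return "style_out_of_view"

# Master at the origin, facing east (0 degrees):
s1 = display_style((0, 0), 0.0, (10, 1))   # in view
s2 = display_style((0, 0), 0.0, (-10, 0))  # behind the master
```

Because the style is recomputed from the current facing angle, turning the master object naturally re-styles each material, matching the behavior described for fig. 10.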
In some embodiments, the terminal device may display the indication information of the corresponding virtual materials as follows: when there are at least two types of virtual materials, display the indication information of the target-type materials in a third display style, and the indication information of the other types in a fourth display style. The third display style differs from the fourth and indicates that the pick-up priority of the target-type materials is higher than that of the other types.
When multiple types of virtual materials are detected, their indication information is displayed in different styles according to their pick-up priorities; in particular, the indication information of the type with the highest pick-up priority is highlighted, so that the virtual object can be controlled to pick up the highlighted target-type materials first. In this way, the target-type materials the first master object needs most are selected from the detected materials, improving its interaction capability.
In actual implementation, the terminal device may determine the target-type virtual materials as follows: based on the use preference of the first master object, obtain the matching degree between each type of virtual material and the use preference, and select the type with the highest matching degree as the target type whose indication information is highlighted. For example, suppose the detected types include equipment-class, construction-class, and defense-class materials. The use preference of the first master object, i.e., its preference for and proficiency with each type of material, may be predicted by a neural network model from the first master object's role in the virtual scene or from the types of materials it has used historically. Based on this preference, the matching degrees of the equipment-class, construction-class, and defense-class materials with the use preference are determined respectively; if the defense-class materials have the highest matching degree, their indication information is highlighted.
In addition, the target-type materials best suited to the first master object may be screened from the multiple types according to at least one of the following parameters: degree of consumption, pick-up difficulty coefficient, and distance from the first master object. Highlighting the indication information of the screened materials selects, from the detected materials, the target type the first master object most prefers, most needs, and is best suited to, thereby improving its interaction capability.
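The matching-degree selection described above reduces to an argmax over per-type scores. A hedged sketch in which the preference scores stand in for a model's predictions (the names and values are illustrative):

```python
def select_target_type(detected_types, use_preference):
    """Pick the material type to highlight: the detected type whose
    matching degree with the first master object's use preference is
    highest. `use_preference` maps type -> predicted score, e.g. the
    output of a model over the role and historical usage."""
    return max(set(detected_types),
               key=lambda t: use_preference.get(t, 0.0))

detected = ["equipment", "construction", "defense"]
# Stand-in for neural-network predictions of preference/proficiency:
preference = {"equipment": 0.3, "construction": 0.2, "defense": 0.9}
target = select_target_type(detected, preference)
# target == "defense", so the defense-class indication info is highlighted.
```

Extra parameters such as consumption degree, pick-up difficulty, or distance could be folded into the score the same way the threat score combines interaction parameters.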
It can be understood that in the embodiments of the present application, the relevant data involved, such as login accounts and scene data, are essentially user data. When the embodiments of the present application are applied to specific products or technologies, the user's permission or consent must be obtained, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario is described. Taking a shooting game as an example, the first slave object is an avatar controlled by the player (i.e., the first master object) in the game to assist the player's interaction. The same first slave object can take different forms, and the terminal device can control it to deform between them; the first slave object in each form corresponds to different instructions, which control it to execute the corresponding operations and thus provide the corresponding assistance to the player.
Referring to fig. 11, fig. 11 is a flowchart of a method for controlling a first slave object in a virtual scene according to an embodiment of the present application, where the method includes:
step 201: the first terminal device responds to the first instruction and generates and sends a calling request for the first slave object to the server.
Here, the first instruction is a summoning instruction. A summoning control for summoning the first slave object may be displayed in the game interface; when the user triggers it, the first terminal device receives the summoning instruction in response to the triggering operation and, in response to the instruction, generates a summoning request carrying the object identifier of the first slave object to be summoned.
Step 202: the server determines and returns relevant parameters of the first slave object requested to be summoned by the summoning request to the first terminal device based on the summoning request.
Step 203: the first terminal device performs picture rendering based on the relevant parameters, displays the summoned first slave object in the initial form, and displays the process in which the first slave object in the initial form deforms into the first form and the first slave object in the first form is adsorbed onto the first master object.
Here, the first terminal device performs screen rendering based on the relevant parameters and displays the rendered summoning screen: for example, it first displays the summoned cartoon-image monster, and then displays an animation in which the monster breaks into fragments that are adsorbed onto the arm of the first master object (the player), i.e., the fragments become part of the player model.
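The round trip of steps 201-203 is a simple request/response exchange between client and server. A minimal sketch; the message field names (`type`, `object_id`, `params`) are illustrative assumptions, not from the application:

```python
def build_summon_request(slave_object_id):
    """Client side: turn the summoning instruction into a request (step 201)."""
    return {"type": "summon", "object_id": slave_object_id}

def handle_summon_request(request, object_table):
    """Server side: look up the summoned object's relevant parameters and
    return them to the client (step 202)."""
    params = object_table.get(request["object_id"])
    return {"type": "summon_ack",
            "object_id": request["object_id"],
            "params": params}

# Hypothetical server-side table of summonable slave objects.
OBJECTS = {"pet_01": {"form": "initial", "model": "cartoon_monster"}}

req = build_summon_request("pet_01")
ack = handle_summon_request(req, OBJECTS)
# The client then renders the summoning animation (step 203) from
# ack["params"], starting from the "initial" form.
```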
After the first slave object in the initial form is summoned, and before it changes from the initial form to the first form, the terminal device may also control the first slave object in the initial form to detect the virtual scene (e.g., a target area centered on the first slave object), and display the indication information of any target object (e.g., a virtual object or virtual material) that is detected.
Step 204: the first terminal equipment controls a first slave object in the first form to scout the virtual scene, and when the target object is scout, the indication information corresponding to the target object is displayed.
Here, the target object includes at least one of a second virtual object and virtual materials. When a second virtual object (an enemy) is detected, its position information, such as its distance and direction relative to the first master object, is displayed in the game map. When the first master object is equipped with a material detection skill, the terminal device can control the first slave object in the first form to perform material detection on the virtual scene through that skill, and when a virtual material (such as gold coins, construction materials (e.g., ore), food materials, weaponry, equipment, or character upgrade materials) is detected, display its indication information (such as its category and position).
In practical application, when the number of detected target objects is two or more, each target object may be displayed in a different display style (such as a different color or brightness) according to its characteristics. For example, when the target objects are second virtual objects, each may be displayed in a different style according to its distance from the first master object; when the target objects are virtual materials, the indication information of the materials inside and outside the field of view of the first master object may be displayed in different styles.
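The distance-based styling for second virtual objects can be sketched as a simple bucketing rule. The thresholds and style names are illustrative assumptions:

```python
def style_by_distance(distance, near=10.0, mid=30.0):
    """Choose a display style for a detected second virtual object from
    its distance to the first master object; thresholds are illustrative.
    Closer threats get the most prominent style."""
    if distance <= near:
        return "red_bright"
    if distance <= mid:
        return "orange"
    return "grey_dim"

styles = [style_by_distance(d) for d in (5.0, 20.0, 50.0)]
# styles == ["red_bright", "orange", "grey_dim"]
```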
Step 205: the first terminal device generates and transmits a deforming request for the first slave object to the server in response to the second instruction.
Here, the second instruction is a mimicry instruction or a morphing instruction, and may be triggered through a morphing control. For example, the terminal device may display a morphing control for the first slave object in the interface of the virtual scene; in response to a triggering operation on the control, it receives the morphing instruction and, in response to it, generates and sends to the server a morphing request carrying information such as the object identifier of the first slave object and its current form.
Step 206: the server determines and returns deformation information of the first slave object requested to be deformed by the deformation request to the first terminal device based on the deformation request.
Here, the deformation information may be information related to a second virtual object (a non-user character, or another virtual object in a different group from the first master object) located in the area centered on the first slave object.
Step 207: the first terminal device performs screen rendering based on the deformation information, and displays the animation of the first slave object, wherein the first slave object is deformed from the first form to the second form.
For example, an animation is first displayed in which the fragments adsorbed on the arm of the first master object deform into the cartoon-image monster (which moves to another position, away from the arm), followed by an animation in which the cartoon-image monster deforms into the first slave object matching the appearance of the second virtual object.
Step 208: in response to a reconnaissance instruction, the first terminal device controls the first slave object in the second form to perform object reconnaissance on the virtual scene; when a third virtual object is detected in the virtual scene, the third virtual object is highlighted and its position information is displayed in the map corresponding to the virtual scene.
For example, after the first slave object is deformed from the first form into the second form, the terminal device may control the first slave object in the second form to move in the virtual scene and perform object reconnaissance while moving, for example by controlling it to release a marker wave around itself and scanning the virtual scene through the marker wave. When a third virtual object is detected, a special-effect element may be displayed in an area associated with the third virtual object, for example an added special-effect element displayed around its periphery; the special-effect element may change the skin material, color, and so on of the third virtual object so as to highlight it, and the position information of the third virtual object is displayed in the map of the virtual scene. Once the position information of the third virtual object is known, the terminal device can conveniently control the corresponding virtual object to interact with the third virtual object using the interaction strategy best suited to the first master object, which helps improve the interaction capability of the first master object or of the group to which it belongs. Here, the third virtual object is a generic term for a non-user character, or a virtual object belonging to a different group from the first master object, detected in a target area centered on the first slave object, and may include the second virtual object.
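The marker-wave reconnaissance above can be sketched as follows: objects within the wave radius that belong to a different group from the scout are detected, highlighted, and reported with their map position. All class and function names are illustrative assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    group: str
    x: float
    y: float
    highlighted: bool = False

def release_marker_wave(scout: VirtualObject, radius: float,
                        scene: list) -> list:
    """Return (name, position) entries for detected hostile objects."""
    detections = []
    for obj in scene:
        if obj is scout or obj.group == scout.group:
            continue  # only objects in a different group are reported
        if math.hypot(obj.x - scout.x, obj.y - scout.y) <= radius:
            obj.highlighted = True                          # highlight in scene
            detections.append((obj.name, (obj.x, obj.y)))   # map marker
    return detections
```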
With the position information of the third virtual object acquired, the first master object, or the other virtual objects in the group to which it belongs, can prepare an interaction strategy capable of causing the greatest damage to the second virtual object and execute the corresponding interaction operations according to that strategy, thereby improving their interaction capability.
Step 209: the second terminal device, in response to an attack instruction for the first slave object in the second form, controls the third virtual object to attack the first slave object in the second form.
Step 210: the first terminal device controls the first slave object to deform from the second form back to the initial form in response to the first slave object in the second form being hit by the third virtual object.
Here, after the first slave object is deformed back into the initial form, it may still perform object reconnaissance or resource detection on the virtual scene; when another virtual object or virtual material is detected in the virtual scene, its position information is displayed in the map corresponding to the virtual scene for the player to view.
In the above manner, the first slave object associated with the first master object is transformed from the first form into a second form consistent with the avatar of a second virtual object that is in a hostile relationship with the first master object, and the first slave object in the second form is controlled to perform object reconnaissance in the virtual scene.
In addition, the first slave object associated with the first master object is controlled to perform material detection or object reconnaissance on the virtual scene. When virtual materials are detected, the indication information corresponding to the virtual materials is displayed, so that the first master object can easily see and pick up the detected virtual materials based on the indication information; the picked-up virtual materials then improve the interaction capability of the first master object in the virtual scene, for example by upgrading its own equipment or building a defensive structure, thereby improving its attack or defense capability. When enemy information such as a third virtual object (which may include the second virtual object) is detected, the position information of the third virtual object is displayed in the map so that the first master object, or the other virtual objects in its group, can view it. Knowing the position of the enemy helps them formulate the interaction strategy that can cause the greatest damage to the enemy and execute the corresponding interaction operations according to that strategy, improving their interaction capability (such as attack or defense capability). With this improved interaction capability, the terminal device can reduce the number of interaction operations required to achieve a given interaction goal (such as obtaining effective information about the enemy or defeating the enemy), which improves human-computer interaction efficiency and reduces the occupation of hardware processing resources.
Continuing with the description below of an exemplary architecture of the virtual object control apparatus 465 implemented as software modules provided by embodiments of the present application, in some embodiments, the software modules stored in the virtual object control apparatus 465 of the memory 460 of fig. 2 may include: a first display module 4651, configured to display a game screen, where the game screen includes at least part of a virtual scene, the virtual scene includes a first master object and a first slave object in a first form, and the first slave object has a subordinate relationship with the first master object; a first control module 4652, configured to control the first slave object to deform from the first form to a second form in response to a first instruction, where the second form corresponds to the avatar of a second virtual object in the virtual scene; a second control module 4653, configured to control the first slave object in the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene; and a second display module 4654, configured to display, in response to the first slave object detecting a third virtual object in the virtual scene, the position information of the third virtual object in a map corresponding to the virtual scene.
In some embodiments, the first display module is further configured to receive a summoning instruction for the first slave object; in response to the summoning instruction, summon a first slave object in an initial form, and display a game screen in which the first slave object is deformed from the initial form into the first form and the first slave object in the first form is adsorbed onto the first master object so as to become part of the first master object model. The first control module is further configured to display a process in which the first slave object in the first form is detached from the first master object, moves to a target position, and is deformed into the first slave object in the second form at the target position.
In some embodiments, the apparatus further comprises: the third control module is used for responding to the first slave object of the first form to be adsorbed to the first master object and controlling the first slave object of the first form to perform object reconnaissance on the virtual scene; and when a fourth virtual object is detected in a first area taking the first main object as a center, displaying the position information of the fourth virtual object in a map corresponding to the virtual scene.
In some embodiments, when the number of second virtual objects is at least two, the apparatus further comprises: the form determining module is used for displaying a form selection interface and displaying forms corresponding to at least two second virtual objects which can be selected in the form selection interface; and responding to a selection operation of the morphology of a target virtual object in the at least two second virtual objects, and taking the morphology of the selected target virtual object as the second morphology.
In some embodiments, before the controlling the first slave object to deform from the first form to the second form, the apparatus further comprises: a form prediction module, configured to acquire scene data of a first area in the virtual scene centered on the first slave object, where the scene data includes the virtual objects located in the first area; and call a machine learning model to perform prediction processing based on the scene data to obtain the second form; the machine learning model is trained based on scene data in sample areas and the labeled forms.
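The form-prediction step above can be sketched as follows: scene data from the area around the first slave object is encoded as features and matched against a model trained on labeled sample-area data. The feature layout and the nearest-centroid classifier are illustrative assumptions; the patent only says a trained machine learning model is called:

```python
from collections import Counter

def extract_features(scene_objects: list) -> tuple:
    """Encode the area's scene data as counts of each object type."""
    counts = Counter(obj["type"] for obj in scene_objects)
    return (counts.get("monster", 0), counts.get("soldier", 0))

def predict_second_form(model: dict, scene_objects: list) -> str:
    """Pick the labeled form whose training feature vector is nearest."""
    feats = extract_features(scene_objects)
    def dist(vec):
        return sum((a - b) ** 2 for a, b in zip(feats, vec))
    return min(model.items(), key=lambda kv: dist(kv[1]))[0]

# "model" maps a labeled form to the mean feature vector seen in sample areas.
model = {"monster_form": (3, 0), "soldier_form": (0, 3)}
scene = [{"type": "monster"}, {"type": "monster"}]
predicted = predict_second_form(model, scene)  # nearest centroid wins
```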
In some embodiments, the second control module is further configured to control the first slave object in the second form to move in the virtual scene; control the first slave object in the second form to deform from the second form to the initial form in response to it being attacked by a fifth virtual object while moving in the virtual scene; and control the first slave object in the initial form to perform object reconnaissance on the virtual scene.
In some embodiments, the second control module is further configured to control the first slave object in the second form to move in the virtual scene; control the first slave object in the second form, while it moves in the virtual scene, to release a marker wave to its periphery, and display a second area covered by the marker wave; and control the first slave object in the second form to perform object reconnaissance in the second area. The second display module is further configured to, in response to the first slave object detecting a third virtual object in the second area, highlight the third virtual object and display the position information of the third virtual object in a map corresponding to the virtual scene for viewing by the virtual objects in the group to which the first master object belongs.
In some embodiments, the apparatus further comprises: a fourth control module, configured to control the first slave object in the second form to lock onto a third virtual object in response to the first slave object detecting the third virtual object in the second area; and, when the third virtual object moves in the virtual scene to a target position blocked by an obstacle, display the third virtual object at the target position in perspective.
In some embodiments, the second display module is further configured to display, in response to the first slave object detecting a third virtual object in the virtual scene and the first slave object being attacked by the third virtual object, location information of the third virtual object in a map corresponding to the virtual scene.
In some embodiments, the information display module is further configured to obtain interaction parameters of each third virtual object when the number of the third virtual objects is at least two; wherein the interaction parameters include at least one of: interaction role, interaction preference, interaction capability, distance to the first master object; and displaying the position information of each third virtual object in the map corresponding to the virtual scene by adopting a display style corresponding to the interaction parameter.
In some embodiments, the apparatus further comprises: a fifth control module, configured to receive, when the third virtual object moves in the virtual scene, a tracking instruction of the first slave object in the second form for the third virtual object;
In some embodiments, the apparatus further comprises: a sixth control module, configured to, in the process of controlling the first slave object in the second form to perform object reconnaissance on the virtual scene, control the first slave object in the second form to release, toward an obstacle detected in the virtual scene, a pulse wave for penetrating the obstacle; determine that a third virtual object is detected in the virtual scene when it is detected, based on the pulse wave, that the third virtual object is blocked by the obstacle; and control the first slave object in the second form to apply a reconnaissance mark to the third virtual object and display the third virtual object in perspective.
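A one-dimensional sketch of the pulse-wave reconnaissance above: when the scan encounters an obstacle, a penetrating pulse checks whether a hostile object hides behind it; if so, the object is marked and flagged for perspective ("x-ray") rendering. The geometry and all names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    kind: str       # "obstacle" or "object"
    position: float
    marked: bool = False
    xray: bool = False

def pulse_scan(scout_pos: float, max_range: float, entities: list) -> list:
    """Scan outward from the scout; return objects found behind obstacles."""
    found = []
    ordered = sorted(entities, key=lambda e: abs(e.position - scout_pos))
    behind_obstacle = False
    for e in ordered:
        if abs(e.position - scout_pos) > max_range:
            break
        if e.kind == "obstacle":
            behind_obstacle = True    # pulse wave penetrates from here on
        elif behind_obstacle:
            e.marked = True           # reconnaissance mark
            e.xray = True             # render in perspective
            found.append(e)
    return found
```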
In some embodiments, the apparatus further comprises: the material detection module is used for controlling the first slave object in the second form to detect the materials of the virtual scene in response to the first slave object having the material detection skills; displaying indication information corresponding to virtual materials when the virtual materials are detected in the virtual scene; the virtual materials are used for being picked up by the first main object, and the picked virtual materials are used for improving interaction capability of the first main object in the virtual scene.
In some embodiments, the material detection module is further configured to, when a virtual material is detected in a second area centered on the first slave object in the virtual scene, display, at a location where the virtual material in the second area is located, category indication information of the virtual material, and display, in a map corresponding to the virtual scene, location indication information of the virtual material; and taking at least one of the category indication information and the position indication information as indication information corresponding to the virtual material.
In some embodiments, the material detection module is further configured to display, when the number of virtual materials is at least two, indication information of a first number of virtual materials in the at least two virtual materials in a first display style, and display indication information of a second number of virtual materials in the at least two virtual materials in a second display style; the first display style is different from the second display style, the first display style characterizes that the first number of virtual materials are located in the visual field range of the first main object, and the second display style characterizes that the second number of virtual materials are located outside the visual field range of the first main object.
In some embodiments, the material detection module is further configured to, when there are at least two types of virtual materials, display the indication information of the target-type virtual materials among them in a third display style, and display the indication information of the virtual materials of the other types in a fourth display style; the third display style is different from the fourth display style, and indicates that the pickup priority of the target-type virtual materials is higher than that of the other types of virtual materials.
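The two material-display rules above (in-view vs. out-of-view, and target type vs. other types) can be combined in a small sketch. The style names and the way the two rules compose are illustrative assumptions:

```python
def material_indicator_style(in_view: bool, is_target_type: bool) -> str:
    """Choose an indicator style for a detected virtual material."""
    if is_target_type:
        # Target-type materials have the higher pickup priority.
        return "style3_highlight" if in_view else "style3_dimmed"
    # Other materials: distinguish in-view from out-of-view only.
    return "style1_in_view" if in_view else "style2_out_of_view"
```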
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method provided by the embodiments of the present application described above.
Embodiments of the present application provide a computer readable storage medium having stored therein executable instructions which, when executed by a processor, cause the processor to perform a method provided by embodiments of the present application, for example, as shown in fig. 3.
In some embodiments, the computer-readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; it may also be any of various devices including one of the above memories or any combination thereof.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (20)

1. A method for controlling a virtual object, the method comprising:
displaying a game picture, wherein the game picture comprises at least part of virtual scenes, the virtual scenes comprise a first master object and a first slave object in a first form, and the first slave object and the first master object have a subordinate relationship;
controlling the first slave object to deform from the first configuration to a second configuration in response to a first instruction for the first slave object; wherein the second modality corresponds to an avatar of a second virtual object in the virtual scene;
controlling a first slave object of the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene;
in response to the first slave object detecting a third virtual object in the virtual scene, displaying position information of the third virtual object in a map corresponding to the virtual scene; wherein the second virtual object and the third virtual object have a hostile relationship with the first master object.
2. The method of claim 1, wherein displaying a game screen comprises:
receiving a second instruction for the first slave object;
in response to the second instruction, calling a first slave object in an initial form, and displaying a game screen in which the first slave object is deformed from the initial form to a first form and the first slave object in the first form is adsorbed onto the first master object to become a part of the first master object model;
the controlling the first slave object to deform from the first configuration to the second configuration includes:
displaying a process in which the first slave object in the first form is detached from the first master object, moves to a target position, and is deformed into the second form at the target position.
3. The method of claim 2, wherein the method further comprises:
controlling a first slave object of the first form to perform object reconnaissance on the virtual scene in response to the first slave object of the first form being adsorbed onto the first master object;
and when a fourth virtual object is detected in a first area taking the first main object as a center, displaying the position information of the fourth virtual object in a map corresponding to the virtual scene.
4. The method of claim 1, wherein when the number of second virtual objects is at least two, the method further comprises:
displaying a form selection interface, and displaying forms corresponding to at least two selectable second virtual objects in the form selection interface;
and responding to a selection operation of the morphology of a target virtual object in the at least two second virtual objects, and taking the morphology of the selected target virtual object as the second morphology.
5. The method of claim 1, wherein before the controlling the first slave object to deform from the first form to the second form, the method further comprises:
acquiring scene data of a first area taking the first slave object as a center in the virtual scene; wherein the scene data includes a virtual object located in the first region;
calling a machine learning model to perform prediction processing based on the scene data to obtain the second form;
the machine learning model is trained based on scene data in a sample area and the labeled form.
6. The method of claim 1, wherein the controlling the first slave object of the second modality to move in the virtual scene to perform object reconnaissance on the virtual scene comprises:
controlling a first slave object of the second modality to move in the virtual scene;
controlling the first slave object in the second form to deform from the second form to an initial form in response to the first slave object in the second form being attacked by a fifth virtual object in the process of moving the first slave object in the second form in the virtual scene;
and controlling the first slave object of the initial form to perform object reconnaissance on the virtual scene.
7. The method of claim 1, wherein the controlling the first slave object of the second modality to move in the virtual scene to perform object reconnaissance on the virtual scene comprises:
controlling a first slave object of the second modality to move in the virtual scene;
controlling the first slave object in the second form, in the process of its moving in the virtual scene, to release a marker wave to its periphery, and displaying a second area covered by the marker wave;
controlling the first slave object of the second form to perform object reconnaissance in the second area;
the displaying, in response to the first slave object detecting a third virtual object in the virtual scene, position information of the third virtual object in a map corresponding to the virtual scene includes:
and responding to the first slave object to detect a third virtual object in the second area, highlighting the third virtual object, and displaying the position information of the third virtual object in a map corresponding to the virtual scene for viewing by the virtual objects in the group to which the first master object belongs.
8. The method of claim 7, wherein the method further comprises:
controlling the first slave object of the second modality to lock a third virtual object in response to the first slave object detecting the third virtual object in the second area;
and when the third virtual object moves in the virtual scene and moves to a target position shielded by an obstacle, the third virtual object at the target position is seen through.
9. The method of claim 1, wherein the displaying location information of a third virtual object in a map corresponding to the virtual scene in response to the first slave object detecting the third virtual object in the virtual scene comprises:
and in response to the first slave object detecting a third virtual object in the virtual scene and the first slave object being attacked by the third virtual object, displaying position information of the third virtual object in a map corresponding to the virtual scene.
10. The method of claim 1, wherein the displaying the location information of the third virtual object in the map corresponding to the virtual scene comprises:
when the number of the third virtual objects is at least two, acquiring interaction parameters of each third virtual object;
wherein the interaction parameters include at least one of: interaction role, interaction preference, interaction capability, distance to the first master object;
and displaying the position information of each third virtual object in the map corresponding to the virtual scene by adopting a display style corresponding to the interaction parameter.
11. The method of claim 1, wherein the method further comprises:
when the third virtual object moves in the virtual scene, receiving a tracking instruction of the first slave object of the second form for the third virtual object;
and responding to the tracking instruction, controlling a first slave object of the second form to track the third virtual object along the tracking direction indicated by the tracking instruction, and updating and displaying the position information of the third virtual object in a map corresponding to the virtual scene.
12. The method of claim 1, wherein the method further comprises, prior to displaying the location information of the third virtual object in the map corresponding to the virtual scene:
controlling the first slave object of the second form to release a pulse wave for penetrating the obstacle to the obstacle when the obstacle is detected in the virtual scene in the process of performing object detection on the virtual scene by the first slave object of the second form;
determining that a third virtual object is detected in the virtual scene when the obstacle is detected to be blocked by the third virtual object based on the pulse wave;
and controlling the first slave object in the second form to apply a reconnaissance mark to the third virtual object, and displaying the third virtual object in perspective.
13. The method of claim 1, wherein the method further comprises:
controlling the first slave object of the second form to perform material detection on the virtual scene in response to the first slave object having material detection skills;
displaying indication information corresponding to virtual materials when the first slave object detects the virtual materials in the virtual scene;
the virtual materials are used for being picked up by the first main object, and the picked virtual materials are used for improving interaction capability of the first main object in the virtual scene.
14. The method of claim 13, wherein the displaying the indication information corresponding to the virtual asset when the first slave object detects the virtual asset in the virtual scene comprises:
when the first slave object detects virtual materials in a second area taking the first slave object as a center in the virtual scene, displaying category indication information of the virtual materials at the position of the virtual materials in the second area, and displaying position indication information of the virtual materials in a map corresponding to the virtual scene;
and taking at least one of the category indication information and the position indication information as the indication information corresponding to the virtual material.
15. The method of claim 13, wherein displaying the indication information corresponding to the virtual good comprises:
when the number of the virtual materials is at least two, displaying the indication information of the first number of the virtual materials in the at least two virtual materials by adopting a first display mode, and displaying the indication information of the second number of the virtual materials in the at least two virtual materials by adopting a second display mode;
the first display style is different from the second display style, the first display style characterizes that the first number of virtual materials are located in the visual field range of the first main object, and the second display style characterizes that the second number of virtual materials are located outside the visual field range of the first main object.
16. The method of claim 13, wherein displaying the indication information corresponding to the virtual good comprises:
when the types of the virtual materials are at least two, displaying the indication information of the virtual materials of the target type in the at least two virtual materials by adopting a third display mode, and displaying the indication information of the virtual materials of other types except the target type in the at least two virtual materials by adopting a fourth display mode;
the third display style is different from the fourth display style, and indicates that the pickup priority of the target-type virtual materials is higher than that of the other types of virtual materials.
17. A control apparatus for a virtual object, the apparatus comprising:
the first display module is used for displaying a game picture, wherein the game picture comprises at least part of virtual scenes, the virtual scenes comprise a first main object and a first slave object in a first form, and the first slave object and the first main object have a subordinate relation;
a first control module for controlling the first slave object to deform from the first configuration to a second configuration in response to a first instruction for the first slave object; wherein the second modality corresponds to an avatar of a second virtual object in the virtual scene;
the second control module is used for controlling the first slave object in the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene;
a second display module, configured to, in response to the first slave object detecting a third virtual object in the virtual scene, display location information of the third virtual object in a map corresponding to the virtual scene; wherein the second virtual object and the third virtual object have hostile relationships with the first master object.
18. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of controlling a virtual object according to any one of claims 1 to 16 when executing executable instructions stored in said memory.
19. A computer readable storage medium storing executable instructions for implementing the method of controlling a virtual object according to any one of claims 1 to 16 when executed by a processor.
20. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the method of controlling a virtual object according to any one of claims 1 to 16.
CN202210365169.1A 2022-01-11 2022-04-07 Virtual object control method, device, equipment, storage medium and program product Pending CN116920403A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202210365169.1A CN116920403A (en) 2022-04-07 2022-04-07 Virtual object control method, device, equipment, storage medium and program product
KR1020247009494A KR20240046594A (en) 2022-01-11 2023-01-10 Partner object control methods and devices, and device, media and program products
PCT/CN2023/071526 WO2023134660A1 (en) 2022-01-11 2023-01-10 Partner object control method and apparatus, and device, medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210365169.1A CN116920403A (en) 2022-04-07 2022-04-07 Virtual object control method, device, equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN116920403A true CN116920403A (en) 2023-10-24

Family

ID=88393020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210365169.1A Pending CN116920403A (en) 2022-01-11 2022-04-07 Virtual object control method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN116920403A (en)

Similar Documents

Publication Publication Date Title
CN112090069B (en) Information prompting method and device in virtual scene, electronic equipment and storage medium
CN112402960B (en) State switching method, device, equipment and storage medium in virtual scene
CN112691377A (en) Control method and device of virtual role, electronic equipment and storage medium
CN112121414B (en) Tracking method and device in virtual scene, electronic equipment and storage medium
CN112076473A (en) Control method and device of virtual prop, electronic equipment and storage medium
CN113633964B (en) Virtual skill control method, device, equipment and computer readable storage medium
US20220266139A1 (en) Information processing method and apparatus in virtual scene, device, medium, and program product
CN112138385B (en) Virtual shooting prop aiming method and device, electronic equipment and storage medium
CN112057860B (en) Method, device, equipment and storage medium for activating operation control in virtual scene
KR20220082924A (en) Method and apparatus, device, storage medium and program product for controlling a virtual object
CN112402946A (en) Position acquisition method, device, equipment and storage medium in virtual scene
CN113018862B (en) Virtual object control method and device, electronic equipment and storage medium
CN114217708A (en) Method, device, equipment and storage medium for controlling opening operation in virtual scene
CN114296597A (en) Object interaction method, device, equipment and storage medium in virtual scene
CN112121432B (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN113144617B (en) Control method, device and equipment of virtual object and computer readable storage medium
CN112121433B (en) Virtual prop processing method, device, equipment and computer readable storage medium
CN112717403B (en) Virtual object control method and device, electronic equipment and storage medium
CN116920403A (en) Virtual object control method, device, equipment, storage medium and program product
CN116920401A (en) Virtual object control method, device, equipment, storage medium and program product
CN114146414A (en) Virtual skill control method, device, equipment, storage medium and program product
CN112870694B (en) Picture display method and device of virtual scene, electronic equipment and storage medium
CN113101636B (en) Information display method and device for virtual object, electronic equipment and storage medium
WO2024060924A1 (en) Interaction processing method and apparatus for virtual scene, and electronic device and storage medium
CN116764207A (en) Interactive processing method, device, equipment and storage medium in virtual scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination