CN116920401A - Virtual object control method, device, equipment, storage medium and program product - Google Patents

Virtual object control method, device, equipment, storage medium and program product

Info

Publication number
CN116920401A
Authority
CN
China
Prior art keywords: virtual, slave, controlling, scene, materials
Prior art date
Legal status
Pending
Application number
CN202210363835.8A
Other languages
Chinese (zh)
Inventor
顾列宾
Current Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd filed Critical Shenzhen Tencent Network Information Technology Co Ltd
Priority to CN202210363835.8A
Publication of CN116920401A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
      • A63 SPORTS; GAMES; AMUSEMENTS
        • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
          • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
            • A63F13/50 Controlling the output signals based on the game progress
              • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
            • A63F13/55 Controlling game characters or game objects based on the game progress
              • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
            • A63F13/80 Special adaptations for executing a specific game genre or game mode
              • A63F13/837 Shooting of targets
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00 Image analysis
            • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a virtual object control method, apparatus, device, computer-readable storage medium and computer program product. The method includes: displaying a game picture, where the game picture includes at least part of a virtual scene, the virtual scene includes a first master object and a first slave object, the first slave object has a subordinate relationship with the first master object, and the first slave object has a material detection skill for performing material detection on the virtual scene; in response to a first instruction for the first slave object, controlling the first slave object to attach to the first master object; and, in response to the first slave object detecting virtual materials within a first range of the first master object, displaying indication information corresponding to the virtual materials. The virtual materials can be picked up by the first master object, and the picked-up virtual materials improve the interaction capability of the first master object in the virtual scene. The application improves human-computer interaction efficiency and reduces the occupation of hardware processing resources.

Description

Virtual object control method, device, equipment, storage medium and program product
Technical Field
The present application relates to human-computer interaction technology, and in particular to a virtual object control method, apparatus, device, computer-readable storage medium and computer program product.
Background
Display technologies based on graphics processing hardware have expanded the channels for perceiving environments and acquiring information. In particular, display technology for virtual scenes can realize diversified interaction between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and has a wide range of typical application scenarios; for example, in the virtual scene of a game it can simulate a real battle process between virtual objects.
Taking a shooting game as an example, most players aim to kill enemies with weapons, so behavior patterns and combat strategies are relatively monotonous. For example, although a player can use an associated slave object (also called a summoned object or virtual pet) to scout enemies, the player has no awareness of the virtual materials that could be picked up and therefore cannot pick them up in time, which limits the improvement of the player's interaction capability. As a result, many interactive operations have to be executed to achieve a given interactive purpose (such as defeating an enemy), human-computer interaction efficiency is low, and hardware processing resources are wasted.
Disclosure of Invention
Embodiments of the application provide a virtual object control method, apparatus, device, computer-readable storage medium and computer program product that can detect virtual materials for improving interaction capability, thereby improving human-computer interaction efficiency while reducing the occupation of hardware processing resources.
The technical scheme of the embodiment of the application is realized as follows:
An embodiment of the application provides a virtual object control method, including:
displaying a game picture, where the game picture includes at least part of a virtual scene, the virtual scene includes a first master object and a first slave object, the first slave object has a subordinate relationship with the first master object, and the first slave object has a material detection skill for performing material detection on the virtual scene;
in response to a first instruction for the first slave object, controlling the first slave object to attach to the first master object;
in response to the first slave object detecting virtual materials within a first range of the first master object, displaying indication information corresponding to the virtual materials;
where the virtual materials can be picked up by the first master object, and the picked-up virtual materials are used to improve the interaction capability of the first master object in the virtual scene.
An embodiment of the application provides a virtual object control apparatus, including:
a first display module, configured to display a game picture, where the game picture includes at least part of a virtual scene, the virtual scene includes a first master object and a first slave object, the first slave object has a subordinate relationship with the first master object, and the first slave object has a material detection skill for performing material detection on the virtual scene;
a first control module, configured to control the first slave object to attach to the first master object in response to a first instruction for the first slave object;
a second display module, configured to display indication information corresponding to virtual materials in response to the first slave object detecting the virtual materials within a first range of the first master object;
where the virtual materials can be picked up by the first master object, and the picked-up virtual materials are used to improve the interaction capability of the first master object in the virtual scene.
In the above scheme, the apparatus further includes: a second control module, configured to, in response to a second instruction for the first slave object, control the first slave object to detach from the first master object and transform into a first form, and control the first slave object in the first form to perform material detection on the virtual scene; and, in response to the first slave object detecting virtual materials within a second range of the first slave object, display indication information corresponding to the virtual materials.
In the above scheme, the second display module is further configured to, in response to the first slave object detecting virtual materials within a first range centered on the first master object, display type indication information of the virtual materials at the positions of the virtual materials within the first range, and display position indication information of the virtual materials in a map corresponding to the virtual scene; at least one of the type indication information and the position indication information is used as the indication information corresponding to the virtual materials.
In the above scheme, the second display module is further configured to, when there are at least two virtual materials, display the indication information of a first number of the virtual materials in a first display style, and display the indication information of a second number of the virtual materials in a second display style; the first display style is different from the second display style, the first display style indicates that the first number of virtual materials are within the field of view of the first master object, and the second display style indicates that the second number of virtual materials are outside the field of view of the first master object.
In the above scheme, the second display module is further configured to, when there are at least two types of virtual materials, display the indication information of virtual materials of a target type in a third display style, and display the indication information of virtual materials of types other than the target type in a fourth display style; the third display style is different from the fourth display style and indicates that the pickup priority of virtual materials of the target type is higher than that of the other types.
In the above scheme, after controlling the first slave object to attach to the first master object, the apparatus further includes: a third control module, configured to control the first slave object to perform object reconnaissance on the virtual scene; and, when the first slave object detects, within a third range centered on the first master object, a second virtual object that has a hostile relationship with the first master object, display the position information of the second virtual object in a map corresponding to the virtual scene.
In the above scheme, the apparatus further includes: a fourth control module, configured to, in response to a third instruction for the first slave object, control the first slave object to detach from the first master object and transform into the first form, and control the first slave object to transform from the first form into a second form, where the second form corresponds to the avatar of a third virtual object in the virtual scene located within a fourth range centered on the first slave object; control the first slave object in the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene; and, in response to the first slave object detecting a fourth virtual object in the virtual scene, display the position information of the fourth virtual object in a map corresponding to the virtual scene; where both the third virtual object and the fourth virtual object have a hostile relationship with the first master object.
In the above scheme, when there are at least two second virtual objects, the apparatus further includes: a form determining module, configured to display a form selection interface, and display in the form selection interface the selectable forms corresponding to the at least two second virtual objects; and, in response to a selection operation on the form of a target virtual object among the at least two second virtual objects, use the form of the selected target virtual object as the second form.
In the above scheme, the apparatus further includes: a form prediction module, configured to acquire scene data corresponding to the fourth range in the virtual scene, where the scene data includes at least one of: non-user characters in the fourth range, and other virtual objects in the fourth range; and call a machine learning model to perform prediction processing based on the scene data to obtain the second form; the machine learning model is trained on scene data within a sample range and labeled forms, as shown in the sketch below.
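To make the form-prediction flow above concrete, the following is a minimal Python sketch, assuming a scikit-learn style classifier trained offline on pairs of scene features and labeled forms; the feature encoding, form catalog and all names are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical catalog of forms the slave object can assume.
FORM_CATALOG = ["oil_drum", "stone", "plant"]

@dataclass
class SceneData:
    npc_count: int    # non-user characters within the fourth range
    enemy_count: int  # hostile virtual objects within the fourth range

def encode(scene: SceneData) -> list:
    """Turn scene data into a fixed-length feature vector."""
    return [float(scene.npc_count), float(scene.enemy_count)]

def predict_second_form(model, scene: SceneData) -> str:
    """model is any classifier exposing a scikit-learn style predict();
    it is assumed to be trained on (scene features, labeled form) pairs
    sampled from the virtual scene."""
    form_index = int(model.predict([encode(scene)])[0])
    return FORM_CATALOG[form_index]
```

In practice the feature vector would also encode positions, terrain and the other scene data the patent describes; two counts are used here only to keep the sketch self-contained.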
In the above scheme, before displaying the position information of the fourth virtual object in the map corresponding to the virtual scene, the apparatus further includes: a fifth control module, configured to control the first slave object in the second form to apply a reconnaissance mark to the fourth virtual object, so as to highlight the fourth virtual object.
In the above scheme, the fifth control module is further configured to control the first slave object in the second form to lock onto the fourth virtual object and obtain the distance between the first slave object and the fourth virtual object; when the distance does not exceed the target distance for applying the reconnaissance mark, control the first slave object in the second form to launch a projectile at the fourth virtual object; and, when the fourth virtual object is hit by the projectile, display a special-effect element in an area associated with the fourth virtual object to highlight it.
In the above scheme, the fifth control module is further configured to, when the distance between the first slave object and the fourth virtual object exceeds the target distance for applying the reconnaissance mark, control the first slave object in the second form to move at a target speed toward the position of the fourth virtual object until the distance no longer exceeds the target distance, as sketched below.
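The marking behaviour just described (fire when within the target distance, otherwise close the gap at the target speed) can be sketched as a per-tick update. The distance, speed and return conventions below are assumptions for illustration only.

```python
import math

TARGET_DISTANCE = 30.0  # max range of the scout mark (assumed units)
TARGET_SPEED = 8.0      # movement per tick (assumed)

def scout_mark_tick(slave_pos, target_pos):
    """One update tick: launch the marking projectile when within the
    target distance, otherwise move toward the target at the target speed."""
    d = math.dist(slave_pos, target_pos)
    if d <= TARGET_DISTANCE:
        return ("fire_projectile", slave_pos)  # a hit triggers the highlight effect
    step = min(TARGET_SPEED, d - TARGET_DISTANCE)  # do not overshoot the mark range
    direction = [(t - s) / d for s, t in zip(slave_pos, target_pos)]
    new_pos = [s + step * u for s, u in zip(slave_pos, direction)]
    return ("move", new_pos)
```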
In the above scheme, after the first slave object in the second form applies the reconnaissance mark to the fourth virtual object, the apparatus further includes: a sixth control module, configured to, in response to the first slave object in the second form being attacked by a fifth virtual object, control the first slave object to revert from the second form to the initial form, and control the first slave object in the initial form to release marker waves outward, so as to continuously scout the fourth virtual object through the marker waves.
In the above scheme, the apparatus further includes: a seventh control module, configured to, when the fourth virtual object moves in the virtual scene while the first slave object in the second form is applying the reconnaissance mark to it, receive a tracking instruction for the fourth virtual object issued to the first slave object in the second form; and, in response to the tracking instruction, control the first slave object in the second form to track the fourth virtual object along the tracking direction indicated by the instruction and to continuously scout the fourth virtual object.
In the above scheme, the apparatus further includes: an eighth control module, configured to display a picture of the fourth virtual object moving in the virtual scene while the first slave object in the second form is applying the reconnaissance mark to it; and, when the fourth virtual object moves to a target position blocked by an obstacle, display the fourth virtual object at the target position in perspective, visible through the obstacle.
In the above scheme, the fourth control module is further configured to, when an obstacle is detected in the virtual scene, control the first slave object in the second form to release a pulse wave toward the obstacle; and, when it is determined based on the pulse wave that the obstacle blocks the fourth virtual object, control the first slave object in the second form to apply the reconnaissance mark to the fourth virtual object and display it in perspective.
In the above scheme, the apparatus further includes: a ninth control module, configured to, while controlling the first slave object in the second form to move in the virtual scene, control it to perform material detection on the virtual scene; and, when virtual materials are detected within a fifth range of the first slave object, display type indication information of the virtual materials at their positions within the fifth range, and display position indication information of the virtual materials in a map corresponding to the virtual scene.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
a processor, configured to implement the virtual object control method provided by the embodiments of the application when executing the executable instructions stored in the memory.
An embodiment of the application provides a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the virtual object control method provided by the embodiments of the application.
An embodiment of the application provides a computer program product, including a computer program or instructions that, when executed by a processor, implement the virtual object control method provided by the embodiments of the application.
The embodiment of the application has the following beneficial effects:
by applying the embodiment of the application, in the interaction process, the first slave object subordinate to the first master object is controlled to detect the virtual scene, when the virtual object is detected, the indication information corresponding to the virtual object is displayed, the first master object can easily see and pick up the detected virtual object based on the indication information, and further, the interaction capability of the first master object in the virtual scene is improved based on the picked-up virtual object, for example, the self equipment or the construction defense architecture is upgraded by utilizing the picked-up virtual object, so that the attack capability or the defense capability is improved, and under the condition that the interaction capability of the first master object is improved, the terminal equipment can reduce the interaction times for executing the interaction operation by reaching a certain interaction purpose (such as defeating enemy and the like), thereby improving the man-machine interaction efficiency and reducing the occupation of hardware processing resources.
Drawings
Fig. 1A is an application mode schematic diagram of a control method of a virtual object according to an embodiment of the present application;
fig. 1B is an application mode schematic diagram of a control method of a virtual object according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal device 400 according to an embodiment of the present application;
fig. 3 is a flow chart of a control method of a virtual object according to an embodiment of the present application;
FIG. 4 is a schematic diagram of reconnaissance according to an embodiment of the present application;
FIG. 5 is a schematic diagram of detection according to an embodiment of the present application;
FIG. 6 is a schematic diagram of detection according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a morphological change of a slave object according to an embodiment of the present application;
FIG. 8 is a schematic diagram of reconnaissance according to an embodiment of the present application;
FIG. 9 is a schematic diagram of reconnaissance provided in an embodiment of the present application;
fig. 10 is a flowchart illustrating a method for controlling a virtual object according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, "some embodiments" describes a subset of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and these may be combined with one another without conflict.
In the following description, the terms "first", "second" and so on merely distinguish similar objects and do not denote a particular ordering. It should be understood that, where permitted, "first" and "second" may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms involved in the embodiments are explained; the following interpretations apply to these terms.
1) Client: an application running in the terminal for providing various services, such as a video playing client or a game client.
2) In response to: indicates the condition or state on which an executed operation depends; when the condition or state is satisfied, the operation or operations may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which multiple executed operations are performed.
3) Virtual scene: the scene displayed (or provided) when an application runs on the terminal. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. It may be two-dimensional, 2.5-dimensional or three-dimensional; the embodiments of the application do not limit the dimensionality of the virtual scene. For example, the virtual scene may include sky, land and sea, the land may include environmental elements such as deserts and cities, and the user may control a virtual object to move in the virtual scene.
4) Virtual objects: the images of the various people and things that can interact in a virtual scene, or movable objects in a virtual scene. A movable object may be a virtual character, a virtual animal, a cartoon character and so on, such as the characters, animals, plants, oil drums, walls and stones displayed in the virtual scene. A virtual object may be an avatar representing a user in the virtual scene. A virtual scene may include multiple virtual objects, each of which has its own shape and volume in the virtual scene and occupies part of its space.
5) Slave object: also called a summoned object or virtual pet, subordinate to a corresponding master object. It is an additional individual unit apart from the character body controlled by the player (the master object) and can execute instructions under the player's control; that is, a slave object is the image of a person or thing that can assist the virtual object in interacting with other virtual objects in the virtual scene, and it may appear as a virtual character, virtual animal, cartoon character, virtual prop, virtual vehicle and so on.
In the embodiments of the application, the first master object is the virtual object corresponding to the current user account in the virtual scene, and may also be referred to as the first virtual object; the first slave object is a summoned object subordinate to the first virtual object (i.e., the first master object). The second, third, fourth and fifth virtual objects are collective names for the virtual objects (master objects) other than the one corresponding to the current user account, together with their associated slave objects, rather than references to a single virtual object in the scene. For example, the second virtual object may include a second master object and a second slave object, and the third virtual object may include a third master object and a third slave object. As another example, if virtual objects A, B and C in the virtual scene all correspond to accounts in groups different from (hostile to) that of the first master object, then A, B and C may each be referred to as a second virtual object (or a third, fourth or fifth virtual object).
6) Scene data: the feature data of the objects in a virtual scene as represented during interaction. It may include, for example, the position of a virtual object in the virtual scene, the environment in which the virtual object is located, the waiting time required when various functions are configured in the virtual scene (depending on how many times the same function can be used within a specific time), and attribute values representing the various states of a game virtual object, such as health points and mana points.
Embodiments of the application provide a virtual object control method, apparatus, terminal device, computer-readable storage medium and computer program product that can improve human-computer interaction efficiency and reduce the occupation of hardware processing resources. To make the method easier to understand, an exemplary implementation scenario is described first. The virtual scene in the method may be output entirely by the terminal device, or output cooperatively by the terminal device and a server. In some embodiments, the virtual scene may be an environment for game characters to interact in; for example, game characters can fight in the virtual scene, and both sides can interact by controlling the actions of their characters, allowing the user to relieve the pressure of everyday life during the game.
In one implementation scenario, referring to fig. 1A, fig. 1A is a schematic diagram of an application mode of the virtual object control method provided by an embodiment of the application. This mode is suitable for applications in which the computation of data related to the virtual scene 100 can be completed entirely by the graphics processing hardware of the terminal device 400, such as a game in stand-alone/offline mode; the virtual scene is output through various types of terminal devices 400 such as smart phones, tablet computers and virtual reality/augmented reality devices. By way of example, the types of graphics processing hardware include the central processing unit (CPU) and the graphics processing unit (GPU).
When forming the visual perception of the virtual scene 100, the terminal device 400 calculates the data required for display through its graphics computing hardware, completes the loading, parsing and rendering of the display data, and outputs video frames capable of forming the visual perception of the virtual scene on its graphics output hardware; for example, two-dimensional video frames are presented on the display screen of a smart phone, or video frames realizing a three-dimensional display effect are projected onto the lenses of augmented reality/virtual reality glasses. In addition, to enrich the perceptual effect, the terminal device 400 may also form one or more of auditory, tactile, motion and gustatory perception by means of different hardware.
As an example, a client 410 (e.g., a stand-alone game application) runs on the terminal device 400 and outputs a game picture during its operation. The game picture includes at least part of the virtual scene 100 for role playing; the virtual scene 100 may be an environment for game characters to interact in, such as a plain, street or valley where the characters fight. The virtual scene 100 includes a first master object 110 and a first slave object 120 that has a subordinate relationship with the first master object; the first slave object 120 has a material detection skill for performing material detection on the virtual scene.
As an example, the terminal device controls the first slave object to attach to the first master object in response to a first instruction for the first slave object, and displays indication information corresponding to virtual materials in response to the first slave object detecting them within a first range of the first master object. The virtual materials can be picked up by the first master object, and the picked-up materials improve its interaction capability in the virtual scene. In this way, the first master object can easily view and pick up the detected virtual materials based on the indication information and thereby improve its interaction capability; with that capability improved, the number of interactive operations needed to defeat an enemy can be reduced, human-computer interaction efficiency is improved, and the occupation of hardware processing resources is reduced.
In another implementation scenario, referring to fig. 1B, fig. 1B is a schematic diagram of an application mode of the virtual object control method provided by an embodiment of the application, applied to a terminal device 400 and a server 200. This mode is suitable for applications that rely on the computing capability of the server 200 to complete the virtual scene computation and output the virtual scene on the terminal device 400. Taking the visual perception of the virtual scene 100 as an example, the server 200 computes the display data related to the virtual scene (such as scene data) and sends it to the terminal device 400 through the network 300; the terminal device 400 relies on its graphics computing hardware to load, parse and render the computed display data, and on its graphics output hardware to output the virtual scene and form the visual perception; for example, two-dimensional video frames can be presented on the display screen of a smart phone, or video frames realizing a three-dimensional display effect can be projected onto the lenses of augmented reality/virtual reality glasses. As for other forms of perception of the virtual scene, auditory perception can be formed by corresponding hardware output of the terminal device 400, for example using a speaker, and tactile perception can be formed using a vibrator.
As an example, a client 410 (e.g., a game application) runs on the terminal device 400 and outputs a game picture during its operation. The game picture includes at least part of the virtual scene 100 for role playing; the virtual scene 100 may be an environment for game characters to interact in, such as a plain, street or valley where the characters fight. The virtual scene 100 includes a first master object 110 and a first slave object 120 that has a subordinate relationship with the first master object; the first slave object 120 has a material detection skill for performing material detection on the virtual scene.
As an example, the terminal device controls the first slave object to attach to the first master object in response to a first instruction for the first slave object, and displays indication information corresponding to virtual materials in response to the first slave object detecting them within a first range of the first master object. The virtual materials can be picked up by the first master object, and the picked-up materials improve its interaction capability in the virtual scene. In this way, the first master object can easily view and pick up the detected virtual materials based on the indication information and thereby improve its interaction capability; with that capability improved, the number of interactive operations needed to defeat an enemy can be reduced, human-computer interaction efficiency is improved, and the occupation of hardware processing resources is reduced.
In some embodiments, the terminal device 400 may implement the virtual object control method provided by the embodiments of the application by running a computer program. For example, the computer program may be a native program or software module in an operating system; a native application (APP), i.e., a program that must be installed in the operating system to run, such as a shooting game APP (the client 410 described above); an applet, i.e., a program that only needs to be downloaded into a browser environment to run; or a game applet that can be embedded in any APP. In general, the computer program may be any form of application, module or plug-in.
Taking the computer program as an application as an example, in actual implementation the terminal device 400 installs and runs an application supporting virtual scenes. The application may be any one of a first-person shooter (FPS) game, a third-person shooter game, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The user uses the terminal device 400 to operate a virtual object located in the virtual scene to perform activities including, but not limited to, at least one of adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing and building virtual structures. Illustratively, the virtual object may be a virtual character, such as a simulated or cartoon character.
In other embodiments, the embodiments of the present application may also be implemented by means of cloud technology, which refers to a hosting technology that unifies hardware, software, network and other resources in a wide area network or local area network to realize the computation, storage, processing and sharing of data.
Cloud technology is a general term for the network technologies, information technologies, integration technologies, management platform technologies and application technologies applied on the basis of the cloud computing business model; it can form a resource pool that is used on demand in a flexible and convenient way. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
For example, the server 200 in fig. 1B may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data and artificial intelligence platforms. The terminal device 400 may be, but is not limited to, a smart phone, tablet computer, notebook computer, desktop computer, smart speaker or smart watch. The terminal device 400 and the server 200 may be connected directly or indirectly through wired or wireless communication, which is not limited in the embodiments of the application.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application, in practical application, the electronic device may be the terminal device 400 in fig. 1A, or may be the terminal device 400 or the server 200 in fig. 1B, and the electronic device is illustrated by taking the electronic device as the terminal device 400 shown in fig. 1A as an example. The terminal device 400 shown in fig. 2 includes: at least one processor 420, a memory 460, at least one network interface 430, and a user interface 440. The various components in terminal device 400 are coupled together by bus system 450. It is understood that bus system 450 is used to implement the connected communications between these components. The bus system 450 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various buses are labeled as bus system 450 in fig. 2.
The processor 420 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor (for example a microprocessor or any conventional processor), a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The user interface 440 includes one or more output devices 441 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 440 also includes one or more input devices 442, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 460 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 460 optionally includes one or more storage devices physically remote from processor 420.
Memory 460 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 460 described in embodiments of the present application is intended to include any suitable type of memory.
In some embodiments, memory 460 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 461 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
network communication module 462 for reaching other computing devices via one or more (wired or wireless) network interfaces 430, the exemplary network interfaces 430 comprising: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (USB, universal Serial Bus), etc.;
a presentation module 463 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 441 (e.g., a display screen, speakers, etc.) associated with the user interface 440;
an input processing module 464 for detecting one or more user inputs or interactions from one of the one or more input devices 442 and translating the detected inputs or interactions.
In some embodiments, the control device for a virtual object provided in the embodiments of the present application may be implemented in software, and fig. 2 shows the control device 465 for a virtual object stored in the memory 460, which may be software in the form of a program and a plug-in, and includes the following software modules: the first display module 4651, the first control module 4652 and the second display module 4653 are logical, so that any combination or further splitting may be performed according to the implemented functions, and functions of the respective modules will be described below.
In other embodiments, the virtual object control apparatus provided by the embodiments of the present application may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor programmed to perform the virtual object control method provided by the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may use one or more application-specific integrated circuits (ASIC), DSPs, programmable logic devices (PLD), complex programmable logic devices (CPLD), field-programmable gate arrays (FPGA), or other electronic components.
The virtual object control method provided by the embodiments of the application is described in detail below with reference to the accompanying drawings. The method may be executed by the terminal device 400 in fig. 1A alone, or cooperatively by the terminal device 400 and the server 200 in fig. 1B. In the following, the method is described taking independent execution by the terminal device 400 in fig. 1A as an example. Referring to fig. 3, fig. 3 is a flowchart of a virtual object control method provided by an embodiment of the application, which is described with reference to the steps shown in fig. 3.
It should be noted that the method shown in fig. 3 may be executed by various computer programs running on the terminal device 400 and is not limited to the client 410; it may also be the operating system 461, or the software modules and scripts described above. Therefore, the client should not be considered as limiting the embodiments of the present application.
Step 101: the terminal device displays a game picture, wherein the game picture comprises at least part of virtual scenes, the virtual scenes comprise a first main object and a first auxiliary object, the first auxiliary object and the first main object have an affiliation, and the first auxiliary object has a material detection skill and is used for detecting materials of the virtual scenes.
Here, a client supporting a virtual scene (such as a game) is installed on the terminal device, and when a user opens the client on the terminal and the terminal runs the client, the terminal device displays a game screen, where the game screen may be obtained by observing the game from a first person object perspective or from a third person perspective. The game picture comprises at least a part of virtual scenes, the virtual scenes comprise a first main object and a first auxiliary object in a first form, the first auxiliary object and the first main object have an affiliated relation, the first main object is a virtual object in the virtual scene corresponding to the current user account, and in the virtual scenes, a user can control the first main object to interact with other virtual objects (such as a second virtual object which is in different groups or is in an hostile relation with the first main object) based on an interface of the virtual scenes. The first slave object is a calling object associated with the first master object and is used for assisting the first master object to interact with other virtual objects in the virtual scene, wherein the images can be virtual characters, virtual animals, cartoon characters, virtual props, virtual carriers and the like.
Step 102: in response to a first instruction for the first slave object, the first slave object is controlled to be adsorbed on the first master object.
In some embodiments, the terminal device may control the first slave object to be adsorbed on the first master object in response to the first instruction for the first slave object by: in response to a first instruction for the first slave object, the first slave object in the initial form is summoned, and the first slave object in the initial form is deformed into the first form, and the first slave object in the first form is adsorbed onto the first master object to become a part of the first master object model is displayed.
In practical application, the terminal device may display a calling control for calling the first slave object in an interface of the virtual scene, respond to a triggering operation for the calling control, receive a calling instruction (i.e. a first instruction), respond to the calling instruction, generate and send a calling request for the first slave object to the server, wherein the calling request carries an object identifier of the first slave object to be called, the server determines relevant parameters of the first slave object requested by the calling request based on the calling request, and pushes the determined relevant parameters of the first slave object to the terminal device, so that the terminal device performs picture rendering based on the relevant parameters, and displays a rendered calling picture, such as displaying the first slave object in the calling initial form, and displaying a process that the first slave object in the initial form is converted into the first slave object in the first form and the first slave object in the first form is attached to the first master object, such as displaying a wild image and then displaying a wild image and a fragment of the first slave object attached to the first master object.
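As a minimal sketch of the summon exchange just described, the following assumes a JSON message schema and simple client/renderer interfaces; all field names and methods are hypothetical, not an actual engine API.

```python
import json

def on_summon_control_triggered(connection, slave_object_id: str):
    """Client side: the trigger on the summon control produces the first
    instruction, sent to the server with the slave object's identifier."""
    request = {"type": "summon_request", "slave_object_id": slave_object_id}
    connection.send(json.dumps(request))

def on_summon_response(renderer, payload: str):
    """Render the parameters pushed back by the server: the initial form
    appears, transforms into the first form, then attaches to the master."""
    params = json.loads(payload)
    renderer.play_summon_animation(params)          # initial form appears
    renderer.play_transform_animation(params)       # initial form -> first form
    renderer.attach_to_master(params["master_id"])  # first form attaches
```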
Typically, first slave objects in different forms assist the first master object differently. For example, the first slave object in the initial form may be an independent avatar located at some distance around the first master object; when the first master object moves in the virtual scene, the first slave object in the initial form follows it. After the first slave object in the initial form is summoned and before it changes from the initial form to the first form, the terminal device may also control it to detect a target area centered on the first slave object in the virtual scene, and display indication information of a detected target object (such as a second virtual object or virtual materials). The first slave object in the first form is attached to the first master object's arm and is less easily perceived by second virtual objects than the first slave object in the initial form, so when it is controlled to perform object reconnaissance or resource detection in the virtual scene, it is better placed to discover valuable information such as the positions of second virtual objects and the distribution of surrounding resources, which in turn improves the first master object's interaction capability.
In some embodiments, in response to the first slave object in the first form being attached to the first master object, the terminal device may control it to perform object reconnaissance on the virtual scene and act when a second virtual object having a hostile relationship with the first master object is detected within a third range of the first master object. The third range may be a circular area centered on the first master object with the target distance as its radius, a sector area with the first master object as the vertex, the target distance as the radius and the target angle as the central angle, or an area of another (possibly irregular) shape; the embodiments of the application do not limit the shape of the third range.
In practical application, the terminal device may control the first slave object in the first form to assist the first master object in interacting with second virtual objects belonging to groups different from the first master object's. To acquire the position information of a second virtual object, the first slave object in the first form can be controlled to perform object reconnaissance on the virtual scene. In practice, each second virtual object in the virtual scene (such as other virtual objects or non-user characters in groups different from the first master object's, collectively called second virtual objects) is bound to a collider component (such as a collision box or collision sphere). While the first slave object in the first form performs object reconnaissance, a probe ray is emitted from the first slave object, via the camera component on the first slave object, along the direction the first master object or first slave object faces. When the probe ray intersects the collider component bound to a second virtual object, it is determined that the first slave object has detected the second virtual object; when it does not intersect, it is determined that the second virtual object has not been detected. When a second virtual object is detected, an early warning is issued for it, and its position information, such as its distance and direction relative to the first master object, is displayed in the map of the virtual scene.
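As a rough illustration of the ray-versus-collider test described above, the following sketch checks whether a probe ray intersects a spherical collider; a game engine would normally supply this test, so the code is only a self-contained approximation under assumed conventions.

```python
import math

def ray_hits_sphere(origin, direction, center, radius) -> bool:
    """Probe-ray test: does a ray cast from the slave object intersect a
    spherical collider bound to a target? direction must be normalized;
    positions are (x, y, z) tuples."""
    to_center = [c - o for o, c in zip(origin, center)]
    t = sum(a * b for a, b in zip(to_center, direction))  # projection onto the ray
    if t < 0:
        return False  # collider lies behind the ray origin
    closest = [o + t * d for o, d in zip(origin, direction)]
    return math.dist(closest, center) <= radius
```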
Referring to fig. 4, fig. 4 is a schematic diagram of reconnaissance provided by an embodiment of the application. The first slave object in the first form is controlled to perform object reconnaissance in the virtual scene, and when a second virtual object is detected within a third range centered on the first master object or first slave object, its position information, i.e., its distance and direction relative to the first master object, is displayed in the map of the virtual scene. In addition, the position information of the second virtual object displayed in the map can also be viewed by all virtual objects in the group to which the first master object belongs. After the position information of the second virtual object is acquired, the terminal device controlling the first master object, or the terminal devices corresponding to the virtual objects in the first master object's group, can conveniently adopt the most suitable interaction strategy against the second virtual object, improving the interaction capability of the first master object or its group.
Step 103: and displaying indication information corresponding to the virtual material when the first slave object detects the virtual material in the first range of the first master object.
The first range may be a circular range area with the first main object as the center and the target distance as the radius, or a fan-shaped range area with the first main object as the vertex, the target distance as the radius and the target angle as the central angle, or may be a range area with other shapes (such as irregularities), and the shape of the first range is not limited in the embodiment of the present application.
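The two range shapes named above can be tested as follows; this is a minimal 2D sketch in which the positions, facing angle and central angle are assumed inputs.

```python
import math

def in_circular_range(center, point, radius) -> bool:
    """Circular first range: within the target distance of the master object."""
    return math.dist(center, point) <= radius

def in_sector_range(apex, facing_deg, point, radius, central_angle_deg) -> bool:
    """Sector first range: within the target distance of the apex and
    within half the central angle of the facing direction (2D, degrees)."""
    if math.dist(apex, point) > radius:
        return False
    bearing = math.degrees(math.atan2(point[1] - apex[1], point[0] - apex[0]))
    delta = (bearing - facing_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(delta) <= central_angle_deg / 2.0
```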
The material detection skill is a skill for detecting materials, when a first slave object has the material detection skill, the terminal equipment can control the first slave object in a first form to detect materials of a virtual scene through the material detection skill, in practical application, a collision device component (such as a collision box, a collision ball and the like) is bound to the virtual materials, in the process of controlling the first slave object to detect materials of the virtual scene, a detection ray is emitted from the first slave object along the direction of the first master object through a camera component on the first slave object, and when the detection ray intersects with the collision device component bound to the virtual materials, the virtual materials are determined to be detected by the first slave object; when the detected ray does not intersect a collider component bound to the virtual asset, then it is determined that the virtual asset was not detected by the first slave object.
Virtual materials that can be detected by the material detection skill include, but are not limited to: gold coins, construction materials (such as ore), food, weaponry, equipment, and character upgrade materials. When the first slave object detects a virtual material in the virtual scene, indication information of the virtual material is displayed. Based on the indication information, the terminal device can control the first master object, or other virtual objects in the group to which the first master object belongs, to pick up or mine the detected virtual material, and then use the picked-up or mined material to upgrade equipment or build virtual structures, thereby improving the interaction capability, such as attack capability or defense capability, of the first master object or of the other virtual objects in its group in the virtual scene.
In some embodiments, the terminal device may display the indication information of a detected virtual material as follows: when the virtual material is detected within the first range centered on the first master object in the virtual scene, category indication information of the virtual material is displayed at the position of the virtual material within the first range, and position indication information of the virtual material is displayed in the map corresponding to the virtual scene; at least one of the category indication information and the position indication information serves as the indication information of the virtual material.
Referring to fig. 5, fig. 5 is a schematic detection diagram provided in the embodiment of the present application. When the first slave object in the first form detects a virtual material within the first range centered on the first master object (or the first slave object) in the virtual scene, category indication information of the virtual material, such as equipment, construction, or defense, is displayed at the position of the virtual material, and position information of the virtual material, such as its distance and direction relative to the first master object or to other virtual objects in the group to which the first master object belongs, is displayed in the map of the virtual scene. The terminal device can then control the first master object, or other virtual objects in its group, to pick up the virtual material based on the indication information, thereby improving interaction capability.
In some embodiments, the terminal device may display the indication information of detected virtual materials as follows: when the number of virtual materials is at least two, the indication information of a first number of virtual materials among the at least two is displayed in a first display style, and the indication information of a second number of virtual materials among the at least two is displayed in a second display style. The first display style is different from the second display style: the first display style indicates that the first number of virtual materials are within the visual field range of the first master object, and the second display style indicates that the second number of virtual materials are outside the visual field range of the first master object.
Referring to fig. 6, fig. 6 is a schematic detection diagram provided in the embodiment of the present application. When the first slave object in the first form detects a plurality of virtual materials in the virtual scene, the indication information of the virtual materials within and outside the visual field range of the first master object can be displayed in different display styles (such as different colors or different brightness). It will be appreciated that as the field of view of the first master object changes, the display style of each virtual material may change accordingly. Displaying the indication information of virtual materials inside and outside the first master object's visual field range in different display styles gives the player a conspicuous prompt and helps the player control the first master object to select and pick up suitable virtual materials, thereby improving interaction capability.
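The style-switching behavior described above can be sketched as follows. The concrete styles and the visibility test are illustrative assumptions; the embodiment only requires that the two display styles differ (for example, in color or brightness) and that they be re-evaluated when the field of view changes.

```python
STYLE_IN_VIEW = {"color": "gold", "brightness": 1.0}      # inside the field of view
STYLE_OUT_OF_VIEW = {"color": "gray", "brightness": 0.4}  # outside it

def refresh_indication_styles(materials, in_view):
    """Re-pick the display style of every detected virtual material; called
    again whenever the first master object's field of view changes."""
    return {
        name: (STYLE_IN_VIEW if in_view(pos) else STYLE_OUT_OF_VIEW)
        for name, pos in materials.items()
    }

materials = {"ore": (3, 4), "gold_coin": (40, 0)}
in_view = lambda pos: pos[0] ** 2 + pos[1] ** 2 <= 30 ** 2   # toy visibility test
print(refresh_indication_styles(materials, in_view))
```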
In some embodiments, the terminal device may display the indication information of the detected virtual materials as follows: when the virtual materials include at least two types, the indication information of virtual materials of a target type among the at least two types is displayed in a third display style, and the indication information of virtual materials of the other types is displayed in a fourth display style. The third display style is different from the fourth display style, and the third display style indicates that the pickup priority of the virtual materials of the target type is higher than that of the virtual materials of the other types.
When multiple types of virtual materials are detected, the indication information of each type is displayed in a different display style according to its pickup priority; in particular, the indication information of the type with the highest pickup priority is highlighted, so that the virtual object can be controlled to pick up the highlighted target-type virtual materials first. In this way, the target-type virtual materials most needed by the first master object are selected from the plurality of detected virtual materials, improving the interaction capability of the first master object.
In actual implementation, the virtual materials of the target type may be determined as follows: based on the use preference of the first master object, the matching degree between each type of virtual material and the use preference is acquired, and the type with the highest matching degree is selected as the target type whose indication information is highlighted. For example, suppose the detected types include equipment, construction, and defense. The use preference of the first master object, that is, its preference for and proficiency with each type of virtual material, can be predicted by a neural network model according to the role of the first master object in the virtual scene or the types of virtual materials it has historically used. Based on this use preference, the matching degrees of the equipment-type, construction-type, and defense-type virtual materials with the use preference are determined respectively; the defense type with the highest matching degree is selected, and the indication information of the defense-type virtual materials is highlighted. In addition, the target type most suitable for the first master object may be screened from the multiple types according to at least one of the following parameters: consumption degree, pickup difficulty coefficient, and the distance between the first master object and each virtual material. In this way, the target-type virtual materials most preferred by, most needed by, and most suitable for the first master object are screened from the detected virtual materials, improving its interaction capability.
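The screening of the target type can be sketched as a simple arg-max over matching degrees. The scores below stand in for the output of the neural network model; all names and numbers are illustrative assumptions.

```python
def pick_target_type(detected_types, matching_degrees):
    """Return the material type whose matching degree with the first master
    object's use preference is highest; its indication information is then
    shown in the highlighted (third) display style."""
    return max(detected_types, key=lambda t: matching_degrees.get(t, 0.0))

# usage: equipment / construction / defense detected; defense matches best
scores = {"equipment": 0.42, "construction": 0.31, "defense": 0.87}
target = pick_target_type(["equipment", "construction", "defense"], scores)
assert target == "defense"   # indication info of defense-type materials is highlighted
```

The other screening parameters mentioned above (consumption degree, pickup difficulty, distance) could be folded into the same scoring function as weighted terms; the embodiment leaves the combination open.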
In some embodiments, in response to a second instruction for the first slave object, the terminal device may further control the first slave object to detach from the first master object and deform into the first form, and control the first slave object in the first form to perform material detection on the virtual scene; in response to the first slave object detecting a virtual material within the second range of the first slave object, indication information corresponding to the virtual material is displayed.
The second range may be a circular area centered on the first slave object with the target distance as the radius, or a fan-shaped area with the first slave object as the vertex, the target distance as the radius, and the target angle as the central angle; it may also be an area of another (for example, irregular) shape. The shape of the second range is not limited in the embodiment of the present application.
Here, when the terminal device receives the second instruction for the first slave object (for restoring the first slave object to its original state), the first slave object can be controlled to detach from the first master object and deform into the first form (such as the original form). For example, the fragments on the arm of the first master object are controlled to detach, move to a target position, and transform at that position into the monster of the cartoon image, and the first slave object in the first form (the monster of the cartoon image) is controlled to continue performing material detection on the virtual scene. When the first slave object detects a virtual material within the second range of the first slave object, the indication information of the virtual material is displayed.
In some embodiments, the terminal device may receive a third instruction for the first slave object in the first form; in response to the third instruction, the first slave object is controlled to detach from the first master object and deform into the first form, and then to deform from the first form into a second form, where the second form corresponds to the avatar of a third virtual object located within a fourth range centered on the first slave object in the virtual scene. The first slave object in the second form is controlled to move in the virtual scene so as to perform object reconnaissance on the virtual scene; in response to the first slave object detecting a fourth virtual object in the virtual scene, position information of the fourth virtual object is displayed in the map corresponding to the virtual scene. Both the third virtual object and the fourth virtual object have a hostile relationship with the first master object.
The third instruction, also called a mimicry instruction, can be triggered through a mimicry control. For example, the terminal device can display a mimicry control for the first slave object in the interface of the virtual scene and, in response to a trigger operation on the mimicry control, receive the mimicry instruction. In response to the mimicry instruction, the terminal device generates and sends a mimicry request for the first slave object to a server, the mimicry request carrying the object identifier of the first slave object. The server determines the relevant parameters of the first slave object requested to be mimicked (such as non-user characters near the first slave object, or other virtual objects in a different group from the first master object, collectively referred to as third virtual objects) and pushes the determined parameters to the terminal device. The terminal device performs picture rendering based on the parameters and displays the rendered picture, that is, the process of converting the first slave object from the first form to the initial form, and then mimicking the first slave object in the initial form into the first slave object in the second form: for example, first displaying an animation in which the fragments attached to the arm of the first master object transform into the monster of the cartoon image (the monster moves to another position after detaching from the arm), and then displaying an animation in which the monster of the cartoon image mimics into a first slave object consistent with the avatar of the third virtual object.
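The server side of this request/response flow can be sketched as follows. The message shapes and the scene query are assumptions introduced for the example; the embodiment does not prescribe a data model.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    object_id: str
    group: str
    position: tuple
    avatar: str

@dataclass
class MimicryRequest:
    slave_object_id: str     # carried in the request sent by the terminal device

def handle_mimicry(request, objects, fourth_range=15.0):
    """Server side of the mimicry flow: find the third virtual objects within
    the fourth range of the first slave object (objects in a different group
    from the first master object) and return their avatars as candidate
    second forms for rendering on the terminal device."""
    slave = next(o for o in objects if o.object_id == request.slave_object_id)
    r2 = fourth_range ** 2

    def near(o):
        dx = o.position[0] - slave.position[0]
        dy = o.position[1] - slave.position[1]
        return dx * dx + dy * dy <= r2

    return [o.avatar for o in objects if o.group != slave.group and near(o)]

scene = [
    SceneObject("slave_1", "blue", (0, 0), "cartoon_monster"),
    SceneObject("npc_7", "red", (5, 5), "soldier"),
    SceneObject("enemy_2", "red", (100, 0), "sniper"),
]
print(handle_mimicry(MimicryRequest("slave_1"), scene))   # ['soldier']
```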
When the first slave object has been mimicked into the avatar of the third virtual object, the terminal device can display a movement control for the first slave object in the second form in the interface of the virtual scene. When the user triggers the movement control, the terminal device, in response to the triggered movement instruction, controls the first slave object in the second form to move in the virtual scene and perform object reconnaissance during the movement. In practical application, a collider component (such as a collision box or a collision ball) is bound to each virtual object in the virtual scene. In the process of controlling the first slave object in the second form to perform object reconnaissance, a detection ray is emitted from the first slave object along its orientation, through the camera component on the first slave object. When the detection ray intersects the collider component bound to a virtual object, it is determined that the first slave object detects the virtual object; when the detection ray does not intersect the collider component, it is determined that the first slave object does not detect the virtual object.
In some embodiments, in response to the third instruction for the first slave object in the first form, the terminal device may control the first slave object to change directly from the first form to the second form, so as to improve mimicry efficiency.
Referring to fig. 7, fig. 7 is a schematic diagram of the form changes of the first slave object provided in an embodiment of the present application. First, in response to a summoning instruction (the first instruction), the terminal device displays the summoned first slave object in the initial form, and displays the process of converting it into the first slave object in the first form attached to the first master object: for example, first displaying the summoned monster of the cartoon image, and then displaying an animation in which the monster becomes fragments that attach to the arm of the first master object. Then, in response to the mimicry instruction (the third instruction), the terminal device displays the fragments attached to the arm detaching from it, and displays an animation in which the detached fragments, at another position, mimic directly into the first slave object corresponding to the avatar of the third virtual object (that is, the first slave object in the second form).
In this way, the first slave object is mimicked into a form corresponding to (for example, consistent with) the avatar of the third virtual object, and in that mimicked form it is not easily perceived by hostile virtual objects. Therefore, when the first slave object in this mimicked form is controlled to perform object reconnaissance or material detection in the virtual scene, it is more likely to uncover valuable information such as the positions of hostile virtual objects and the distribution of surrounding resources, which improves the interaction capability of the first master object.
In some embodiments, when the number of third virtual objects is at least two, the terminal device may determine the second form to be mimicked as follows: a form selection interface is displayed, in which the forms corresponding to the at least two selectable third virtual objects are shown; in response to a selection operation on the form of a target virtual object among the at least two third virtual objects, the form of the selected target virtual object is taken as the second form. In this way, the user can manually select the mimicry form of the first slave object, further improving the user's operation experience.
For example, the terminal device receives the mimicry instruction in response to a trigger operation on the mimicry control, and in response generates and sends a mimicry request for the first slave object to the server, the request carrying the object identifier of the first slave object. Based on the request, the server detects the third virtual objects within the fourth range centered on the first slave object in the virtual scene and returns the detection result to the terminal device. When the detection result indicates that a plurality of third virtual objects are detected, a form selection interface is displayed in the interface of the virtual scene, showing the forms of the selectable third virtual objects, such as the forms of third virtual object 1, third virtual object 2, and third virtual object 3. The user can select among the displayed forms; for example, when the user selects the form of third virtual object 2, the terminal device takes that form as the second form of the first slave object.
In some embodiments, the terminal device may predict the second form as follows: scene data corresponding to the fourth range in the virtual scene is acquired, the scene data including at least one of the non-user characters within the fourth range and the other virtual objects within the fourth range; a machine learning model is called to perform prediction processing based on the scene data to obtain the second form, the machine learning model having been trained on scene data within a sample range and annotated mimicry forms. In this way, the model predicts the form of the first slave object that maximally improves the interaction capability of the first master object or of the group to which it belongs, which improves prediction accuracy, maximizes interference with the enemy, and further improves interaction capability.
In practical application, the machine learning model may be a neural network model (such as a convolutional neural network, a deep convolutional neural network, or a fully connected neural network), a decision tree model, a gradient boosting tree, a multi-layer perceptron, a support vector machine, or the like; the type of the machine learning model is not limited in the embodiment of the present application.
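As an illustration only, the following sketch trains a decision tree (one of the model types listed above) to predict the second form from toy scene features. It assumes scikit-learn is available; the features, data, and labels are invented for the example and are not part of the embodiment.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training set. Features per sample range: [non-user characters in the
# fourth range, hostile virtual objects in the fourth range, obstacles in the
# fourth range]; label: index of the annotated (best) mimicry form.
X_train = [[3, 0, 1], [0, 2, 4], [1, 1, 2], [0, 4, 0]]
y_train = [0, 1, 0, 1]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

def predict_second_form(scene_features):
    """Map scene data for the fourth range to the predicted second form."""
    return int(model.predict([scene_features])[0])

print(predict_second_form([0, 3, 1]))   # -> 1 with this toy tree
```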
In some embodiments, before displaying the position information of the fourth virtual object in the map corresponding to the virtual scene, the terminal device may further control the first slave object in the second form to perform a reconnaissance mark on the fourth virtual object so as to highlight it.
Here, when the first slave object has been mimicked to be consistent with the avatar of the third virtual object, the terminal device may display a reconnaissance control for the first slave object in the second form in the interface of the virtual scene. When the user triggers the reconnaissance control, the terminal device, in response to the triggered reconnaissance instruction, controls the first slave object in the second form to move in the virtual scene and perform object reconnaissance. In practical application, a collider component (such as a collision box or a collision ball) is bound to each fourth virtual object in the virtual scene (such as non-user characters or other virtual objects in a different group from the first master object, collectively referred to as fourth virtual objects). In the process of controlling the first slave object in the second form to perform object reconnaissance, a detection ray is emitted from the first slave object along its orientation, through the camera component on the first slave object. When the detection ray intersects the collider component bound to a fourth virtual object, it is determined that the first slave object detects the fourth virtual object; when the detection ray does not intersect the collider component, it is determined that the first slave object does not detect the fourth virtual object.
In some embodiments, the terminal device may control the first slave object of the second modality to scout the fourth virtual object to highlight the fourth virtual object by: controlling a first slave object in the second form to lock a fourth virtual object, and acquiring the distance between the first slave object and the fourth virtual object; when the distance between the first slave object and the fourth virtual object exceeds the target distance for performing the reconnaissance mark, the terminal equipment can also control the first slave object in the second form to move to the position where the fourth virtual object is located at the target speed until the distance between the first slave object and the fourth virtual object does not exceed the target distance for performing the reconnaissance mark; when the distance between the first slave object and the fourth virtual object does not exceed the target distance for performing the reconnaissance mark, controlling the first slave object of the second form to emit an emission object to the fourth virtual object; when the fourth virtual object is hit by the emission object, special effect elements are displayed in the association range of the fourth virtual object so as to highlight the fourth virtual object.
Here, when the fourth virtual object is detected, the terminal device may first control the first slave object in the second form to lock onto it and detect the distance between them. When the distance is greater than the target distance for performing the reconnaissance mark (which can be set according to the actual situation), the first slave object is controlled to track the fourth virtual object at a faster moving speed; once the distance no longer exceeds the target distance, the first slave object is controlled to launch the emission object. Referring to fig. 7, a schematic reconnaissance diagram provided by an embodiment of the present application, the first slave object is controlled to release an aiming infrared ray or laser, whose length corresponds to the distance between the first slave object and the fourth virtual object and changes as that distance changes. When the emission object (the aiming infrared ray or laser) hits the collider component (such as a collision box or a collision ball) bound to the fourth virtual object, the fourth virtual object is determined to be hit. At this time, a special effect element is displayed within the association range of the hit fourth virtual object, for example, an added special effect element wrapping the periphery of the fourth virtual object; the special effect element can change the skin material, color, and so on of the fourth virtual object so as to highlight it, so that the first master object, or all members of the group to which it belongs, can view the position information of the fourth virtual object. With this position information acquired, the first master object and the virtual objects in its group can formulate an interaction strategy capable of causing the maximum damage to the fourth virtual object and execute the corresponding interaction operations, thereby improving the interaction capability of the first master object or of all members of its group.
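The lock/approach/emit sequence can be sketched as a small control loop. Everything below (the classes, the fixed hit assumption, the numeric parameters) is an illustrative assumption rather than the embodiment's implementation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class FourthVirtualObject:
    x: float
    y: float
    effects: list = field(default_factory=list)

@dataclass
class SlaveObject:
    x: float
    y: float

    def distance_to(self, t) -> float:
        return math.hypot(t.x - self.x, t.y - self.y)

    def move_towards(self, t, step: float) -> None:
        d = self.distance_to(t)
        if d > 0.0:
            self.x += (t.x - self.x) / d * min(step, d)
            self.y += (t.y - self.y) / d * min(step, d)

def scout_mark(slave, target, mark_distance=5.0, target_speed=12.0, dt=1 / 60):
    """Lock onto the fourth virtual object, close in while the distance exceeds
    the marking distance, then fire the emission object (aiming ray); on a hit,
    attach the highlight special effect within the target's association range."""
    while slave.distance_to(target) > mark_distance:
        slave.move_towards(target, target_speed * dt)   # track at target speed
    target.effects.append("highlight")   # assume the aiming ray hits the collider
    return True

slave, enemy = SlaveObject(0.0, 0.0), FourthVirtualObject(30.0, 40.0)
scout_mark(slave, enemy)
print(enemy.effects)   # ['highlight'] -> position viewable by the whole group
```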
In some embodiments, after the terminal device controls the first slave object in the second form to perform the reconnaissance mark on the fourth virtual object, in response to the first slave object in the second form being attacked by a fifth virtual object, the first slave object can be controlled to revert from the second form to the initial form, and the first slave object in the initial form is controlled to release a marker wave to its surroundings, so that the fourth virtual object continues to be reconnaissance-marked through the marker wave and its position information is displayed in the map corresponding to the virtual scene.
Referring to fig. 8, fig. 8 is a schematic reconnaissance diagram provided in the embodiment of the present application. After the first slave object in the second form performs the reconnaissance mark on the fourth virtual object to highlight it (that is, successfully marks the target), the terminal device, in response to the first slave object in the second form being attacked by a fifth virtual object (any hostile object), may further control the first slave object to revert from the second form to the initial form (for example, from the second form consistent with the avatar of the fourth virtual object back to the initial cartoon-image form), and control the first slave object in the initial form to release a marker wave (such as a pulse wave) toward its surroundings or toward the fourth virtual object. Through the marker wave, the position information of the fourth virtual object is displayed in the map of the virtual scene, so that it can be viewed by the first master object or the virtual objects in the group to which the first master object belongs.
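The marker-wave fallback can be sketched as a radius query that pushes marked targets to the map. The names and the map representation are assumptions introduced for the example.

```python
from dataclasses import dataclass, field

@dataclass
class MarkedTarget:
    name: str
    pos: tuple
    effects: list = field(default_factory=list)

def release_marker_wave(slave_pos, wave_radius, targets, scene_map):
    """The initial-form first slave object releases a pulse wave; every
    previously marked fourth virtual object caught inside the wave has its
    position pushed to the map of the virtual scene."""
    r2 = wave_radius ** 2
    for t in targets:
        dx, dy = t.pos[0] - slave_pos[0], t.pos[1] - slave_pos[1]
        if dx * dx + dy * dy <= r2 and "highlight" in t.effects:
            scene_map[t.name] = t.pos

scene_map = {}
marked = MarkedTarget("enemy_1", (5.0, 5.0), effects=["highlight"])
release_marker_wave((0.0, 0.0), 20.0, [marked], scene_map)
print(scene_map)   # {'enemy_1': (5.0, 5.0)}
```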
In some embodiments, in the process of controlling the first slave object in the second form to perform the reconnaissance mark on the fourth virtual object, when the fourth virtual object moves in the virtual scene, the terminal device may receive a tracking instruction of the first slave object in the second form for the fourth virtual object; in response to the tracking instruction, the first slave object in the second form is controlled to track the fourth virtual object along the tracking direction indicated by the instruction and to continuously reconnoiter it.
After the first slave object highlights the detected fourth virtual object, if the fourth virtual object moves in the virtual scene, the first slave object is controlled to track it, that is, to move along with the movement of the fourth virtual object, and to continuously reconnoiter it, that is, to continuously release the aiming infrared ray or laser at the fourth virtual object so that it remains highlighted. The fourth virtual object is thus kept exposed within the visual field of the first master object or of the virtual objects in its group, which helps them formulate an interaction strategy capable of causing the maximum damage to the fourth virtual object and execute the corresponding interaction operations, thereby improving the interaction capability of the first master object or of its group.
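The tracking behavior, following the moving target while re-emitting the aiming ray each frame, can be sketched as follows, again under assumed names and with simplified 2-D movement.

```python
def track_and_mark(slave_pos, target_path, speed, dt=1 / 60):
    """Follow the fourth virtual object along its movement while keeping the
    continuous reconnaissance mark (the aiming ray) on it each frame; returns
    the slave object's trail."""
    trail = []
    x, y = slave_pos
    for tx, ty in target_path:                 # successive target positions
        dx, dy = tx - x, ty - y
        dist = (dx * dx + dy * dy) ** 0.5
        step = min(speed * dt, dist)
        if dist > 0.0:
            x, y = x + dx / dist * step, y + dy / dist * step
        trail.append((x, y))                   # aiming ray re-aimed at (tx, ty)
    return trail

print(track_and_mark((0.0, 0.0), [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)], speed=60.0))
```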
In some embodiments, in the process of controlling the first slave object in the second form to perform the reconnaissance mark on the fourth virtual object, the terminal device may display a picture of the fourth virtual object moving in the virtual scene; when the fourth virtual object moves to a target position blocked by an obstacle, the fourth virtual object at the target position is displayed in perspective (that is, rendered visible through the obstacle).
Referring to fig. 9, fig. 9 is a schematic reconnaissance diagram provided in the embodiment of the present application. After the first slave object highlights the reconnoitered fourth virtual object with the reconnaissance mark, if the fourth virtual object moves in the virtual scene to a place blocked by an obstacle (such as a wall), the fourth virtual object blocked by the obstacle is displayed in perspective; that is, even though it is blocked, it remains highlighted and visible to the first master object and all virtual objects in the group to which the first master object belongs, and its position information can still be displayed in the map of the virtual scene. The fourth virtual object thus stays exposed within the visual field of the first master object and all virtual objects in its group, which helps all members of the group formulate an interaction strategy capable of causing the maximum damage to the fourth virtual object and execute the corresponding interaction operations, thereby improving the interaction efficiency and interaction capability of the first master object or of its group.
In some embodiments, when the fourth virtual object is detected in the virtual scene, the terminal device may implement the reconnaissance mark by controlling the first slave object in the second form as follows: when an obstacle is detected in the virtual scene, the first slave object in the second form is controlled to release a pulse wave at the obstacle; when it is determined, based on the pulse wave, that the obstacle shields the fourth virtual object, the first slave object in the second form is controlled to perform the reconnaissance mark on the fourth virtual object and to display it in perspective.
In practical application, a collider component (such as a collision box or a collision ball) is bound to each obstacle. In the process of controlling the first slave object to perform object reconnaissance in the virtual scene, it can first be detected whether an obstacle exists: a detection ray is emitted from the first slave object along its orientation, through the camera component on the first slave object, and when the detection ray intersects the collider component bound to an obstacle, it is determined that the first slave object detects the obstacle. At this time, it is further detected whether a fourth virtual object is hidden behind the obstacle; when it is determined that one is, the first slave object is controlled to perform the reconnaissance mark on the fourth virtual object and to display it in perspective. In this way, even when blocked by the obstacle, the fourth virtual object remains highlighted and visible to the first master object and all virtual objects in its group, and stays exposed within their visual field, which helps them formulate an interaction strategy capable of causing the maximum damage to the fourth virtual object and execute the corresponding interaction operations, thereby improving interaction efficiency and interaction capability.
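The occlusion check, deciding whether the fourth virtual object hides behind an obstacle so that the perspective highlight is needed, can be approximated by a segment-versus-box test such as the 2-D slab test below. The embodiment does not specify the pulse-wave mechanism, so this is only an analogous sketch under assumed names.

```python
def segment_crosses_aabb(p0, p1, box_min, box_max):
    """Slab test: does the segment from p0 to p1 pass through the obstacle's
    axis-aligned bounding box (its collider component)? 2-D for brevity."""
    tmin, tmax = 0.0, 1.0
    for a in range(2):
        d = p1[a] - p0[a]
        if abs(d) < 1e-9:                      # segment parallel to this slab
            if p0[a] < box_min[a] or p0[a] > box_max[a]:
                return False
            continue
        t0 = (box_min[a] - p0[a]) / d
        t1 = (box_max[a] - p0[a]) / d
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
        if tmin > tmax:
            return False
    return True

slave, enemy = (0.0, 0.0), (10.0, 0.0)
wall_min, wall_max = (4.0, -2.0), (6.0, 2.0)
if segment_crosses_aabb(slave, enemy, wall_min, wall_max):
    print("fourth virtual object occluded: keep highlight, render in perspective")
```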
In some embodiments, in the process of controlling the first slave object in the second form to move in the virtual scene, the terminal device may further control it to perform material detection on the virtual scene; when a virtual material is detected within the fifth range of the first slave object, category indication information of the virtual material is displayed at the position of the virtual material within the fifth range, and position indication information of the virtual material is displayed in the map corresponding to the virtual scene.
Here, when the form of the first slave object has been mimicked to correspond to the avatar of the third virtual object, the terminal device may further control the first slave object in the mimicked state (that is, the second form) to detect the nearby area. When a virtual material is detected, its category indication information (such as equipment, construction, or defense) can be displayed at its position, and its position information, such as its distance and direction relative to the first master object or the first slave object, can be displayed in the map of the virtual scene. The terminal device can then control the first master object to pick up the virtual material based on the indication information, thereby improving interaction capability.
It can be understood that in the embodiment of the present application, the relevant data involved, such as login accounts and scene data, are essentially user-related data. When the embodiment of the present application is applied to a specific product or technology, the user's permission or consent must be obtained, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario is described. Taking a shooting game as an example, the first slave object is an avatar controlled in the game by the player (that is, the first master object) to assist the player in interaction. The same first slave object can take different forms, and the terminal device can control it to change among them; the first slave object in each form corresponds to different instructions, which are used to control the first slave object in the corresponding form to execute corresponding operations, thereby bringing corresponding assistance to the player.
Referring to fig. 10, fig. 10 is a flowchart of a method for controlling a virtual object according to an embodiment of the present application, where the method includes:
Step 201: the terminal device, in response to a summoning instruction, generates and sends a summoning request for the first slave object to the server.
Here, a summoning control for summoning the first slave object may be displayed in the game interface. When the user triggers the summoning control, the terminal device receives a summoning instruction in response to the trigger operation and, in response to the summoning instruction, generates a summoning request carrying the object identifier of the first slave object to be summoned.
Step 202: the server determines and returns relevant parameters of the first slave object requested to be summoned by the summoning request to the terminal equipment based on the summoning request.
Step 203: the terminal equipment performs picture rendering based on the related parameters, displays the called first slave object in the initial form, and displays the process of converting the first slave object in the initial form into the first slave object in the first form and attaching the first slave object in the first form to the first master object.
Here, the terminal device performs picture rendering based on the relevant parameters and displays the rendered summoning picture: for example, first displaying the summoned monster of the cartoon image, and then displaying an animation in which the monster becomes fragments that attach to the arm of the first master object (the player).
After the first slave object in the initial form is summoned and before it changes from the initial form to the first form, the terminal device may also control the first slave object in the initial form to detect the virtual scene (for example, a target area centered on the first slave object), and display indication information of a detected target object (such as a virtual object or a virtual material).
Step 204: the terminal equipment controls a first slave object of the first form to scout the virtual scene, and when the target object is scout, the terminal equipment displays indication information corresponding to the target object.
The target object includes at least one of a second virtual object and a virtual material. When a second virtual object (an enemy) is detected, its position information, such as its distance and direction relative to the first master object, is displayed in the game map. When the first master object is equipped with the material detection skill, the terminal device can control the first slave object in the first form to perform material detection on the virtual scene through the skill, and when a virtual material (such as gold coins, construction materials (such as ore), food, weaponry, equipment, or character upgrade materials) is detected, its indication information (such as its type and position) is displayed.
In practical application, when the number of detected target objects is two or more, each target object can be displayed in a different display style (such as a different color or brightness) according to its characteristics. For example, when the target objects are second virtual objects, each one is displayed in a different style according to its distance from the first master object; when the target objects are virtual materials, the indication information of the materials inside and outside the visual field range of the first master object is displayed in different styles (such as different colors or brightness).
Step 205: the terminal device responds to the mimicry instruction and generates and transmits a mimicry request for the first slave object to the server.
The mimicry instruction can be triggered through a mimicry control. For example, the terminal device can display a mimicry control for the first slave object in the interface of the virtual scene and, in response to a trigger operation on the control, receive the mimicry instruction; in response to the mimicry instruction, it generates and sends a mimicry request for the first slave object to the server, the request carrying information such as the object identifier of the first slave object and the form the first slave object is currently in.
Step 206: the server determines and returns the mimicry information of the first slave object requested to be mimicry by the mimicry request to the terminal device based on the mimicry request.
The mimicry information may be information about the second virtual objects (including non-user characters or other virtual objects in a different group from the first master object) within a range centered on the first slave object.
Step 207: the terminal equipment performs picture rendering based on the mimicry information, and displays the animation of the first slave object, wherein the first slave object is converted into an initial form from a first form, and the first slave object in the initial form is mimicry to the first slave object in a second form.
For example, an animation is first displayed in which the fragments attached to the arm of the first master object transform into the monster of the cartoon image (the monster moves to another position after detaching from the arm of the first master object), and then an animation is displayed in which the monster of the cartoon image mimics into a first slave object consistent with the avatar of the second virtual object.
Step 208: the terminal equipment controls the first slave object of the second form to perform object reconnaissance on the virtual scene.
In practical application, a collider component (such as a collision box or a collision ball) is bound to each second virtual object in the virtual scene (such as non-user characters or other virtual objects in a different group from the first master object, collectively referred to as second virtual objects). In the process of controlling the first slave object in the second form to perform object reconnaissance on the virtual scene, a detection ray is emitted from the first slave object along its orientation, through the camera component on the first slave object. When the detection ray intersects the collider component bound to a second virtual object, it is determined that the first slave object detects the second virtual object; when the detection ray does not intersect the collider component, it is determined that the first slave object does not detect the second virtual object.
Step 209: when the second virtual object is detected, the first slave object in the second form is controlled to perform detection marking on the second virtual object so as to highlight the second virtual object.
Here, when the second virtual object is detected, the terminal device may first control the first slave object in the second form to lock onto it and detect the distance between them. When the distance is greater than the target distance for performing the reconnaissance mark (which can be set according to the actual situation), the first slave object is controlled to track the second virtual object at a faster moving speed until the distance no longer exceeds the target distance; the first slave object is then controlled to release an aiming infrared ray or laser, whose length corresponds to the distance between the first slave object and the second virtual object and changes as that distance changes. When the emission object (the aiming infrared ray or laser) hits the collider component (such as a collision box or a collision ball) bound to the second virtual object, the second virtual object is determined to be hit. At this time, a special effect element is displayed within the association range of the second virtual object, for example, an added special effect element wrapping its periphery; the special effect element can change the skin material, color, and so on of the second virtual object so as to highlight it, so that all members of the group to which the first master object belongs can view the position information of the second virtual object.
In practical application, after the first slave object in the second form performs the reconnaissance mark on the second virtual object to highlight it (that is, successfully marks the target), the terminal device may further control the first slave object to revert from the second form to the initial form (for example, from the second form consistent with the avatar of the second virtual object back to the initial cartoon-image form), control the first slave object in the initial form to release a marker wave (such as a pulse wave) toward its surroundings or toward the second virtual object, and display the position information of the second virtual object in the map of the virtual scene through the marker wave, so that all members of the group to which the first master object belongs can view it.
In the above manner, during interaction, the first slave object associated with the first master object is controlled to perform material detection or object reconnaissance on the virtual scene. When a virtual material is detected, its indication information is displayed; based on the indication information, the first master object can easily view and pick up the detected material, then use it to improve its interaction capability in the virtual scene, for example by upgrading its equipment or building defensive structures to improve its attack or defense capability. When a second virtual object is detected, its position information is displayed for all members of the group to which the first master object belongs, which helps them formulate an interaction strategy capable of causing the maximum damage to the second virtual object and execute the corresponding interaction operations, further improving the group's interaction capability. With improved interaction capability, the number of interaction operations needed to defeat the adversary can be reduced, which improves human-computer interaction efficiency and reduces the consumption of hardware processing resources.
The following continues the description of an exemplary structure of the virtual object control apparatus 465 provided by the embodiments of the present application implemented as software modules. In some embodiments, the software modules stored in the virtual object control apparatus 465 of the memory 460 in fig. 2 may include: a first display module 4651, configured to display a game screen, the game screen including at least part of a virtual scene, the virtual scene including a first master object and a first slave object, where the first slave object has a subordinate relationship with the first master object and has a material detection skill for performing material detection on the virtual scene; a first control module 4652, configured to control, in response to a first instruction for the first slave object, the first slave object to be adsorbed onto the first master object and become part of the first master object's model; and a second display module 4653, configured to display, in response to the first slave object detecting a virtual material within a first range of the first master object, indication information corresponding to the virtual material, where the virtual material is to be picked up by the first master object and the picked-up virtual material is used to improve the interaction capability of the first master object in the virtual scene.
In some embodiments, the apparatus further comprises: the second control module is used for responding to a second instruction aiming at the first slave object, controlling the first slave object to be separated from the first master object and deformed into a first form, and controlling the first slave object in the first form to detect materials of the virtual scene; and in response to the first slave object detecting the virtual material in the second range of the first slave object, displaying indication information corresponding to the virtual material.
In some embodiments, the second display module is further configured to, when a virtual material is detected in a second range in the virtual scene centered on the first main object, display, at the location of the virtual material within the second range, category indication information of the virtual material, and display, in a map corresponding to the virtual scene, position indication information of the virtual material; and take at least one of the category indication information and the position indication information as the indication information corresponding to the virtual material.
In some embodiments, the second display module is further configured to display, when the number of virtual materials is at least two, indication information of a first number of virtual materials in the at least two virtual materials in a first display style, and display, in a second display style, indication information of a second number of virtual materials in the at least two virtual materials; the first display style is different from the second display style, the first display style characterizes that the first number of virtual materials are located in the visual field range of the first main object, and the second display style characterizes that the second number of virtual materials are located outside the visual field range of the first main object.
In some embodiments, the second display module is further configured to display, when the types of the virtual materials are at least two, indication information of a virtual material of a target type in the at least two virtual materials by using a third display style, and display indication information of a virtual material of another type except the target type in the at least two virtual materials by using a fourth display style; the third display style is different from the fourth display style, and characterizes the pickup priority of the virtual materials of the target type, which is higher than the pickup priority of the virtual materials of other types.
In some embodiments, after the controlling the first slave object to adsorb on the first master object, the apparatus further comprises: the third control module is used for controlling the first slave object to perform object reconnaissance on the virtual scene; and when the first slave object detects a second virtual object with hostile relation with the first master object in a third range of the first master object, displaying the position information of the second virtual object in a map corresponding to the virtual scene.
In some embodiments, after the controlling the first slave object to adsorb on the first master object, the method further comprises: a fourth control module, configured to control, in response to a third instruction for the first slave object, the first slave object to detach from the first master object and deform into a first form, and control the first slave object to deform from the first form into a second form; wherein the second form corresponds to an avatar of a third virtual object in the virtual scene located within a fourth range centered on the first slave object; controlling the first slave object in the second form to move in the virtual scene so as to perform object reconnaissance on the virtual scene; in response to the first slave object detecting a fourth virtual object in the virtual scene, displaying position information of the fourth virtual object in a map corresponding to the virtual scene; wherein the third virtual object and the fourth virtual object each have a hostile relationship with the first master object.
In some embodiments, when the number of third virtual objects is at least two, the apparatus further comprises: a form determining module, configured to display a form selection interface and display, in the form selection interface, the forms corresponding to the at least two selectable third virtual objects; and in response to a selection operation on the form of a target virtual object among the at least two third virtual objects, take the form of the selected target virtual object as the second form.
In some embodiments, the apparatus further comprises: a form prediction module, configured to acquire scene data corresponding to the fourth range in the virtual scene, the scene data including at least one of the non-user characters within the fourth range and the other virtual objects within the fourth range; and call a machine learning model to perform prediction processing based on the scene data to obtain the second form; the machine learning model is trained based on scene data within a sample range and annotated mimicry forms.
In some embodiments, before displaying the position information of the fourth virtual object in the map corresponding to the virtual scene, the apparatus further comprises: a fifth control module, configured to control the first slave object in the second form to perform a reconnaissance mark on the fourth virtual object so as to highlight the fourth virtual object.
In some embodiments, the fifth control module is further configured to control the first slave object in the second form to lock onto the fourth virtual object and acquire the distance between the first slave object and the fourth virtual object; control the first slave object in the second form to launch an emission object at the fourth virtual object when the distance between the first slave object and the fourth virtual object does not exceed the target distance for performing the reconnaissance mark; and when the fourth virtual object is hit by the emission object, display a special effect element within the association range of the fourth virtual object so as to highlight the fourth virtual object.
In some embodiments, the fifth control module is further configured to, when the distance between the first slave object and the fourth virtual object exceeds the target distance for performing the reconnaissance mark, control the first slave object in the second form to move at the target speed to the position of the fourth virtual object until the distance between them no longer exceeds the target distance.
In some embodiments, after the first slave object in the second form performs the reconnaissance mark on the fourth virtual object, the apparatus further comprises: a sixth control module, configured to, in response to the first slave object in the second form being attacked by a fifth virtual object, control the first slave object to revert from the second form to the initial form, and control the first slave object in the initial form to release a marker wave to its surroundings, so as to continuously reconnoiter the fourth virtual object through the marker wave.
In some embodiments, the apparatus further comprises: a seventh control module, configured to receive, when the fourth virtual object moves in the virtual scene during the process of controlling the first slave object in the second form to perform the reconnaissance mark on the fourth virtual object, a tracking instruction of the first slave object in the second form for the fourth virtual object; and in response to the tracking instruction, control the first slave object in the second form to track the fourth virtual object along the tracking direction indicated by the tracking instruction and to continuously reconnoiter the fourth virtual object.
In some embodiments, the apparatus further comprises: an eighth control module, configured to display a picture of the fourth virtual object moving in the virtual scene during the process of controlling the first slave object in the second form to perform the reconnaissance mark on the fourth virtual object; and when the fourth virtual object moves to a target position blocked by an obstacle, display the fourth virtual object at the target position in perspective.
In some embodiments, the fourth control module is further configured to, when an obstacle is detected in the virtual scene, control the first slave object in the second form to release a pulse wave to the obstacle; and when it is determined, based on the pulse wave, that the obstacle blocks the fourth virtual object, control the first slave object in the second form to perform reconnaissance marking on the fourth virtual object and display the fourth virtual object in perspective.
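One plausible reading of the pulse-wave check is a line-of-sight test: the scout pings the obstacle and flags the target for perspective rendering only if the obstacle actually sits between them. The sketch below models the obstacle as a circle (cx, cy, r) and substitutes a segment-circle intersection for a real engine raycast; all of that is an assumption:

```python
import math
from typing import Tuple

def obstacle_blocks(slave, target, obstacle: Tuple[float, float, float]) -> bool:
    """Does the circular obstacle intersect the scout->target line of sight?"""
    ox, oy, r = obstacle
    sx, sy, tx, ty = slave.x, slave.y, target.x, target.y
    seg_dx, seg_dy = tx - sx, ty - sy
    seg_len2 = seg_dx * seg_dx + seg_dy * seg_dy or 1.0
    # Closest point on the scout->target segment to the obstacle centre.
    t = max(0.0, min(1.0, ((ox - sx) * seg_dx + (oy - sy) * seg_dy) / seg_len2))
    cx, cy = sx + t * seg_dx, sy + t * seg_dy
    return math.hypot(ox - cx, oy - cy) <= r

def pulse_and_mark(slave, target, obstacle: Tuple[float, float, float]) -> None:
    """Release the 'pulse wave'; if the obstacle blocks the target, mark the
    target and flag it for perspective (see-through) rendering."""
    if obstacle_blocks(slave, target, obstacle):
        target.highlighted = True           # reconnaissance-marked
        target.render_through_walls = True  # hypothetical renderer flag
```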
In some embodiments, the apparatus further comprises: a ninth control module, configured to, in the process of controlling the first slave object in the second form to move in the virtual scene, control the first slave object in the second form to perform material detection on the virtual scene; and when a virtual material is detected within the fifth range of the first slave object in the virtual scene, display category indication information corresponding to the virtual material at the position of the virtual material in the fifth range, and display position indication information of the virtual material in the map corresponding to the virtual scene.
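Finally, the ninth control module's material sweep is a range query plus two kinds of indication. A sketch under obvious assumptions (materials as dicts, the minimap as a plain mapping of id to position):

```python
import math
from typing import Dict, List, Tuple

def detect_materials(slave, materials: List[dict], fifth_range: float,
                     minimap: Dict[str, Tuple[float, float]]) -> None:
    """Surface every virtual material within the fifth range of the moving
    second-form scout: show its category in-world and pin it on the map."""
    for m in materials:
        if math.hypot(m["x"] - slave.x, m["y"] - slave.y) <= fifth_range:
            m["category_visible"] = True         # category indication at its position
            minimap[m["id"]] = (m["x"], m["y"])  # position indication on the map
```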
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the virtual object control method according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored therein executable instructions which, when executed by a processor, cause the processor to perform the method provided by the embodiments of the present application, for example, the method shown in fig. 3.
In some embodiments, the computer-readable storage medium may be an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM, or may be any device including one of or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
As an example, the executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
The foregoing descriptions are merely exemplary embodiments of the present application and are not intended to limit the scope of the present application. Any modification, equivalent replacement, or improvement made within the spirit and scope of the present application shall fall within the protection scope of the present application.

Claims (20)

1. A method for controlling a virtual object, the method comprising:
displaying a game picture, wherein the game picture comprises at least part of a virtual scene, the virtual scene comprises a first master object and a first slave object, the first slave object has a subordinate relationship with the first master object, and the first slave object has a material detection skill for performing material detection on the virtual scene;
controlling the first slave object to be adsorbed on the first master object in response to a first instruction for the first slave object;
displaying, in response to the first slave object detecting a virtual material within a first range of the first master object, indication information corresponding to the virtual material;
wherein the virtual material is configured to be picked up by the first master object, and the picked-up virtual material is used to improve the interaction capability of the first master object in the virtual scene.
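For illustration only (not part of the claims), the claim-1 flow can be read as the sketch below. The shared-position model of "adsorption" and all names are assumptions:

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class GameObject:
    x: float
    y: float

def adsorb_and_detect(slave: GameObject, master: GameObject,
                      materials: List[dict], first_range: float) -> List[dict]:
    """First instruction: the slave adsorbs onto the master (modelled as
    sharing its position), then reports materials within the first range."""
    slave.x, slave.y = master.x, master.y
    found = [m for m in materials
             if math.hypot(m["x"] - master.x, m["y"] - master.y) <= first_range]
    for m in found:
        m["indication_visible"] = True  # indication information is displayed
    return found
```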
2. The method of claim 1, wherein the method further comprises:
in response to a second instruction for the first slave object, controlling the first slave object to be separated from the first master object and deformed into a first form, and controlling the first slave object in the first form to perform material detection on the virtual scene;
and in response to the first slave object detecting the virtual material in the second range of the first slave object, displaying indication information corresponding to the virtual material.
3. The method of claim 1, wherein the displaying, in response to the first slave object detecting a virtual material within a first range of the first master object, indication information corresponding to the virtual material comprises:
in response to the first slave object detecting a virtual material within a first range centered on the first master object, displaying category indication information of the virtual material at a position of the virtual material in the first range, and displaying position indication information of the virtual material in a map corresponding to the virtual scene;
and taking at least one of the category indication information and the position indication information as indication information corresponding to the virtual material.
4. The method of claim 1, wherein displaying the indication information corresponding to the virtual material comprises:
when the number of the virtual materials is at least two, displaying indication information of a first number of virtual materials among the at least two virtual materials in a first display style, and displaying indication information of a second number of virtual materials among the at least two virtual materials in a second display style;
wherein the first display style is different from the second display style, the first display style characterizes that the first number of virtual materials are located within the visual field range of the first master object, and the second display style characterizes that the second number of virtual materials are located outside the visual field range of the first master object.
5. The method of claim 1, wherein displaying the indication information corresponding to the virtual material comprises:
when the virtual materials are of at least two types, displaying indication information of virtual materials of a target type among the at least two virtual materials in a third display style, and displaying indication information of virtual materials of types other than the target type among the at least two virtual materials in a fourth display style;
wherein the third display style is different from the fourth display style, and the third display style characterizes that the pickup priority of the virtual materials of the target type is higher than the pickup priority of the virtual materials of the other types.
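Claims 4 and 5 describe two independent style choices: in or out of the master's visual field, and target-type pickup priority. For illustration only, a sketch with hypothetical style names that reduces the visual field to a simple radius (master is any object with x and y attributes):

```python
import math

def visibility_style(material: dict, master, view_range: float) -> str:
    """Claim 4: one style inside the master's visual field, another outside."""
    in_view = math.hypot(material["x"] - master.x,
                         material["y"] - master.y) <= view_range
    return "in_view_style" if in_view else "out_of_view_style"

def priority_style(material: dict, target_types: set) -> str:
    """Claim 5: target-type materials signal a higher pickup priority."""
    return "priority_style" if material["type"] in target_types else "default_style"
```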
6. The method of claim 1, wherein after the controlling the first slave object to be adsorbed on the first master object, the method further comprises:
controlling the first slave object to perform object reconnaissance on the virtual scene;
and when the first slave object detects, within a third range of the first master object, a second virtual object having a hostile relationship with the first master object, displaying the position information of the second virtual object in a map corresponding to the virtual scene.
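Claim 6's object reconnaissance is the hostile-object counterpart of material detection: a range query around the master that pins detected enemies on the map. A sketch for illustration only, under the same assumptions as above:

```python
import math
from typing import Dict, List, Tuple

def object_reconnaissance(master, enemies: List[dict], third_range: float,
                          minimap: Dict[str, Tuple[float, float]]) -> None:
    """Pin every hostile object within the third range of the master on the map."""
    for e in enemies:
        if math.hypot(e["x"] - master.x, e["y"] - master.y) <= third_range:
            minimap[e["id"]] = (e["x"], e["y"])  # position information on the map
```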
7. The method of claim 1, wherein after the controlling the first slave object to be adsorbed on the first master object, the method further comprises:
in response to a third instruction for the first slave object, controlling the first slave object to be detached from the first master object and deformed into a first form, and controlling the first slave object to be deformed from the first form into a second form;
wherein the second form corresponds to an avatar of a third virtual object in the virtual scene, the third virtual object being located within a fourth range centered on the first slave object;
controlling the first slave object in the second form to move in the virtual scene, so as to perform object reconnaissance on the virtual scene;
in response to the first slave object detecting a fourth virtual object in the virtual scene, displaying position information of the fourth virtual object in a map corresponding to the virtual scene; wherein the third virtual object and the fourth virtual object both have hostile relationships with the first master object.
8. The method of claim 7, wherein when the number of third virtual objects is at least two, the method further comprises:
displaying a form selection interface, and displaying, in the form selection interface, forms corresponding to at least two selectable third virtual objects;
and in response to a selection operation on the form of a target virtual object among the at least two third virtual objects, taking the form of the selected target virtual object as the second form.
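Claim 8's form selection reduces to choosing one candidate form from those shown in the interface. A trivial sketch for illustration only, where the "interface" is just the candidate list plus a selected index:

```python
from typing import List

def choose_second_form(candidate_forms: List[str], selected_index: int) -> str:
    """Show the selectable third-virtual-object forms and take the chosen
    one as the second form."""
    if not candidate_forms:
        raise ValueError("no third virtual object within the fourth range")
    return candidate_forms[selected_index % len(candidate_forms)]

# Example: two enemies in range; the player picks the second one's form.
print(choose_second_form(["drone", "turret"], 1))  # -> "turret"
```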
9. The method of claim 7, wherein the method further comprises:
acquiring scene data corresponding to a fourth range in the virtual scene; wherein the scene data includes at least one of: non-user roles in the fourth range, and other virtual objects in the fourth range;
calling a machine learning model to conduct prediction processing based on the scene data to obtain the second form;
the machine learning model is obtained by training based on scene data in a sample range and the labeled form.
10. The method of claim 7, wherein the method further comprises, prior to displaying the location information of the fourth virtual object in the map corresponding to the virtual scene:
and controlling the first slave object of the second form to perform reconnaissance marking on the fourth virtual object so as to highlight the fourth virtual object.
11. The method of claim 10, wherein the controlling the first slave object in the second form to perform reconnaissance marking on the fourth virtual object to highlight the fourth virtual object comprises:
controlling the first slave object in the second form to lock the fourth virtual object, and acquiring the distance between the first slave object and the fourth virtual object;
when the distance between the first slave object and the fourth virtual object does not exceed the target distance for performing the reconnaissance mark, controlling the first slave object in the second form to emit an emission object to the fourth virtual object;
and when the fourth virtual object is hit by the emission object, displaying a special effect element in the association range of the fourth virtual object to highlight the fourth virtual object.
12. The method of claim 10, wherein after the controlling the first slave object in the second form to perform reconnaissance marking on the fourth virtual object, the method further comprises:
in response to the first slave object in the second form being attacked by a fifth virtual object, controlling the first slave object to revert from the second form to the initial form, and controlling the first slave object in the initial form to release a marker wave to its periphery, so as to continuously scout the fourth virtual object through the marker wave.
13. The method of claim 10, wherein the method further comprises:
in the process of controlling the first slave object in the second form to perform reconnaissance marking on the fourth virtual object, when the fourth virtual object moves in the virtual scene, receiving a tracking instruction of the first slave object in the second form for the fourth virtual object;
and responding to the tracking instruction, controlling the first slave object in the second form to track the fourth virtual object along the tracking direction indicated by the tracking instruction, and controlling the first slave object in the second form to continuously scout the fourth virtual object.
14. The method of claim 10, wherein the method further comprises:
displaying a picture of the fourth virtual object moving in the virtual scene in the process of controlling the first slave object in the second form to perform reconnaissance marking on the fourth virtual object;
and when the fourth virtual object moves to a target position blocked by an obstacle, displaying the fourth virtual object located at the target position in perspective through the obstacle.
15. The method of claim 10, wherein the controlling the first slave object in the second form to perform reconnaissance marking on the fourth virtual object comprises:
when an obstacle is detected in the virtual scene, controlling the first slave object in the second form to release a pulse wave to the obstacle;
and when it is determined, based on the pulse wave, that the obstacle blocks the fourth virtual object, controlling the first slave object in the second form to perform reconnaissance marking on the fourth virtual object and displaying the fourth virtual object in perspective.
16. The method of claim 7, wherein the method further comprises:
in the process of controlling the first slave object in the second form to move in the virtual scene, controlling the first slave object in the second form to detect materials of the virtual scene;
when the virtual material is detected within the fifth range of the first slave object in the virtual scene, displaying category indication information corresponding to the virtual material at the position of the virtual material in the fifth range, and displaying position indication information of the virtual material in the map corresponding to the virtual scene.
17. A control apparatus for a virtual object, the apparatus comprising:
a first display module, configured to display a game picture, wherein the game picture comprises at least part of a virtual scene, the virtual scene comprises a first master object and a first slave object, the first slave object has a subordinate relationship with the first master object, and the first slave object has a material detection skill for performing material detection on the virtual scene;
a first control module, configured to control the first slave object to be adsorbed on the first master object in response to a first instruction for the first slave object;
a second display module, configured to display, in response to the first slave object detecting a virtual material within a first range of the first master object, indication information corresponding to the virtual material;
wherein the virtual material is configured to be picked up by the first master object, and the picked-up virtual material is used to improve the interaction capability of the first master object in the virtual scene.
18. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to implement the method of controlling a virtual object according to any one of claims 1 to 16 when executing the executable instructions stored in the memory.
19. A computer readable storage medium storing executable instructions for implementing the method of controlling a virtual object according to any one of claims 1 to 16 when executed by a processor.
20. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the method of controlling a virtual object according to any one of claims 1 to 16.
CN202210363835.8A 2022-04-07 2022-04-07 Virtual object control method, device, equipment, storage medium and program product Pending CN116920401A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210363835.8A CN116920401A (en) 2022-04-07 2022-04-07 Virtual object control method, device, equipment, storage medium and program product


Publications (1)

Publication Number: CN116920401A (en) — Publication Date: 2023-10-24

Family

ID=88374368



Legal Events

PB01: Publication
REG: Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40098965)