CN113318443A - Reconnaissance method, device, equipment and medium based on virtual environment - Google Patents

Reconnaissance method, device, equipment and medium based on virtual environment

Info

Publication number: CN113318443A
Authority: CN (China)
Prior art keywords: scout, target, virtual object, virtual, slave
Legal status: Granted
Application number: CN202110609266.6A
Other languages: Chinese (zh)
Other versions: CN113318443B (en)
Inventor: 邵甲坤
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110609266.6A
Publication of CN113318443A
Application granted; publication of CN113318443B
Legal status: Active

Classifications

    • A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/537 — Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD], using indicators, e.g. showing the condition of a game character on screen
    • A63F13/55 — Controlling game characters or game objects based on the game progress
    • A63F13/822 — Strategy games; Role-playing games
    • A63F2300/308 — Details of the user interface
    • A63F2300/807 — Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a reconnaissance method, apparatus, device, and medium based on a virtual environment, relating to the field of virtual environments. The method comprises the following steps: displaying a slave virtual object, where the virtual environment also includes a master virtual object and the slave virtual object is a virtual object having an attribution relationship with the master virtual object; receiving a scout control operation for the slave virtual object; based on the scout control operation, displaying a process of the slave virtual object scanning the virtual environment in a target scouting manner; and, in response to the slave virtual object scanning a target scout object, displaying scout information that provides the master virtual object with information about the target scout object in the virtual environment. The scout information includes the target scout object displayed with a target highlighting effect capable of indicating the target scout object through obstacles. Scouting of target scout objects in the virtual environment is thus carried out through the slave virtual object, improving the functional diversity of the slave virtual object.

Description

Reconnaissance method, device, equipment and medium based on virtual environment
Technical Field
The present application relates to the field of virtual environments, and in particular to a virtual environment-based reconnaissance method, apparatus, device, and medium.
Background
In some applications that include virtual environments, such as virtual reality applications, three-dimensional map programs, military simulation programs, third-person shooter (TPS) games, first-person shooter (FPS) games, massively multiplayer online role-playing games (MMORPG), and multiplayer online battle arena (MOBA) games, there may exist, in addition to the virtual object body controlled by the player, a separate virtual unit that the player does not directly play but that can execute preset instructions under the player's control. Such a virtual object may be, for example, a virtual pet.
In the related art, the virtual pet is mainly controlled by artificial intelligence (AI), and the set of player instructions it can receive is very limited. For example, in some role-playing games the instructions for the virtual pet fall into three types: attack, follow, and standby. The attack instruction makes the virtual pet attack a virtual object marked by the player; the follow instruction makes the virtual pet follow the player; and the standby instruction makes the virtual pet wait at a position selected by the player.
However, the functions implemented by these virtual-pet instructions are limited, and they adapt poorly to games outside the role-playing genre.
Disclosure of Invention
The embodiments of the present application provide a reconnaissance method, apparatus, device, and medium based on a virtual environment, which improve the functional diversity of slave virtual objects. The technical scheme is as follows:
in one aspect, a virtual environment-based reconnaissance method is provided, the method comprising:
displaying a slave virtual object, where the slave virtual object is located in a virtual environment, the virtual environment further includes a master virtual object, and the slave virtual object is a virtual object having an attribution relationship with the master virtual object;
receiving a scout control operation for the slave virtual object, the scout control operation instructing the slave virtual object to scout a target scout object in the virtual environment;
displaying, based on the scout control operation, a process of the slave virtual object scanning the virtual environment in a target scouting manner;
displaying scout information in response to the target scout object being scanned by the slave virtual object, the scout information providing the master virtual object with information about the target scout object in the virtual environment, the scout information including the target scout object displayed with a target highlighting effect, the target highlighting effect being capable of indicating the target scout object through an obstacle.
In another aspect, there is provided a virtual environment-based scout apparatus, the apparatus comprising:
the display module is used for displaying a slave virtual object, the slave virtual object is positioned in a virtual environment, the virtual environment also comprises a master virtual object, and the slave virtual object is a virtual object which has an attribution relationship with the master virtual object;
a receiving module for receiving a scout control operation for the slave virtual object, the scout control operation for instructing the slave virtual object to scout a target scout object in the virtual environment;
the display module is further configured to display, based on the scout control operation, a process of the slave virtual object scanning the virtual environment in a target scouting manner;
the display module is further configured to display scout information in response to the target scout being scanned by the slave virtual object, the scout information being used to provide the master virtual object with information of the target scout in the virtual environment, the scout information including the target scout displayed with a target highlighting effect, the target highlighting effect having an ability to indicate the target scout through an obstacle.
In another aspect, a computer device is provided. The computer device includes a processor and a memory, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the memory and is loaded and executed by the processor to implement the virtual environment-based reconnaissance method according to any one of the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, the program code being loaded and executed by a processor to implement the virtual environment-based reconnaissance method according to any one of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the virtual environment-based reconnaissance method described in any of the above embodiments.
The technical scheme provided by the application at least comprises the following beneficial effects:
in the virtual environment, a master virtual object accompanied by a slave virtual object can scout a target scout object through the slave virtual object. When a scout control operation for the slave virtual object is received, the process of the slave virtual object scanning the virtual environment in a target scouting manner is displayed; when the slave virtual object scans the target scout object, scout information corresponding to the target scout object is displayed, the scout information including the target scout object displayed with a target highlighting effect capable of indicating the target scout object through obstacles. Implementing the scout function through the slave virtual object improves both the functional diversity of the slave virtual object and its adaptability across various applications.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a method for virtual environment based reconnaissance provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic illustration of follow scouting by a slave virtual object, as provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic illustration of a first direction identifier provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic illustration of a second direction identifier provided by an exemplary embodiment of the present application;
FIG. 6 is a schematic illustration of a first location identifier provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic illustration of a second location identifier provided by an exemplary embodiment of the present application;
FIG. 8 is a flowchart of a method for virtual environment based reconnaissance as provided by another exemplary embodiment of the present application;
FIG. 9 is a schematic illustration of a scout information display interface provided by an exemplary embodiment of the present application;
FIG. 10 is a flowchart of a method for virtual environment based reconnaissance as provided by another exemplary embodiment of the present application;
FIG. 11 is a schematic view of a target scout display interface provided by an exemplary embodiment of the present application;
FIG. 12 is a schematic view of a scan range provided by an exemplary embodiment of the present application;
FIG. 13 is a flowchart of a method for displaying a slave virtual object according to an exemplary embodiment of the present application;
FIG. 14 is a schematic view of a targeted prop interaction interface provided in accordance with an exemplary embodiment of the present application;
FIG. 15 is a schematic illustration of a call interface from a virtual object provided by an exemplary embodiment of the present application;
fig. 16 is a block diagram of a virtual environment-based reconnaissance apparatus according to an exemplary embodiment of the present application;
fig. 17 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
virtual environment: the environment that is displayed (or provided) when an application runs on the terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual environment may be any of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, or a three-dimensional virtual environment; the following embodiments illustrate the virtual environment as three-dimensional, but are not limited thereto. Optionally, the virtual environment is also used for engagement between at least two virtual characters; optionally, for combat between at least two virtual characters using virtual firearms; and optionally, for such combat within a target area that shrinks over time in the virtual environment.
Virtual object: a movable object in the virtual environment. The movable object can be a virtual character, a virtual animal, an animation character, and so on, such as the characters, animals, plants, oil drums, walls, and stones displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional model created with skeletal animation techniques. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a portion of its space.
In the embodiment of the present application, the virtual objects include a master virtual object and a slave virtual object. The master virtual object is the virtual object body used by the player; the player can control it to walk, run, jump, shoot, fight, drive, pick up items, use skills, and perform other actions in the virtual environment, so its interactivity is strong. The slave virtual object is a virtual object, distinct from the master virtual object, that is mainly controlled by AI; the player controls it by issuing instructions, and it can provide functional services to the master virtual object. For example, a follow instruction makes the slave virtual object move along with the master virtual object, and an attack instruction makes it attack a hostile virtual object marked by the master virtual object. The slave virtual object can perform fewer actions and activities, which corresponds to lower interactivity.
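As an illustration only (the patent contains no code), the small instruction set a slave virtual object accepts could be sketched as a command handler; the class, field, and method names below are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VirtualPet:
    """Minimal sketch of an AI-controlled slave virtual object."""
    position: Tuple[float, float] = (0.0, 0.0)
    mode: str = "standby"
    target_id: Optional[int] = None

    def command(self, instruction: str, arg=None) -> None:
        # The related art described above supports only three instruction types.
        if instruction == "attack":      # attack the virtual object marked by the player
            self.mode, self.target_id = "attack", arg
        elif instruction == "follow":    # follow the player's master virtual object
            self.mode, self.target_id = "follow", None
        elif instruction == "standby":   # wait at the position chosen by the player
            self.mode, self.position = "standby", arg
        else:
            raise ValueError(f"unsupported instruction: {instruction}")
```

The narrow `command` surface mirrors the limited interactivity the patent attributes to related-art virtual pets.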
In conjunction with the above noun explanations, an implementation environment of the embodiments of the present application will be explained. FIG. 1 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first device 110, a second device 120, a server 130, and a communication network 140.
The first device 110 and the second device 120 have installed and run an application that supports a virtual environment. The application may be any one of a virtual reality application, a three-dimensional map program, a military simulation program, a TPS game, an FPS game, a MOBA game, or a multiplayer gunfight survival game. The first device 110 is used by a first user and the second device 120 by a second user. Through the first device 110, the first user controls a first master virtual object located in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up items, and shooting. The second user controls a second master virtual object in the virtual environment through the second device 120; the first and second master virtual objects may be teammates or adversaries. Both may be virtual characters, such as simulated characters or cartoon characters.
The first device 110 and the second device 120 are connected to the server 130 through the communication network 140. The device types include: at least one of a game console, a desktop computer, a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer.
The server 130 includes at least one of: a single server, multiple servers, a cloud computing platform, or a virtualization center. The server 130 provides background services for applications that support a three-dimensional virtual environment. Optionally, the server 130 undertakes the primary computing work while the first device 110 and the second device 120 undertake secondary computing work; alternatively, the server 130 undertakes secondary computing work while the devices undertake the primary computing work; alternatively, the server 130, the first device 110, and the second device 120 cooperate using a distributed computing architecture.
In this embodiment, the first user controls a first master virtual object through the first device 110; the first master virtual object corresponds to a first slave virtual object, which can receive control operations from the first user. Illustratively, when the first device 110 receives a control operation on the first slave virtual object, it generates a corresponding control request and sends it to the server 130. After the server 130 determines the affiliation of the first slave virtual object from the control request, it controls the first slave virtual object to execute the control function corresponding to the request. Taking the scout function as an example, the server 130 controls the first slave virtual object to scan the area within a target range. When the server 130 receives a movement control request for the second master virtual object sent by the second device 120, and determines that the second master virtual object's post-move position is within the target range while the first slave virtual object is performing the scanning operation, the server sends the position information of the second master virtual object to the first device 110, and the first device 110 displays the scout information of the second master virtual object according to that position information.
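The server-side decision described above, notifying the first device only when the second master virtual object's post-move position falls inside the first slave virtual object's active scanning range, can be sketched as follows. The circular range model and all names are assumptions for illustration, not details from the patent:

```python
import math

def should_send_scout_info(slave_pos, scan_radius, enemy_pos, scanning):
    """Return True only if the slave virtual object is currently scanning
    and the enemy's post-move position lies within its circular scan range."""
    if not scanning:
        return False
    dx = enemy_pos[0] - slave_pos[0]
    dy = enemy_pos[1] - slave_pos[1]
    return math.hypot(dx, dy) <= scan_radius
```

In this sketch the check runs on the server, matching the patent's flow in which the server, not the client, resolves movement requests against active scans.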
With reference to the foregoing implementation environment, an application scenario of the embodiment of the present application is schematically described.
In the embodiment of the present application, the virtual environment-based reconnaissance method is schematically illustrated as being applied to a shooting game, and the method may also be applied to applications such as a massively multiplayer online role playing game, a multiplayer online tactical competition game, a virtual reality application, a three-dimensional map program, and a military simulation program, which are not limited herein.
The user controls the master virtual object to move through the virtual environment. To ensure the master virtual object survives to the end of the game, enemy virtual objects that threaten it need to be eliminated. An enemy virtual object may be an enemy master virtual object controlled by another user, or a non-player character (NPC) virtual object controlled by AI. Taking a three-dimensional scene as an example, the field of view of the user-controlled master virtual object is limited: it generally covers only a certain distance within a certain angular range, and it can be blocked by obstacles. Because the view the user obtains through the master virtual object is limited, the user cannot be guaranteed to notice enemy virtual objects in every direction.
In the embodiment of the application, the scout function is implemented through the slave virtual object, which provides scout information to the master virtual object, improving both the efficiency with which the user obtains information in the virtual environment and the functional diversity of the slave virtual object. Illustratively, the user issues a scout control operation to the slave virtual object, instructing it to scout a target scout object in the virtual environment. After receiving the instruction corresponding to the scout control operation, the slave virtual object scans the virtual environment within its scanning range; when the target scout object is scanned, the corresponding scout information is displayed on the virtual environment interface to prompt the user operating the master virtual object that a target scout object is within the slave virtual object's scanning range, ensuring that the user obtains timely information about target scout objects in the surrounding virtual environment.
Referring to fig. 2, a flowchart of a virtual environment-based reconnaissance method according to an exemplary embodiment of the present application is shown, where the method is performed by a first device in the computer system, and the method includes:
step 201, displaying a slave virtual object, wherein the slave virtual object is located in a virtual environment, and the virtual environment further comprises a master virtual object.
In this embodiment of the present application, the virtual environment includes a master virtual object and a slave virtual object. The master virtual object is the virtual object body controlled by the user; the slave virtual object is a virtual object having an attribution relationship with the master virtual object, that is, an auxiliary virtual object that can provide functional services to the master virtual object. The user's interactivity with the master virtual object is higher than with the slave virtual object. In one example, the master virtual object is a user-controlled virtual hero and the slave virtual object is a virtual pet belonging to that hero.
A virtual environment interface is displayed on the first device. The interface includes a picture of the virtual environment as observed by the master virtual object; schematically, the picture shows the virtual environment from a viewing-angle direction of the master virtual object. Optionally, the viewing angle may be a first-person or a third-person perspective, which is not limited herein. The first-person perspective corresponds to the picture the master virtual object itself can observe in the virtual environment; that picture does not include the master virtual object's full body, showing, for example, only its arm and the virtual gun it holds. The third-person perspective is the angle from which a camera model in the virtual environment observes the master virtual object; the corresponding picture includes the master virtual object itself, such as its three-dimensional model and the virtual prop (e.g., a virtual gun) it holds, with the camera model usually positioned behind the master virtual object.
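As a rough sketch of the third-person perspective described above, a camera model placed behind and above the observed object might be positioned like this; the offsets, parameter names, and 2D-plus-height simplification are all illustrative assumptions:

```python
def third_person_camera(obj_pos, facing, distance=4.0, height=2.0):
    """Place the camera `distance` units behind the object along its facing
    direction, raised to `height`. `facing` is a unit (x, y) direction."""
    x, y = obj_pos
    fx, fy = facing
    return (x - fx * distance, y - fy * distance, height)
```

Because the camera sits behind the object, the rendered picture includes the object's own model, which is exactly what distinguishes the third-person picture from the first-person one in the paragraph above.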
At step 202, a scout control operation for a slave virtual object is received.
The scout control operation is used to instruct the slave virtual object to scout a target scout object in the virtual environment. Illustratively, the first device may receive the scout control operation as a shortcut-key operation signal or as a touch signal on a control. When the shortcut key corresponding to the operation signal is the target shortcut key, for example the "P" key on a keyboard connected to the first device, the first device determines that a scout control operation has been received. Likewise, when the virtual environment interface includes a control for the slave virtual object and a touch signal on that control is received, a scout control operation is determined to have been received.
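The two trigger paths just described, a target shortcut key such as "P" or a touch on the slave virtual object's on-screen control, could be detected roughly as follows; the event dictionary shape and identifiers are assumptions made for this sketch:

```python
def detect_scout_operation(event, target_key="P", control_id="pet_scout"):
    """Return True if the input event is a scout control operation:
    either the target shortcut key, or a touch on the slave object's control."""
    if event.get("type") == "key" and event.get("key") == target_key:
        return True
    if event.get("type") == "touch" and event.get("control") == control_id:
        return True
    return False
```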
Illustratively, the scout control operation corresponds to a control type, and triggering scout control operations of different control types realizes different scout functions. In the embodiment of the present application, the control types corresponding to the scout control operation include, but are not limited to, at least one of the following:
(I) Follow scouting type
A follow-scouting scout control operation instructs the slave virtual object to scout target scout objects within its scanning range while moving along with the master virtual object. As shown in fig. 3, in virtual environment screen 310 the master virtual object 301 and slave virtual object 302 are at a first position, and the scanning range of the slave virtual object 302 is a first range 311. When the master virtual object 301 moves, the slave virtual object 302 follows it; in virtual environment screen 320 the two objects are at a second position, and the scanning range of the slave virtual object 302 is a second range 321. The first range 311 and the second range 321 have the same size but different positions.
(II) Stay scouting type
A stay-scouting scout control operation instructs the slave virtual object to stay at a target position and scout target scout objects within its scanning range. Illustratively, the target position may be a fixed scout point preset in the virtual environment, or a target scout point set manually by the user.
(III) Planned-route scouting type
A planned-route scout control operation instructs the slave virtual object to move through the virtual environment along a target route and scout the surrounding virtual environment during the movement. Illustratively, the target route may be preset or manually input by the user. In one example, the user can call up a drawing area for the target route through a preset shortcut key or control; the drawing area may include a map of the virtual environment, on which the user draws the target route. Once drawing is complete, the slave virtual object moves and scouts according to the target route.
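The three control types above differ mainly in where the slave virtual object's scanning range is centered over time. A minimal sketch, with all names hypothetical and the route represented as a simple waypoint list:

```python
from enum import Enum

class ScoutType(Enum):
    FOLLOW = "follow"  # scanning range moves with the master virtual object
    STAY = "stay"      # scanning range fixed at a target position
    ROUTE = "route"    # scanning range moves along a target route

def scan_center(scout_type, master_pos=None, stay_pos=None, route=None, t=0):
    """Where the slave object's scanning range is centered for each control
    type. `route` is a list of waypoints; `t` is an index into it."""
    if scout_type is ScoutType.FOLLOW:
        return master_pos
    if scout_type is ScoutType.STAY:
        return stay_pos
    if scout_type is ScoutType.ROUTE:
        return route[min(t, len(route) - 1)]  # clamp at the route's end
```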
Illustratively, the reconnaissance control operation received by the first device further corresponds to a reconnaissance time length corresponding to the execution of the reconnaissance function by the virtual object, and the reconnaissance time length may be preset by the system or may be manually set by the user.
After receiving the scout control operation for the slave virtual object, the first device generates a scout control instruction according to the scout control operation and sends the scout control instruction to the server; after receiving the scout control instruction, the server controls the slave virtual object to execute the scout function. Illustratively, the scout control instruction carries a control type identifier corresponding to the scout control operation, an object identifier corresponding to the slave virtual object, and a scout duration; the server determines the target scout mode executed by the slave virtual object according to the control type identifier and instructs the slave virtual object to scan for the target scout object within the scanning range for the scout duration.
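The instruction flow above can be sketched as a small payload plus a server-side lookup from control type identifier to target scout mode. All identifiers and field names here are hypothetical placeholders for illustration only.

```python
# Hypothetical mapping from control type identifier to target scout mode
# (the embodiment's actual identifiers are not specified in the text).
SCOUT_MODE_BY_TYPE = {
    "FOLLOW": "first_scout_mode",
    "STAY": "second_scout_mode",
    "ROUTE": "third_scout_mode",
}

def build_instruction(control_type, slave_id, duration_s):
    """Client side: package the scout control operation into an instruction."""
    return {"type": control_type, "slave": slave_id, "duration": duration_s}

def dispatch(instruction):
    """Server side: pick the target scout mode from the control type identifier."""
    return SCOUT_MODE_BY_TYPE[instruction["type"]]
```

A lookup table keeps the client payload small: the client only names the control type, and the server owns the mapping to the concrete scout mode.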
And step 203, displaying a scanning process of scouting in the virtual environment in a target scouting mode from the virtual object based on the scout control operation.
Illustratively, the target reconnaissance mode of the virtual object is determined according to the control type corresponding to the reconnaissance control operation, that is, the control type of the reconnaissance control operation and the target reconnaissance mode have a corresponding relationship.
Illustratively, the target scout mode is determined by the scout time, the scout frequency, and the control type. The scout time is used to indicate the duration for which the slave virtual object performs scouting, and the scout frequency is used to indicate how often the slave virtual object scans the virtual environment during the scanning process. Optionally, the scout time and scout frequency corresponding to the target scout mode may be set manually by the user, or may be preset values corresponding to the control type.
In the embodiment of the application, after the control type is determined according to the reconnaissance control operation, when the control type corresponds to the following reconnaissance type, displaying a scanning process of reconnaissance of the slave virtual object in the virtual environment in a first reconnaissance mode, wherein the slave virtual object moves along with the master virtual object in the first reconnaissance mode; when the control type corresponds to the stopping reconnaissance type, displaying a scanning process of reconnaissance of the slave virtual object in the virtual environment in a second reconnaissance mode, and stopping the slave virtual object at the target position in the second reconnaissance mode; and when the control type corresponds to the planned route scout type, displaying a scanning process of scouting the slave virtual object in the virtual environment in a third scout mode, and moving the slave virtual object in the virtual environment according to the target route in the third scout mode.
At step 204, scout information is displayed in response to the target scout object being scanned from the virtual object.
The scout information is used to provide the master virtual object with information about the target scout object in the virtual environment. Optionally, the scout information is displayed only by the first device corresponding to the master virtual object, that is, only the user controlling the master virtual object can obtain the scout information. Alternatively, the scout information is shared between the master virtual object and teammate virtual objects, where a teammate virtual object is a virtual object having a teammate relationship with the master virtual object; that is, both the user of the master virtual object and the users of the teammate virtual objects can obtain the scout information.
Alternatively, the target scout object is determined to be scanned from the virtual object when the target scout object enters the scanning range of the slave virtual object, i.e., the target scout object is determined to be scanned from the virtual object in response to the target scout object being within the scanning range of the slave virtual object.
Optionally, the scanning ray may be released from the virtual object during the process of scouting the virtual environment, and the target scout object is determined to be scanned from the virtual object when the scanning ray from the virtual object collides with the target scout object, that is, the target scout object is determined to be scanned from the virtual object in response to the collision of the scanning ray from the virtual object with the target scout object.
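The two detection criteria above — range entry and scanning-ray collision — can be sketched in 2-D as a distance test and a ray-versus-circle test. This is a minimal sketch: the function names, the circular approximation of the target, and the default values are assumptions for illustration.

```python
import math

def scanned_by_range(slave, target, radius=50.0):
    """Criterion (a): the target has entered the slave's scanning range."""
    return math.dist(slave, target) <= radius

def scanning_ray_hits(origin, direction, target, target_radius=0.5):
    """Criterion (b): a scanning ray released by the slave collides with the
    target, approximated as a circle of `target_radius`.

    `direction` must be a non-zero vector; it is normalized internally.
    """
    dx, dy = direction
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    tx, ty = target[0] - origin[0], target[1] - origin[1]
    proj = tx * dx + ty * dy        # distance along the ray to the closest point
    if proj < 0:                    # target lies behind the ray origin
        return False
    closest2 = tx * tx + ty * ty - proj * proj
    return closest2 <= target_radius * target_radius
```

Either predicate returning true corresponds to "the target scout object is determined to be scanned by the slave virtual object."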
Illustratively, the target scout object is a hostile virtual object, which may be another master virtual object controlled by a player of an opposing camp, or an NPC virtual object having a hostile relationship with the current master virtual object; this is not limited herein.
Illustratively, the scout information includes, but is not limited to, the following target scout information:
(one), direction information
The direction information is used to indicate the direction of the position of the target scout object relative to the position of the main virtual object. That is, in response to the target scout being scanned from the virtual object, the direction information of the target scout is displayed.
Illustratively, the direction information may be displayed in a map area in the virtual environment interface, or may be displayed in a virtual environment area in the virtual environment interface.
In one example, a first direction identifier of the target scout object is displayed in a map area in the virtual environment interface. As shown in fig. 4, a master virtual object identifier 401 and a slave virtual object identifier 402 are displayed in the map area 400; after the slave virtual object scans the target scout object within the scanning range, the first direction identifier is displayed in the map area 400 according to the direction of the target scout object relative to the master virtual object 401, as shown by the shaded area 403 in fig. 4.
In one example, the second direction indication of the target scout object is displayed in the virtual environment region in the virtual environment interface, and as shown in fig. 5, after the target scout object is scanned from the virtual object within the scanning range, the second direction indication 503 is displayed in the rotating axis direction indicator 504 in the virtual environment region 500 according to the direction of the target scout object relative to the main virtual object 501. Illustratively, the directional display of the target scout object may also be realized by a directional indicator superimposed on a virtual environment area in the virtual environment interface, the directional indicator indicating only one direction.
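The direction information driving either indicator can be computed as a bearing of the target's position relative to the master object. A minimal 2-D sketch, assuming a convention of 0° along the +x axis and angles normalized to [0, 360):

```python
import math

def direction_deg(master, target):
    """Bearing of the target relative to the master, in degrees in [0, 360)."""
    ang = math.degrees(math.atan2(target[1] - master[1],
                                  target[0] - master[0]))
    return ang % 360.0
```

A map-area indicator (first direction identifier) and a screen-space compass (second direction identifier) could both be rotated by this angle.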
(II) position information
The location information is used to indicate the location of the target scout object in the virtual environment. That is, in response to the target scout being scanned from the virtual object, the position information of the target scout is displayed.
Illustratively, the location information may be displayed in a map area of the virtual environment interface, or may be displayed in a virtual environment area of the virtual environment interface.
In one example, a first location identifier of a target scout object is displayed in a map area in the virtual environment interface, as shown in fig. 6, a master virtual object identifier 601 and a slave virtual object identifier 602 are displayed in the map area 600, and after the target scout object is scanned within a scanning range from the slave virtual object 602, a first location identifier 603 is displayed in the map area 600.
In one example, the second location identification of the target scout object is displayed in a virtual environment region in the virtual environment interface, as shown in fig. 7, the virtual environment region 700 includes a master virtual object 701 and a slave virtual object, and the second location identification 703 is displayed in the virtual environment region 700 after the slave virtual object scans the target scout object within the scanning range.
(III) State information
The state information is used to indicate the virtual state of the target scout object. Illustratively, the state information may include a virtual health value (Health Point, HP) of the target scout object, a virtual mana value (Mana Point, MP), equipment information, and the like.
In the embodiment of the present application, the scout information includes the target scout object displayed with a target highlighting effect, where the target highlighting effect is able to indicate the target scout object through an obstacle. That is, when the target scout object is scanned by the slave virtual object, even if an obstacle exists between the master virtual object and the target scout object, the master virtual object can observe the target scout object displayed with the target highlighting effect through the obstacle, which enhances the visual effect when the target scout object is detected.
In the embodiment of the present application, when a target scout is scanned from a virtual object, the target scout is rendered and displayed with a target highlighting effect.
The target highlighting effect corresponds to a material identifier, and the material identifier is used to uniquely identify the effect rendering material corresponding to the target highlighting effect, where the effect rendering material may be downloaded by the first device from the server in real time, or preloaded by the first device in a storage area. In one example, when the application program corresponding to the virtual environment is installed, the first device downloads an effect rendering material package in advance, and the package includes the effect rendering material corresponding to the target highlighting effect. When the target scout object has not been detected by the slave virtual object, the target scout object is rendered and displayed with its original rendering material; when the target scout object is detected by the slave virtual object, the first device determines the target effect rendering material from the material package according to the material identifier corresponding to the target highlighting effect, and renders and displays the target scout object with it.
In response to the target scout object being scanned by the slave virtual object, the first device acquires the material identifier corresponding to the target highlighting effect, acquires the corresponding effect rendering material according to the material identifier, mounts the effect rendering material on the virtual model corresponding to the target scout object, and displays the target scout object with the target highlighting effect.
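The material-swap step above can be sketched as a lookup keyed by whether the target has been scanned. The material pack contents and identifiers below are hypothetical placeholders, not names from this embodiment.

```python
# Hypothetical effect rendering material package (identifiers assumed).
EFFECT_MATERIAL_PACK = {
    "mat_highlight": "highlight_render_material",
    "mat_original": "original_render_material",
}

def pick_material(target_scanned,
                  highlight_id="mat_highlight",
                  original_id="mat_original"):
    """Resolve the effect rendering material to mount on the target's model:
    the highlight material once the target is scanned, otherwise the original."""
    material_id = highlight_id if target_scanned else original_id
    return EFFECT_MATERIAL_PACK[material_id]
```

Keying the swap on a material identifier, rather than on the material itself, matches the text's point that the material can be preloaded or fetched from the server independently of when the scan happens.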
Optionally, the target highlighting effect includes displaying the target scout object with different skin materials, and the first device renders the target scout object with the target skin material by changing the skin material corresponding to the target scout object, so as to display the target scout object with the target highlighting effect, and enhance the visual effect when the target scout object is scout.
Optionally, the target highlighting effect includes displaying the target scout object with a stroke (outline) effect; the first device adds the stroke effect to the target scout object so as to render and display the target scout object with the target highlighting effect and enhance the visual effect when the target scout object is detected. Illustratively, the thickness of the outline corresponding to the stroke effect may be negatively correlated with the distance between the target scout object and the master virtual object, that is, the farther the detected target scout object is from the master virtual object, the thinner the corresponding outline; the closer the target scout object is to the master virtual object, the thicker the corresponding outline.
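The distance-dependent outline can be sketched as a linear falloff: nearer targets get a thicker outline, farther targets a thinner one. The width limits and 50-meter range below are assumed values for illustration.

```python
def stroke_width(distance, max_width=4.0, min_width=0.5, max_range=50.0):
    """Outline thickness, falling linearly as the target gets farther from
    the master object; distance is clamped to [0, max_range]."""
    d = min(max(distance, 0.0), max_range)
    return max_width - (max_width - min_width) * d / max_range
```

Clamping keeps the outline from vanishing entirely at long range, so a scanned target remains visibly marked.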
Optionally, the target highlighting effect further corresponds to a negative state (debuff) identifier, and the negative state identifier may be displayed in a state bar corresponding to the target scout object. Illustratively, the negative state corresponding to the target highlighting effect also corresponds to a state display duration, that is, the duration for which the target scout object is displayed with the target highlighting effect. In one example, the state display duration after each scan by the slave virtual object is 3 seconds, that is, when the target scout object is scanned by the slave virtual object, the target scout object is displayed with the target highlighting effect for 3 seconds; the target scout object may also carry the negative state identifier, which is displayed in the state bar for 3 seconds.
Illustratively, the target highlighting effect corresponds to a transparency, and the first device determines the number of obstacles between the target scout and the main virtual object, and displays the target scout under the target highlighting effect with the transparency corresponding to the number of obstacles. In one example, the transparency is higher for the target highlighting effect when the number of obstacles between the target scout object and the main virtual object is larger, and the transparency is lower for the target highlighting effect when the number of obstacles between the target scout object and the main virtual object is smaller.
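The obstacle-dependent transparency can be sketched as an opacity (alpha) that drops per obstacle: more obstacles means higher transparency, i.e. lower alpha. The step size and floor below are assumptions for illustration.

```python
def highlight_alpha(obstacle_count, base=1.0, step=0.2, floor=0.2):
    """Opacity of the target highlighting effect: each obstacle between the
    master object and the target lowers the alpha, down to a floor."""
    return max(floor, base - step * obstacle_count)
```

The floor guarantees the highlighted target never becomes fully invisible, which preserves the through-obstacle indication the effect exists for.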
In summary, according to the reconnaissance method based on the virtual environment provided in the embodiment of the present application, the master virtual object carrying the slave virtual object can reconnaissance the target reconnaissance object in the virtual environment through the slave virtual object, when receiving a reconnaissance control operation for the slave virtual object, a scanning process of reconnaissance of the slave virtual object in the virtual environment in a target reconnaissance manner is displayed, and when the target reconnaissance object is scanned from the virtual object, reconnaissance information corresponding to the target reconnaissance object is displayed, where the reconnaissance information includes the target reconnaissance object displayed with a target highlighting effect, and the target highlighting effect has an ability of indicating the target reconnaissance object through an obstacle. By implementing the scout function from the virtual object, the functional diversity of the slave virtual object is improved, and the adaptability of the slave virtual object in various application programs is also improved.
Referring to fig. 8, a flowchart of a virtual environment-based reconnaissance method according to an exemplary embodiment of the present application is shown, in which a reconnaissance method taking a control type as an example of a following reconnaissance type is schematically provided, and the method includes:
step 801, displaying the slave virtual object.
In this embodiment of the present application, the virtual environment includes a master virtual object and a slave virtual object. The master virtual object is a virtual object directly controlled by a user; the slave virtual object is a virtual object having an attribution relationship with the master virtual object, that is, the slave virtual object is an auxiliary virtual object that can provide functional services for the master virtual object. The user has higher interactivity with the master virtual object than with the slave virtual object. In one example, the master virtual object is a user-controlled virtual hero and the slave virtual object is a virtual pet belonging to the virtual hero.
At step 802, a scout control operation for a slave virtual object is received.
A scout control operation is used to instruct scout from the virtual object on the target scout object in the virtual environment. Illustratively, the manner of receiving the above-mentioned spy control operation by the first device may be by receiving a shortcut key operation signal, or by receiving a touch signal on the control.
And step 803, determining a control type corresponding to the scout control operation.
Illustratively, the reconnaissance control operation corresponds to a control type, and different reconnaissance functions can be realized by triggering different control types of reconnaissance control operations. In the embodiment of the application, the control types include a following reconnaissance type, a stop reconnaissance type and a planned route reconnaissance type. In one example, scout control operations of different control types are triggered by different shortcut keys, when the first device receives a trigger signal of the shortcut key "F1", it is determined that the control type of the received scout control operation corresponds to a following scout type, when the first device receives a trigger signal of the shortcut key "F2", it is determined that the control type of the received scout control operation corresponds to a stay scout type, and when the first device receives a trigger signal of the shortcut key "F3", it is determined that the control type of the received scout control operation corresponds to a planned route scout type.
And step 804, in response to determining that the control type of the scout control operation corresponds to the follow scout type, displaying a scanning process of scout from the virtual object in the virtual environment in the first scout mode.
The slave virtual object moves following the master virtual object in the first reconnaissance mode.
The first device generates a scout control instruction according to the scout control operation and sends it to the server. The server determines that the control type corresponds to the follow-scout type according to the control type identifier carried in the scout control instruction, determines that the target scout mode is the first scout mode according to the mapping relationship between control types and target scout modes, and sends the scout time, scout frequency, scanning range, and scout operation corresponding to the first scout mode to the first device. The first device then controls the slave virtual object to scout the virtual environment within the scanning range at the scout frequency for the scout time. In one example, the scout time is 10 minutes, the scout frequency is once every 10 seconds, and the scanning range is a circular range with a radius of 50 meters; after the user issues the scout control operation, the slave virtual object scans the virtual environment within the scanning range once every 10 seconds while moving along with the master virtual object, and the scouting lasts for 10 minutes.
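The timing in the example above (10-minute scout time at one scan every 10 seconds) can be sketched as a schedule of scan timestamps; the function name is an assumption for illustration.

```python
def scan_timestamps(scout_time_s=600, interval_s=10):
    """Seconds (from the start of scouting) at which a scan is triggered,
    e.g. a 600 s scout time at 10 s intervals yields 60 scans."""
    return list(range(0, scout_time_s, interval_s))
```

Each timestamp would be one pass of the circular-range scan around the slave object's position at that moment.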
In response to the target scout being scanned from the virtual object, a target distance of the target scout is determined, step 805.
The target distance is used for indicating the distance between the position of the target reconnaissance object and the position of the main virtual object. In the embodiment of the present application, in the process of scouting the scanning range from the virtual object, if the target scout object is scanned from the virtual object, it is determined that the target scout object is scout from the virtual object, and the distance between the target scout object and the main virtual object is determined.
Step 806, in response to the target distance reaching the first preset distance, displaying the direction information of the target scout object relative to the main virtual object.
The direction information is used to indicate the direction of the position of the target scout object relative to the position of the main virtual object. Optionally, the first preset distance may be a fixed distance, or may be a distance associated with the slave virtual object, for example, the slave virtual object corresponds to a virtual level, and the size of the first preset distance is in a positive correlation with the virtual level of the slave virtual object.
When it is determined that the target distance of the target scout object reaches the first preset distance, only the direction information of the target scout object relative to the master virtual object is displayed; that is, because the target scout object is far from the master virtual object, the corresponding scout effect is poor, so only the direction information is displayed. In one example, the direction information is displayed in the map area in the virtual environment interface.
Step 807, in response to the target distance being less than the first preset distance, displaying the position information of the target scout.
The position information is used to indicate the position of the target scout object in the virtual environment. When it is determined that the target distance of the target scout object is smaller than the first preset distance, the position information of the target scout object is displayed; that is, because the target scout object is close to the master virtual object, the corresponding scout effect is better, so the position information is displayed. Illustratively, when the target distance is smaller than the first preset distance, both the direction information and the position information of the target scout object may be displayed. As shown in fig. 9, a master virtual object 901 and a slave virtual object 902 are displayed in the virtual environment region of the virtual environment interface 900, with the slave virtual object 902 in a following state; the virtual environment interface 900 includes a map region 910, in which a master virtual object identifier and a slave virtual object identifier are displayed. When the slave virtual object 902 scans the target scout object and the target distance between the target scout object and the master virtual object is smaller than the first preset distance, the direction information of the target scout object is displayed in the map region 910 through a shaded region 903, and the position information is displayed through a second position identifier 904.
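Steps 805–807 amount to a threshold rule: far targets yield direction information only, near targets yield position information as well. A minimal sketch, with the preset distance value assumed for illustration:

```python
def scout_display(target_distance, first_preset_distance=30.0):
    """Which scout information to show for a scanned target:
    direction only when the target distance reaches the preset distance,
    direction and position when it is smaller."""
    if target_distance >= first_preset_distance:
        return {"direction"}
    return {"direction", "position"}
```

The text also allows the preset distance to scale with the slave object's virtual level, which would simply replace the constant with a per-level lookup.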
In summary, according to the reconnaissance method based on the virtual environment provided in the embodiment of the present application, the master virtual object carrying the slave virtual object may reconnaissance the target reconnaissance object in the virtual environment through the slave virtual object, when a reconnaissance control operation for the slave virtual object is received, and a control type corresponding to the reconnaissance control operation is a follow-on reconnaissance type, the slave virtual object may reconnaissance the virtual environment within a scanning range while moving along with the master virtual object, when the target reconnaissance object is scanned by the slave virtual object, according to a relationship between a target distance between the target reconnaissance object and the master virtual object and a first preset distance, display reconnaissance information having different information amounts, that is, when the target distance reaches the first preset distance, only display direction information of the target reconnaissance object, and when the target distance is smaller than the first preset distance, the position information of the target reconnaissance object is displayed, so that the functional diversity of the slave virtual object is improved, and the adaptability of the slave virtual object in various application programs is improved.
Referring to fig. 10, a flowchart of a virtual environment-based reconnaissance method according to an exemplary embodiment of the present application is shown, in which a reconnaissance method taking a control type as an example of a stop reconnaissance type is schematically provided, and the method includes:
step 1001 displays the slave virtual object.
At step 1002, a scout control operation for a slave virtual object is received.
And step 1003, determining a control type corresponding to the scout control operation.
In the embodiment of the present application, steps 1001 to 1003 are the same as steps 801 to 803, and are not described herein again.
And 1004, in response to determining that the control type of the scout control operation corresponds to the stay scout type, displaying a scanning process of scout from the virtual object in the virtual environment in a second scout mode.
The slave virtual object stays at the target position in the second reconnaissance mode.
The first device generates a scout control instruction according to the scout control operation and sends it to the server. The server determines that the control type corresponds to the stay-scout type according to the control type identifier carried in the scout control instruction, determines that the target scout mode is the second scout mode according to the mapping relationship between control types and target scout modes, and sends the scout time, scout frequency, scanning range, and scout operation corresponding to the second scout mode to the first device. The first device then controls the slave virtual object to scout the virtual environment within the scanning range at the scout frequency for the scout time.
Step 1005, in response to the target scout being scanned from the virtual object, determines a number of obstacles between the target scout and the master virtual object.
In the embodiment of the present application, the target highlighting effect corresponds to a highlighting-style display effect, and when the target scout is scanned from the virtual object within the scanning range, the target scout is displayed in a highlighting style in a virtual environment region in the virtual environment interface. As shown in fig. 11, a master virtual object 1101 and a slave virtual object 1102 are displayed in the virtual environment interface 1100, and when the target scout object 1103 is scanned in a scanning range 1110 from the slave virtual object 1102, the target scout object 1103 is displayed in a highlighted manner.
Illustratively, the highlight patterns correspond to different brightnesses.
Optionally, when an obstacle exists between the target scout object and the main virtual object, the target scout object is displayed in a highlight mode with preset brightness in the virtual environment interface.
Optionally, the brightness is related to the number of obstacles between the target scout object and the master virtual object; in one example, the display brightness of the target scout object is negatively correlated with the number of obstacles. Illustratively, the number of obstacles is obtained by establishing a collision detection line segment between the target scout object and the master virtual object and counting the virtual articles that collide with the collision detection line segment.
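The collision detection line segment can be sketched in 2-D as a segment-versus-circle test, counting how many circular obstacles the master→target segment crosses. The names and the circular obstacle approximation are assumptions for illustration.

```python
import math

def segment_hits_circle(p, q, center, radius):
    """True if the segment p -> q intersects a circular obstacle."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                      # degenerate segment: a point test
        return math.dist(p, center) <= radius
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = ((center[0] - p[0]) * dx + (center[1] - p[1]) * dy) / seg_len2
    t = max(0.0, min(1.0, t))
    closest = (p[0] + t * dx, p[1] + t * dy)
    return math.dist(closest, center) <= radius

def count_obstacles(master, target, obstacles):
    """Obstacles, given as (center, radius) pairs, crossed by the
    collision detection line segment between master and target."""
    return sum(segment_hits_circle(master, target, c, r) for c, r in obstacles)
```

The resulting count is the input to the brightness (or transparency) mapping described in the surrounding text.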
Step 1006, displaying the target scout object with the brightness corresponding to the number of obstacles.
After the number of obstacles is determined, the brightness for displaying the target scout object is determined according to the mapping relationship between the number of obstacles and brightness data, and the target scout object is displayed with the brightness corresponding to the number of obstacles.
Illustratively, when the slave virtual object scans the virtual environment within the scanning range, the device may, according to the scout frequency, periodically detect a circular region centered on the slave virtual object to determine whether a target scout object exists; if a target scout object is detected, a negative state effect is added to it. As shown in fig. 12, the slave virtual object 1202 scans a scanning range 1204, and when it is determined that a target scout object 1203 exists within the scanning range 1204, the device modifies the skin material corresponding to the target scout object 1203 and adds a stroke effect.
In summary, according to the reconnaissance method based on the virtual environment provided in the embodiment of the present application, the master virtual object carrying the slave virtual object may reconnaissance the target reconnaissance object in the virtual environment through the slave virtual object, when receiving a reconnaissance control operation for the slave virtual object and a control type corresponding to the reconnaissance control operation is a stay reconnaissance type, the slave virtual object reconnaissance the virtual environment in the scanning range at a target position specified by a user, and when the target reconnaissance object is scanned by the slave virtual object, the target reconnaissance object is displayed according to brightness corresponding to the number of obstacles between the target reconnaissance object and the master virtual object, so that functional diversity of the slave virtual object is improved, and meanwhile, adaptability of the slave virtual object in various applications is also improved.
In some embodiments, the reconnaissance function of the slave virtual object not only enables reconnaissance of the target reconnaissance object, but also reconnaissance of the virtual item in the virtual environment, displaying item information.
Illustratively, after the slave virtual object scans a target virtual article within the scanning range, the article information corresponding to the target virtual article is displayed in the virtual environment interface. The target virtual article may be a virtual item, a virtual firearm, or the like in the virtual environment that can be picked up by the master virtual object, and it may be set by the system or designated by the user. For example, when the user wants to acquire the virtual firearm M416, the user sets the virtual firearm M416 as the scout target of the slave virtual object when performing the scout control operation; when the slave virtual object scans the virtual firearm M416 within the scanning range, the virtual firearm M416 is displayed in a highlighted manner in the virtual environment interface, so that the user can learn where the virtual firearm M416 is located.
Illustratively, the target virtual item may also be a virtual item fixedly arranged in the virtual environment, for example, a virtual door, a virtual supply box, etc. in the virtual environment. The target virtual article can be scanned by the virtual object, the state corresponding to the target virtual article is determined, if the target virtual article corresponds to the used state, the target virtual article is displayed in a highlighted mode in the virtual environment interface, for example, when a virtual door scanned from the virtual object to the scanning range is in an open state, the virtual door is displayed in a highlighted mode, and when the virtual supply box is scanned and picked up, the virtual supply box is displayed in a highlighted mode. The target virtual article is displayed according to the using state of the target virtual article, so that richer information in a virtual environment can be provided for a user, and the target virtual article is warned.
Referring to fig. 13, a flowchart of a slave virtual object display method provided by an exemplary embodiment of the present application is shown, where a home relationship exists between a slave virtual object and a master virtual object, and the method includes:
step 1301, displaying the target prop.
The target prop is used for establishing a corresponding relationship between the master virtual object and the slave virtual object. Illustratively, the target prop may be set in the virtual environment, where the master virtual object can obtain it by picking it up; alternatively, the target prop may be one that the master virtual object equips or uses through a prop interface before entering the virtual environment.
In this embodiment, the target prop is described, by way of example, as being set in the virtual environment, where it is refreshed for pickup. As shown in fig. 14, in the virtual environment interface 1400, when the user moves the cursor to the target prop 1401, a prop introduction 1402 corresponding to the target prop 1401 and an operation prompt message 1403 are displayed. The user can trigger the shortcut key E to control the main virtual object to pick up the target prop.
Step 1302, receiving a usage operation of the target prop by the master virtual object.
Illustratively, the user can use the target prop through a virtual backpack, a virtual prop bar, or the like. The slave virtual object may be summoned by directly using the target prop, or the summoning may be realized only when the use condition of the target prop is met. In the embodiment of the application, the use of the target prop is subject to a use condition. In one example, the target prop corresponds to an energy bar; when the master virtual object has accumulated sufficient energy in the virtual environment, the master virtual object may summon the slave virtual object by using the target prop. For example, a certain amount of energy is accumulated each time the master virtual object kills a virtual object.
And step 1303, in response to the main virtual object meeting the use condition of the target prop, attributing the auxiliary virtual object to the main virtual object.
And when the main virtual object is determined to meet the use condition of the target prop, establishing an attribution relationship between the main virtual object and the auxiliary virtual object.
In one example, as shown in fig. 15, in a virtual environment screen 1510, after the master virtual object 1501 uses the target prop and finishes killing a target enemy virtual object 1502, it is determined that the master virtual object 1501 satisfies the use condition of the target prop, and an animation of the master virtual object 1501 obtaining the slave virtual object is then displayed in a virtual environment screen 1520.
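Steps 1301-1303 (the energy-bar use condition and the resulting home relationship) can be sketched as follows. The class, the 100-energy threshold, and the 40-energy-per-kill value are assumptions for illustration only:

```python
class MasterVirtualObject:
    """Minimal sketch of the energy-bar use condition (steps 1301-1303).
    The threshold and per-kill energy values are assumed, not from the patent."""
    ENERGY_THRESHOLD = 100
    KILL_ENERGY = 40

    def __init__(self):
        self.energy = 0
        self.slave = None   # home relationship not yet established

    def on_kill(self):
        # Killing a virtual object accumulates a certain amount of energy.
        self.energy += self.KILL_ENERGY

    def use_target_prop(self) -> bool:
        # The slave virtual object is attributed to the master only
        # when the use condition of the target prop is satisfied.
        if self.energy >= self.ENERGY_THRESHOLD:
            self.slave = "slave_virtual_object"
            return True
        return False
```

In this sketch, using the prop before enough kills fails, while using it after the energy bar fills establishes the home relationship.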
In step 1304, a call operation from the master virtual object to the slave virtual object is received.
The slave virtual object may be summoned when the master virtual object needs the slave virtual object to scout the surrounding virtual environment. Illustratively, the first device may receive the call operation by receiving a shortcut key operation signal, or by receiving a touch signal on a control.
Step 1305, in response to determining that the master virtual object owns the slave virtual object, displaying the slave virtual object.
The first device determines, based on the call operation, that the master virtual object owns the slave virtual object. Illustratively, the call instruction corresponding to the call operation carries a first object identifier of the slave virtual object to be called and a second object identifier of the master virtual object issuing the instruction. The first device sends the call instruction to the server; the server determines the home relationship between the master virtual object and the slave virtual object according to the first object identifier and the second object identifier, thereby determining that the master virtual object performing the call operation owns the slave virtual object, and then sends instruction information for displaying the slave virtual object to the first device. The first device displays the slave virtual object according to the instruction information.
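The server-side ownership check in step 1305 can be sketched as a lookup keyed by the two object identifiers. The dict-based registry and the function name are illustrative assumptions, not the patent's protocol:

```python
# Hypothetical server-side home-relationship registry:
# maps a master's object identifier to the slave it owns.
HOME_RELATIONS = {"master_42": "slave_7"}

def handle_call_instruction(first_object_id: str, second_object_id: str) -> bool:
    """Return True (i.e. send the display instruction info back to the
    first device) only when the master issuing the call owns the slave.
    first_object_id: identifier of the slave virtual object to be called.
    second_object_id: identifier of the master virtual object issuing the call."""
    return HOME_RELATIONS.get(second_object_id) == first_object_id
```

A call for a slave the master does not own would simply yield no display instruction.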
In summary, in the slave virtual object display method provided in the embodiment of the present application, the home relationship between the master virtual object and the slave virtual object is established through the target prop. When this home relationship exists, the master virtual object can call the slave virtual object to implement the reconnaissance function, which improves the functional diversity of the slave virtual object and its adaptability across various application programs.
Referring to fig. 16, a block diagram of a virtual environment-based spying apparatus according to an exemplary embodiment of the present application is shown, where the apparatus includes:
a display module 1610 configured to display a slave virtual object, where the slave virtual object is located in a virtual environment, and the virtual environment further includes a master virtual object, and the slave virtual object is a virtual object having an attribution relationship with the master virtual object;
a receiving module 1620 configured to receive a scout control operation for the slave virtual object, the scout control operation being configured to instruct the slave virtual object to scout a target scout object in the virtual environment;
the display module 1610 is further configured to display, based on the scout control operation, the scanning process of the slave virtual object scouting in the virtual environment in a target scout manner;
the display module 1610 is further configured to, in response to the target scout object being scanned by the slave virtual object, display scout information for providing the master virtual object with information of the target scout object in the virtual environment, where the scout information includes the target scout object displayed with a target highlighting effect, and the target highlighting effect has the ability to indicate the target scout object through an obstacle.
In an alternative embodiment, the scout control operation corresponds to a control type, the control type comprising a follow scout type;
the display module 1610 is further configured to, in response to determining that the control type of the scout control operation corresponds to the following scout type, display a scanning process of the slave virtual object performing scout in the virtual environment in a first scout mode, where the slave virtual object moves along with the master virtual object in the first scout mode.
In an alternative embodiment, the scout control operation corresponds to a control type, the control type comprising a stay scout type;
the display module 1610 is further configured to, in response to determining that the control type of the reconnaissance control operation corresponds to the stay reconnaissance type, display a scanning process in which the slave virtual object performs reconnaissance in the virtual environment in a second reconnaissance manner, where the slave virtual object stays at the target position in the second reconnaissance manner.
In an alternative embodiment, the reconnaissance control operation corresponds to a control type, the control type comprising a planned route reconnaissance type;
the display module 1610 is further configured to, in response to determining that the control type of the reconnaissance control operation corresponds to the planned route reconnaissance type, display a scanning process of the slave virtual object in a third reconnaissance mode for reconnaissance in the virtual environment, where the slave virtual object moves in the virtual environment according to a target route in the third reconnaissance mode.
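The three control types above (follow, stay, planned route) map one-to-one onto the first, second, and third scout modes, which can be sketched as a simple dispatch. The enum and function names are illustrative assumptions:

```python
from enum import Enum, auto

class ScoutType(Enum):
    FOLLOW = auto()         # first scout mode: slave moves along with the master
    STAY = auto()           # second scout mode: slave stays at a target position
    PLANNED_ROUTE = auto()  # third scout mode: slave moves along a target route

def select_scout_mode(control_type: ScoutType) -> str:
    # Illustrative mapping from the control type of the scout control
    # operation to the scout mode in which the scanning process is displayed.
    return {
        ScoutType.FOLLOW: "first",
        ScoutType.STAY: "second",
        ScoutType.PLANNED_ROUTE: "third",
    }[control_type]
```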
In an optional embodiment, the display module 1610 is further configured to display, in response to the target scout object being scanned by the slave virtual object, direction information of the target scout object, the direction information indicating a direction of the position of the target scout object relative to the position of the master virtual object.
In an alternative embodiment, the display module 1610 is further configured to display the first direction identifier of the target scout object in a map area in a virtual environment interface; or displaying the second direction identification of the target scout object in a virtual environment area in the virtual environment interface.
In an optional embodiment, the display module 1610 is further configured to display position information of the target scout object in response to the target scout object being scanned by the slave virtual object, the position information indicating the position of the target scout object in the virtual environment.
In an alternative embodiment, the display module 1610 is further configured to display the first location identifier of the target scout object in a map area in the virtual environment interface; or displaying the second position identification of the target scout object in a virtual environment area in the virtual environment interface.
In an optional embodiment, the apparatus further comprises:
a determining module (not shown in the figures), configured to determine a target distance of the target scout object in response to the target scout object being scanned by the slave virtual object, the target distance indicating the distance between the position of the target scout object and the position of the master virtual object;
the display module 1610 is further configured to display, in response to the target distance reaching a first preset distance, direction information of the target scout object relative to the main virtual object, where the direction information is used to indicate a direction of the position of the target scout object relative to the position of the main virtual object; or, in response to the target distance being less than the first preset distance, displaying position information of the target scout, the position information being used to indicate the position of the target scout in the virtual environment.
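The distance-dependent choice between direction information and position information can be sketched as below. The 50-unit threshold and function name are assumptions; the patent only specifies a "first preset distance":

```python
import math

FIRST_PRESET_DISTANCE = 50.0  # assumed threshold, in virtual-world units

def scout_info_kind(master_pos, target_pos) -> str:
    """Show direction information when the scanned target is at or beyond
    the first preset distance, and position information when it is closer."""
    distance = math.dist(master_pos, target_pos)
    return "direction" if distance >= FIRST_PRESET_DISTANCE else "position"
```

A distant target thus only yields a bearing, while a nearby one is pinpointed in the map or environment area.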
In an optional embodiment, the display module 1610 is further configured to display the scout information in response to the scan ray of the slave virtual object colliding with the target scout object, whereby it is determined that the target scout object is scanned by the slave virtual object;
or,
in response to the target scout object being within the scanning range of the slave virtual object, determine that the target scout object is scanned by the slave virtual object and display the scout information.
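The two detection variants (ray collision versus scanning range) can be sketched as follows; the function names and the 2-D geometry are illustrative assumptions rather than an engine API:

```python
def scanned_by_ray(ray_hits, target_id) -> bool:
    # Variant 1: the scan ray emitted by the slave virtual object
    # collides with the target scout object (here, a set of hit ids).
    return target_id in ray_hits

def scanned_by_range(slave_pos, target_pos, scan_radius) -> bool:
    # Variant 2: the target scout object lies within the slave virtual
    # object's scanning range (a circle of radius scan_radius, in 2-D).
    dx = target_pos[0] - slave_pos[0]
    dy = target_pos[1] - slave_pos[1]
    return dx * dx + dy * dy <= scan_radius * scan_radius
```

Either predicate returning true would trigger the display of the scout information.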
In an optional embodiment, the apparatus further comprises:
a rendering module (not shown in the figure), configured to acquire, in response to the target scout object being scanned by the slave virtual object, a material identifier corresponding to the target highlighting effect;
the rendering module is further used for obtaining a corresponding effect rendering material according to the material identifier;
the rendering module is further configured to mount the effect rendering material on a virtual model corresponding to the target scout object;
the display module 1610 is further configured to display the target scout object with the target highlighting effect.
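The rendering pipeline above (identifier → effect rendering material → mount on the virtual model) can be sketched as below. The material registry, class, and function names are illustrative assumptions, not a real engine API:

```python
# Hypothetical material registry: maps a material identifier for the
# target highlighting effect to the effect rendering material itself.
MATERIAL_LIBRARY = {"highlight_through_walls": "xray_outline_material"}

class VirtualModel:
    """Minimal stand-in for the virtual model of the target scout object."""
    def __init__(self, name):
        self.name = name
        self.mounted_materials = []

def apply_target_highlight(model: VirtualModel, material_id: str) -> VirtualModel:
    material = MATERIAL_LIBRARY[material_id]  # obtain the effect rendering material
    model.mounted_materials.append(material)  # mount it on the virtual model
    return model
```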
In an alternative embodiment, the target highlighting effect corresponds to transparency;
the determination module is further configured to determine the number of obstacles between the target scout object and the main virtual object;
the display module 1610 is further configured to display the target scout object under the target highlighting effect with a transparency corresponding to the number of obstacles.
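One way to realize the obstacle-dependent transparency is a stepped mapping, sketched below. The 0.2 step and 0.8 cap are assumed values; the patent only states that the transparency corresponds to the number of obstacles:

```python
def highlight_transparency(obstacle_count: int) -> float:
    """More obstacles between the target scout object and the master
    virtual object -> more transparent highlight, capped so the target
    never vanishes entirely. Step and cap values are assumptions."""
    return min(0.2 * obstacle_count, 0.8)
```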
In an optional embodiment, the receiving module 1620 is further configured to receive a call operation of the slave virtual object by the master virtual object;
the display module 1610 is further configured to display the slave virtual object in response to determining that the master virtual object owns the slave virtual object.
In an optional embodiment, the display module 1610 is further configured to display a target prop, where the target prop is used to establish a correspondence between the master virtual object and the slave virtual object;
the receiving module 1620 is further configured to receive a usage operation of the target prop by the master virtual object;
the determining module is further configured to attribute the slave virtual object to the master virtual object in response to the master virtual object satisfying the usage condition of the target prop.
In summary, in the virtual-environment-based scout apparatus provided in the embodiment of the present application, a master virtual object carrying a slave virtual object can scout a target scout object in the virtual environment through the slave virtual object. When a scout control operation for the slave virtual object is received, the scanning process of the slave virtual object scouting in the virtual environment is displayed in a target scout manner; when the target scout object is scanned by the slave virtual object, scout information corresponding to the target scout object is displayed, the scout information including the target scout object displayed with a target highlighting effect that can indicate the target scout object through an obstacle. Implementing the scout function through the slave virtual object improves the functional diversity of the slave virtual object and its adaptability across various application programs.
It should be noted that the virtual-environment-based reconnaissance apparatus provided in the foregoing embodiment is illustrated only by way of the division of the above functional modules. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the reconnaissance apparatus provided by the above embodiment and the embodiments of the virtual-environment-based reconnaissance method belong to the same concept; its specific implementation process is detailed in the method embodiments and is not described herein again.
Fig. 17 shows a block diagram of a terminal 1700 according to an exemplary embodiment of the present application. The terminal 1700 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1700 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1700 includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in an awake state; the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1702 is used to store at least one instruction for execution by processor 1701 to implement the virtual environment-based reconnaissance methods provided by the method embodiments of the present application.
In some embodiments, terminal 1700 may also optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuit 1704, display screen 1705, camera assembly 1706, audio circuit 1707, positioning assembly 1708, and power supply 1709.
The peripheral interface 1703 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, memory 1702, and peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on separate chips or circuit boards, which are not limited in this embodiment.
The Radio Frequency circuit 1704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1704 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1704 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1704 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, the display screen 1705 also has the ability to capture touch signals on or above the surface of the display screen 1705. The touch signal may be input as a control signal to the processor 1701 for processing. At this point, the display 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1705 may be one, providing the front panel of terminal 1700; in other embodiments, display 1705 may be at least two, each disposed on a different surface of terminal 1700 or in a folded design; in still other embodiments, display 1705 may be a flexible display disposed on a curved surface or a folded surface of terminal 1700. Even further, the display screen 1705 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display screen 1705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1706 is used to capture images or video. Optionally, camera assembly 1706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, inputting the electric signals into the processor 1701 for processing, or inputting the electric signals into the radio frequency circuit 1704 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1707 may also include a headphone jack.
The positioning component 1708 is used to locate the current geographic location of the terminal 1700 to implement navigation or LBS (Location Based Service). The positioning component 1708 may be based on the GPS (Global Positioning System) of the United States, the Beidou System of China, or the Galileo System of the European Union.
Power supply 1709 is used to power the various components in terminal 1700. The power supply 1709 may be ac, dc, disposable or rechargeable. When the power supply 1709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1700 also includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: acceleration sensor 1711, gyro sensor 1712, pressure sensor 1713, fingerprint sensor 1714, optical sensor 1715, and proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1700. For example, the acceleration sensor 1711 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1701 may control the touch display screen 1705 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1711. The acceleration sensor 1711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1712 may detect a body direction and a rotation angle of the terminal 1700, and the gyro sensor 1712 may cooperate with the acceleration sensor 1711 to acquire a 3D motion of the user on the terminal 1700. The processor 1701 may perform the following functions based on the data collected by the gyro sensor 1712: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1713 may be disposed on the side frames of terminal 1700 and/or underlying touch display 1705. When the pressure sensor 1713 is disposed on the side frame of the terminal 1700, the user's grip signal to the terminal 1700 can be detected, and the processor 1701 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 1713. When the pressure sensor 1713 is disposed at the lower layer of the touch display screen 1705, the processor 1701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1714 is configured to capture the user's fingerprint, based on which the processor 1701 (or the fingerprint sensor 1714 itself) identifies the user. Upon identifying that the user's identity is trusted, the processor 1701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 1714 may be disposed on the front, back, or side of terminal 1700. When a physical key or vendor logo is provided on terminal 1700, the fingerprint sensor 1714 may be integrated with the physical key or vendor logo.
The optical sensor 1715 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the touch display screen 1705 based on the ambient light intensity collected by the optical sensor 1715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1705 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1705 is turned down. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1715.
The proximity sensor 1716, also known as a distance sensor, is typically disposed on the front panel of terminal 1700 and is used to gather the distance between the user and the front face of terminal 1700. In one embodiment, when the proximity sensor 1716 detects that the distance between the user and the front surface of terminal 1700 gradually decreases, the processor 1701 controls the touch display 1705 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1716 detects that this distance gradually increases, the processor 1701 controls the touch display 1705 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the architecture shown in fig. 17 is not intended to be limiting with respect to terminal 1700, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, which may be the computer-readable storage medium contained in the memory of the above embodiments, or a separate computer-readable storage medium not incorporated in the terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by the processor to implement the virtual-environment-based scout method of any of the above embodiments.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (17)

1. A virtual environment based reconnaissance method, the method comprising:
displaying a slave virtual object, wherein the slave virtual object is positioned in a virtual environment, the virtual environment also comprises a master virtual object, and the slave virtual object is a virtual object having a home relationship with the master virtual object;
receiving a scout control operation for the slave virtual object, the scout control operation to instruct the slave virtual object to scout a target scout object in the virtual environment;
displaying the scanning process of scouting in the virtual environment in a target scouting mode from the virtual object based on the scout control operation;
displaying scout information in response to the target scout being scanned by the slave virtual object, the scout information for providing the master virtual object with information of the target scout in the virtual environment, the scout information including the target scout displayed with a target highlighting effect, the target highlighting effect having the ability to indicate the target scout through an obstacle.
2. The method of claim 1, wherein the scout control operation has a control type, the control type comprising a follow scout type;
the scanning process for displaying the scout from the virtual object in the virtual environment in a target scout manner based on the scout control operation comprises the following steps:
and when the control type of the scout control operation is determined to be the following scout type, displaying a scanning process of the slave virtual object for scout in the virtual environment in a first scout mode, wherein the slave virtual object moves along with the master virtual object in the first scout mode.
3. The method of claim 1, wherein the scout control operation has a control type, the control type comprising a dwell scout type;
wherein the displaying, based on the scout control operation, a scanning process of the slave virtual object scouting in the virtual environment in a target scout manner comprises:
when the control type of the scout control operation is determined to be the dwell scout type, displaying a scanning process of the slave virtual object scouting in the virtual environment in a second scout manner, wherein the slave virtual object stops at a target position in the second scout manner.
4. The method of claim 1, wherein the scout control operation has a control type, the control type comprising a planned route scout type;
wherein the displaying, based on the scout control operation, a scanning process of the slave virtual object scouting in the virtual environment in a target scout manner comprises:
when the control type of the scout control operation is determined to be the planned route scout type, displaying a scanning process of the slave virtual object scouting in the virtual environment in a third scout manner, wherein the slave virtual object moves in the virtual environment along a target route in the third scout manner.
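For illustration only, claims 2 to 4 each map a control type of the scout control operation to one of three scout manners. A minimal dispatch sketch of that mapping, with all names hypothetical (the claims do not prescribe any implementation or API):

```python
# Hypothetical mapping from a scout control operation's control type to the
# scout manner used for the scanning process (claims 2-4). Names are illustrative.
SCOUT_MODES = {
    "follow": "first scout manner",          # slave moves along with the master
    "dwell": "second scout manner",          # slave stops at a target position
    "planned_route": "third scout manner",   # slave moves along a target route
}

def select_scout_mode(control_type: str) -> str:
    """Return the scout manner selected by the given control type."""
    try:
        return SCOUT_MODES[control_type]
    except KeyError:
        raise ValueError(f"unknown control type: {control_type}") from None
```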
5. The method of any one of claims 1 to 4, wherein said displaying scout information in response to said target scout object being scanned by said slave virtual object comprises:
in response to the target scout object being scanned by the slave virtual object, displaying direction information of the target scout object, the direction information indicating the direction of the position of the target scout object relative to the position of the master virtual object.
6. The method of claim 5, wherein the displaying direction information of the target scout object comprises:
displaying a first direction identification of the target scout object in a map area in a virtual environment interface; or, displaying a second direction identification of the target scout object in a virtual environment area in the virtual environment interface.
7. The method of any one of claims 1 to 4, wherein said displaying scout information in response to said target scout object being scanned by said slave virtual object comprises:
in response to the target scout object being scanned by the slave virtual object, displaying position information of the target scout object, the position information indicating the position of the target scout object in the virtual environment.
8. The method of claim 7, wherein the displaying the position information of the target scout object comprises:
displaying a first position identification of the target scout object in a map area in a virtual environment interface; or, displaying a second position identification of the target scout object in a virtual environment area in the virtual environment interface.
9. The method of any one of claims 1 to 4, wherein said displaying scout information in response to said target scout object being scanned by said slave virtual object comprises:
in response to the target scout object being scanned by the slave virtual object, determining a target distance of the target scout object, the target distance indicating the distance between the position of the target scout object and the position of the master virtual object;
in response to the target distance reaching a first preset distance, displaying direction information of the target scout object relative to the master virtual object, the direction information indicating the direction of the position of the target scout object relative to the position of the master virtual object; or, in response to the target distance being less than the first preset distance, displaying position information of the target scout object, the position information being used to indicate the position of the target scout object in the virtual environment.
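For illustration only, a minimal sketch of the distance-threshold logic recited in claim 9, using a 2-D Euclidean distance; the function and parameter names are hypothetical, and the claim does not prescribe a distance metric:

```python
import math

def scout_info_kind(target_pos, master_pos, first_preset_distance):
    """Claim 9 sketch: show direction information when the target distance
    reaches the first preset distance, and position information when the
    target is closer. Positions are (x, y) tuples; names are illustrative."""
    dx = target_pos[0] - master_pos[0]
    dy = target_pos[1] - master_pos[1]
    target_distance = math.hypot(dx, dy)
    if target_distance >= first_preset_distance:
        return "direction"  # direction of the target relative to the master
    return "position"       # exact position in the virtual environment
```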
10. The method of any one of claims 1 to 4, wherein said displaying scout information in response to said target scout object being scanned by said slave virtual object comprises:
in response to a scanning ray of the slave virtual object colliding with the target scout object, determining that the target scout object is scanned by the slave virtual object, and displaying the scout information;
or,
in response to the target scout object being within a scanning range of the slave virtual object, determining that the target scout object is scanned by the slave virtual object, and displaying the scout information.
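For illustration only, a sketch of the second alternative in claim 10, treating the scanning range as a circle around the slave virtual object (the ray-collision alternative would typically be delegated to a game engine's physics query). All names are hypothetical:

```python
def scanned_by_range(slave_pos, target_pos, scan_radius):
    """Claim 10 (second alternative) sketch: the target scout object counts as
    scanned when it lies within the slave object's circular scanning range.
    Uses squared distances to avoid an unnecessary square root."""
    dx = target_pos[0] - slave_pos[0]
    dy = target_pos[1] - slave_pos[1]
    return dx * dx + dy * dy <= scan_radius * scan_radius
```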
11. The method of any one of claims 1 to 4, wherein said displaying scout information in response to said target scout object being scanned by said slave virtual object comprises:
in response to the target scout object being scanned by the slave virtual object, acquiring a material identifier corresponding to the target highlighting effect;
acquiring a corresponding effect rendering material according to the material identifier;
mounting the effect rendering material on a virtual model corresponding to the target scout object;
and displaying the target scout object with the target highlighting effect.
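For illustration only, a sketch of the material lookup and mounting steps in claim 11, using a plain dictionary as a stand-in for a rendering material registry and a virtual model; the registry, material identifier, and field names are hypothetical:

```python
# Hypothetical registry mapping material identifiers to effect rendering
# materials (claim 11). In an engine this would be an asset lookup.
MATERIAL_REGISTRY = {
    "highlight_xray": {"shader": "xray_outline", "color": "red"},
}

def apply_highlight(model: dict, material_id: str) -> dict:
    """Acquire the effect rendering material by its identifier and mount it
    on the virtual model corresponding to the target scout object."""
    material = MATERIAL_REGISTRY[material_id]
    # Return a copy of the model with the highlight material attached.
    return dict(model, highlight_material=material)
```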
12. The method of claim 10, wherein the target highlighting effect corresponds to a transparency;
the displaying the target scout object with the target highlighting effect includes:
determining a number of obstacles between the target scout object and the master virtual object;
and displaying the target scout object with the target highlighting effect at a transparency corresponding to the number of obstacles.
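For illustration only, one way the transparency in claim 12 could correspond to the obstacle count is a linear falloff clamped to a minimum; the linear mapping and all parameter names are assumptions, not recited in the claim:

```python
def highlight_alpha(num_obstacles: int, base_alpha: float = 1.0,
                    step: float = 0.2, floor: float = 0.2) -> float:
    """Claim 12 sketch: derive the highlight's opacity from the number of
    obstacles between the target scout object and the master virtual object.
    Each obstacle reduces opacity by `step`, never below `floor`."""
    return max(floor, base_alpha - step * num_obstacles)
```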
13. The method of any of claims 1 to 4, wherein the displaying a slave virtual object comprises:
receiving a calling operation of the master virtual object to the slave virtual object;
displaying the slave virtual object in response to determining that the master virtual object owns the slave virtual object.
14. The method of claim 11, further comprising:
displaying a target prop, wherein the target prop is used for establishing a correspondence between the master virtual object and the slave virtual object;
receiving a use operation of the target prop by the master virtual object;
attributing the slave virtual object to the master virtual object in response to the master virtual object satisfying a use condition of the target prop.
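For illustration only, a sketch of the attribution step in claim 14, where using the target prop binds the slave virtual object to the master when a use condition is met; the level-based condition and all field names are hypothetical, since the claim leaves the condition unspecified:

```python
def use_target_prop(master: dict, slave: dict, required_level: int = 1) -> bool:
    """Claim 14 sketch: attribute the slave virtual object to the master
    virtual object when the master satisfies the prop's use condition.
    Returns True when attribution succeeds."""
    if master.get("level", 0) >= required_level:  # hypothetical use condition
        slave["owner"] = master["id"]             # establish attribution
        return True
    return False
```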
15. A virtual environment based reconnaissance apparatus, the apparatus comprising:
the display module is used for displaying a slave virtual object, the slave virtual object is positioned in a virtual environment, the virtual environment also comprises a master virtual object, and the slave virtual object is a virtual object which has an attribution relationship with the master virtual object;
a receiving module for receiving a scout control operation for the slave virtual object, the scout control operation for instructing the slave virtual object to scout a target scout object in the virtual environment;
the display module is further used for displaying, based on the scout control operation, a scanning process of the slave virtual object scouting in the virtual environment in a target scout manner;
the display module is further used for displaying scout information in response to the target scout object being scanned by the slave virtual object, the scout information being used to provide the master virtual object with information of the target scout object in the virtual environment, the scout information comprising the target scout object displayed with a target highlighting effect, the target highlighting effect being capable of indicating the target scout object through an obstacle.
16. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the virtual environment based reconnaissance method of any one of claims 1 to 14.
17. A computer-readable storage medium having stored therein at least one program code, the program code being loaded and executed by a processor to implement the virtual environment-based reconnaissance method of any of claims 1 to 14.
CN202110609266.6A 2021-06-01 2021-06-01 Reconnaissance method, device, equipment and medium based on virtual environment Active CN113318443B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110609266.6A CN113318443B (en) 2021-06-01 2021-06-01 Reconnaissance method, device, equipment and medium based on virtual environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110609266.6A CN113318443B (en) 2021-06-01 2021-06-01 Reconnaissance method, device, equipment and medium based on virtual environment

Publications (2)

Publication Number Publication Date
CN113318443A true CN113318443A (en) 2021-08-31
CN113318443B CN113318443B (en) 2023-03-17

Family

ID=77423347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110609266.6A Active CN113318443B (en) 2021-06-01 2021-06-01 Reconnaissance method, device, equipment and medium based on virtual environment

Country Status (1)

Country Link
CN (1) CN113318443B (en)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
木槿CN: "《bilibili》", 12 February 2021 *

Also Published As

Publication number Publication date
CN113318443B (en) 2023-03-17

Similar Documents

Publication Publication Date Title
CN109876438B (en) User interface display method, device, equipment and storage medium
CN109529319B (en) Display method and device of interface control and storage medium
CN111589133B (en) Virtual object control method, device, equipment and storage medium
CN111589124B (en) Virtual object control method, device, terminal and storage medium
CN112494955B (en) Skill releasing method, device, terminal and storage medium for virtual object
CN111589127B (en) Control method, device and equipment of virtual role and storage medium
CN110755841A (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN109634413B (en) Method, device and storage medium for observing virtual environment
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN112691370B (en) Method, device, equipment and storage medium for displaying voting result in virtual game
CN110496392B (en) Virtual object control method, device, terminal and storage medium
CN112402950A (en) Using method, device, equipment and storage medium of virtual prop
CN110448908B (en) Method, device and equipment for applying sighting telescope in virtual environment and storage medium
CN111921197A (en) Method, device, terminal and storage medium for displaying game playback picture
CN112704876B (en) Method, device and equipment for selecting virtual object interaction mode and storage medium
CN111603770A (en) Virtual environment picture display method, device, equipment and medium
CN113398571A (en) Virtual item switching method, device, terminal and storage medium
CN109806583B (en) User interface display method, device, equipment and system
CN111589141A (en) Virtual environment picture display method, device, equipment and medium
CN112402962A (en) Signal display method, device, equipment and medium based on virtual environment
CN112569600A (en) Path information transmission method in virtual scene, computer device and storage medium
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111672104A (en) Virtual scene display method, device, terminal and storage medium
CN112569607A (en) Display method, device, equipment and medium for pre-purchased prop
CN112221142A (en) Control method and device of virtual prop, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40050641

Country of ref document: HK

GR01 Patent grant