CN110448907B - Method and device for displaying virtual elements in virtual environment and readable storage medium - Google Patents


Info

Publication number
CN110448907B
CN110448907B (application CN201910760232.XA)
Authority
CN
China
Prior art keywords
virtual
environment
observation
specified
representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910760232.XA
Other languages
Chinese (zh)
Other versions
CN110448907A (en)
Inventor
杨智翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910760232.XA
Publication of CN110448907A
Application granted
Publication of CN110448907B

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 — Controlling the output signals based on the game progress
    • A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 — Changing parameters of virtual cameras
    • A63F13/5252 — Changing parameters of virtual cameras using two or more virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes room or displaying a rear-mirror view in a car-driving game

Abstract

The application discloses a method and device for displaying virtual elements in a virtual environment, and a readable storage medium, relating to the field of virtual environments. The method comprises the following steps: displaying a first environment interface, where the first environment interface comprises a first picture observed in a common observation mode and the virtual environment comprises a specified virtual element; receiving an observation skill triggering operation; switching the common observation mode to a detection observation mode for the specified virtual element; and displaying a second environment interface, where the second environment interface comprises a second picture observing the virtual environment in the detection observation mode. By receiving the observation skill triggering operation, the common observation mode for observing the virtual environment is switched to the detection observation mode; that is, the specified virtual element is switched to a highlighted representation according to the observation skill triggering operation, and its representation is strengthened, thereby assisting the process of virtual battle and improving the efficiency of and capability for observing the virtual environment during virtual battle.

Description

Method and device for displaying virtual elements in virtual environment and readable storage medium
Technical Field
The embodiment of the application relates to the field of virtual environments, in particular to a method and a device for displaying virtual elements in a virtual environment and a readable storage medium.
Background
A Battle Royale (BR) game is a game in which virtual objects are placed in a virtual environment and a series of escape rules are provided; after a player devises an escape strategy according to the escape rules, the player controls a virtual object in the game to escape.
In the related art, when a player controls a virtual object to escape from a virtual environment, the player may observe the virtual environment from a first person perspective or a third person perspective of the virtual object, where the observation content includes: virtual buildings, other virtual objects, virtual firearms, virtual props, virtual plants, and the like.
However, the above observation methods provide no assistance to the fighting process of a virtual battle, and the observation results are relatively uniform. When observing in these ways, the environment information of the virtual environment that can be expressed is limited, some environment information cannot be reflected at all, and the efficiency of observing the virtual environment is relatively low.
Disclosure of Invention
The embodiments of the application provide a method and device for displaying virtual elements in a virtual environment, and a readable storage medium, which can solve the problems that the observation mode of the virtual environment provides no assistance to the fighting process of a virtual battle, yields relatively uniform observation results, and observes the virtual environment during virtual battle with relatively low efficiency. The technical scheme is as follows:
in one aspect, a method for displaying a virtual element in a virtual environment is provided, the method comprising:
displaying a first environment interface, wherein the first environment interface comprises a first picture for observing a virtual environment by a virtual object in a common observation mode, the virtual environment comprises a specified virtual element, and the specified virtual element is a three-dimensional model or a two-dimensional overlay existing in the virtual environment;
receiving an observation skill triggering operation, wherein the observation skill triggering operation is used for triggering switching of a mode for observing the virtual environment;
switching the common observation mode into a detection observation mode for detecting the specified virtual element according to the observation skill triggering operation, wherein the detection observation mode is a mode of displaying the specified virtual element in a highlighted form in the visual field range of the virtual object;
displaying a second environment interface, wherein the second environment interface includes a second screen in which the virtual object observes the virtual environment in the probe observation manner, and the second screen includes the specified virtual element highlighted in the highlighted form.
In another aspect, there is provided an apparatus for displaying a virtual element in a virtual environment, the apparatus comprising:
the display module is used for displaying a first environment interface, wherein the first environment interface comprises a first picture for observing a virtual environment by a virtual object in a common observation mode, the virtual environment comprises a specified virtual element, and the specified virtual element is a three-dimensional model or a two-dimensional overlay existing in the virtual environment;
the receiving module is used for receiving an observation skill triggering operation, and the observation skill triggering operation is used for triggering switching of a mode of observing the virtual environment;
a switching module, configured to switch the common observation mode to a detection observation mode for detecting the specified virtual element according to the observation skill triggering operation, where the detection observation mode is a mode in which the specified virtual element is displayed in a highlighted form in a visual field of the virtual object;
the display module is further configured to display a second environment interface, where the second environment interface includes a second screen in which the virtual object observes the virtual environment in the probe observation manner, and the second screen includes the specified virtual element highlighted in the highlighted form.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the display method of a virtual element in a virtual environment as provided in the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method for displaying a virtual element in a virtual environment as provided in the embodiments of the present application.
In another aspect, a computer program product is provided, which when run on a computer causes the computer to execute the method for displaying virtual elements in a virtual environment as provided in the embodiments of the present application.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the method comprises the steps of receiving observation skill triggering operation, switching a common observation mode for observing the virtual environment into a detection observation mode, namely, switching a specified virtual element from a non-prominent expression mode to a prominent expression mode according to the observation skill triggering operation, and strengthening the expression of the specified virtual element, so that the process of virtual fight is assisted, the specified virtual element is embodied in the prominent expression mode, the information of the specified virtual element in the virtual environment is emphatically expressed, and the efficiency and the observation capability for observing the virtual environment in the virtual fight are improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a birth point selection process provided by an exemplary embodiment of the present application;
FIG. 2 is a skill presentation diagram of viewing skills provided in an exemplary embodiment of the present application;
fig. 3 is a process diagram of a viewing perspective conversion method provided in an exemplary embodiment of the present application;
FIG. 4 shows a block diagram of an electronic device provided by an exemplary embodiment of the present application;
FIG. 5 illustrates a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a method for displaying virtual elements in a virtual environment, as provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic illustration of a comparison of a prominent representation and a non-prominent representation provided by an exemplary embodiment of the present application;
FIG. 8 is a flowchart of a method for displaying virtual elements in a virtual environment, as provided by another exemplary embodiment of the present application;
FIG. 9 is a flowchart of a method for displaying virtual elements in a virtual environment, as provided by another exemplary embodiment of the present application;
FIG. 10 is a schematic diagram of a specified virtual element displayed in a highlighted form before and after being occluded by an obstruction, according to another exemplary embodiment of the present application;
FIG. 11 is a flowchart of a method for displaying virtual elements in a virtual environment, as provided by another exemplary embodiment of the present application;
FIG. 12 is a flowchart of a method for displaying virtual elements in a virtual environment, as provided by another exemplary embodiment of the present application;
FIG. 13 is a block diagram of an apparatus for displaying virtual elements in a virtual environment, according to an exemplary embodiment of the present application;
FIG. 14 is a block diagram of an apparatus for displaying virtual elements in a virtual environment, according to another exemplary embodiment of the present application;
fig. 15 is a block diagram of a terminal according to another exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
Tactical competition (Battle Royale, BR) game: a game in which virtual characters are placed in a virtual environment and a series of escape rules are provided; after the user devises an escape strategy according to the escape rules, the user controls a virtual character in the game to escape.
Optionally, in this embodiment of the application, the escape rule provided by the BR game at least includes: birth rules, safety zone rules, additional identity rules, observation rules and escape rules.
The five rules are explained separately below:
first, birth rules
The birth rule represents how a virtual character's position is determined after it enters a game session. Optionally, the virtual environment corresponds to a map in which n preset positions correspond to n birth points; the user may select any one of the n birth points when the session starts, and after the session starts, the initial position of the user-controlled virtual character is the position of the selected birth point. Optionally, the candidate birth points offered in each session may be all n birth points or some of them, for example: m birth points are determined from the n birth points as the selectable birth points of the current session, and the user selects any one of the m birth points, where 0 < m < n. Optionally, the area formed by connecting the m birth points is displayed as a bar-shaped area in the map corresponding to the virtual environment; optionally, the birth point at the starting position of the bar-shaped area is closest to a first side of the map, the birth point at the ending position is closest to a second side of the map, and the first and second sides are opposite sides.
Illustratively, 31 positions in the map correspond to 31 birth points, and 8 of them are determined for a single session. Fig. 1 is a schematic diagram of a birth point selection process provided in an exemplary embodiment of the present application; as shown in fig. 1, the map corresponding to the virtual environment shows 31 birth points 110, and for the current session, 8 birth points 120 within a bar-shaped area are randomly determined as the birth points selectable by the user.
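The session-level birth-point selection described above (m selectable points chosen from n preset points, lying along a strip of the map) can be sketched as follows; the function name and the sort-by-x approximation of the bar-shaped area are illustrative assumptions, not part of the patent:

```python
import random

def select_session_birth_points(all_points, m):
    """Pick m of the n preset birth points as this session's candidates.

    `all_points` is a list of (x, y) map coordinates. The patent describes
    a bar-shaped strip of points; here (assumption) we approximate it by
    sorting on x and taking a random contiguous run of m points.
    """
    if not 0 < m <= len(all_points):
        raise ValueError("need 0 < m <= n")
    ordered = sorted(all_points)                     # left-to-right across the map
    start = random.randrange(len(ordered) - m + 1)   # random strip position
    return ordered[start:start + m]
```

With the 31-point, 8-point example from the text, the function returns 8 adjacent points forming one strip of the map.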
It is noted that the bar-shaped area serves to gather the virtual objects participating in the session within it when the session begins, and to steer them along relatively consistent paths toward the safety zone as the safety zone refreshes. The bar-shaped area may also be implemented as an area of another shape, which is not limited in the embodiments of the present application.
Second, safety zone rules
During the escape process, a virtual character is affected by environmental factors in the virtual environment and needs to move into the safety zone, lest the environmental factors in the danger zone gradually reduce its health until it is eliminated. Optionally, the safety zone may refresh according to session progress, according to the user's use of game props, or according to the user's use of skills, with each refresh characterized by a refresh rate, a refresh range, a refresh interval duration, and the like. Optionally, during a refresh, the safety zone shrinks gradually from a larger first area to a smaller second area, where the second area is a sub-area of the first; that is, a refresh gradually contracts the safety zone from the edge of the first area toward a second area determined inside it. The refresh rate indicates how long the first area takes to shrink to the second area; this duration may be fixed, or may be lengthened or shortened by the user's use of props or skills. The refresh range represents the extent of the refreshed second area in the virtual environment; it may be preset, or may be increased or decreased by the user's use of props or skills. The refresh interval duration represents the interval between two adjacent safety zone refresh events; it may be fixed, or may be lengthened or shortened by the user's use of props or skills.
The first area may be a regularly or irregularly shaped region, and the second area may be a regularly or irregularly shaped region within the first. Optionally, the map corresponding to the virtual environment divides the virtual environment into squares of a preset size, for example: each square corresponds to a 100 × 100 area of the virtual environment, and the safety zone may refresh in units of squares, for example: the first area occupies 6400 connected squares with an irregular outline, and the second area occupies 3800 connected squares among those 6400.
Optionally, when determining the safety zones, an area of preset size and shape in the virtual environment is first determined as the first safety zone, i.e. the one obtained by the final refresh; a second safety zone surrounding the first is generated on its basis, a third safety zone surrounding the second on the basis of the second, and so on, so that each refresh shrinks from one safety zone to the next according to the total number of refreshes. For example, if the safety zone needs to be refreshed 4 times, the first refresh shrinks the maximum virtual environment range to the fourth safety zone, the second refresh shrinks the fourth safety zone to the third, the third refresh shrinks the third safety zone to the second, and the fourth refresh shrinks the second safety zone to the first.
Taking the first refresh, performed within the maximum virtual environment range, as an example: the maximum range is a square, at least one refresh point is randomly determined on each of its four sides, and during the refresh the boundary contracts gradually inward from the refresh points on each side until the fourth safety zone is obtained.
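The nested refresh schedule described above, where each refresh shrinks one zone to the next until the final ("first") safety zone is reached, can be sketched numerically; the constant geometric shrink factor is an assumption chosen for illustration so that the sequence lands exactly on the final zone's size:

```python
def zone_schedule(map_side, refresh_count, final_side):
    """Side lengths of successive safety zones, from the full map down to
    the final safety zone, over `refresh_count` refreshes.

    The patent fixes the final zone first and wraps larger zones around it;
    here (assumption) each refresh multiplies the side by a constant factor
    so that the last entry equals `final_side`.
    """
    factor = (final_side / map_side) ** (1.0 / refresh_count)
    sides = [map_side]
    for _ in range(refresh_count):
        sides.append(sides[-1] * factor)
    return sides
```

For an 8000-unit map, 4 refreshes, and a 500-unit final zone, the factor is 0.5 and the schedule halves the side at every refresh.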
Optionally, the virtual character can also create a danger zone with a preset size at any position in the current safety zone through props or skills. Illustratively, after the virtual character a acquires the prop drilling rig, the drilling rig is used at a first position of a current safety zone in the virtual environment, and then a danger zone with a preset shape and a preset size is generated in the safety zone by taking the first position as an initial position.
Third, additional identity rules
Optionally, in this embodiment of the application, the escape rules provided in the BR game further include rules corresponding to additional identities of virtual characters; when escaping in the virtual environment, virtual characters with different additional identities have different skills and see different visual content. Optionally, the additional identity may be randomly assigned by the server to each virtual character in the session before the BR game session starts, or the user may choose among additional identities randomly offered by the server after matching succeeds. Optionally, when the server assigns the additional identities randomly, it distributes them among the virtual characters of the session at a preset ratio, for example: with a preset ratio of 7:2:1 for the first, second, and third additional identities, when 100 virtual characters are matched into a session, 70 virtual characters are assigned the first additional identity, 20 the second, and 10 the third. Optionally, when the additional identity is selected by the user, the server keeps the number of virtual characters per additional identity in the session at the preset ratio, for example: at a 7:2:1 ratio among 100 virtual characters, once 70 virtual characters have selected the first additional identity, the server marks the first additional identity as no longer selectable.
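The server-side 7:2:1 allocation described above can be sketched as follows; the function name and the rule that rounding remainders go to the first identity are assumptions for illustration:

```python
import random

def assign_identities(num_players, ratio=(7, 2, 1)):
    """Randomly assign additional identities at a preset ratio.

    Follows the 7:2:1 example from the text: for 100 players, 70 get
    identity 1, 20 get identity 2, 10 get identity 3. Any rounding
    remainder (assumption) is absorbed by the first identity.
    """
    total = sum(ratio)
    counts = [num_players * r // total for r in ratio]
    counts[0] += num_players - sum(counts)   # absorb rounding remainder
    identities = []
    for identity, count in enumerate(counts, start=1):
        identities.extend([identity] * count)
    random.shuffle(identities)               # random assignment, as in the text
    return identities
```

The same counts also serve the user-selection variant: once `counts[0]` players hold identity 1, it becomes unselectable.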
Optionally, in the BR game provided in this embodiment of the present application, at least three additional identities are provided, and each additional identity corresponds to a corresponding skill set. Illustratively, a first additional identity, a second additional identity and a third additional identity are provided in the BR game, wherein the first additional identity corresponds to a first skill set, the second additional identity corresponds to a second skill set, and the third additional identity corresponds to a third skill set, wherein there may be an intersection between the first skill set, the second skill set and the third skill set, that is, there may be a target skill, belonging to at least two of the first skill set, the second skill set and the third skill set. Optionally, each skill set further includes a respective corresponding independent skill, that is, the first skill set includes a first skill, and the first skill does not belong to the second skill set nor the third skill set; the second skill set comprises a second skill, the second skill neither belonging to the first skill set nor to the third skill set; a third skill is included in the third skill set, the third skill belonging to neither the first skill set nor the second skill set.
Illustratively, the above three additional identities are taken as examples, and in the BR game of the embodiment of the present application, each additional identity corresponds to at least one specific skill. Schematically, the independent skills for each additional identity are separately illustrated:
A first additional identity (destroyer identity), whose corresponding first skill includes: visibility of destroyer treasure boxes in the virtual environment, which provide three props available only to virtual characters with the destroyer identity: 1. an annunciator; 2. a seismograph; 3. a drilling rig. The annunciator is used to obtain an additional equipment reward; illustratively, it summons higher-rated props (e.g., helmet, armor, backpack) and/or weapons with more comprehensive lethal effect. The seismograph is used to change the refresh schedule of the safety zone; illustratively, when a virtual object uses the seismograph between two safety zone refresh events, the interval between them is shortened by a preset duration, for example: the current safety zone would start refreshing to the next one after 20 seconds, but after the seismograph is used it starts refreshing after 10 seconds. The drilling rig is used to create a danger zone of a preset size; illustratively, after virtual character A acquires the drilling rig and uses it at a first position within the current safety zone, a danger zone of preset shape and size is generated in the safety zone with the first position as its origin.
A second additional identity (hunter identity), whose corresponding second skill includes: marking, on the map, the positions of other virtual characters around the virtual character's position by triggering a prop. Optionally, the map divides the virtual environment into squares of a preset size; when the virtual character triggers the prop, the square containing the virtual character is taken as the center square, and the positions of virtual characters within the 9 surrounding squares (including the center square) are marked on the map.
Schematically, fig. 2 is a skill demonstration diagram provided by an exemplary embodiment of the present application. As shown in fig. 2, a map 200 of the virtual environment divides the virtual environment into squares of a preset size, and the target virtual character is currently located in square 210. When the target virtual character triggers the marking skill, the distribution of virtual characters within the 9 squares centered on square 210 (including square 210, shown as the 9 squares in the dashed frame in fig. 2) is determined, and the corresponding coordinates in the map are marked according to each character's location, shown as marks 220 in fig. 2.
Optionally, after the hunter has marked the positions of the virtual characters in the 9 surrounding squares (including its own) on the map, when a kill event occurs between the hunter and a marked virtual character (the killer eliminates the victim), the killer obtains a corresponding gain, such as an increased health regeneration speed or an increased movement speed.
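The hunter's 3x3-square marking described above can be sketched as a grid lookup; the coordinate-to-square mapping and all names are illustrative assumptions:

```python
def mark_nearby_characters(hunter_pos, other_positions, square_size):
    """Return the positions of other characters inside the 3x3 block of
    map squares centered on the hunter's square (names are illustrative)."""
    def cell(pos):
        # Map a world position to its grid square (floor division).
        return (pos[0] // square_size, pos[1] // square_size)

    hx, hy = cell(hunter_pos)
    block = {(hx + dx, hy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    return [pos for pos in other_positions if cell(pos) in block]
```

With 100-unit squares and the hunter at (150, 150), the block covers world coordinates 0-299 on both axes, i.e. the 9 squares around the hunter's own.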
A third additional identity (seeker identity), whose corresponding third skill includes: when the virtual character stays alive in the virtual environment and collects a preset number of target props, it can summon the escape prop; the virtual character that obtains the escape prop, or that character together with its teammates, is determined to have escaped successfully.
Fourth, observation rules
Optionally, in the BR game according to this embodiment of the present application, at least three special observation skills are provided. Before a session starts, the user selects any one of them as the special observation skill with which the user-controlled master virtual character observes the virtual environment in that session. Taking three observation skills as an example, the three special observation skills are explained separately:
first observation skill (hawk overlook): the virtual environment is observed through the first prop at a view angle, namely after the first prop (such as a virtual bird) is triggered, the first prop rises to a preset height in the air of the virtual environment, and the virtual character observes the virtual environment through the first prop at the height. Referring to fig. 3, fig. 3 is a process schematic diagram of an observation perspective conversion method according to an exemplary embodiment of the present application, as shown in fig. 3, a virtual environment is observed at a first perspective of a virtual character 300, an observation screen 310 includes an object 311 and a hill 312 in the virtual environment, when a user triggers a first prop 320 through an external input device (e.g., presses an R key on a keyboard), the virtual environment is observed by the first prop 320 rising to a preset height of the virtual environment, and the observation screen 330 includes the object 311, the hill 312 and an object 313 on the other side of the hill 312.
Second observation skill (footprint tracking): after the virtual character triggers this skill, footprints left by virtual characters in the surrounding area of the virtual environment are displayed in the game interface; a footprint indicates the direction in which a virtual character traveled through the area within a preset duration (note that when a virtual character walks backwards through the area, the direction indicated by its footprints is opposite to its actual direction of travel).
Third observation skill (spar probe): weapon spars within a preset range centered on the virtual character's position are observed by triggering a detection prop. Optionally, a weapon spar is a marker attached to a weapon (it may alternatively be implemented as a weapon skin, a weapon accessory, and the like); optionally, the weapon may be one held by a virtual character, or one lying on the ground of the virtual environment waiting to be picked up. Optionally, when the virtual character triggers this observation skill, the other virtual articles in the virtual environment observed by the virtual character are displayed at a certain gray scale and transparency, while the weapon spars are displayed at a certain brightness through those articles (such as walls, hillsides, and floors). Optionally, when the detection prop is triggered to observe weapon spars, the observation range is a spherical region of preset radius centered on the virtual character's position, and the weapon spars within it are highlighted.
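The spherical observation range of the spar probe reduces to a distance filter over spar positions; the Euclidean distance check and all names are assumptions for illustration:

```python
import math

def spars_in_probe_range(character_pos, spar_positions, radius):
    """Return the weapon spars inside the spherical observation range
    centered on the character; these are the ones rendered highlighted
    through obstacles. Euclidean distance is an assumption."""
    return [p for p in spar_positions
            if math.dist(character_pos, p) <= radius]
```

A renderer would then draw the returned spars at full brightness and everything else at reduced gray scale and transparency, as the text describes.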
Fifth, escape rules
Optionally, the escape rules for different additional identities may be the same or different: a single unified set of escape rules may apply to all additional identities, and an additional set of escape rules may apply to a specific additional identity. Schematically: 1. for all virtual characters, a virtual character that stays alive until the final safety zone and obtains the escape prop, or that character together with its teammates, is determined to have escaped successfully; 2. for virtual characters with the target additional identity (the seeker identity in the additional identity rules above), a virtual character that stays alive in the virtual environment and collects a preset number of target props can summon the escape prop, and that character, or that character together with its teammates, is determined to have escaped successfully. It should be noted that, for escape via mode 2, the total number of target props in the virtual environment is a preset number, which limits how many virtual characters with the target additional identity can escape via mode 2. Optionally, the BR game prompts those virtual characters in real time with the remaining number and/or positions of target props they have not yet obtained; when the number of target props a virtual character holds plus the remaining number does not reach the number required to summon the escape prop, the virtual character switches to a fighting strategy and escapes via mode 1.
The above manner 1 and manner 2 are two schemes that exist in parallel; that is, a virtual character with the target additional identity may escape successfully either through manner 1 or through manner 2. After escaping successfully through manner 1, the virtual character does not need to continue to meet the escape requirement corresponding to manner 2; likewise, after escaping successfully through manner 2, it does not need to continue to meet the escape requirement corresponding to manner 1. Optionally, the target prop in the above manner 2 is a prop that is visible in the virtual environment only to virtual characters with the target additional identity; that is, the target prop is invisible in the virtual environment to virtual characters without the target additional identity.
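The two parallel escape paths above reduce to simple predicates. The following sketch is illustrative only; the function names and parameters are assumptions, not part of the patent:

```python
def can_escape_mode_1(alive: bool, in_final_safety_zone: bool,
                      has_escape_prop: bool) -> bool:
    """Manner 1: stay alive until the final safety zone and obtain the escape prop."""
    return alive and in_final_safety_zone and has_escape_prop


def can_summon_escape_prop(held: int, required: int) -> bool:
    """Manner 2: collecting the preset number of target props summons the escape prop."""
    return held >= required


def should_switch_to_mode_1(held: int, remaining: int, required: int) -> bool:
    """If even collecting every remaining target prop cannot reach the required
    count, manner 2 is no longer achievable and the character falls back to
    manner 1 (a fighting strategy)."""
    return held + remaining < required
```

A character holding 1 of 3 required props with only 1 left in the environment, for instance, would be prompted to switch strategies, since 1 + 1 < 3.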
Optionally, virtual characters that escape in different manners are shown different game results, and optionally, when a virtual character with the target additional identity corresponds to an additional escape rule and escapes successfully according to that rule, an additional display result is shown for that virtual character according to the additional escape rule. Illustratively, after a virtual character escapes through the first escape rule, the result interface of the match shows "Congratulations, you escaped successfully and obtained first place"; for a virtual character that escapes through the first escape rule next while still having a life value in the virtual environment, the result interface shows "Congratulations, you obtained second place". When a virtual character with the target additional identity escapes through the second escape rule and is the first virtual character in the virtual environment to escape through the second escape rule, the result interface of the match shows "Congratulations, you are the first of the target additional identity to escape successfully"; for the second virtual character in the virtual environment to escape through the second escape rule, the result interface shows "Congratulations, you are the second of the target additional identity to escape successfully", and so on.
The terminal in the present application may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on. The terminal is installed with and runs an application program supporting a virtual environment, such as an application program supporting a three-dimensional virtual environment. The application program may be any one of a tactical competitive game, a virtual reality application program, a three-dimensional map program, a military simulation program, a Third-Person Shooting game (TPS), a First-Person Shooting game (FPS), and a Multiplayer Online Battle Arena game (MOBA). Alternatively, the application may be a stand-alone application, such as a stand-alone 3D game program, or may be a network online application.
Fig. 4 shows a block diagram of an electronic device according to an exemplary embodiment of the present application. The electronic device 400 includes: an operating system 420 and application programs 422.
Operating system 420 is the base software that provides applications 422 with secure access to computer hardware.
Application 422 is an application that supports a virtual environment. Optionally, application 422 is an application that supports a three-dimensional virtual environment. The application 422 may be any one of a virtual reality application, a three-dimensional map program, a military simulation program, a TPS game, an FPS game, an MOBA game, and a multi-player gunfight survival game. The application 422 may be a stand-alone application, such as a stand-alone 3D game program.
Fig. 5 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 500 includes: a first device 520, a server 540, and a second device 560.
The first device 520 is installed with and runs an application program supporting a virtual environment. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, a TPS game, an FPS game, an MOBA game, and a multi-player gunfight survival game. The first device 520 is a device used by a first user, who uses the first device 520 to control a first virtual object located in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated persona or an animated persona.
The first device 520 is connected to the server 540 through a wireless network or a wired network.
The server 540 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 540 is used to provide background services for applications that support a three-dimensional virtual environment. Optionally, the server 540 undertakes primary computing work and the first device 520 and the second device 560 undertake secondary computing work; alternatively, the server 540 undertakes secondary computing work and the first device 520 and the second device 560 undertake primary computing work; alternatively, the server 540, the first device 520, and the second device 560 perform cooperative computing by using a distributed computing architecture.
The second device 560 is installed with and runs an application program supporting a virtual environment. The application program can be any one of a BR game, a virtual reality application program, a three-dimensional map program, a military simulation program, an FPS game, an MOBA game, and a multi-player gunfight survival game. The second device 560 is a device used by a second user, who uses the second device 560 to control a second virtual object located in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the second virtual object is a second virtual character, such as a simulated persona or an animated persona.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Optionally, the first virtual character and the second virtual character may belong to the same team or the same organization, have a friend relationship, or have temporary communication rights. Optionally, the first virtual character and the second virtual character may instead belong to different teams, different organizations, or two mutually hostile groups.
Optionally, the applications installed on the first device 520 and the second device 560 are the same, or the applications installed on the two devices are the same type of application for different operating system platforms. The first device 520 may generally refer to one of a plurality of devices, and the second device 560 may generally refer to another of the plurality of devices; this embodiment is illustrated with only the first device 520 and the second device 560. The first device 520 and the second device 560 may be of the same or different device types, the device types including: at least one of a game console, a desktop computer, a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop computer. The following embodiments are illustrated with the device being a desktop computer.
Those skilled in the art will appreciate that the number of devices described above may be greater or fewer. For example, the number of the devices may be only one, or several tens or hundreds, or more. The number and the type of the devices are not limited in the embodiments of the present application.
With reference to the above description of the BR game and the description of the implementation environment, a method for displaying virtual elements in a virtual environment provided in an embodiment of the present application is described, taking application of the method to a terminal as an example. As shown in fig. 6, the method includes:
Step 601, displaying a first environment interface, where the first environment interface includes a first screen in which a virtual object observes a virtual environment in a common observation manner.
Optionally, the virtual environment includes a designated virtual element, and the designated virtual element is a three-dimensional model or a two-dimensional overlay existing in the virtual environment. Optionally, the specified virtual element may also be a special effect displayed in the virtual environment.
Optionally, the virtual environment includes a virtual item, and the specified virtual element is an element attached to the virtual item. Optionally, the common viewing style is used to represent the manner in which a given virtual element is displayed in a non-highlighted representation on a virtual prop.
Alternatively, the first screen may be a screen for observing the virtual environment from a first person perspective of the virtual object, or may be a screen for observing the virtual environment from a third person perspective of the virtual object.
Optionally, the virtual prop comprises at least one of a virtual weapon prop, a virtual accessory prop, a virtual backpack prop, and a virtual medicine prop, where the virtual weapon prop comprises a virtual firearm, a virtual walking stick, a virtual bow, a virtual sword, and the like; the virtual accessory prop comprises a firearm accessory on a virtual firearm, a walking stick accessory on a virtual walking stick, and the like; the virtual backpack prop comprises a primary backpack, a secondary backpack, a tertiary backpack, and the like; and the virtual medicine prop comprises a medicine bottle, pills, and the like.
Optionally, the designated virtual element may be attached to the virtual item in the form of an additional object, may be implemented as the virtual item itself, may also be attached to the virtual item in the form of a map, and may also be attached to the virtual item in the form of a non-highlighted representation that is invisible and a highlighted representation that is visible.
Optionally, the virtual item includes a virtual weapon item, and the specified virtual element is a spar mounted on the virtual weapon item, or the specified virtual element is a chartlet attached to the virtual weapon item; or the virtual prop comprises a virtual backpack prop, and the designated virtual element is a pendant attached to the virtual backpack prop; or the virtual item comprises a virtual accessory item, and the specified virtual element is a chartlet attached to the virtual accessory item; or, the virtual prop includes a virtual medicine prop, and the designated virtual element is a medicine bottle corresponding to the virtual medicine prop.
Optionally, in the non-highlighted form, the specified virtual element on the virtual item may be invisible, and in the highlighted form, the specified virtual element on the virtual item is a visible mark; or, in the non-highlighted form, the specified virtual element on the virtual item is represented as a common decoration, and in the highlighted form, the specified virtual element is a decoration with enhanced display; or, in the non-highlighted form, the specified virtual element on the virtual item appears as an accessory necessary for using the virtual item, such as: when the virtual prop is implemented as a virtual firearm, the specified virtual element is a spar providing power to the virtual firearm; correspondingly, in the highlighted form, the specified virtual element is represented as that necessary accessory with enhanced display, such as: when the virtual prop is implemented as a virtual firearm, the spar on the virtual firearm is highlighted.
Optionally, for the virtual items in the virtual environment, a designated virtual element may be correspondingly configured on each virtual item, or a designated virtual element may be correspondingly configured on a part of the virtual items. Illustratively, for the virtual guns in the virtual environment, the designated virtual elements may be configured on all the virtual guns in the virtual environment, or the designated virtual elements may be configured on the virtual rifle and the virtual sniper gun, and the designated virtual elements are not configured on other virtual guns (such as a virtual shotgun, a virtual pistol, and a virtual submachine gun).
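The partial configuration just described, where only some firearm types carry the designated virtual element, might be represented as a simple lookup table. This is an illustrative sketch; the table contents and names are assumptions, not part of the patent:

```python
# Which virtual firearm types carry a designated virtual element (the spar).
# Per the example above: rifles and sniper rifles are configured with the
# element, while shotguns, pistols, and submachine guns are not.
ELEMENT_CONFIG = {
    "rifle": True,
    "sniper_rifle": True,
    "shotgun": False,
    "pistol": False,
    "submachine_gun": False,
}


def has_designated_element(prop_type: str) -> bool:
    """Return True when this prop type is configured with the element.
    Unknown prop types default to carrying no element."""
    return ELEMENT_CONFIG.get(prop_type, False)
```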
Step 602, receiving an observation skill triggering operation.
Optionally, the viewing skill triggering operation is used to trigger a switch in the manner in which the virtual environment is viewed.
Optionally, the observation skill triggering operation is configured to trigger switching of a normal observation mode in which the virtual object observes the virtual environment to a probe observation mode.
Alternatively, when the terminal is implemented as a mobile terminal with a touch display screen, such as a mobile phone, a tablet computer, or a laptop computer with a touch screen, the observation skill triggering operation may be implemented as a touch operation on the touch display screen or a pressing operation on a physical key of the terminal; when the terminal is implemented as a desktop computer, the observation skill triggering operation may be implemented as an input operation through an external input device such as a keyboard or a mouse. Illustratively, when the terminal is a desktop computer, the observation skill triggering operation is triggered by pressing the letter key R on the keyboard.
Alternatively, the observation skill may be a skill that the virtual object itself possesses; it may be a skill that the virtual object obtains by picking up a virtual item in the virtual environment, such as: the virtual object picks up an observation ball in the virtual environment, and triggering the use of the observation ball serves as the observation skill triggering operation; or it may be a skill that the virtual object selects and acquires from a preset number of skills. Illustratively, before the start of the match, the BR game provides three observation skills to choose from, namely a first observation skill, a second observation skill, and a third observation skill, and the player selects the first observation skill as the skill that the virtual object may trigger during the match.
Step 603, switching the common observation mode to a detection observation mode for detecting the specified virtual element according to the observation skill triggering operation.
Optionally, the detection observation mode is used to indicate the manner in which the specified virtual element is displayed in a highlighted form within the field of view of the virtual object.
Optionally, according to the observation skill triggering operation, the specified virtual element on the virtual item is switched from the non-highlighted form used under the common observation mode to the highlighted form.
Wherein the prominent expression form includes at least one of: the magnified representation, the highlighted representation, the penetrating representation, and the flickering representation. The non-highlighted representation corresponding to the highlighted representation correspondingly includes: a standard-size representation corresponding to the magnified representation, a standard-brightness representation corresponding to the highlighted representation, a standard display-hierarchy representation corresponding to the penetrating representation, and a continuous-display representation corresponding to the flickering representation.
Wherein the magnified expression form is used for displaying the specified virtual element in an enlarged manner; the highlight expression form is used for highlighting the specified virtual element in a preset color; the penetrating expression form is used for representing a form in which light emitted by the specified virtual element penetrates other objects in the virtual environment, or a form in which objects other than the specified virtual element in the virtual environment are displayed with a preset transparency; and the flickering expression form is used for displaying light emitted by the specified virtual element flickering at a preset flickering frequency.
Optionally, in the detection observation mode, the specified virtual elements in all regions of the virtual environment may be highlighted, or only the specified virtual elements in a specific region may be highlighted. Optionally, according to the observation skill triggering operation, a spherical observation range is determined with the virtual object as the center of the sphere and a preset radius as the radius, and the specified virtual elements within the spherical observation range are displayed in the highlighted form.
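The spherical observation range described above amounts to a distance test against the preset radius. The following is an illustrative sketch; the function and parameter names are assumptions, not part of the patent:

```python
import math


def in_spherical_range(center, element_pos, radius):
    """Return True when the element lies inside the spherical observation
    range centered on the virtual object's position."""
    dx, dy, dz = (e - c for e, c in zip(element_pos, center))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius


def elements_to_highlight(center, elements, radius):
    """Select the specified virtual elements that should switch to the
    highlighted form for this observation."""
    return [e for e in elements if in_spherical_range(center, e, radius)]
```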
Step 604, displaying a second environment interface, where the second environment interface includes a second screen for observing the virtual environment by the virtual object in a probing observation manner.
Optionally, the second screen includes a designated virtual element highlighted in a highlighted form.
In summary, in the method for displaying virtual elements in a virtual environment provided by this embodiment, the common observation mode for observing the virtual environment is switched to the detection observation mode upon receiving the observation skill triggering operation; that is, the specified virtual element is switched from the non-highlighted form to the highlighted form according to the observation skill triggering operation. This enhances the presentation of the specified virtual element and emphasizes its information in the virtual environment, thereby assisting the process of virtual battle and improving the efficiency and capability of observing the virtual environment during the virtual battle.
Schematically, taking an example in which the highlighted representation includes a magnified representation and a penetrating representation, fig. 7 is a schematic diagram comparing the highlighted representation and the non-highlighted representation provided in an exemplary embodiment of the present application. As shown in fig. 7, the two sides of a view-blocking object 711 under the non-highlighted representation 710 respectively include a virtual object 712 on one side and, on the other side, a virtual firearm 713, a virtual object 714, and a virtual firearm 715 worn by the virtual object 714. The virtual firearm 713, the virtual object 714, and the virtual firearm 715 are shown in dotted lines to represent that they are invisible to the virtual object 712 in the virtual environment, and a spar 720 (also shown in dotted lines) is mounted on each of the virtual firearm 713 and the virtual firearm 715. When an observation skill triggering operation is received, the virtual environment is observed under the highlighted representation 730. The two sides of the view-blocking object 711 under the highlighted representation 730 likewise include the virtual object 712, the virtual firearm 713, the virtual object 714, and the virtual firearm 715 worn by the virtual object 714, where the virtual firearm 713, the virtual object 714, and the virtual firearm 715 remain shown in dotted lines to represent that they are still invisible to the virtual object 712, while the spar 720 on the virtual firearm 713 and the spar 720 on the virtual firearm 715 are shown in solid lines to represent that each spar 720 is visible to the virtual object 712 in the virtual environment, and each spar 720 is enlarged relative to the spar 720 under the non-highlighted representation 710.
In an alternative embodiment, when the highlighted representation includes the penetrating representation, the highlighting is implemented by changing the display hierarchy of the specified virtual element. Fig. 8 is a flowchart of a method for displaying a virtual element in a virtual environment according to another exemplary embodiment of the present application, described by taking its application to a terminal as an example. As shown in fig. 8, the method includes:
step 801, displaying a first environment interface, where the first environment interface includes a first screen in which a virtual object observes a virtual environment in a normal observation manner.
Optionally, the virtual environment includes a designated virtual element, and the designated virtual element is a three-dimensional model or a two-dimensional overlay existing in the virtual environment.
Optionally, the virtual environment includes a virtual item, and the specified virtual element is an element attached to the virtual item. Optionally, the common viewing style is used to represent the manner in which a given virtual element is displayed in a non-highlighted representation on a virtual prop.
Optionally, the designated virtual element may be attached to the virtual item in the form of an additional object, may be implemented as the virtual item itself, may also be attached to the virtual item in the form of a map, and may also be attached to the virtual item in the form of a non-highlighted representation that is invisible and a highlighted representation that is visible.
Step 802, receiving an observation skill triggering operation.
Optionally, the viewing skill triggering operation is used to trigger a switch in the manner in which the virtual environment is viewed.
Optionally, the observation skill triggering operation is configured to trigger switching of a normal observation mode in which the virtual object observes the virtual environment to a probe observation mode.
Optionally, the triggering manner of the observation skill triggering operation is described in detail in the step 602, and is not described herein again.
Step 803, when the highlighted representation includes the penetrating representation, determining, according to the observation skill triggering operation, a first display hierarchy of the obstructing object located between the virtual object and the specified virtual element.
Optionally, when the penetrating representation is included in the highlighted representation, the specified virtual element is displayed at a display level at which it is visible to the virtual object. Optionally, an obstructing object exists between the specified virtual element and the virtual object; that is, the specified virtual element needs to be displayed at a display level above the first display level of the obstructing object.
Step 804, adjusting the second display level of the designated virtual element to be above the first display level.
Optionally, the position in the virtual environment of the virtual item on which the specified virtual element is located remains unchanged; only the display level of the specified virtual element on the virtual item is adjusted, the second display level of the specified virtual element being raised above the first display level of the obstruction. In this way, the positions in the virtual environment of the virtual item and of the specified virtual element on it remain unchanged, while the specified virtual element becomes visible to the virtual object.
Optionally, after the second display hierarchy of the specified virtual element is adjusted, the obstructing objects located between the virtual object and the specified virtual element are set to preset transparencies. Optionally, different transparencies are set for different types of objects, such as: house-type and tree-type virtual objects are set to be semitransparent, and other obstructing virtual objects are set to be fully transparent. When the virtual object observes the virtual environment in the detection observation mode, the position of the specified virtual element can be judged through the semitransparent obstructions, while the obstructions set to full transparency cannot be observed in the detection observation mode.
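The display-hierarchy and transparency adjustments of steps 803 and 804 can be sketched as follows, assuming a renderer in which larger layer values draw on top. All field names, type names, and alpha values are illustrative assumptions, not part of the patent:

```python
# Object types whose occluders stay semitransparent (position still
# discernible); every other occluder becomes fully transparent.
SEMI_TRANSPARENT_TYPES = {"house", "tree"}
SEMI_TRANSPARENT_ALPHA = 0.5
FULLY_TRANSPARENT_ALPHA = 0.0


def apply_probe_view(element, occluders):
    """Raise the element's display level above every occluder and set each
    occluder's transparency by type; world positions are left unchanged."""
    top = max((o["layer"] for o in occluders), default=element["layer"])
    element["layer"] = top + 1  # second display level above the first
    for o in occluders:
        o["alpha"] = (SEMI_TRANSPARENT_ALPHA
                      if o["type"] in SEMI_TRANSPARENT_TYPES
                      else FULLY_TRANSPARENT_ALPHA)
    return element, occluders
```

This mirrors the text's constraint that only rendering state changes: the prop and its element keep their positions, and visibility is achieved purely through layer order and per-type alpha.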
Step 805, displaying a second environment interface, where the second environment interface includes a second screen for observing the virtual environment by the virtual object in a probing observation manner.
Optionally, the second screen includes a designated virtual element highlighted in a highlighted form.
In summary, in the method for displaying virtual elements in a virtual environment provided by this embodiment, the common observation mode for observing the virtual environment is switched to the detection observation mode upon receiving the observation skill triggering operation; that is, the specified virtual element is switched from the non-highlighted form to the highlighted form according to the observation skill triggering operation. This enhances the presentation of the specified virtual element and emphasizes its information in the virtual environment, thereby assisting the process of virtual battle and improving the efficiency and capability of observing the virtual environment during the virtual battle.
According to the method provided by the embodiment, the display hierarchy of the designated virtual element is adjusted, so that the designated virtual element is displayed in a form that the virtual object is visible, the expression of the designated virtual element is improved, the process of virtual fight is assisted, and the efficiency and the observation capability of observing the virtual environment in the virtual fight are improved.
In the method provided by this embodiment, after the display hierarchy of the specified virtual element is adjusted, the virtual environment is set such that the occluded specified virtual element is visible while occluded virtual objects remain invisible; a virtual object can thus locate other virtual objects through the specified virtual element, while those virtual objects themselves remain concealed. This assists the process of virtual engagement and improves the efficiency and observation capability of observing the virtual environment in the virtual engagement.
In an alternative embodiment, the highlighted representation further includes the magnified representation and the highlight representation. Fig. 9 is a flowchart of a method for displaying a virtual element in a virtual environment according to another exemplary embodiment of the present application, described by taking its application to a terminal as an example. As shown in fig. 9, the method includes:
step 901, a first environment interface is displayed, where the first environment interface includes a first screen in which a virtual object observes a virtual environment in a normal observation manner.
Optionally, the virtual environment includes a designated virtual element, and the designated virtual element is a three-dimensional model or a two-dimensional overlay existing in the virtual environment.
Optionally, the virtual environment includes a virtual item, and the specified virtual element is an element attached to the virtual item. Optionally, the common viewing style is used to represent the manner in which a given virtual element is displayed in a non-highlighted representation on a virtual prop.
Optionally, the designated virtual element may be attached to the virtual item in the form of an additional object, may be implemented as the virtual item itself, may also be attached to the virtual item in the form of a map, and may also be attached to the virtual item in the form of a non-highlighted representation that is invisible and a highlighted representation that is visible.
Step 902, receiving an observation skill triggering operation.
Optionally, the viewing skill triggering operation is used to trigger a switch in the manner in which the virtual environment is viewed.
Optionally, the observation skill triggering operation is configured to trigger switching of a normal observation mode in which the virtual object observes the virtual environment to a probe observation mode.
Optionally, the triggering manner of the observation skill triggering operation is described in detail in the step 602, and is not described herein again.
Step 903, when the highlighted expression form includes the magnified expression form, magnifying, according to the observation skill triggering operation, the size of the specified virtual element in the virtual environment at a preset scale.
Optionally, magnifying the size of the specified virtual element in the virtual environment at a preset scale includes any one of the following manners:
firstly, magnifying the specified virtual element to a preset size corresponding to the highlighted representation;
secondly, determining a first display size of the specified virtual element in the non-highlighted form, determining a preset scale, and multiplying the first display size by the preset scale to obtain the magnified specified virtual element.
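The two magnification manners above can be sketched as two small functions. The names and the particular size and scale values are illustrative assumptions, not from the patent:

```python
PRESET_SIZE = 2.0    # fixed size tied to the highlighted representation (manner 1)
PRESET_SCALE = 1.5   # multiplier applied to the first display size (manner 2)


def magnified_size_fixed(_current_size: float) -> float:
    """Manner 1: enlarge to the preset size corresponding to the
    highlighted representation, regardless of the current size."""
    return PRESET_SIZE


def magnified_size_scaled(current_size: float) -> float:
    """Manner 2: multiply the non-highlighted first display size by the
    preset scale."""
    return current_size * PRESET_SCALE
```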
Step 904, when the highlighted expression form includes the highlight expression form, adjusting the display brightness of the specified virtual element to the brightness corresponding to the highlight expression form.
Optionally, after the display brightness of the specified virtual element is adjusted, the display brightness of the obstructing objects located between the virtual object and the specified virtual element is adjusted to a preset brightness. Optionally, the display brightness of the other virtual objects in the virtual environment except the specified virtual element may also be adjusted to a preset brightness.
Alternatively, in the highlight expression form, only the display brightness of the specified virtual element may be adjusted to the brightness corresponding to the highlight expression form; or, while the display brightness of the specified virtual element is adjusted, the display brightness of the other objects in the virtual environment except the specified virtual element may be adjusted simultaneously, such as: adjusting the display brightness of the other objects in the virtual environment except the specified virtual element to a preset brightness.
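The brightness adjustment of step 904 can be sketched as follows. The brightness values and names are illustrative assumptions, not from the patent:

```python
HIGHLIGHT_BRIGHTNESS = 1.0  # brightness tied to the highlight expression form
DIMMED_BRIGHTNESS = 0.3     # preset brightness for everything else (optional)


def apply_highlight_brightness(element_brightness, other_brightnesses,
                               dim_others=True):
    """Raise the specified element's brightness; optionally dim all other
    objects in the virtual environment to a preset brightness."""
    element_brightness = HIGHLIGHT_BRIGHTNESS
    if dim_others:
        other_brightnesses = [DIMMED_BRIGHTNESS for _ in other_brightnesses]
    return element_brightness, other_brightnesses
```

The `dim_others` flag captures the two alternatives in the text: adjust only the element's brightness, or adjust the surrounding objects as well.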
Step 905, displaying a second environment interface, where the second environment interface includes a second screen for observing the virtual environment by the virtual object in a probing observation manner.
Optionally, the second screen includes a designated virtual element highlighted in a highlighted form.
In summary, in the method for displaying virtual elements in a virtual environment provided by this embodiment, the common observation mode for observing the virtual environment is switched to the detection observation mode upon receiving the observation skill triggering operation; that is, the specified virtual element is switched from the non-highlighted form to the highlighted form according to the observation skill triggering operation. This enhances the presentation of the specified virtual element and emphasizes its information in the virtual environment, thereby assisting the process of virtual battle and improving the efficiency and capability of observing the virtual environment during the virtual battle.
It should be noted that the above-mentioned embodiment corresponding to fig. 8 and the embodiment corresponding to fig. 9 may be implemented in combination, or may be implemented as an independent solution.
Referring to fig. 10, schematically, after an observation skill triggering operation is received, a second environment interface 1010 is displayed, in which a virtual object 1020 is shown together with a spar 1011 and a spar 1012 that are highlighted, enlarged and displayed in penetration. A shield 1030 lies between the spar 1011 and the virtual object 1020 (without currently occluding the spar 1011), and a shield 1040 lies between the spar 1012 and the virtual object 1020 (without currently occluding the spar 1012). After the display of the shield 1030 is cancelled, a virtual firearm 1050 is displayed at the position corresponding to the spar 1011 in the virtual environment; after the display of the shield 1040 is cancelled, a virtual object 1060 and a virtual firearm 1070 carried by the virtual object 1060 are displayed at the position corresponding to the spar 1012 in the virtual environment.
Fig. 11 is a flowchart of a method for displaying a virtual element in a virtual environment according to another exemplary embodiment of the present application, where the method includes:
at step 1101, the player uses tactical props.
Optionally, the tactical prop is used for triggering an observation skill, and the operation of using the tactical prop is an operation of triggering the observation skill.
Optionally, the observation skill is configured to switch a normal observation mode of the virtual object by the player to a detection observation mode, where the normal observation mode is a mode in which the feature S mounted on the target object is displayed in a non-highlighted form, and the detection observation mode is a mode in which the feature S mounted on the target object is displayed in a highlighted form.
In step 1102, a model or map of the magnified feature S is mounted on the target object S1.
Optionally, the feature S mounted on the target object is enlarged to a preset display size and then displayed. The feature S may be implemented as a model mounted on the target object, such as a spar or a pendant, or as a map attached to the target object.
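Step 1102 can be sketched as follows. The `Feature` class, the uniform scale factor, and the field names are assumptions introduced for illustration; the patent does not prescribe a data layout.

```python
# Illustrative sketch of step 1102: the feature S mounted on the target
# object is enlarged to a preset display size before being shown. The
# Feature class and the uniform-scale representation are assumptions.

from dataclasses import dataclass

PRESET_SCALE = 1.5  # hypothetical preset enlargement ratio

@dataclass
class Feature:
    name: str
    base_size: float
    display_size: float = 0.0

def mount_enlarged(feature: Feature, scale: float = PRESET_SCALE) -> Feature:
    """Enlarge the mounted feature to its preset display size."""
    feature.display_size = feature.base_size * scale
    return feature
```

In an engine, the same effect would typically be achieved by scaling the mounted model's transform or swapping in a larger map, rather than storing a size field.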
Step 1103, modify the display level of S so that it is not occluded within the field of view.
Optionally, when there is an obstruction between the virtual object and the target object S1, the display level of the obstruction is determined, and the display level of S is adjusted above the display level of the obstruction.
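A minimal sketch of this display-level adjustment, under the assumption that larger level values render on top (the convention, function name, and integer levels are illustrative, not part of the disclosure):

```python
# Illustrative sketch of step 1103: when obstructions sit between the
# virtual object and the feature, the feature's display level is raised
# above the highest obstruction so it stays visible through occluders.
# Assumes larger level values render on top.

def adjust_display_level(element_level, obstruction_levels):
    """Return a display level for the element above every obstruction."""
    if not obstruction_levels:
        return element_level
    return max(max(obstruction_levels) + 1, element_level)
```

In practice this corresponds to render-queue or sorting-order overrides (for example, a dedicated "always on top" pass or depth-test disablement for the highlighted element).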
In step 1104, the brightness of the field of view is reduced and the brightness of S is increased.
It should be noted that steps 1102, 1103 and 1104 may be executed in the above order, in any other order, or synchronously; the execution order of steps 1102 to 1104 is not limited in this embodiment of the application.
In step 1105, the player actively stops using the tactical prop.
At step 1106, the tactical prop is disabled.
Optionally, each tactical prop corresponds to a single-use duration and a cooldown duration, wherein the single-use duration represents the maximum duration of a single use of the tactical prop by the player, and the cooldown duration represents the waiting duration between two consecutive uses of the tactical prop.
Optionally, the player may actively end the use of the tactical prop before its duration of use reaches the single-use duration; otherwise, the tactical prop is disabled automatically when its duration of use reaches the single-use duration.
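The duration bookkeeping described above can be sketched as a small state machine. The class and field names, and the use of plain float timestamps, are assumptions for illustration only.

```python
# Illustrative sketch of the single-use duration and cooldown described
# above: the prop deactivates on an early stop or when the single-use
# time elapses, after which the cooldown must pass before reactivation.

class TacticalProp:
    def __init__(self, single_use, cooldown):
        self.single_use = single_use   # maximum duration of one activation
        self.cooldown = cooldown       # waiting time between activations
        self.active_since = None       # timestamp of current activation, if any
        self.ready_at = 0.0            # earliest time the prop may be used again

    def activate(self, now):
        """Start using the prop if it is idle and off cooldown."""
        if now < self.ready_at or self.active_since is not None:
            return False
        self.active_since = now
        return True

    def update(self, now, player_stopped=False):
        """Disable the prop on an early stop or when the single-use time elapses."""
        if self.active_since is None:
            return
        if player_stopped or now - self.active_since >= self.single_use:
            self.active_since = None
            self.ready_at = now + self.cooldown
```

Note that an early stop still starts the cooldown here; the text does not state whether an early stop shortens or preserves the cooldown, so that choice is an assumption.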
In summary, according to the display method of the virtual elements in the virtual environment provided by this embodiment, the common observation mode for observing the virtual environment is switched to the detection observation mode by receiving the observation skill triggering operation, that is, the specified virtual element is switched from the non-salient expression form to the salient expression form according to the observation skill triggering operation, so as to enhance the expression of the specified virtual element, embody the specified virtual element in the salient expression form, and emphasize the information of the specified virtual element in the virtual environment, thereby assisting the process of virtual battle, and improving the efficiency and the observation capability for observing the virtual environment in the virtual battle.
Fig. 12 is a flowchart of a method for displaying virtual elements in a virtual environment according to another exemplary embodiment of the present application, which is described by taking the case that there are X virtual objects in the virtual environment to trigger observation skills, as shown in fig. 12, the method includes:
at step 1201, player A uses a tactical prop.
Step 1202, declare parameters that affect the highlighted display mode of the feature S.
At step 1203, player B uses the tactical prop.
In step 1204, parameters that affect how the feature S is highlighted are declared.
Player X uses the tactical prop, step 1205.
In step 1206, parameters that affect the highlighted display mode of the feature S are declared.
Optionally, after a plurality of players trigger the use of the tactical prop, the parameters of the highlighted feature S may be superimposed, applied alternatively, or applied independently. The superimposed manner is as follows: after player A uses the tactical prop, the size of S is increased by 10%; after player B uses the tactical prop, the size of S is increased by a further 10% on top of that 10% increase, and so on. The independent manner is as follows: for the tactical prop used by player A, the highlighted display mode parameters are declared for the features S within player A's line of sight; for the tactical prop used by player B, the highlighted display mode parameters are declared for the features S within player B's line of sight; and so on.
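The superimposed manner compounds multiplicatively: each use adds 10% on top of the current size, not the base size. A short sketch (the 10% step is taken from the text; the function name is illustrative):

```python
# Illustrative sketch of the superimposed parameter mode: each player's
# use of the tactical prop enlarges the feature by 10% of its *current*
# size, so the scale compounds rather than adding linearly.

def stacked_scale(base_size, num_players, step=0.10):
    """Compound a per-player enlargement of `step` per use."""
    size = base_size
    for _ in range(num_players):
        size *= (1.0 + step)
    return size
```

With a base size of 100, two players yield 121 (100 x 1.1 x 1.1), not 120, which is the "on the basis of improving by 10%" behavior the text describes.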
Step 1207, calculate the highlighting of the feature S.
In step 1208, the magnified model or map of the feature S is mounted on the target object.
Optionally, the feature S mounted on the target object is enlarged to a preset display size and then displayed. The feature S may be implemented as a model mounted on the target object, such as a spar or a pendant, or as a map attached to the target object.
Step 1209, modify the display level of S so that it is not occluded within the field of view.
Optionally, when there is an obstruction between the virtual object and the target object S1, the display level of the obstruction is determined, and the display level of S is adjusted above the display level of the obstruction.
Step 1210, reduce the brightness of the visual field and increase the brightness of S.
Step 1211, the player actively stops using the tactical prop.
At step 1212, the tactical prop is disabled.
Optionally, each tactical prop corresponds to a single-use duration and a cooldown duration, wherein the single-use duration represents the maximum duration of a single use of the tactical prop by the player, and the cooldown duration represents the waiting duration between two consecutive uses of the tactical prop.
Optionally, the player may actively end the use of the tactical prop before its duration of use reaches the single-use duration; otherwise, the tactical prop is disabled automatically when its duration of use reaches the single-use duration.
In summary, according to the display method of the virtual elements in the virtual environment provided by this embodiment, the common observation mode for observing the virtual environment is switched to the detection observation mode by receiving the observation skill triggering operation, that is, the specified virtual element is switched from the non-salient expression form to the salient expression form according to the observation skill triggering operation, so as to enhance the expression of the specified virtual element, embody the specified virtual element in the salient expression form, and emphasize the information of the specified virtual element in the virtual environment, thereby assisting the process of virtual battle, and improving the efficiency and the observation capability for observing the virtual environment in the virtual battle.
Fig. 13 is a schematic structural diagram of a display apparatus for displaying virtual elements in a virtual environment according to an exemplary embodiment of the present application, which is described by taking the application of the apparatus in a terminal as an example, and as shown in fig. 13, the apparatus includes: a display module 1310, a receiving module 1320, and a switching module 1330;
a display module 1310, configured to display a first environment interface, where the first environment interface includes a first picture in which a virtual object observes a virtual environment in a common observation manner, and the virtual environment includes a specified virtual element, where the specified virtual element is a three-dimensional model or a two-dimensional overlay existing in the virtual environment;
a receiving module 1320, configured to receive an observation skill triggering operation, where the observation skill triggering operation is used to trigger switching of a manner of observing the virtual environment;
a switching module 1330, configured to switch the common observation mode to a detection observation mode for detecting the specified virtual element according to the observation skill triggering operation, where the detection observation mode is a mode in which the specified virtual element is displayed in a highlighted form in a visual field of the virtual object;
the display module 1310 is further configured to display a second environment interface, where the second environment interface includes a second screen in which the virtual object observes the virtual environment in the probe observation manner, and the second screen includes the specified virtual element highlighted in the highlighted form.
In an alternative embodiment, the prominent representation comprises at least one of a penetration representation, a magnification representation, a highlight representation, and a flash representation;
the penetrating representation form is used for representing a form in which the light emitted by the specified virtual element penetrates other objects in the virtual environment; or, the penetrating representation form is used for representing a form in which objects other than the specified virtual element in the virtual environment are displayed in a preset transparency;
the amplification expression form is used for expressing a form of amplifying and displaying the specified virtual element;
the highlight expression form is used for expressing a form of highlighting the specified virtual element in a preset color;
the flickering expression form is used for expressing the form of flickering expression of the light emitted by the specified virtual element at a preset flickering frequency.
In an alternative embodiment, as shown in fig. 14, the switching module 1330 includes:
a determining unit 1331, configured to determine a first display level of a virtual object between the virtual object and the specified virtual element according to the viewing skill triggering operation when the penetrating representation is included in the highlighted representation;
an adjusting unit 1332, configured to adjust the second display level of the designated virtual element to be above the first display level.
In an alternative embodiment, the display module 1310 is further configured to set the virtual object between the virtual object and the designated virtual element to the preset transparency.
In an optional embodiment, the switching module 1330 includes:
an adjusting unit 1332, configured to, when the highlighted representation includes the enlarged representation, enlarge the size of the specified virtual element in the virtual environment at a preset ratio according to the observation skill triggering operation;
the adjusting unit 1332 is further configured to, when the highlight expression form is included in the highlighted representation, adjust the display brightness of the specified virtual element to a brightness corresponding to the highlight expression form.
In an optional embodiment, the display module 1310 is further configured to adjust the display brightness of the virtual object between the virtual object and the specified virtual element to a preset brightness when the highlight expression form is included in the highlighted representation.
In an optional embodiment, the display module 1310 is further configured to, according to the observation skill triggering operation, display the specified virtual elements within a spherical observation range in the highlighted representation form, the spherical observation range taking the virtual object as its sphere center and a preset radius as its radius.
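The spherical observation range reduces to a Euclidean distance test against the preset radius. A minimal sketch, with coordinate tuples and names introduced only for illustration:

```python
# Illustrative sketch of the spherical observation range: with the
# virtual object as the sphere center and a preset radius, only the
# designated elements inside the sphere are shown in highlighted form.

import math

def elements_in_range(center, elements, radius):
    """Return the names of elements within the spherical observation range."""
    visible = []
    for name, position in elements.items():
        if math.dist(center, position) <= radius:
            visible.append(name)
    return visible
```

Elements outside the sphere would simply keep their non-highlighted representation until the virtual object moves within range.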
In an optional embodiment, the virtual item includes a virtual weapon item, and the specified virtual element is a spar mounted on the virtual weapon item, or the specified virtual element is a map attached to the virtual weapon item;
or the like, or, alternatively,
the virtual prop comprises a virtual backpack prop, and the designated virtual element is a pendant attached to the virtual backpack prop;
or the like, or, alternatively,
the virtual item comprises a virtual accessory item, and the specified virtual element is a chartlet attached to the virtual accessory item;
or the like, or, alternatively,
the virtual prop comprises a virtual medicine prop, and the designated virtual element is a medicine bottle corresponding to the virtual medicine prop.
In summary, the display apparatus for virtual elements in a virtual environment provided in this embodiment switches, by receiving an observation skill triggering operation, a common observation manner for observing the virtual environment to a detection observation manner, that is, switches, according to the observation skill triggering operation, a specified virtual element on a virtual prop from a non-highlighted form to a highlighted form, so as to enhance the expression of the specified virtual element, embody the specified virtual element in the highlighted form, and emphasize information of the specified virtual element in the virtual environment, thereby assisting the process of virtual engagement and improving the efficiency and observation capability for observing the virtual environment in the virtual engagement.
It should be noted that: the display device of virtual elements in a virtual environment provided in the above embodiment is only illustrated by the division of the above functional modules, and in practical applications, the above function allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the display apparatus for virtual elements in a virtual environment and the display method embodiment for virtual elements in a virtual environment provided by the foregoing embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 15 shows a block diagram of a terminal 1500 according to an exemplary embodiment of the present invention. The terminal 1500 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1500 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, terminal 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one instruction for execution by processor 1501 to implement a method of displaying virtual elements in a virtual environment as provided by method embodiments herein.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1504, touch screen display 1505, camera 1506, audio circuitry 1507, positioning assembly 1508, and power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 1504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to capture touch signals on or over the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1505 may be one, providing the front panel of terminal 1500; in other embodiments, display 1505 may be at least two, each disposed on a different surface of terminal 1500 or in a folded design; in still other embodiments, display 1505 may be a flexible display disposed on a curved surface or a folded surface of terminal 1500. Even further, the display 1505 may be configured in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1505 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1501 for processing or inputting the electric signals to the radio frequency circuit 1504 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
The positioning component 1508 is used to locate the current geographic position of the terminal 1500 for navigation or LBS (Location Based Service). The Positioning component 1508 may be a Positioning component based on the united states GPS (Global Positioning System), the chinese beidou System, or the russian galileo System.
Power supply 1509 is used to power the various components in terminal 1500. The power supply 1509 may be alternating current, direct current, disposable or rechargeable. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the touch screen display 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 1512 can detect the body direction and the rotation angle of the terminal 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 cooperate to collect the 3D motion of the user on the terminal 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side bezel of terminal 1500 and/or underneath touch display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, the holding signal of the user to the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at a lower layer of the touch display 1505, the processor 1501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1514 is configured to capture a fingerprint of the user, and the processor 1501 identifies the user based on the fingerprint captured by the fingerprint sensor 1514, or the fingerprint sensor 1514 identifies the user based on the captured fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical key or vendor Logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect ambient light intensity. In one embodiment, processor 1501 may control the brightness of the display on touch screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also known as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front surface of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually decreases, the processor 1501 controls the touch display 1505 to switch from the bright screen state to the dark screen state; when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually becomes larger, the processor 1501 controls the touch display 1505 to switch from the breath screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 15 does not constitute a limitation of terminal 1500, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
An embodiment of the present application further provides a computer device, which includes a memory and a processor, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded by the processor and implements the method for displaying the virtual element in the virtual environment as described in any one of fig. 6, fig. 8, fig. 9, fig. 11, and fig. 12.
An embodiment of the present application further provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the method for displaying a virtual element in a virtual environment as described in any one of fig. 6, fig. 8, fig. 9, fig. 11, and fig. 12.
The present application further provides a computer program product, which when run on a computer, causes the computer to execute the method for displaying virtual elements in a virtual environment provided by the above method embodiments.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be completed by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, which may be the computer-readable storage medium contained in the memory of the above embodiments; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer-readable storage medium has stored therein at least one instruction, at least one program, a code set, or a set of instructions that are loaded and executed by the processor to implement the method for displaying a virtual element in a virtual environment as described in any one of fig. 6, fig. 8, fig. 9, fig. 11, and fig. 12.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description covers merely exemplary embodiments of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (10)

1. A method for displaying virtual elements in a virtual environment, the method comprising:
displaying a first environment interface, wherein the first environment interface comprises a first picture in which a virtual object observes a virtual environment in a common observation mode, the virtual environment comprises a specified virtual element, and the specified virtual element is attached to a virtual prop in the form of an additional object or in the form of a texture map;
receiving an observation skill triggering operation, wherein the observation skill triggering operation is used for triggering switching of a mode for observing the virtual environment;
determining a first display level of a virtual object located between the virtual object and the specified virtual element according to the observation skill triggering operation;
adjusting a second display level of the specified virtual element onto the first display level;
setting the virtual object located between the virtual object and the specified virtual element to a preset transparency;
displaying a second environment interface, wherein the second environment interface comprises a second picture in which the virtual object observes the virtual environment in a probing observation mode, the second picture comprises the specified virtual element displayed in a prominent representation, and the prominent representation comprises a highlight representation;
and adjusting the display brightness of the specified virtual element to the brightness corresponding to the highlight representation while adjusting the display brightness of the virtual object located between the virtual object and the specified virtual element to a preset brightness.
2. The method of claim 1,
the prominent representation further comprises at least one of a penetration representation, a magnification representation, or a flicker representation;
the penetration representation is a form in which light emitted by the specified virtual element penetrates other objects in the virtual environment; alternatively, the penetration representation is a form in which objects other than the specified virtual element in the virtual environment are displayed at a preset transparency;
the magnification representation is a form in which the specified virtual element is displayed magnified;
the flicker representation is a form in which the light emitted by the specified virtual element flickers at a preset flicker frequency.
3. The method according to claim 2, wherein switching the common observation mode to a probing observation mode for probing the specified virtual element according to the observation skill triggering operation comprises:
when the prominent representation comprises the magnification representation, magnifying the size of the specified virtual element in the virtual environment by a preset ratio according to the observation skill triggering operation.
4. The method according to any one of claims 1 to 3, wherein switching the common observation mode to a probing observation mode for probing the specified virtual element according to the observation skill triggering operation further comprises:
according to the observation skill triggering operation, displaying, in the prominent representation, the specified virtual element within a spherical observation range that is centered on the virtual object and has a preset radius.
5. The method according to any one of claims 1 to 3, wherein the specified virtual element is an element attached to a virtual prop in the virtual environment;
the virtual prop comprises a virtual weapon prop, and the specified virtual element is a crystal mounted on the virtual weapon prop, or the specified virtual element is a texture map attached to the virtual weapon prop;
or,
the virtual prop comprises a virtual backpack prop, and the specified virtual element is a pendant attached to the virtual backpack prop;
or,
the virtual prop comprises a virtual accessory prop, and the specified virtual element is a texture map attached to the virtual accessory prop;
or,
the virtual prop comprises a virtual medicine prop, and the specified virtual element is a medicine bottle corresponding to the virtual medicine prop.
6. An apparatus for displaying virtual elements in a virtual environment, the apparatus comprising:
a display module, configured to display a first environment interface, wherein the first environment interface comprises a first picture in which a virtual object observes a virtual environment in a common observation mode, the virtual environment comprises a specified virtual element, and the specified virtual element is attached to a virtual prop in the form of an additional object or in the form of a texture map;
a receiving module, configured to receive an observation skill triggering operation, the observation skill triggering operation being used for triggering switching of the mode of observing the virtual environment; determine a first display level of a virtual object located between the virtual object and the specified virtual element according to the observation skill triggering operation; and adjust a second display level of the specified virtual element onto the first display level;
wherein the display module is further configured to set the virtual object located between the virtual object and the specified virtual element to a preset transparency; display a second environment interface, wherein the second environment interface comprises a second picture in which the virtual object observes the virtual environment in a probing observation mode, the second picture comprises the specified virtual element displayed in a prominent representation, and the prominent representation comprises a highlight representation; and adjust the display brightness of the specified virtual element to the brightness corresponding to the highlight representation while adjusting the display brightness of the virtual object located between the virtual object and the specified virtual element to a preset brightness.
7. The apparatus of claim 6,
the prominent representation further comprises at least one of a penetration representation, a magnification representation, or a flicker representation;
the penetration representation is a form in which light emitted by the specified virtual element penetrates other objects in the virtual environment; alternatively, the penetration representation is a form in which objects other than the specified virtual element in the virtual environment are displayed at a preset transparency;
the magnification representation is a form in which the specified virtual element is displayed magnified;
the flicker representation is a form in which the light emitted by the specified virtual element flickers at a preset flicker frequency.
8. The apparatus of claim 7, further comprising:
an adjusting unit, configured to magnify the size of the specified virtual element in the virtual environment by a preset ratio according to the observation skill triggering operation when the prominent representation comprises the magnification representation.
9. A computer device comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the method for displaying a virtual element in a virtual environment according to any one of claims 1 to 5.
10. A computer-readable storage medium, wherein at least one instruction or at least one program is stored in the readable storage medium, and the at least one instruction or the at least one program is loaded by a processor and executed to implement the method for displaying a virtual element in a virtual environment according to any one of claims 1 to 5.
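The probing observation mode recited in claims 1 and 4 can be illustrated with a short sketch. This is a hypothetical, simplified model, not the patent's implementation: the `Prop` class, the `probe_view` function, and all parameter names are assumptions, and every non-specified prop is treated as a potential occluder rather than ray-testing the line of sight between the virtual object and the element.

```python
import math
from dataclasses import dataclass

@dataclass
class Prop:
    """A minimal stand-in for an object in the virtual environment."""
    name: str
    position: tuple               # (x, y, z) coordinates
    layer: int = 0                # display level; higher draws on top
    transparency: float = 0.0     # 0.0 = fully opaque, 1.0 = invisible
    brightness: float = 1.0
    is_specified_element: bool = False

def probe_view(observer_pos, props, radius=50.0,
               occluder_alpha=0.8, dim_brightness=0.5,
               highlight_gain=2.0, top_layer=10):
    """Switch to the probing observation mode: specified virtual elements
    inside the spherical observation range (centered on the observer, with
    a preset radius) are raised to the top display level and brightened;
    other props are set to a preset transparency and dimmed."""
    for p in props:
        if p.is_specified_element:
            if math.dist(observer_pos, p.position) <= radius:
                p.layer = top_layer              # adjust second display level onto the first
                p.brightness *= highlight_gain   # highlight representation
        else:
            p.transparency = occluder_alpha      # preset transparency for occluders
            p.brightness = dim_brightness        # preset (dimmed) brightness
    return props

# Example: a crystal on a weapon prop inside the observation range gets
# highlighted; the wall in front of it becomes semi-transparent.
props = [
    Prop("crystal", (10, 0, 0), is_specified_element=True),
    Prop("wall", (5, 0, 0)),
]
probe_view((0, 0, 0), props)
```

Restoring the common observation mode would simply reapply the saved original layer, transparency, and brightness values.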
CN201910760232.XA 2019-08-16 2019-08-16 Method and device for displaying virtual elements in virtual environment and readable storage medium Active CN110448907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910760232.XA CN110448907B (en) 2019-08-16 2019-08-16 Method and device for displaying virtual elements in virtual environment and readable storage medium


Publications (2)

Publication Number Publication Date
CN110448907A CN110448907A (en) 2019-11-15
CN110448907B true CN110448907B (en) 2020-12-01

Family

ID=68487244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910760232.XA Active CN110448907B (en) 2019-08-16 2019-08-16 Method and device for displaying virtual elements in virtual environment and readable storage medium

Country Status (1)

Country Link
CN (1) CN110448907B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110882538B (en) * 2019-11-28 2021-09-07 腾讯科技(深圳)有限公司 Virtual living character display method, device, storage medium and computer equipment
CN111888764B (en) * 2020-07-31 2022-02-22 腾讯科技(深圳)有限公司 Object positioning method and device, storage medium and electronic equipment
CN114470772A (en) * 2022-01-21 2022-05-13 腾讯科技(深圳)有限公司 Interface display method, device, terminal, storage medium and computer program product

Citations (1)

Publication number Priority date Publication date Assignee Title
CN107913521A (en) * 2017-11-09 2018-04-17 腾讯科技(深圳)有限公司 The display methods and device of virtual environment picture

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
SK289010B6 (en) * 2016-10-17 2022-11-24 Ústav experimentálnej fyziky SAV, v. v. i. Method of interactive quantification of digitized 3D objects using eye tracking camera
CN108228065B (en) * 2016-12-09 2021-10-01 阿里巴巴集团控股有限公司 Method and device for detecting UI control information and electronic equipment
CN109947332A (en) * 2019-03-21 2019-06-28 广州视源电子科技股份有限公司 A kind of page elements highlight method, apparatus, equipment and storage medium

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN107913521A (en) * 2017-11-09 2018-04-17 腾讯科技(深圳)有限公司 The display methods and device of virtual environment picture


Similar Documents

Publication Publication Date Title
CN110433488B (en) Virtual character-based fight control method, device, equipment and medium
CN109126129B (en) Method, device and terminal for picking up virtual article in virtual environment
CN109529319B (en) Display method and device of interface control and storage medium
KR102637047B1 (en) Virtual object control method, device and media for marking virtual items
CN111035918B (en) Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN110755841B (en) Method, device and equipment for switching props in virtual environment and readable storage medium
WO2021143259A1 (en) Virtual object control method and apparatus, device, and readable storage medium
CN110694261A (en) Method, terminal and storage medium for controlling virtual object to attack
CN111467802B (en) Method, device, equipment and medium for displaying picture of virtual environment
CN110465083B (en) Map area control method, apparatus, device and medium in virtual environment
CN113398571B (en) Virtual item switching method, device, terminal and storage medium
CN111481934B (en) Virtual environment picture display method, device, equipment and storage medium
CN110478904B (en) Virtual object control method, device, equipment and storage medium in virtual environment
CN111282266B (en) Skill aiming method, device, terminal and storage medium in three-dimensional virtual environment
CN110448907B (en) Method and device for displaying virtual elements in virtual environment and readable storage medium
CN113289331B (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111659119B (en) Virtual object control method, device, equipment and storage medium
CN112076467A (en) Method, device, terminal and medium for controlling virtual object to use virtual prop
CN113509714B (en) Virtual prop synthesis preview method, device, terminal and storage medium
CN111672118B (en) Virtual object aiming method, device, equipment and medium
CN111659117A (en) Virtual object display method and device, computer equipment and storage medium
CN108786110A (en) Gun sight display methods, equipment and storage medium in virtual environment
CN111921194A (en) Virtual environment picture display method, device, equipment and storage medium
CN112138374B (en) Virtual object attribute value control method, computer device, and storage medium
CN110448905B (en) Virtual object control method, device, equipment and storage medium in virtual environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant