CN111672100A - Virtual item display method in virtual scene, computer equipment and storage medium - Google Patents


Info

Publication number: CN111672100A
Application number: CN202010473631.0A
Authority: CN (China)
Prior art keywords: virtual, prop, appearance, virtual object, user
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111672100B
Inventors: 裴媛媛, 杨韧, 王立刚, 李庆文, 陈亮亮, 李乐鹏
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority: CN202010473631.0A
Granted as: CN111672100B

Classifications

    • A63F 13/52 (Video games): controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F 13/57 (Video games): controlling game characters or game objects based on the game progress; simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F 13/837 (Video games): special adaptations for executing a specific game genre or game mode; shooting of targets
    • A63F 2300/8076 (Features of games using an electronically generated display): specially adapted for executing a specific type of game; shooting

Abstract

The application relates to a virtual item display method in a virtual scene, computer equipment and a storage medium, and belongs to the technical field of virtual scenes. The method comprises the following steps: displaying a virtual scene picture; when the distance between a second virtual object and the first virtual object in the virtual scene is smaller than a first distance threshold, obtaining the user-defined appearance parameter of a first virtual prop from the server; and when the second virtual object is in the virtual scene picture, displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to the user-defined appearance parameter of the first virtual prop. In this way, the terminal obtains the user-defined appearance parameters of other virtual objects only when those objects meet specified conditions, which reduces the amount of appearance data the terminal must obtain and saves the network traffic of the terminal.

Description

Virtual item display method in virtual scene, computer equipment and storage medium
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a method for displaying virtual items in a virtual scene, a computer device, and a storage medium.
Background
Currently, in game applications featuring virtual weapons, such as first-person shooter games, a user may select a model other than the initial model of a virtual weapon, and the selected virtual weapon model is presented in the virtual scene during a battle.
In the related art, the model appearances of a plurality of virtual weapons may be preset in a game application, the same virtual weapon may have different model appearances, and a user may customize the model appearance of a virtual weapon before entering a battle. For example, in the lobby used to prepare for a battle, the user selects any one of the available model appearances of a virtual weapon as the appearance displayed during the next battle.
In the related art, in order to show the model appearances of the virtual weapons of all other users during a battle, a user's terminal needs to acquire the model appearance data of every other user's virtual weapons, which requires the terminal to download a large amount of data from the server and wastes the terminal's network traffic.
Disclosure of Invention
The embodiments of the application provide a virtual item display method in a virtual scene, computer equipment and a storage medium, which can reduce the network traffic a terminal wastes when displaying the custom appearances of virtual items equipped by other virtual objects in the virtual scene. The technical scheme is as follows:
on one hand, a method for displaying virtual props in a virtual scene is provided, and the method comprises the following steps:
displaying a virtual scene picture, wherein the virtual scene picture is a picture of a virtual scene observed at a visual angle corresponding to a first virtual object;
responding to the fact that the distance between a second virtual object and the first virtual object in the virtual scene is smaller than a first distance threshold value, and obtaining a user-defined appearance parameter of a first virtual prop from a server, wherein the first virtual prop is a virtual prop equipped on the second virtual object; the user-defined appearance parameter is used for indicating the user's customized appearance effect for the virtual prop;
and responding to the situation that the second virtual object is in the virtual scene picture, and displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to the customized appearance parameter of the first virtual prop.
On one hand, a method for displaying virtual props in a virtual scene is provided, and the method comprises the following steps:
displaying a virtual scene picture, wherein the virtual scene picture is a picture of a virtual scene observed at a visual angle corresponding to a first virtual object;
responding to the fact that the distance between a second virtual object and the first virtual object in the virtual scene is smaller than a first distance threshold value, and displaying a first virtual prop equipped on the second virtual object in the virtual scene picture according to the user-defined appearance parameter of the first virtual prop;
and in response to the distance between the second virtual object and the first virtual object in the virtual scene being greater than or equal to the first distance threshold, displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to default appearance parameters.
In one aspect, a device for displaying virtual items in a virtual scene is provided, the device comprising:
the image display module is used for displaying a virtual scene image, wherein the virtual scene image is an image of a virtual scene observed at a visual angle corresponding to a first virtual object;
the parameter obtaining module is used for responding that the distance between a second virtual object and the first virtual object in the virtual scene is smaller than a first distance threshold value, and obtaining a user-defined appearance parameter of a first virtual prop from a server, wherein the first virtual prop is a virtual prop equipped on the second virtual object; the user-defined appearance parameter is used for indicating the user's customized appearance effect for the virtual prop;
and the prop display module is used for responding to the situation that the second virtual object is in the virtual scene picture, and displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to the user-defined appearance parameter of the first virtual prop.
In one possible implementation, the apparatus further includes:
the parameter query module is used for querying the custom appearance parameter of the first virtual item in a local buffer in response to that the distance between the second virtual object and the first virtual object in the virtual scene is smaller than a first distance threshold before the custom appearance parameter of the first virtual item is acquired from a server in response to that the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold;
the parameter acquisition module comprises:
and the parameter obtaining submodule is used for responding to the situation that the user-defined appearance parameter of the first virtual item is not inquired in the local buffer, and obtaining the user-defined appearance parameter of the first virtual item from the server.
In one possible implementation, the apparatus further includes:
and the parameter storage module is used for storing the user-defined appearance parameters of the first virtual prop acquired from the server into a local cache.
In a possible implementation manner, the parameter obtaining module includes:
the identification obtaining submodule is used for obtaining the identification of the first virtual item from a virtual scene server; the virtual scene server is a server for providing background support for the virtual scene;
the appearance parameter obtaining submodule is used for obtaining the user-defined appearance parameters of the first virtual prop from an appearance user-defined server through the identifier of the first virtual prop; the appearance self-defining server is a server for providing background support for the appearance effect self-defining function of the virtual prop.
In one possible implementation, the apparatus further includes:
and the first prop displaying module is used for responding that the distance between the second virtual object and the first virtual object in the virtual scene is greater than or equal to the first distance threshold, the second virtual object is in the virtual scene picture, and displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to the default appearance parameter of the first virtual prop.
In one possible implementation, the customized appearance parameters include at least one of customized fill shading parameters and customized pattern parameters; the user-defined filling shading parameter is used for indicating filling shading superposed on the model surface of the virtual prop; the user-defined pattern parameter is used for indicating a pattern superposed on the filling shading of the virtual prop;
the parameter acquisition module comprises:
a first parameter obtaining submodule, configured to obtain a custom filling shading parameter of the first virtual prop from the server in response to a distance between the second virtual object and the first virtual object in the virtual scene being smaller than the first distance threshold and larger than a second distance threshold;
a second parameter obtaining sub-module, configured to obtain, from the server, a custom pattern parameter of the first virtual item in response to a distance between the second virtual object and the first virtual object in the virtual scene being smaller than or equal to the second distance threshold;
wherein the first distance threshold is greater than the second distance threshold.
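For illustration, the following Python sketch shows the two-threshold fetch policy described by these modules: fill-shading parameters are fetched in the middle distance band and pattern parameters at close range. The server interface (fetch_shading_params, fetch_pattern_params) and the threshold values are hypothetical, since the embodiment prescribes neither an API nor concrete distances.

    FIRST_DISTANCE_THRESHOLD = 50.0   # assumed scene units
    SECOND_DISTANCE_THRESHOLD = 20.0  # the first threshold must be larger

    def fetch_custom_appearance(server, prop_id, distance, params):
        """Fill `params` with only the appearance data visible at this distance."""
        if SECOND_DISTANCE_THRESHOLD < distance < FIRST_DISTANCE_THRESHOLD:
            # Middle band: only the fill shading is distinguishable.
            params.setdefault("shading", server.fetch_shading_params(prop_id))
        elif distance <= SECOND_DISTANCE_THRESHOLD:
            # Close range: the finer pattern (decal) data becomes visible too;
            # the shading has normally been fetched already in the middle band.
            params.setdefault("shading", server.fetch_shading_params(prop_id))
            params.setdefault("pattern", server.fetch_pattern_params(prop_id))
        # Beyond the first threshold nothing is fetched: the default appearance is used.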
In one possible implementation, the apparatus further includes:
the system comprises a custom interface display module, a display module and a display module, wherein the custom interface display module is used for displaying an appearance custom interface before displaying a virtual scene picture, and the appearance custom interface comprises a virtual prop display area and a custom control area; the virtual prop display area is used for displaying a model of a second virtual prop, and the user-defined control area comprises user-defined controls of various appearance elements;
the element superposition module is used for responding to the selection operation of a target user-defined control and superposing an appearance element corresponding to the target user-defined control on the model of the second virtual prop;
the parameter generating module is used for responding to a user-defined finishing operation and generating a user-defined appearance parameter of the second virtual prop according to an appearance element superposed on the model of the second virtual prop;
the parameter uploading module is used for uploading the user-defined appearance parameters of the second virtual prop to a server;
the device further comprises:
and the second prop display module is used for responding to the first virtual object to equip the second virtual prop in the virtual scene, and displaying the second virtual prop equipped on the first virtual object in the virtual scene picture according to the user-defined appearance parameter of the second virtual prop.
In one possible implementation, the model of the second virtual prop includes at least two customizable components;
the device further comprises:
the component determination module is used for determining a target component with a self-defined appearance in response to the selection operation of all or part of the at least two components before the appearance element corresponding to the target definition control is superposed on the model of the second virtual prop in response to the selection operation of the target self-defined control;
the element superposition module comprises:
and the shading superposition submodule is used for responding to the selection operation of a target self-defined control, the appearance element corresponding to the target self-defined control is a filling shading, and the filling shading corresponding to the target self-defined control is superposed on the target assembly.
In one possible implementation, the element superposition module includes:
the pattern superposition submodule is used for responding to selection operation of a target self-defining control, displaying the target pattern on the model of the second virtual prop in a superposition mode when an appearance element corresponding to the target self-defining control is a target pattern, and adjusting the pattern;
and the pattern adjusting sub-module is used for responding to the adjusting operation executed on the pattern adjusting control and adjusting the target pattern.
In one possible implementation, the adjustment operation includes at least one of a size scaling operation, a position adjustment operation, and a pose adjustment operation.
In one possible implementation, the apparatus further includes:
the pattern combination module is used for responding to the user-defined completion operation, enabling at least two patterns with overlapped areas to exist on the model of the second virtual prop, and combining the at least two patterns with overlapped areas into a single combined pattern;
and the control adding module is used for adding the custom control corresponding to the combined pattern in the custom control area.
In one aspect, a device for displaying virtual items in a virtual scene is provided, the device comprising:
the image display module is used for displaying a virtual scene image, wherein the virtual scene image is an image of a virtual scene observed at a visual angle corresponding to a first virtual object;
the user-defined prop display module is used for responding to the fact that the distance between a second virtual object and the first virtual object in the virtual scene is larger than or equal to a first distance threshold value, and displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to the user-defined appearance parameters of the first virtual prop;
and the default prop display module is used for displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to default appearance parameters in response to the fact that the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold value.
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the method for displaying virtual items in the above virtual scene.
In yet another aspect, a computer-readable storage medium is provided, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the method for displaying virtual props in the above virtual scene.
At least one aspect relates to a computer program product which, when executed on a data processing system comprising a processor and a memory, causes the data processing system to perform the method of the above aspect. The computer program product may be embodied in or provided on a tangible, non-transitory computer-readable medium.
According to the method and the device, a virtual scene picture is displayed; when the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold, the user-defined appearance parameters of the first virtual prop are obtained from the server; and when the second virtual object is located in the virtual scene picture, the first virtual prop equipped on the second virtual object is displayed in the virtual scene picture according to the user-defined appearance parameters of the first virtual prop. In this way, the terminal obtains the user-defined appearance parameters of other virtual objects only when those objects meet specified conditions, the amount of appearance data the terminal must obtain is reduced, and the network traffic of the terminal is saved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 2 is a schematic illustration of an appearance customization interface provided by an exemplary embodiment of the present application;
fig. 3 is a schematic diagram of a process of displaying virtual items in a virtual scene according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a trigger for obtaining a custom appearance parameter according to the embodiment shown in FIG. 3;
fig. 5 is a schematic diagram of a process of displaying virtual items in a virtual scene according to an exemplary embodiment of the present application;
fig. 6 is a flowchart of a method for displaying a virtual item in a virtual scene according to an exemplary embodiment of the present application;
FIG. 7 is a schematic interface diagram of a ground color fill according to the embodiment of FIG. 6;
FIG. 8 is a schematic interface diagram of a shading filling according to the embodiment shown in FIG. 6;
FIG. 9 is a schematic interface diagram of an overall filling of shading according to the embodiment shown in FIG. 6;
FIG. 10 is a schematic illustration of a calculation of a filling ground color and filling shading according to the embodiment of FIG. 6;
FIG. 11 is a schematic diagram of a target pattern overlay interface according to the embodiment shown in FIG. 6;
FIG. 12 is a schematic diagram of an embodiment of FIG. 6 relating to superimposing a target pattern with map coordinates;
FIG. 13 is a schematic diagram of a combined pattern generation according to the embodiment of FIG. 6;
FIG. 14 is a schematic diagram of an interface generated by a customization scheme according to the embodiment shown in FIG. 6;
FIG. 15 is a schematic illustration of a lobby preparation interface according to the embodiment of FIG. 6;
FIG. 16 is a flow chart of the display of virtual props during a battle according to the embodiment shown in FIG. 5;
fig. 17 is a block diagram illustrating a structure of a virtual item display device in a virtual scene according to an exemplary embodiment of the present application;
fig. 18 is a block diagram illustrating a structure of a virtual item display device in a virtual scene according to an exemplary embodiment of the present application;
fig. 19 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
First, terms referred to in the embodiments of the present application are described:
virtual environment: is a virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulation environment of a real world, a semi-simulation semi-fictional environment, or a pure fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual object: refers to a movable object in a virtual environment. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional volumetric model created based on animated skeletal techniques. Each virtual object has its own shape and volume in the three-dimensional virtual environment, occupying a portion of the space in the three-dimensional virtual environment.
Virtual props: tools that a virtual object can use in the virtual environment, including virtual weapons that can injure other virtual objects, such as pistols, rifles, sniper rifles, daggers, knives, swords and axes; supply props such as bullets; virtual pendants that are mounted on a designated virtual weapon and provide it with additional attributes, such as quick magazines, sighting telescopes and silencers; and defensive props such as shields, armor and armored vehicles.
A virtual weapon may be called a main virtual prop, and a virtual pendant may be called an auxiliary virtual prop.
First-person shooter game: a shooting game that a user plays from a first-person perspective, in which the picture of the virtual environment is a picture observed from the perspective of a first virtual object. In the game, at least two virtual objects fight a single-round battle in the virtual environment. A virtual object survives by avoiding injuries initiated by other virtual objects and dangers present in the virtual environment (such as a poison circle or a swamp); when the life value of a virtual object in the virtual environment reaches zero, its life in the virtual environment ends, and the virtual objects that ultimately survive are the winners. Optionally, each client may control one or more virtual objects in the virtual environment, the battle taking the time when the first client joins as its starting time and the time when the last client exits as its ending time. Optionally, the competitive mode of the battle may include a single-player mode, a two-player team mode or a multi-player team mode; the battle mode is not limited in the embodiments of the present application.
User Interface (UI) controls: any visual controls or elements visible on the user interface of the application, such as pictures, input boxes, text boxes, buttons and tabs, some of which respond to user operations.
The virtual props that are "equipped, carried or assembled" in this application are virtual props owned by a virtual object. For example, a virtual prop equipped or assembled on a virtual object is a virtual prop the virtual object is currently using; in addition, the virtual object has a backpack with backpack grids, and the virtual props carried by the virtual object are the virtual props stored in its backpack.
The method provided in the present application may be applied to a virtual reality application program, a three-dimensional map program, a military simulation program, a first-person shooter game, a Multiplayer Online Battle Arena (MOBA) game, and the like. The following embodiments take application in a first-person shooter game as an example.
Referring to fig. 1, a schematic structural diagram of a terminal according to an exemplary embodiment of the present application is shown. As shown in fig. 1, the terminal includes a main board 110, an external input/output device 120, a memory 130, an external interface 140, a touch system 150, and a power supply 160.
The main board 110 has integrated therein processing elements such as a processor and a controller.
The external input/output device 120 may include a display component (e.g., a display screen), a sound playing component (e.g., a speaker), a sound collecting component (e.g., a microphone), various keys, and the like.
The memory 130 has program codes and data stored therein.
The external interface 140 may include a headset interface, a charging interface, a data interface, and the like.
The touch system 150 may be integrated into a display component or a key of the external input/output device 120, and the touch system 150 is used to detect a trigger operation performed by a user on the display component or the key.
The power supply 160 is used to power the various other components in the terminal.
In the embodiment of the present application, the processor in the main board 110 may generate a virtual scene by executing or calling the program code and data stored in the memory, and display the generated virtual scene through the external input/output device 120. In the process of displaying the virtual scene, a trigger operation executed when the user interacts with the virtual scene may be detected by the touch system 150.
For example, please refer to fig. 2, which illustrates a schematic diagram of an appearance customization interface provided by an exemplary embodiment of the present application. As shown in fig. 2, the appearance customization interface includes a virtual item display area 20 and a custom control area 21, where the virtual item display area 20 may include a virtual item type selection control 201 and a virtual item model display area 202, and the custom control area 21 may include a custom control 211.
When the user triggers the custom control 211 in the custom control area 21, the appearance element corresponding to the custom control 211 can be displayed in the virtual item model display area 202 in the virtual item display area 20.
In a possible implementation manner, the appearance customization interface may be an operation interface through which a user customizes the appearance of an owned virtual prop while viewing it in a backpack, or an operation interface through which a user customizes the appearance of all virtual props while viewing an equipment picture.
Please refer to fig. 3, which illustrates a schematic diagram of a process of displaying a virtual item in a virtual scene according to an exemplary embodiment of the present application. Wherein, the method can be executed by a computer device, and the computer device can be a terminal. As shown in fig. 3, the computer device may present the virtual item by performing the following steps.
Step 31, displaying a virtual scene picture, wherein the virtual scene picture is a picture of the virtual scene observed at a viewing angle corresponding to the first virtual object.
In a possible implementation manner, taking a terminal to display the virtual scene picture as an example, the virtual scene picture is a picture in a program interface displayed when the terminal runs a virtual scene application program; the first virtual object is a virtual object controlled by a user through a terminal, or in other words, the first virtual object is a virtual object corresponding to a currently logged-in user account in a virtual scene application program run by the terminal.
Step 32, in response to the distance between a second virtual object and the first virtual object in the virtual scene being smaller than a first distance threshold, obtaining a custom appearance parameter of a first virtual prop from a server, wherein the first virtual prop is a virtual prop equipped on the second virtual object; the customized appearance parameter is used for indicating the user's customized appearance effect for the virtual prop.
In a possible implementation manner, the first distance threshold is a distance threshold preset by a developer or a maintenance person.
Optionally, the first distance threshold is a user-defined distance threshold. For example, when the terminal runs the virtual scene application, a distance threshold setting interface may be presented, the user may perform a distance threshold setting operation in the distance threshold setting interface, and accordingly, the terminal sets the first distance threshold in response to the distance threshold setting operation.
Step 33, in response to the second virtual object being in the virtual scene picture, displaying the first virtual item equipped on the second virtual object in the virtual scene picture according to the customized appearance parameter of the first virtual prop.
In one possible implementation, in response to the distance between the second virtual object and the first virtual object in the virtual scene being greater than or equal to the first distance threshold, the computer device does not perform the step of obtaining the customized appearance parameter of the first virtual prop from the server.
For example, please refer to fig. 4, which illustrates a schematic diagram of the trigger for obtaining a custom appearance parameter according to an embodiment of the present application. As shown in fig. 4, the virtual scene 42 presented by the terminal 41 contains a virtual object 42a, a virtual object 42b, and a virtual object 42c, where the virtual object 42a is controlled by the terminal 41 and the virtual objects 42b and 42c are controlled by other terminals. In fig. 4, the area 42d around the virtual object 42a is a circular area whose radius is the first distance threshold. The virtual object 42b is within the area 42d, i.e. the distance between the virtual object 42b and the virtual object 42a is smaller than the first distance threshold, and the virtual object 42c is outside the area 42d, i.e. the distance between the virtual object 42c and the virtual object 42a is larger than the first distance threshold. At this time, the terminal 41 acquires from the server 43 the custom appearance parameters of the virtual prop equipped on the virtual object 42b, but does not acquire those of the virtual prop equipped on the virtual object 42c. Taking a game scene as an example, the range of some game scenes is very large and may contain many virtual objects (for example, hundreds). In one round, the number of other objects close to a given player's virtual object (for example, within a circular range whose radius equals the first distance threshold) is usually small, so with the solution shown in the embodiments of the present application, the player's terminal only needs to obtain the customized appearance parameters of a small number of other virtual objects from the server, rather than those of all other virtual objects in the virtual scene.
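A minimal sketch of this proximity gate, assuming a per-frame update and a hypothetical server/cache interface (neither is specified in the embodiment):

    def squared_distance(a, b):
        """Squared Euclidean distance between two positioned objects."""
        return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2

    def update_appearance_fetch(first_obj, other_objs, threshold, server, cache):
        """Fetch custom appearance parameters only for objects inside the circle."""
        threshold_sq = threshold * threshold  # compare squared distances, avoiding sqrt
        for other in other_objs:
            if squared_distance(first_obj, other) < threshold_sq:
                if other.prop_id not in cache:
                    cache[other.prop_id] = server.fetch_custom_appearance(other.prop_id)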
To sum up, in the scheme shown in the embodiments of the present application, a virtual scene picture is displayed; when the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold, the user-defined appearance parameter of the first virtual prop is obtained from the server; and when the second virtual object is in the virtual scene picture, the first virtual prop equipped on the second virtual object is displayed in the virtual scene picture according to the user-defined appearance parameter of the first virtual prop. In this way, the terminal obtains the user-defined appearance parameters of other virtual objects only when those objects meet specified conditions, the amount of appearance data obtained by the terminal is reduced, and the network traffic of the terminal is saved.
Through the scheme shown in fig. 3, when the terminal displays the virtual scene, the acquisition of the appearance parameters of the virtual props equipped on other virtual objects can be triggered as shown in fig. 4. Please refer to fig. 5, which illustrates a schematic diagram of a process of displaying a virtual item in a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 5, a terminal (for example, the terminal shown in fig. 1) running an application corresponding to the virtual item may display the virtual item by performing the following steps.
Step 51, displaying a virtual scene picture, wherein the virtual scene picture is a picture of the virtual scene observed from the viewing angle corresponding to the first virtual object.
Step 52, in response to the distance between the second virtual object and the first virtual object in the virtual scene being smaller than the first distance threshold, displaying the first virtual item equipped on the second virtual object in the virtual scene picture according to the customized appearance parameter of the first virtual item.
Step 53, in response to the distance between the second virtual object and the first virtual object in the virtual scene being greater than or equal to the first distance threshold, displaying the first virtual item equipped on the second virtual object in the virtual scene picture according to default appearance parameters.
To sum up, in the scheme shown in the embodiments of the present application, a virtual scene picture is displayed; when the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold, the virtual prop equipped on the second virtual object is displayed according to the customized appearance parameter corresponding to the second virtual object, and when the distance is not smaller than the first distance threshold, the virtual prop equipped on the second virtual object is displayed according to the default appearance parameter. In this way, the terminal obtains the customized appearance parameters of other virtual objects only when those objects meet specified conditions, the amount of appearance data obtained by the terminal is reduced, and the network traffic of the terminal is saved.
Please refer to fig. 6, which shows a flowchart of a method for displaying a virtual item in a virtual scene according to an exemplary embodiment of the present application. Wherein, the method can be executed by a computer device, and the computer device can be a terminal. As shown in fig. 6, taking the computer device as a terminal as an example, the terminal may display the virtual item by performing the following steps.
Step 601, displaying an appearance self-defining interface.
In the embodiment of the disclosure, when a user enters a game application program that displays virtual items, the user may perform a trigger operation according to the requirement of customizing the appearance of a virtual prop, so as to display an interface for customizing the appearance of the virtual prop.
The appearance customized interface can comprise a virtual prop display area and a customized control area; the virtual prop display area may be used to display a model of a second virtual prop, and the custom control area may include custom controls of various appearance elements.
The appearance customization interface can be an editing interface on which each user presets the appearance of a virtual prop before entering the game-play state.
In a possible implementation manner, the virtual item display area may include a trigger control for selecting a second virtual item type, and when a trigger operation on the trigger control corresponding to the second virtual item of the designated type is received, a model of the second virtual item of the designated type may be displayed in the virtual item display area.
In one possible implementation, the appearance element in the custom control area may include at least one of a ground color filled in a designated area on the model of the second virtual prop, a ground pattern filled in the designated area, and a pattern of a decal on the surface of the model of the second virtual prop.
When the user triggers a designated area on the model of the second virtual prop, the next operation is determined to be performed on that area. If the user then selects a custom control in the custom control area, and the selected custom control corresponds to a fill ground color or a fill shading, the designated area is filled with the ground color or shading corresponding to that custom control.
For example, please refer to fig. 7, which illustrates an interface diagram of ground-color filling according to an embodiment of the present application. As shown in fig. 7, performing a trigger operation on the control 71 corresponding to the grip area of the virtual weapon model allows a ground color to be filled in the grip area 72; a custom control corresponding to the desired ground color, for example blue, is then selected in the custom control area, and after the selection operation is completed the grip area 72 of the virtual weapon model is filled with a blue ground color.
Please refer to fig. 8, which illustrates an interface diagram of shading filling according to an embodiment of the present application. As shown in fig. 8, performing a trigger operation on the control 71 corresponding to the grip region of the virtual weapon model allows a shading to be filled in the grip region 72; the required shading, for example a blue dot shading, is selected in the custom control area, and after the selection operation is completed the grip region of the virtual weapon model is filled with the blue dot shading.
Alternatively, please refer to fig. 9, which illustrates an interface diagram of filling the whole model with shading according to an embodiment of the present application. As shown in fig. 9, when shading is to be applied to the virtual weapon model as a whole, the control 91 corresponding to the whole virtual weapon model may be triggered; when the blue dot shading is then selected in the custom control area, with the filling area designated as the whole model, the entire virtual weapon model is filled with the blue dot shading.
Step 602, in response to the selection operation of all or part of the at least two customizable components, determining the target component with customized appearance.
The model of the second virtual prop may include the at least two customizable components.
In a possible implementation manner, a selection function for appearance-customizable components is provided to the user; that is, the user determines that one or more components are to have their appearance customized through a selection operation on the customizable components in the model of the second virtual prop, and the components selected by the user are the target components.
For example, taking a virtual gun in a game scene as the virtual prop, the virtual gun includes three customizable components: a stock, a receiver and a grip. In one possible implementation, the user selects all three customizable components to customize the appearance of the whole gun; in another possible implementation, the user selects any one of the stock, the receiver and the grip to customize the appearance of that component individually.
Step 603, in response to the selection operation of the target custom control, overlaying an appearance element corresponding to the target custom control on the model of the second virtual prop.
In the embodiment of the disclosure, when a selection operation of a target custom control in the custom control area is received, appearance elements corresponding to the target custom controls may be superimposed on a model of a second virtual item displayed in the virtual item display area.
In one possible implementation manner, in response to a selection operation on a target custom control whose corresponding appearance element is a filling shading, the filling shading corresponding to the target custom control is superimposed on the target component.
The appearance elements corresponding to the target custom controls can be different, and the selection operation of the target custom controls can be performed at least once.
In a possible implementation manner, when the appearance elements corresponding to the target custom controls include a fill shading and a fill ground color, superimposing them onto a target component works as follows: the initial color value of each model vertex in the editing region of the target component is obtained from the initial model of the virtual prop; the fill shading and fill ground color are then obtained; the fill shading is superimposed on the initial color value of each corresponding model vertex and the result is multiplied by the color value of the fill ground color; and the product determines the fill effect in the editing region of each target component.
For example, when the virtual prop is a virtual weapon whose initial model has red vertex data in the body area, green vertex data in the stock area and blue vertex data in the grip area, and a different fill shading is used in each of the three areas, the fill effect in an area is computed by superimposing the area's vertex color values with the fill shading used there and then multiplying by the fill color value used there. A customized fill ground color and fill shading can therefore be achieved by changing the fill shading and the fill color value used in an area.
Please refer to fig. 10, which illustrates a schematic diagram of the calculation of the fill ground color and fill shading according to an embodiment of the present application. As shown in fig. 10, in the areas where the vertex data color value 1001 is red and where it is blue, the fill color values 1003 used are both white (255, 255, 255), and the calculation shows that the custom ground color in these two areas is consistent with the pattern color of the fill shading 1002. In the area where the vertex data color value 1001 is green, the fill color value 1003 used is blue (34, 30, 255), and the calculation shows that the custom ground color in this area is blue; because the fill shading 1002 used there is a striped pattern, the area becomes a blue striped pattern. Finally, the effect 1004 of the superimposed color values is displayed on the virtual prop model.
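The arithmetic above reduces to a component-wise multiply in normalized RGB space. The sketch below uses a white vertex color for simplicity; the blend rule is inferred from this description, as no explicit formula is given.

    def blend(vertex_rgb, shading_rgb, fill_rgb):
        """Component-wise multiply of the three colors in normalized [0, 1] space."""
        return tuple(
            round(v / 255 * s / 255 * f / 255 * 255)
            for v, s, f in zip(vertex_rgb, shading_rgb, fill_rgb)
        )

    # A white fill color (255, 255, 255) is the identity, so the shading pattern
    # shows through in its own color, as in the red and blue areas of fig. 10:
    print(blend((255, 255, 255), (200, 200, 200), (255, 255, 255)))  # (200, 200, 200)
    # A blue fill color (34, 30, 255) tints the same shading toward blue,
    # as in the green area of fig. 10:
    print(blend((255, 255, 255), (200, 200, 200), (34, 30, 255)))    # (27, 24, 200)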
In another possible implementation manner, in response to a selection operation on a target custom control whose corresponding appearance element is a target pattern, the target pattern is displayed superimposed on the model of the second virtual prop together with a pattern adjustment control, and the target pattern is adjusted in response to an adjustment operation performed on the pattern adjustment control.
When the appearance element corresponding to the target custom control is a target pattern, after the target custom control is selected, the target pattern is superimposed on the virtual prop model in the virtual prop display area together with a pattern adjustment control; the target pattern can then be adjusted by operating the pattern adjustment control.
Wherein the adjustment operation comprises at least one of a size scaling operation, a position adjustment operation, and a pose adjustment operation.
For example, adjusting the target pattern may include at least one of rotating the target pattern by a rotation angle, changing the transparency of the target pattern, adjusting the size of the target pattern, and adjusting the offset of the texture in the target pattern.
For example, please refer to fig. 11, which illustrates a schematic diagram of a target pattern overlay interface according to an embodiment of the present application. As shown in fig. 11, when a smiling face pattern is selected as the target pattern through a custom control, a smiling face pattern 1103 that can be adjusted and deleted is generated in the virtual prop display area; its position and size can be adjusted by dragging the move control 1102 at the lower right corner, and it can be deleted by touching the control 1101 at the upper left corner.
In a possible implementation manner, mapping coordinate (UV) information of the virtual prop model is generated. When a target custom control is selected, the rotation angle, transparency, size and texture offset set by the adjustment operation are projected and updated into the mapping coordinate information, the adjusted target pattern is displayed on the virtual prop model according to the mapping coordinate information, and further superimposed target patterns are displayed on the virtual prop model through the same mapping process.
For example, please refer to fig. 12, which illustrates a schematic diagram of superimposing a target pattern through map coordinates according to an embodiment of the present application. As shown in fig. 12, the superimposed decal parameters 1202, such as the rotation angle, transparency, size, and texture offset of the smiley-face decal picture 1201, are changed by the adjustment operation; the decal picture is mapped onto the corresponding map coordinates (UV) 1203 according to these values, and the superimposed smiley-face pattern 1204 is displayed on the virtual prop model according to the mapped coordinates.
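The UV-side transform this implies can be sketched as follows: a surface UV is mapped into the decal's texture space by undoing the decal's offset, rotation and scale. The parameter names are illustrative, not from the embodiment.

    import math

    def decal_uv(u, v, offset_u, offset_v, rotation, scale_u, scale_v):
        """Map a model-surface UV into decal texture space; results outside
        [0, 1] mean the texel is not covered by the decal."""
        du, dv = u - offset_u, v - offset_v              # undo the texture offset
        c, s = math.cos(-rotation), math.sin(-rotation)  # undo the rotation
        ru = du * c - dv * s
        rv = du * s + dv * c
        return ru / scale_u + 0.5, rv / scale_v + 0.5    # undo the zoom, recenter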
In another possible implementation manner, two render targets are drawn alternately: one render target carries the superimposed patterns and is set on the material of the virtual prop, while the other records the previous drawing result and receives the next superimposition. By alternating between the two render targets, the superimposition effect of target patterns is achieved.
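A toy version of this ping-pong scheme, using two numpy arrays as stand-in render targets (a real implementation would use GPU render targets, and the blending here is plain alpha compositing):

    import numpy as np

    def stamp(dst, decal_rgba, x, y):
        """Alpha-blend one RGBA decal into dst at (x, y); clipping omitted."""
        h, w = decal_rgba.shape[:2]
        region = dst[y:y + h, x:x + w]
        a = decal_rgba[..., 3:4] / 255.0
        region[..., :3] = decal_rgba[..., :3] * a + region[..., :3] * (1.0 - a)

    targets = [np.zeros((512, 512, 4)), np.zeros((512, 512, 4))]
    shown = 0      # index of the target currently set on the prop material
    pending = []   # (decal, x, y) tuples produced by the editing UI (hypothetical)
    for decal, x, y in pending:
        back = 1 - shown
        targets[back][:] = targets[shown]  # start from the previous drawing result
        stamp(targets[back], decal, x, y)  # superimpose the new decal on top
        shown = back                       # the material now samples this target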
In addition, in response to the custom completion operation and at least two patterns with overlapping areas existing on the model of the second virtual prop, the at least two overlapping patterns are combined into a single combined pattern, and a custom control corresponding to the combined pattern is added in the custom control area.
The single combined pattern may be formed by combining and superimposing at least one basic figure, where a basic figure may be at least one of a rectangle, a triangle, and a circle.
For example, please refer to fig. 13, which illustrates a schematic diagram of combined pattern generation according to an embodiment of the present application. As shown in fig. 13, after four circular patterns of different colors are combined and superimposed, the generated pattern may be determined to be a bulls-eye-like pattern 1301. The bulls-eye pattern 1301 may be stored in the decal pattern library as one of the decal patterns, and may be superimposed on the virtual prop model currently being customized by selecting the trigger control 1302 corresponding to the bulls-eye pattern.
In one possible implementation, the single combined pattern is formed by combining and superposing a preset target pattern and at least one basic pattern.
The data corresponding to the single combined pattern may be stored in the database used for storing target pattern data, or alternatively in memory.
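The grouping step behind this combining can be sketched as follows: decals whose bounding boxes overlap are gathered into one group, and each group can then be baked into a single combined pattern. The greedy grouping and the box representation are illustrative simplifications.

    def overlaps(a, b):
        """Axis-aligned overlap test on decal boxes given as (x, y, w, h) dicts."""
        return (a["x"] < b["x"] + b["w"] and b["x"] < a["x"] + a["w"] and
                a["y"] < b["y"] + b["h"] and b["y"] < a["y"] + a["h"])

    def group_overlapping(decals):
        """Greedily collect decals into groups of mutually overlapping patterns."""
        groups = []
        for d in decals:
            for g in groups:
                if any(overlaps(d, m) for m in g):
                    g.append(d)
                    break
            else:
                groups.append([d])
        return groups  # each multi-decal group becomes one combined pattern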
Step 604, in response to the custom completion operation, generating a custom appearance parameter of the second virtual item according to the appearance element superimposed on the model of the second virtual item.
In the embodiment of the disclosure, when appearance customization is completed, the terminal may obtain the fixed data of the appearance elements corresponding to the controls on which selection operations were performed, together with the adjustment data, and generate the customized appearance parameters.
The customized appearance parameters can be used for constructing a customized model of the virtual prop.
Step 605, uploading the customized appearance parameters of the second virtual item to a server.
In this embodiment of the disclosure, the terminal may upload the customized appearance parameters corresponding to the customized second virtual item to the appearance customization server.
The appearance customization server may be a lobby server, where the lobby server is a server that provides background support for the virtual scene matching service.
In a possible implementation manner, the model of the customized virtual prop constructed by the customized appearance parameters can be stored in the server for the user to query and set at any time.
For example, please refer to fig. 14, which shows a schematic diagram of an interface generated by a customization scheme according to an embodiment of the present application. As shown in fig. 14, the virtual weapon model after the target pattern is superimposed may be stored as scheme 1; when the user performs a touch operation on the trigger control 1401 corresponding to scheme 1, the terminal may directly invoke the custom appearance parameters of scheme 1 to render the virtual weapon model with the custom appearance 1402 corresponding to scheme 1.
For example, the customized appearance of a virtual prop edited by the user in the lobby interface may be converted into data and uploaded to the server. The ground color and shading data are required once for each of the N customizable components, while the decal data accounts for the largest share; the specific data sizes may be as shown in Table 1 below.
Data field       Value          Type     Size
Decal ID         1001           int32    4 bytes
Color value ID   1              int8     1 byte
Rotation         0.0            float    4 bytes
Transparency     1.0            float    4 bytes
Zoom             (4.0, 4.0)     float2   8 bytes
Texture offset   (16.0, 8.0)    float2   8 bytes

TABLE 1
From the above, the amount of data required for one decal is 4 + 1 + 4 + 4 + 8 + 8 = 29 bytes; if there are 30 decals, the required amount of data is 29 × 30 = 870 bytes ≈ 0.85 KB.
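The per-decal size follows from packing the fields of Table 1 without padding, which can be checked with Python's struct module (the field order and little-endian packing are assumptions):

    import struct

    # Table 1 layout: int32 decal ID, int8 color value ID, float rotation,
    # float transparency, two floats zoom, two floats texture offset.
    DECAL_FORMAT = "<ibff2f2f"
    print(struct.calcsize(DECAL_FORMAT))  # 29 bytes per decal
    print(29 * 30 / 1024)                 # 30 decals: about 0.85 KB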
In one possible implementation, data may be requested from the lobby server via the TCP protocol.
Step 606, showing the virtual scene picture.
In the embodiment of the disclosure, by receiving the user's operation instructions, the terminal may display the virtual scene picture during a battle in the application program, where the virtual scene picture is a picture of the virtual scene observed from the viewing angle corresponding to the first virtual object.
The virtual scene picture may include a first virtual object and a second virtual object, where the second virtual object may be any virtual object other than the first virtual object. Both the first virtual object and the second virtual object may be equipped with virtual props and presented in the virtual scene.
Referring to fig. 15, a schematic diagram of a lobby preparation interface according to an embodiment of the present application is shown. In one possible implementation, the virtual scene picture is the lobby preparation picture shown in fig. 15, or a picture during a battle.
The customized virtual props can be displayed in the lobby preparation picture.
Step 607, in response to the distance between the second virtual object and the first virtual object in the virtual scene being less than the first distance threshold, querying the customized appearance parameters of the first virtual item in the local cache.
In the embodiment of the disclosure, during a battle, when the terminal corresponding to the first virtual object detects that the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold, it queries whether the customized appearance parameters of the first virtual item are present in its local cache.
The first virtual item may be a virtual item equipped on the second virtual object; the customized appearance parameters may be used to indicate the user-customized appearance effect of the virtual prop.
For example, if the first distance threshold is preset to 5 m, then in the virtual scene of user A, a circle with a radius of 5 m is drawn with the virtual object controlled by user A as its center; if another virtual object is detected within this circular area, the terminal used by user A queries whether the customized appearance parameters corresponding to the virtual prop held by that object are cached locally.
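A minimal sketch of this proximity test is given below, using the 5 m radius from the example; the function name and the use of three-dimensional coordinates are assumptions for illustration.

```python
import math

FIRST_DISTANCE_THRESHOLD = 5.0  # the 5 m radius from the example above

def within_sync_range(pos_a: tuple, pos_b: tuple) -> bool:
    """True when object B lies inside the circle (or sphere) of radius 5 m
    centred on A, i.e. A's terminal should resolve B's prop appearance."""
    return math.dist(pos_a, pos_b) < FIRST_DISTANCE_THRESHOLD

assert within_sync_range((0.0, 0.0, 0.0), (1.0, 2.0, 2.0))      # 3 m away
assert not within_sync_range((0.0, 0.0, 0.0), (3.0, 0.0, 4.0))  # exactly 5 m
```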
In one possible implementation manner, if the distance between the first virtual object and the second virtual object in the virtual scene has already been smaller than the first distance threshold within a recent period of time, the local cache on the first virtual object's terminal may already store the customized appearance parameters of the first virtual item.
In one possible implementation manner, in response to the customized appearance parameters of the first virtual item being found in the local cache, the first virtual item equipped on the second virtual object is displayed in the virtual scene picture according to the locally cached customized appearance parameters.
Step 608, in response to the customized appearance parameters of the first virtual item not being found in the local cache, obtaining the customized appearance parameters of the first virtual item from the server.
In a possible implementation manner, the terminal obtains the identifier of the first virtual item from the virtual scene server.
The virtual scene server is a server for providing background support for the virtual scene.
The identification of the virtual item may include an identification of a virtual object holding the virtual item and a type identification of the virtual item.
That is, the identification of the first virtual item may include an identification of the second virtual object and an identification of the item type of the first virtual item.
For example, the identifier of the first virtual item may include the identifier of the second virtual object, which may in turn include the account number, name, and the like of the second virtual object; the item type identifier of the first virtual item may include a model identifier, a name, and the like.
In one possible implementation, for the same type of virtual weapon, different virtual objects may have different corresponding customized appearance parameters; for the same type of virtual weapon of the same virtual object, there may be one or more fixed sets of corresponding customized appearance parameters.
In a possible implementation manner, the customized appearance parameters of the first virtual item are obtained from the appearance customization server through the identifier of the first virtual item.
The appearance customization server may be a server that provides background support for the appearance customization function of the virtual prop. It may also be the lobby server, that is, the server that provides background support for the virtual scene matchmaking service.
In a possible implementation manner, in response to the distance between the second virtual object and the first virtual object in the virtual scene being smaller than the first distance threshold and larger than a second distance threshold, the customized filling shading parameters of the first virtual prop are obtained from the server.
Wherein the first distance threshold is greater than the second distance threshold.
For example, during a battle, when the distance between the first virtual object and the second virtual object falls within this range, the first virtual item occupies only a small part of the picture in the virtual scene of the first virtual object; its detailed pattern would be unclear if displayed and contributes little to the picture. Therefore, to save traffic, the detailed pattern does not need to be obtained, and only the filling shading of the first virtual item needs to be displayed. That is, when the actual distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold but larger than the second distance threshold, only part of the customized appearance parameters, namely the customized filling shading parameters of the first virtual prop, are obtained from the server.
In one possible implementation manner, in response to the distance between the second virtual object and the first virtual object in the virtual scene being smaller than or equal to the second distance threshold, the customized pattern parameters of the first virtual prop are obtained from the server.
For example, during a battle, when the distance between the first virtual object and the second virtual object falls within this closer range, the first virtual item occupies a large part of the picture in the virtual scene of the first virtual object, so its detailed pattern needs to be displayed in addition to its filling shading. Therefore, when the actual distance between the second virtual object and the first virtual object in the virtual scene is smaller than or equal to the second distance threshold, all the customized appearance parameters, namely both the customized filling shading parameters and the customized pattern parameters of the first virtual prop, can be obtained from the server.
The customized appearance parameters include at least one of customized filling shading parameters and customized pattern parameters; the customized filling shading parameters are used to indicate the filling shading superimposed on the model surface of the virtual prop, and the customized pattern parameters are used to indicate a pattern superimposed on the filling shading of the virtual prop.
In another possible implementation manner, in response to the distance between the second virtual object and the first virtual object in the virtual scene being greater than or equal to the first distance threshold while the second virtual object is in the virtual scene picture, the first virtual item equipped on the second virtual object is displayed in the virtual scene picture according to the default appearance parameters of the first virtual item.
For example, during a battle, when the distance between the first virtual object and the second virtual object is at or beyond the first distance threshold, the first virtual item may still be displayed in the virtual scene of the first virtual object, but it occupies an extremely small part of the picture, so neither its detailed pattern nor its filling shading needs to be displayed. Therefore, to save traffic, when the actual distance between the second virtual object and the first virtual object in the virtual scene is greater than or equal to the first distance threshold, the first virtual item may be displayed according to its default appearance parameters.
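Putting the three distance bands together, a minimal sketch of the selection logic might look as follows; the two threshold values and the parameter names are assumptions chosen for illustration (the 5 m value echoes the earlier example).

```python
FIRST_DISTANCE_THRESHOLD = 5.0   # assumed value, in scene units
SECOND_DISTANCE_THRESHOLD = 2.0  # assumed value; must be below the first

def parameters_to_request(distance: float) -> list:
    """Decide which customized appearance parameters to request from the
    server, based on the distance between the two virtual objects."""
    if distance >= FIRST_DISTANCE_THRESHOLD:
        return []                            # default appearance, no request
    if distance > SECOND_DISTANCE_THRESHOLD:
        return ["fill_shading"]              # shading only: the prop is small
    return ["fill_shading", "pattern"]       # full detail: the prop is close

assert parameters_to_request(7.0) == []
assert parameters_to_request(3.0) == ["fill_shading"]
assert parameters_to_request(1.0) == ["fill_shading", "pattern"]
```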
Step 609, storing the customized appearance parameters of the first virtual prop obtained from the server into the local cache.
In this embodiment of the present disclosure, the terminal may store part or all of the customized appearance parameters in the local cache, so that next time the corresponding customized appearance parameters can be queried directly by the identifier of the first virtual item.
The customized appearance parameters can comprise customized filling shading parameters and customized pattern parameters.
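Steps 607 to 609 together form a cache-first lookup. A minimal self-contained sketch is given below; the class name AppearanceCache, the function request_lobby_server, and the returned dictionary layout are assumptions for illustration only.

```python
from typing import Optional

def request_lobby_server(owner_id: str, item_type: str) -> dict:
    # Placeholder for the network request to the lobby server (step 608);
    # a real client would perform a TCP round trip here.
    return {"owner": owner_id, "item": item_type, "appliques": []}

class AppearanceCache:
    """Hypothetical client-side cache of customized appearance parameters,
    keyed by (owner identifier, item type identifier)."""

    def __init__(self) -> None:
        self._entries: dict = {}

    def resolve(self, owner_id: str, item_type: str) -> dict:
        """Steps 607-609: query the local cache first, fall back to the
        server on a miss, and store the result for the next query."""
        key = (owner_id, item_type)
        params: Optional[dict] = self._entries.get(key)
        if params is None:                                     # step 608
            params = request_lobby_server(owner_id, item_type)
            self._entries[key] = params                        # step 609
        return params

cache = AppearanceCache()
first = cache.resolve("player2", "rifle")   # miss: fetched from the server
second = cache.resolve("player2", "rifle")  # hit: served from the local cache
assert first is second
```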
For example, please refer to fig. 16, which shows a flowchart of virtual item display during a battle according to an embodiment of the present application. As shown in fig. 16, player 1 and player 2 are present during the battle; player 2 holds a gun with a customized appearance, while player 1 does not. The data of the weapon carried by player 2 may be refreshed when player 2 acquires the weapon during the battle (S1601). When player 2 enters the synchronized field of view of player 1, the combat server synchronizes the weapon data of player 2 to player 1 (S1602). Upon receiving the weapon data of player 2, player 1 requests the customized weapon appearance parameters of player 2 from the lobby server (S1603); in doing so, whether locally cached data can be used directly is judged according to whether the local client has already cached the customized weapon appearance parameters of player 2 (S1604). Once the customized appearance parameters of player 2 have been synchronized to player 1, the customized weapon appearance effect of player 2 is shown on the client of player 1.
To sum up, in the scheme shown in the embodiment of the present application, the virtual scene picture is displayed; when the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold, the customized appearance parameters of the first virtual prop are obtained from the server; and when the second virtual object is in the virtual scene picture, the first virtual prop equipped on the second virtual object is displayed in the virtual scene picture according to those customized appearance parameters. In this way, the terminal displays the customized appearance of a virtual prop only when the specified conditions are met, which reduces the amount of data the terminal needs to obtain for customized appearance parameters and saves the network traffic of the terminal.
Fig. 17 is a block diagram illustrating a structure of a virtual item presentation apparatus in a virtual scene according to an exemplary embodiment. The virtual item exhibition device can be used in computer equipment to execute all or part of the steps in the method shown in the corresponding embodiment of fig. 3 or fig. 6. The virtual item display device in the virtual scene may include:
a picture displaying module 1701 for displaying a virtual scene picture, wherein the virtual scene picture is a picture of the virtual scene observed at a viewing angle corresponding to the first virtual object;
a parameter obtaining module 1702, configured to obtain, from a server, the customized appearance parameters of a first virtual item in response to the distance between a second virtual object and the first virtual object in the virtual scene being smaller than a first distance threshold, where the first virtual item is a virtual item equipped on the second virtual object, and the customized appearance parameters are used for indicating the user-defined appearance effect of the virtual prop;
and a prop displaying module 1703, configured to, in response to the second virtual object being in the virtual scene picture, display the first virtual prop equipped on the second virtual object in the virtual scene picture according to the customized appearance parameters of the first virtual prop.
In one possible implementation, the apparatus further includes:
the parameter query module is used for, before the customized appearance parameters of the first virtual item are obtained from the server, querying the local cache for the customized appearance parameters of the first virtual item in response to the distance between the second virtual object and the first virtual object in the virtual scene being smaller than the first distance threshold;
the parameter obtaining module 1702 includes:
and the parameter obtaining submodule is used for obtaining the customized appearance parameters of the first virtual item from the server in response to the customized appearance parameters of the first virtual item not being found in the local cache.
In one possible implementation, the apparatus further includes:
and the parameter storage module is used for storing the user-defined appearance parameters of the first virtual prop acquired from the server into a local cache.
In a possible implementation manner, the parameter obtaining module 1702 includes:
the identification obtaining submodule is used for obtaining the identification of the first virtual item from a virtual scene server; the virtual scene server is a server for providing background support for the virtual scene;
and the appearance parameter obtaining submodule is used for obtaining the customized appearance parameters of the first virtual prop from the appearance customization server through the identifier of the first virtual prop; the appearance customization server is a server providing background support for the appearance effect customization function of the virtual prop.
In one possible implementation, the apparatus further includes:
and the first prop displaying module is used for displaying, in response to the distance between the second virtual object and the first virtual object in the virtual scene being greater than or equal to the first distance threshold while the second virtual object is in the virtual scene picture, the first virtual prop equipped on the second virtual object in the virtual scene picture according to the default appearance parameters of the first virtual prop.
In one possible implementation, the customized appearance parameters include at least one of customized fill shading parameters and customized pattern parameters; the user-defined filling shading parameter is used for indicating filling shading superposed on the model surface of the virtual prop; the user-defined pattern parameter is used for indicating a pattern superposed on the filling shading of the virtual prop;
the parameter obtaining module 1702 includes:
a first parameter obtaining submodule, configured to obtain a custom filling shading parameter of the first virtual prop from the server in response to a distance between the second virtual object and the first virtual object in the virtual scene being smaller than the first distance threshold and larger than a second distance threshold;
a second parameter obtaining sub-module, configured to obtain, from the server, a custom pattern parameter of the first virtual item in response to a distance between the second virtual object and the first virtual object in the virtual scene being smaller than or equal to the second distance threshold;
wherein the first distance threshold is greater than the second distance threshold.
In one possible implementation, the apparatus further includes:
the custom interface display module is used for displaying an appearance custom interface before the virtual scene picture is displayed, where the appearance custom interface comprises a virtual prop display area and a custom control area; the virtual prop display area is used for displaying a model of a second virtual prop, and the custom control area comprises custom controls for various appearance elements;
the element superposition module is used for responding to the selection operation of a target user-defined control and superposing an appearance element corresponding to the target user-defined control on the model of the second virtual prop;
the parameter generating module is used for generating, in response to a user-defined completion operation, the user-defined appearance parameters of the second virtual prop according to the appearance elements superposed on the model of the second virtual prop;
the parameter uploading module is used for uploading the user-defined appearance parameters of the second virtual prop to a server;
the device further comprises:
and the second prop display module is used for displaying, in response to the first virtual object equipping the second virtual prop in the virtual scene, the second virtual prop equipped on the first virtual object in the virtual scene picture according to the user-defined appearance parameters of the second virtual prop.
In one possible implementation, the model of the second virtual prop includes at least two customizable components;
the device further comprises:
the component determination module is used for determining, before the appearance element corresponding to the target custom control is superposed on the model of the second virtual prop, the target component whose appearance is to be customized, in response to a selection operation on all or some of the at least two components;
the element superposition module comprises:
and the shading superposition submodule is used for superposing, in response to a selection operation on a target custom control whose corresponding appearance element is a filling shading, the filling shading corresponding to the target custom control on the target component.
In one possible implementation, the element superposition module includes:
the pattern superposition submodule is used for displaying, in response to a selection operation on a target custom control whose corresponding appearance element is a target pattern, the target pattern and a pattern adjusting control superposed on the model of the second virtual prop;
and the pattern adjusting submodule is used for adjusting the target pattern in response to an adjustment operation performed on the pattern adjusting control.
In one possible implementation, the adjustment operation includes at least one of a size scaling operation, a position adjustment operation, and a pose adjustment operation.
In one possible implementation, the apparatus further includes:
the pattern combination module is used for combining, in response to the user-defined completion operation when at least two patterns with overlapped areas exist on the model of the second virtual prop, the at least two patterns with overlapped areas into a single combined pattern;
and the control adding module is used for adding the custom control corresponding to the combined pattern in the custom control area.
To sum up, in the scheme shown in the embodiment of the application, the virtual scene picture is displayed; when the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold, the customized appearance parameters of the first virtual prop are obtained from the server; and when the second virtual object is in the virtual scene picture, the first virtual prop equipped on the second virtual object is displayed in the virtual scene picture according to those customized appearance parameters. In this way, the terminal obtains the customized appearance parameters of other virtual objects only when those objects meet the specified conditions, which reduces the amount of data the terminal needs to obtain for customized appearance parameters and saves the network traffic of the terminal.
Fig. 18 is a block diagram illustrating a structure of a virtual item presentation device in a virtual scene according to an exemplary embodiment. The virtual item exhibition device in the virtual scene may be used in a terminal to execute all or part of the steps executed by the terminal in the method shown in the corresponding embodiment of fig. 5 or fig. 6. The virtual item display device in the virtual scene may include:
an image display module 1801, configured to display a virtual scene image, where the virtual scene image is an image of a virtual scene observed at a viewing angle corresponding to a first virtual object;
a custom prop display module 1802, configured to, in response to the distance between a second virtual object and the first virtual object in the virtual scene being smaller than a first distance threshold, display, according to the customized appearance parameters of a first virtual prop, the first virtual prop equipped on the second virtual object in the virtual scene picture;
and a default prop displaying module 1803, configured to, in response to the distance between the second virtual object and the first virtual object in the virtual scene being greater than or equal to the first distance threshold, display the first virtual prop equipped on the second virtual object in the virtual scene picture according to default appearance parameters.
To sum up, in the scheme shown in the embodiment of the present application, the virtual scene picture is displayed; when the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold, the virtual prop equipped on the second virtual object is displayed according to the customized appearance parameters corresponding to the second virtual object, and when that distance is not smaller than the first distance threshold, the virtual prop is displayed according to the default appearance parameters. In this way, the terminal obtains the customized appearance parameters of other virtual objects only when those objects meet the specified conditions, which reduces the amount of data the terminal needs to obtain for customized appearance parameters and saves the network traffic of the terminal.
FIG. 19 is a block diagram illustrating the architecture of a computer device 1900 according to an example embodiment. The computer device 1900 may be a user terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The computer device 1900 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, computer device 1900 includes: a processor 1901 and a memory 1902.
The processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1901 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1902 may include one or more computer-readable storage media, which may be non-transitory. The memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1902 is used to store at least one instruction for execution by processor 1901 to implement all or part of the steps in the methods provided by the method embodiments herein.
In some embodiments, computer device 1900 may also optionally include: a peripheral interface 1903 and at least one peripheral. The processor 1901, memory 1902, and peripheral interface 1903 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 1903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1904, a touch screen display 1905, a camera 1906, an audio circuit 1907, a positioning component 1908, and a power supply 1909.
The peripheral interface 1903 may be used to connect at least one peripheral associated with an I/O (Input/Output) to the processor 1901 and the memory 1902. In some embodiments, the processor 1901, memory 1902, and peripherals interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral interface 1903 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1904 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1904 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1905 is a touch display screen, the display screen 1905 also has the ability to capture touch signals on or above the surface of the display screen 1905. The touch signal may be input to the processor 1901 as a control signal for processing. At this point, the display 1905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1905 may be one, providing the front panel of computer device 1900; in other embodiments, display 1905 may be at least two, each disposed on a different surface of computer device 1900 or in a folded design; in still other embodiments, display 1905 may be a flexible display disposed on a curved surface or on a folding surface of computer device 1900. Even more, the display 1905 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1905 may be made of LCD (Liquid Crystal Display), OLED (organic light-Emitting Diode), or the like.
The camera assembly 1906 is used to capture images or video. Optionally, camera assembly 1906 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera head assembly 1906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1901 for processing, or inputting the electric signals into the radio frequency circuit 1904 for realizing voice communication. The microphones may be multiple and placed at different locations on the computer device 1900 for stereo sound capture or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1901 or the radio frequency circuitry 1904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1907 may also include a headphone jack.
The Location component 1908 is used to locate the current geographic Location of the computer device 1900 for navigation or LBS (Location Based Service). The Positioning component 1908 may be a Positioning component based on the Global Positioning System (GPS) in the united states, the beidou System in china, the Global Navigation Satellite System (GLONASS) in russia, or the galileo System in europe.
Power supply 1909 is used to provide power to the various components in computer device 1900. The power source 1909 can be alternating current, direct current, disposable batteries, or rechargeable batteries. When power supply 1909 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the computer device 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: an acceleration sensor 1911, a gyro sensor 1912, a pressure sensor 1913, a fingerprint sensor 1914, an optical sensor 1915, and a proximity sensor 1916.
The acceleration sensor 1911 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the computer apparatus 1900. For example, the acceleration sensor 1911 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1901 may control the touch screen 1905 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1911. The acceleration sensor 1911 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1912 may detect a body direction and a rotation angle of the computer device 1900, and the gyro sensor 1912 may cooperate with the acceleration sensor 1911 to acquire a 3D motion of the user with respect to the computer device 1900. From the data collected by the gyro sensor 1912, the processor 1901 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1913 may be disposed on a side bezel of computer device 1900 and/or on a lower layer of touch display 1905. When the pressure sensor 1913 is disposed on the side frame of the computer device 1900, the user can detect a holding signal of the computer device 1900, and the processor 1901 can perform right-left hand recognition or quick operation based on the holding signal collected by the pressure sensor 1913. When the pressure sensor 1913 is disposed at the lower layer of the touch display 1905, the processor 1901 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1905. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1914 is configured to collect the user's fingerprint, and the processor 1901 (or the fingerprint sensor 1914 itself) identifies the user according to the collected fingerprint. Upon identifying the user's identity as a trusted identity, the processor 1901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1914 may be disposed on the front, back, or side of the computer device 1900. When a physical button or vendor logo is provided on the computer device 1900, the fingerprint sensor 1914 may be integrated with the physical button or vendor logo.
The optical sensor 1915 is used to collect the ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the touch screen 1905 based on the ambient light intensity collected by the optical sensor 1915. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1905 is turned down. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera assembly 1906 according to the intensity of the ambient light collected by the optical sensor 1915.
Proximity sensor 1916, also known as a distance sensor, is typically disposed on the front panel of computer device 1900. Proximity sensor 1916 is used to capture the distance between the user and the front of computer device 1900. In one embodiment, the touch display 1905 is controlled by the processor 1901 to switch from a bright screen state to a dark screen state when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 is gradually decreasing; when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 gradually becomes larger, the touch display 1905 is controlled by the processor 1901 to switch from the breath-screen state to the bright-screen state.
Those skilled in the art will appreciate that the architecture shown in FIG. 19 is not intended to be limiting of computer device 1900 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as a memory including at least one instruction, at least one program, set of codes, or set of instructions, executable by a processor to perform all or part of the steps of the method illustrated in the corresponding embodiments of fig. 3 or fig. 4 or fig. 5 is also provided. For example, the non-transitory computer readable storage medium may be a ROM (Read-Only Memory), a Random Access Memory (RAM), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is further provided, and the computer program product stores at least one instruction, and the at least one instruction is loaded and executed by a processor to implement all or part of the steps of the method shown in the corresponding embodiment of fig. 3 or fig. 4 or fig. 5.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. A method for displaying virtual props in a virtual scene is characterized by comprising the following steps:
displaying a virtual scene picture, wherein the virtual scene picture is a picture of a virtual scene observed at a visual angle corresponding to a first virtual object;
responding to the fact that the distance between a second virtual object and the first virtual object in the virtual scene is smaller than a first distance threshold value, and obtaining a user-defined appearance parameter of a first virtual prop from a server, wherein the first virtual prop is a virtual prop equipped with the second virtual object; the user-defined appearance parameter is used for indicating the user-defined appearance effect of the virtual prop;
and responding to the situation that the second virtual object is in the virtual scene picture, and displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to the user-defined appearance parameter of the first virtual prop.
2. The method of claim 1, wherein prior to obtaining the custom appearance parameter of the first virtual prop from the server in response to the distance of the second virtual object from the first virtual object in the virtual scene being less than the first distance threshold, further comprising:
in response to a distance between a second virtual object and the first virtual object in the virtual scene being less than a first distance threshold, querying a local buffer for a custom appearance parameter of the first virtual prop;
the obtaining of the user-defined appearance parameter of the first virtual item from the server includes:
in response to not querying the custom appearance parameter of the first virtual item in a local buffer, obtaining the custom appearance parameter of the first virtual item from the server.
3. The method of claim 2, further comprising:
and storing the user-defined appearance parameters of the first virtual prop acquired from the server into a local cache.
4. The method of claim 1, wherein obtaining the custom appearance parameter of the first virtual prop from the server comprises:
acquiring an identifier of the first virtual item from a virtual scene server; the virtual scene server is a server for providing background support for the virtual scene;
acquiring a user-defined appearance parameter of the first virtual prop from an appearance user-defined server through the identifier of the first virtual prop; the appearance self-defining server is a server for providing background support for the appearance effect self-defining function of the virtual prop.
5. The method of claim 1, further comprising:
in response to that the distance between the second virtual object and the first virtual object in the virtual scene is greater than or equal to the first distance threshold value and the second virtual object is in the virtual scene picture, displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to the default appearance parameter of the first virtual prop.
6. The method of claim 1, wherein the custom appearance parameters include at least one of custom fill shading parameters, and custom pattern parameters; the user-defined filling shading parameter is used for indicating filling shading superposed on the model surface of the virtual prop; the user-defined pattern parameter is used for indicating a pattern superposed on the filling shading of the virtual prop;
the obtaining, from the server, the customized appearance parameter of the first virtual prop in response to the distance between the second virtual object and the first virtual object in the virtual scene being smaller than a first distance threshold value, includes:
in response to the fact that the distance between the second virtual object and the first virtual object in the virtual scene is smaller than the first distance threshold and larger than a second distance threshold, obtaining a custom filling shading parameter of the first virtual prop from the server;
in response to the distance between the second virtual object and the first virtual object in the virtual scene being smaller than or equal to the second distance threshold, obtaining a custom pattern parameter of the first virtual prop from the server;
wherein the first distance threshold is greater than the second distance threshold.
7. The method according to any one of claims 1 to 6, wherein before presenting the virtual scene picture, the method further comprises:
displaying an appearance self-defining interface, wherein the appearance self-defining interface comprises a virtual prop display area and a self-defining control area; the virtual prop display area is used for displaying a model of a second virtual prop, and the user-defined control area comprises user-defined controls of various appearance elements;
in response to the selection operation of a target self-defining control, overlaying an appearance element corresponding to the target self-defining control on the model of the second virtual prop;
responding to a user-defined finishing operation, and generating a user-defined appearance parameter of the second virtual prop according to appearance elements superposed on the model of the second virtual prop;
uploading the user-defined appearance parameters of the second virtual prop to a server;
the method further comprises the following steps:
responding to the first virtual object to equip the second virtual prop in the virtual scene, and displaying the second virtual prop equipped on the first virtual object in the virtual scene picture according to the user-defined appearance parameter of the second virtual prop.
8. The method of claim 7, wherein the model of the second virtual prop comprises at least two customizable components;
before the responding to the selection operation of the target custom control and superimposing the appearance element corresponding to the target custom control on the model of the second virtual prop, the method further includes:
in response to the selection operation of all or part of the at least two customizable components, determining a target component with customized appearance;
the step of superimposing, in response to the selection operation of the target custom control, an appearance element corresponding to the target custom control on the model of the second virtual item includes:
responding to the selection operation of a target self-defining control, wherein the appearance element corresponding to the target self-defining control is a filling shading, and superposing the filling shading corresponding to the target self-defining control on the target component.
9. The method of claim 7, wherein the superimposing, in response to the selection operation of the target custom control, the appearance element corresponding to the target custom control on the model of the second virtual prop comprises:
responding to the selection operation of a target user-defined control, wherein the appearance element corresponding to the target user-defined control is a target pattern, and displaying the target pattern and a pattern adjusting control on the model of the second virtual prop in a superposition manner;
and adjusting the target pattern in response to the adjustment operation performed on the pattern adjustment control.
10. The method of claim 9, wherein the adjustment operation comprises at least one of a size scaling operation, a position adjustment operation, and a pose adjustment operation.
11. The method of claim 9, wherein the method further comprises:
in response to the user-defined finishing operation, when at least two patterns with overlapped areas exist on the model of the second virtual prop, combining the at least two patterns with overlapped areas into a single combined pattern;
and adding a custom control corresponding to the combined pattern in the custom control area.
12. A method for displaying virtual props in a virtual scene is characterized by comprising the following steps:
displaying a virtual scene picture, wherein the virtual scene picture is a picture of a virtual scene observed at a visual angle corresponding to a first virtual object;
responding to the fact that the distance between a second virtual object and the first virtual object in the virtual scene is smaller than a first distance threshold value, and displaying a first virtual prop equipped on the second virtual object in the virtual scene picture according to the self-defined appearance parameter of the first virtual prop;
and in response to the distance between the second virtual object and the first virtual object in the virtual scene being greater than or equal to the first distance threshold, displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to default appearance parameters.
13. A virtual item display device in a virtual scene, the device comprising:
the image display module is used for displaying a virtual scene image, wherein the virtual scene image is an image of a virtual scene observed at a visual angle corresponding to a first virtual object;
the parameter obtaining module is used for responding that the distance between a second virtual object and the first virtual object in the virtual scene is smaller than a first distance threshold value, and obtaining a user-defined appearance parameter of a first virtual prop from a server, wherein the first virtual prop is a virtual prop equipped with the second virtual object; the user-defined appearance parameter is used for indicating the user-defined appearance effect of the virtual prop;
and the prop display module is used for responding to the situation that the second virtual object is in the virtual scene picture, and displaying the first virtual prop equipped on the second virtual object in the virtual scene picture according to the user-defined appearance parameter of the first virtual prop.
14. A computer device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method for presenting virtual items in a virtual scene according to any one of claims 1 to 11.
15. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method for displaying virtual items in a virtual scene according to any one of claims 1 to 11.
CN202010473631.0A 2020-05-29 2020-05-29 Virtual item display method in virtual scene, computer equipment and storage medium Active CN111672100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010473631.0A CN111672100B (en) 2020-05-29 2020-05-29 Virtual item display method in virtual scene, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010473631.0A CN111672100B (en) 2020-05-29 2020-05-29 Virtual item display method in virtual scene, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111672100A true CN111672100A (en) 2020-09-18
CN111672100B CN111672100B (en) 2021-12-10

Family

ID=72453189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010473631.0A Active CN111672100B (en) 2020-05-29 2020-05-29 Virtual item display method in virtual scene, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111672100B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190143221A1 (en) * 2017-11-15 2019-05-16 Sony Interactive Entertainment America Llc Generation and customization of personalized avatars
US20190266806A1 (en) * 2018-02-27 2019-08-29 Soul Vision Creations Private Limited Virtual representation creation of user for fit and style of apparel and accessories
CN108491534A (en) * 2018-03-29 2018-09-04 腾讯科技(深圳)有限公司 Information displaying method, device in virtual environment and computer equipment
CN109603151A (en) * 2018-12-13 2019-04-12 腾讯科技(深圳)有限公司 Skin display methods, device and the equipment of virtual role
CN110465097A (en) * 2019-09-09 2019-11-19 网易(杭州)网络有限公司 Role in game, which stands, draws display methods and device, electronic equipment, storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
情义-嘉龙: "bilibili website", 14 March 2020 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112316421A (en) * 2020-11-27 2021-02-05 腾讯科技(深圳)有限公司 Equipment method, device, terminal and storage medium of virtual prop
CN112843723A (en) * 2021-02-03 2021-05-28 北京字跳网络技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN112843723B (en) * 2021-02-03 2024-01-16 北京字跳网络技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN112884874A (en) * 2021-03-18 2021-06-01 腾讯科技(深圳)有限公司 Method, apparatus, device and medium for applying decals on virtual model
CN112884874B (en) * 2021-03-18 2023-06-16 腾讯科技(深圳)有限公司 Method, device, equipment and medium for applying applique on virtual model
CN113398583A (en) * 2021-07-19 2021-09-17 网易(杭州)网络有限公司 Applique rendering method and device of game model, storage medium and electronic equipment
CN113797548A (en) * 2021-09-18 2021-12-17 珠海金山网络游戏科技有限公司 Object processing method and device
CN113797548B (en) * 2021-09-18 2024-02-27 珠海金山数字网络科技有限公司 Object processing method and device

Also Published As

Publication number Publication date
CN111672100B (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN111672100B (en) Virtual item display method in virtual scene, computer equipment and storage medium
CN108619721B (en) Distance information display method and device in virtual scene and computer equipment
CN111420402B (en) Virtual environment picture display method, device, terminal and storage medium
CN111659117B (en) Virtual object display method and device, computer equipment and storage medium
CN111589142A (en) Virtual object control method, device, equipment and medium
CN111921197B (en) Method, device, terminal and storage medium for displaying game playback picture
CN113398571B (en) Virtual item switching method, device, terminal and storage medium
CN112169325B (en) Virtual prop control method and device, computer equipment and storage medium
CN112156464B (en) Two-dimensional image display method, device and equipment of virtual object and storage medium
EP3971838A1 (en) Personalized face display method and apparatus for three-dimensional character, and device and storage medium
CN112337105B (en) Virtual image generation method, device, terminal and storage medium
CN113509714B (en) Virtual prop synthesis preview method, device, terminal and storage medium
CN112044069A (en) Object prompting method, device, equipment and storage medium in virtual scene
JP7186901B2 (en) HOTSPOT MAP DISPLAY METHOD, DEVICE, COMPUTER DEVICE AND READABLE STORAGE MEDIUM
CN113577765B (en) User interface display method, device, equipment and storage medium
CN111603770A (en) Virtual environment picture display method, device, equipment and medium
CN112007362B (en) Display control method, device, storage medium and equipment in virtual world
CN111589141B (en) Virtual environment picture display method, device, equipment and medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN112330823A (en) Virtual item display method, device, equipment and readable storage medium
CN111389015A (en) Method and device for determining game props and storage medium
CN111035929B (en) Elimination information feedback method, device, equipment and medium based on virtual environment
CN113194329B (en) Live interaction method, device, terminal and storage medium
CN111672121A (en) Virtual object display method and device, computer equipment and storage medium
CN112604274A (en) Virtual object display method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40028488; Country of ref document: HK)
GR01 Patent grant