CN112717391A - Character name display method, apparatus, device, and medium for a virtual character - Google Patents

Character name display method, apparatus, device, and medium for a virtual character

Info

Publication number: CN112717391A (application number CN202110082965.XA; granted publication CN112717391B)
Authority: CN (China)
Prior art keywords: virtual, name, character, dimensional, role
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 金雨嫣, 赵宇浩
Original and Current Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Application CN202110082965.XA filed by Tencent Technology (Shenzhen) Co., Ltd.; application granted and published as CN112717391B.


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/85: Providing additional services to players
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a character name display method, apparatus, device, and medium for a virtual character, belonging to the field of human-computer interaction. The method includes: displaying a character name creation interface for the virtual character, where a name input control is displayed on the interface; displaying a character name in the name input control in response to an input operation on the control; and displaying the character name on a virtual prop used by the virtual character located in the virtual environment. The application provides an interaction mode in which a character name, once entered on the character name creation interface, is mapped onto and displayed on a virtual prop in the virtual environment, achieving an immersive display effect in which the character name passes from the two-dimensional name creation interface into the virtual world.

Description

Character name display method, apparatus, device, and medium for a virtual character
Technical Field
The present application relates to the field of human-computer interaction, and in particular to a character name display method, apparatus, device, and medium for a virtual character.
Background
A user plays a game in a virtual environment through a virtual character. For example, the user controls virtual character 1 to race in a three-dimensional racing world, or controls virtual character 2 to hunt for treasure in a three-dimensional fantasy world.
In the related art, when a user creates a virtual character, the game program displays a character name creation interface on which an input box and a confirm button are shown. The user enters a character name such as "invincible warrior" in the input box, then clicks the confirm button, and the character name of the virtual character is created.
However, such character names are limited in how they can be displayed: they appear only on interfaces such as the game loading screen, the health bar of a virtual character, the chat interface, and the match settlement interface.
Disclosure of Invention
The application provides a character name display method, apparatus, device, and medium for a virtual character, achieving an immersive display effect in which the character name passes from a two-dimensional name creation interface into the virtual world. The technical solution includes at least the following aspects:
According to one aspect of the present application, a character name display method for a virtual character is provided, the method including:
displaying a character name creation interface for the virtual character, where a name input control is displayed on the interface;
displaying a character name in the name input control in response to an input operation on the name input control;
displaying the character name on a virtual prop used by the virtual character located in the virtual environment.
According to another aspect of the present application, a character name display apparatus for a virtual character is provided, the apparatus including:
a display module configured to display a character name creation interface for the virtual character, where the interface displays a name input control;
the display module being further configured to display a character name in the name input control in response to an input operation on the name input control;
the display module being further configured to display the character name on a virtual prop used by the virtual character located in the virtual environment.
According to one aspect of the present application, a computer device is provided, including a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the character name display method for a virtual character described above.
According to another aspect of the present application, a computer-readable storage medium is provided, storing a computer program that is loaded and executed by a processor to implement the character name display method for a virtual character described above.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the storage medium and executes them, causing the computer device to perform the character name display method for a virtual character described above.
The beneficial effects of the technical solution provided by the embodiments of the present application include at least the following:
A name input control is displayed on the character name creation interface; the user inputs and confirms the character name of the virtual character through the control; and the character name is then displayed on a virtual prop used by the virtual character in the virtual environment. This interaction mode, in which a character name entered on the creation interface is mapped onto and displayed on a virtual prop in the virtual environment, achieves an immersive display effect in which the character name passes from the two-dimensional name creation interface into the virtual world.
Drawings
To more clearly describe the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a block diagram of a computer system provided by an exemplary embodiment;
FIG. 2 is a flowchart of a character name display method for a virtual character provided by an exemplary embodiment;
FIG. 3 is a schematic diagram of a character name creation interface according to an exemplary embodiment;
FIG. 4 is a schematic diagram of a virtual prop displaying a character name according to an exemplary embodiment;
FIG. 5 is a flowchart of a character name display method for a virtual character provided by another exemplary embodiment;
FIG. 6 is a schematic diagram of a terminal acquiring a two-dimensional texture image according to an exemplary embodiment;
FIG. 7 is a computer code diagram of converting a two-dimensional texture image into a normal map according to an exemplary embodiment;
FIG. 8 is a computer code diagram of converting a two-dimensional texture image into a normal map according to another exemplary embodiment;
FIG. 9 is a computer code diagram of attaching a normal map to a three-dimensional model of a virtual prop according to an exemplary embodiment;
FIG. 10 is a schematic diagram of a virtual prop displaying a character name according to another exemplary embodiment;
FIG. 11 is a flowchart of a character name display method for a virtual character according to another exemplary embodiment;
FIG. 12 is a schematic diagram of an imprinting animation according to an exemplary embodiment;
FIG. 13 is a schematic diagram of an animation of a virtual character using a virtual prop according to an exemplary embodiment;
FIG. 14 is a block diagram of a character name display apparatus for a virtual character according to an exemplary embodiment;
FIG. 15 is a block diagram of a computer device according to an exemplary embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
horizontal plate game: refers to a game in which the movement path of a game character is controlled on a horizontal screen. In all or most of the frames in the horizontal game, the movement path of the game character is along the horizontal direction. According to the content, the horizontal game is divided into horizontal cross-cut, horizontal adventure, horizontal competitive and horizontal strategy; the horizontal type game is classified into a two-dimensional (2D) horizontal type game and a three-dimensional (3D) horizontal type game according to the technology.
Virtual environment: is a virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulation environment of a real world, a semi-simulation semi-fictional environment, or a pure fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application.
Virtual props: the special tool is a tool which can be used by a virtual object in a virtual environment, and comprises a virtual weapon capable of changing attribute values of other virtual objects, a supply tool such as a bullet, a defense tool such as a shield, a armour and a armored car, a virtual tool such as a virtual light beam and a virtual shock wave which is displayed through a hand when the virtual object releases skills, a part of a body trunk of the virtual object, such as a hand and a leg, and a virtual tool capable of changing attribute values of other virtual objects, long-distance virtual tools such as a pistol, a rifle and a sniper, short-distance virtual tools such as a dagger, a knife, a sword and a rope, and a throwing type virtual tool such as a flying axe, a flying knife, a grenade, a flash bomb and a smoke bomb.
In this application, a virtual prop refers to a prop that can display a role name, indicating the identity of the role. Illustratively, the virtual prop includes at least one of a virtual nameplate, a virtual bracelet, a virtual brooch, and a virtual band, which is not limited in the present application.
Virtual roles: refers to a movable object in a virtual environment. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters and animals displayed in a three-dimensional virtual environment. Optionally, the virtual character is a three-dimensional volumetric model created based on animated skeletal techniques. Each virtual character has its own shape and volume in the three-dimensional virtual environment, occupying a portion of the space in the three-dimensional virtual environment.
Normal mapping: refers to a special texture that can be applied to the surface of a three-dimensional model. The normal map gives each pixel of the two-dimensional image a height value, which contains surface information for many details. The normal mapping is different from the prior texture, can be only used for a two-dimensional surface, and can create a plurality of special stereoscopic visual effects on a three-dimensional model as the extension of concave-convex textures.
FIG. 1 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 has installed and running on it an application that supports a virtual environment. The application may be any of a three-dimensional map program, a military simulation program, a side-scrolling shooter, a side-scrolling adventure game, a side-scrolling level-clearing game, a side-scrolling strategy game, a Virtual Reality (VR) application, or an Augmented Reality (AR) application. The first terminal 120 is used by a first user, who controls a first virtual character located in the virtual environment to perform activities including, but not limited to: adjusting body posture, walking, running, jumping, riding, driving, aiming, picking up, using throwable props, and attacking other virtual characters. Illustratively, the first virtual character is a virtual person, such as a simulated person object or an anime person object. Illustratively, the first user controls the first virtual character to perform activities through UI controls on the virtual environment screen.
The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a single server, multiple servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142; the memory 142 includes a receiving module 1421, a control module 1422, and a sending module 1423. The receiving module 1421 is configured to receive requests sent by clients, such as a team-up request; the control module 1422 is configured to control rendering of the virtual environment screen; and the sending module 1423 is configured to send responses to clients, such as a prompt that the team was formed successfully. The server 140 provides background services for applications that support a three-dimensional virtual environment. Optionally, the server 140 undertakes the primary computing work while the first terminal 120 and the second terminal 160 undertake the secondary computing work; or the server 140 undertakes the secondary computing work while the first terminal 120 and the second terminal 160 undertake the primary computing work; or the server 140, the first terminal 120, and the second terminal 160 compute cooperatively using a distributed computing architecture.
The second terminal 160 likewise has installed and running on it an application that supports a virtual environment, which may be any of a three-dimensional map program, a military simulation program, a side-scrolling shooter, a side-scrolling adventure game, a side-scrolling level-clearing game, a side-scrolling strategy game, a Virtual Reality application, or an Augmented Reality application. The second terminal 160 is used by a second user, who controls a second virtual character located in the virtual environment to perform activities including, but not limited to: adjusting body posture, walking, running, jumping, riding, driving, aiming, picking up, using throwable props, and attacking other virtual characters. Illustratively, the second virtual character is a virtual person, such as a simulated person object or an anime person object.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Optionally, they may belong to the same team, the same organization, or the same camp, have a friend relationship, or have temporary communication rights; alternatively, they may belong to different camps, different teams, or different organizations, or have a hostile relationship.
Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or are the same type of application on different operating system platforms (Android or iOS). The first terminal 120 may generally be one of multiple terminals, and the second terminal 160 may generally be one of multiple terminals; this embodiment is illustrated with only the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer. The following embodiments are illustrated with a terminal that is a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
FIG. 2 is a flowchart of a character name display method for a virtual character according to an exemplary embodiment of the present application. Taking application of the method to the terminal shown in FIG. 1 as an example, as shown in FIG. 2, the method includes:
Step 210: displaying a character name creation interface for the virtual character, where a name input control is displayed on the interface.
The character name creation interface is a user interface for creating a name for the virtual character controlled by the user.
Part (a) of FIG. 3 shows a character name creation interface. The character name creation interface 300 includes a background area 310 and a name input control 320. The background area 310 is the background of the interface 300 and includes at least one of a pattern, text, and a decorative design. Illustratively, the background area 310 shows the text "why we fight: for the dead, for the living, for a passion for life, for a peaceful future". The text may use artistic fonts, including at least one of solid (three-dimensional) text, shadowed text, metallic text, wood-grain text, crystal text, flame text, background-relief text, streamer text, and mouse-drawn art.
The name input control 320 is a control with which the user creates a name for the virtual character. Illustratively, the name input control 320 is displayed as "Name: please input a name".
In one embodiment, in response to the user touching the name input control 320, the terminal receives the character name entered by the user with an input tool. Illustratively, in response to the touch operation, the terminal displays a virtual keyboard on the character name creation interface 300 with which the user inputs a name, and the terminal receives the name entered with the virtual keyboard. Illustratively, in response to the touch operation, the terminal receives a name entered with an input device, the input device including at least one of a keyboard and a microphone.
In one embodiment, the character name creation interface 300 also displays a random-name control with which the terminal randomly selects a character name for the user. In response to the user touching the random-name control, the terminal receives the user's instruction to generate a character name and displays a random character name on the character name creation interface 300.
Optionally, the touch operation is at least one of a click, a double click, a pressure touch, a hover touch, and a swipe performed on the name input control 320.
Step 220: displaying a character name in the name input control in response to an input operation on the name input control.
The character name in the name input control is obtained through the user's input operation on the control. Part (b) of FIG. 3 shows the character name creation interface after input; it includes the background area 310, the name input control 320, and a character name 330.
The way the character name 330 is displayed is referred to as its presentation features, which include at least one of character type, font style, size, color, and weight. In one embodiment, the character name 330 is presented as text in regular script, size four, black, and bold; in one embodiment, it is presented in an artistic font, size four, red, and bold; in one embodiment, it is presented as a string of words, symbols, and numbers, where the words are in an artistic font, size four, red, and bold.
Illustratively, the artistic fonts include at least one of solid text, shadowed text, metallic text, wood-grain text, crystal text, flame text, background-relief text, and streamer text. The symbols include at least one of an exclamation mark, a question mark, a period, a comma, and a pause mark.
In some embodiments, a character name cannot be created when it exceeds the system-preset character length, when it has already been used by another user, or when it uses characters outside the system-preset character set.
Step 230: displaying the character name on a virtual prop used by the virtual character located in the virtual environment.
In response to the user completing input of the character name, the terminal displays the character name on a virtual prop used by the virtual character in the virtual environment. FIG. 4 exemplarily shows a virtual prop displaying a character name. The interface 400 includes a virtual character 410, a virtual prop 420, a character name 430, and a prompt area 440.
The virtual character 410 is a movable object in the virtual environment; in this application, it is the character controlled by the user. By controlling the virtual character to run, jump, attack, retreat, and so on, the user interacts with other users in the virtual environment. Optionally, the virtual character 410 is a three-dimensional virtual character.
The virtual prop 420 is a prop in the virtual environment. In one embodiment, it is a prop in a three-dimensional virtual environment. In this application, the virtual prop 420 is a prop that can display the name of the virtual character 410 and indicate the identity of the virtual character 410. In some embodiments, the virtual prop includes at least one of a virtual nameplate, a virtual bracelet, a virtual brooch, and a virtual wristband, which is not limited in this application. Optionally, the virtual prop is a virtual alloy military tag. Optionally, the virtual prop is a three-dimensional virtual prop.
The character name 430 is the name the user created for the virtual character. Its presentation features are described in detail in step 220 and are not repeated here.
In one embodiment, when the virtual environment is a three-dimensional virtual environment, the virtual character is a three-dimensional virtual character, and the virtual prop is a three-dimensional virtual prop, the character name 430 is shown with a concave-convex (embossed) effect. That is, a character name with a concave-convex effect is displayed on the virtual prop used by the virtual character located in the three-dimensional virtual environment. The concave-convex effect is the stereoscopic effect the character name 430 shows relative to the surface of the virtual prop 420.
The prompt area 440 is where the system prompts the user to confirm the character name of the virtual character. In one embodiment, the prompt area 440 displays "Hello, Talma Strychni, your code is 00972".
In summary, in the method provided by this embodiment, a name input control is displayed on the character name creation interface, the user inputs and confirms the character name of the virtual character through the control, and the character name is finally displayed on a virtual prop used by the virtual character in the virtual environment. This embodiment provides an interaction mode in which a character name entered on the creation interface is mapped onto and displayed on a virtual prop in the virtual environment, achieving an immersive display effect in which the character name passes from the two-dimensional name creation interface into the virtual world.
FIG. 5 is a flowchart of a character name display method for a virtual character according to an exemplary embodiment of the present application. Here the virtual environment is a three-dimensional virtual environment, the virtual character is a three-dimensional virtual character, and the virtual prop is a three-dimensional virtual prop. Taking application of the method to the terminal shown in FIG. 1 as an example, as shown in FIG. 5, the method includes:
Step 510: displaying a character name creation interface for the virtual character.
The character name creation interface for the virtual character is set in a three-dimensional virtual environment. In one embodiment, the character name creation interface 300 displays animated special effects, i.e., a dynamic background area and a dynamic name input control are presented on the interface.
Step 520: displaying a character name in the name input control in response to an input operation on the name input control.
The name input control is a three-dimensional text input box located in the virtual environment.
A three-dimensional text input box is a text input box in a three-dimensional virtual environment; in one embodiment, it exhibits a dynamic, stereoscopic visual effect.
In one embodiment, in response to the input operation on the name input control, the character name creation interface displays the character name together with a confirm-name control. In response to the user touching the confirm-name control, the terminal receives the user's confirmation of the character name and proceeds to the next step of character name creation.
Step 530: acquiring a two-dimensional texture image of the character name.
Based on the user's instruction confirming the character name, the terminal acquires a two-dimensional texture image of the character name of the virtual character. The two-dimensional texture image is a two-dimensional image that contains the character name information and shows the presentation features of the character name. It is composed of pixels, each of which has a pixel value.
In one embodiment, the character name in the three-dimensional text input box is photographed by a camera model located in the virtual environment, yielding the two-dimensional texture image of the character name. The two-dimensional texture image is an image with a single color channel, or an image with multiple color channels. FIG. 6 shows the terminal acquiring a two-dimensional texture image; it includes a three-dimensional text input box 610, a camera model 620, and a two-dimensional texture image 630.
The terminal creates the camera model 620 at a designated position in the three-dimensional virtual environment; this camera is used exclusively to photograph the character name in the three-dimensional text input box 610. The camera model 620 photographs the character name in the text input box 610 to obtain the two-dimensional texture image. Illustratively, the two-dimensional texture image 630 is in the R8 format, i.e., an image with only R (red) channel pixel values.
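As an illustration of this capture step, the following is a minimal Unity C# sketch. The class name, camera reference, and texture size are assumptions for illustration; the patent figures do not disclose this code verbatim.

```csharp
using UnityEngine;

// Minimal sketch: photograph the character name in the 3D text input box
// with a dedicated camera and keep the result as a single-channel texture.
public class NameCapture : MonoBehaviour
{
    public Camera captureCamera;   // camera aimed at the 3D text input box (assumed)

    public RenderTexture CaptureNameTexture(int width, int height)
    {
        // R8: one 8-bit red channel, matching the single-channel
        // two-dimensional texture image described in the embodiment.
        var rt = new RenderTexture(width, height, 0, RenderTextureFormat.R8);
        captureCamera.targetTexture = rt;
        captureCamera.Render();               // photograph the text this frame
        captureCamera.targetTexture = null;
        return rt;                            // the two-dimensional texture image
    }
}
```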
Step 540: converting the two-dimensional texture image into a normal map based on the pixel values of the pixels in the two-dimensional texture image.
A normal map is a special texture that can be applied to the surface of a three-dimensional model. It contains the orientation of each pixel of the two-dimensional texture image, i.e., the normals of all of its pixels; the normals of all pixels together form the normal map, which carries detailed surface information. Unlike earlier textures, which could only be applied to a two-dimensional surface, a normal map, as an extension of bump textures, can create many special stereoscopic visual effects on a three-dimensional model.
The terminal photographs the character name in the three-dimensional text input box through the camera model located in the virtual environment to obtain the two-dimensional texture image of the character name.
In one embodiment, the pixel values of the pixels in the two-dimensional texture image are converted into the normals of the pixels through a height-to-normal formula, yielding the normal map.
The height-to-normal formula converts the height map of the two-dimensional texture image into a normal map. The height map is an image made from the differences in pixel values between color channels; the normal map is an image made from the normals of the pixels.
FIG. 7 and FIG. 8 are computer code diagrams of converting a two-dimensional texture image into a normal map according to exemplary embodiments of the present application.
FIG. 7 shows the key C# code. The relevant identifiers are explained below.
MainTex corresponds to the two-dimensional texture image 630 showing "Talma" in FIG. 6. The render target RenderTarget refers to the three-dimensional model of the virtual prop. NormalMap refers to the normal map and BumpMap to the height map; NormalMapTexture is the texture of the normal map, BumpMapTexture the texture of the height map, and GlobalTexture the texture of the original map. blitMaterial is the material used for the conversion, and Blit is the function that renders a source texture into a destination texture through that material. Converting the height map into the normal map requires setting the conversion material, after which the normal map is obtained.
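A minimal sketch of this conversion step in Unity C# might look as follows. The shader name "Hidden/HeightToNormal" is an assumption standing in for the conversion material's shader; the patent only shows fragments of the actual code.

```csharp
using UnityEngine;

// Sketch: convert the captured height texture into a normal map on the GPU.
public static class NormalMapBaker
{
    public static RenderTexture BakeNormalMap(Texture bumpMapTexture)
    {
        var normalMapTexture = new RenderTexture(
            bumpMapTexture.width, bumpMapTexture.height, 0,
            RenderTextureFormat.ARGB32);

        // Material whose shader reads neighboring heights and outputs normals
        // (shader name assumed, not disclosed by the patent).
        var blitMaterial = new Material(Shader.Find("Hidden/HeightToNormal"));

        // Render the height map into the normal map using that material.
        Graphics.Blit(bumpMapTexture, normalMapTexture, blitMaterial);
        return normalMapTexture;
    }
}
```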
FIG. 8 shows the code that converts the height map into the normal map. The relevant variables are explained below.
l, r, u, and d are the offset positions of the pixels to the left of, to the right of, above, and below the current pixel, and h_l, h_r, h_u, and h_d are the pixel values of those four neighbors. dh_dx is the height difference between the pixel values of the right and left neighbors, dh_dy is the height difference between the pixel values of the lower and upper neighbors, and normal is the resulting normal.
In one embodiment, for each pixel in the two-dimensional texture image, the neighboring pixels above, below, to the left of, and to the right of that pixel are determined; the difference between the lower and upper neighbors gives a first height difference, and the difference between the right and left neighbors gives a second height difference; the first and second height differences are then converted into the normal of the pixel, yielding the normal map. The two height differences are combined with a fixed constant 0.1 to obtain the normal, which has a direction and a magnitude.
Illustratively, let dh_dx = h_r − h_l and dh_dy = h_d − h_u.
The tangent vector of pixel A in the left-right direction is t_x = (dx, 0, dh_dx), and the tangent vector in the up-down direction is t_y = (0, dy, dh_dy). The normal of pixel A is their cross product:
n = t_x × t_y = (−dh_dx·dy, −dx·dh_dy, dx·dy).
Both dx and dy are constants; assuming dx = dy = 0.1 and rescaling n by the constant factor −1/dx gives
n = (dh_dx, dh_dy, −0.1).
Because the sign of the z component determines whether the normal map faces front or back, the z component alone is multiplied by −1, giving
n = (dh_dx, dh_dy, 0.1).
The normal map is composed of the normals of all pixels of the height map. Each n is then normalized and format-converted, which eliminates inconsistencies in form between the normals of different pixels.
The normal map thus contains the orientation of each pixel, where the orientation of a pixel is computed from the height values of its surrounding pixels (above, below, left, and right). The height value comprises the pixel values of the R, G, and B channels.
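Under these assumptions, the per-pixel computation could be sketched in C# as follows. This is a CPU-side illustration of the formula above; the embodiment performs the equivalent work on the GPU, and the array layout is an assumption.

```csharp
using UnityEngine;

// Sketch of the height-to-normal formula for one pixel.
// heights[x, y] holds the single-channel height value of each pixel (assumed layout).
public static class HeightToNormal
{
    public static Vector3 NormalAt(float[,] heights, int x, int y)
    {
        // Pixel values of the left, right, upper, and lower neighbors
        // (clamped at the image border).
        int w = heights.GetLength(0), h = heights.GetLength(1);
        float hL = heights[Mathf.Max(x - 1, 0), y];
        float hR = heights[Mathf.Min(x + 1, w - 1), y];
        float hU = heights[x, Mathf.Max(y - 1, 0)];
        float hD = heights[x, Mathf.Min(y + 1, h - 1)];

        float dhDx = hR - hL;   // second height difference (right minus left)
        float dhDy = hD - hU;   // first height difference (lower minus upper)

        // Combine the two height differences with the fixed constant 0.1
        // (z flipped so the normal faces the front), then normalize.
        return new Vector3(dhDx, dhDy, 0.1f).normalized;
    }
}
```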
Step 550: in the three-dimensional virtual environment, attaching the normal map to the three-dimensional model of the virtual prop used by the virtual character, and displaying the character name with a concave-convex effect.
The three-dimensional model of the virtual prop is the prop's representation on the terminal.
FIG. 9 is a computer code diagram of attaching the normal map to the three-dimensional model of the virtual prop according to an exemplary embodiment of the present application.
FIG. 9 shows the key code for attaching the normal map to the prop's three-dimensional model. The relevant identifiers are explained below.
Renderer.GetPropertyBlock reads the material properties of the virtual prop's three-dimensional model, MaterialPropertyBlock.SetTexture sets a texture in those material properties, and MaterialPropertyBlock.SetFloat sets the concave-convex intensity.
The terminal first obtains the material properties of the prop's three-dimensional model, then sets the normal map texture into them, and then sets the concave-convex intensity; with these operations the material setup is complete.
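A minimal sketch of this material setup in Unity C#, under the assumption that the shader exposes properties named "_NormalMap" and "_BumpIntensity" (the actual property names are not disclosed in the patent text):

```csharp
using UnityEngine;

// Sketch: apply the baked normal map to the prop's three-dimensional model.
public class NameplateImprinter : MonoBehaviour
{
    public Renderer propRenderer;      // renderer of the virtual prop model

    public void Apply(Texture normalMap, float bumpIntensity)
    {
        var block = new MaterialPropertyBlock();
        propRenderer.GetPropertyBlock(block);            // read current properties
        block.SetTexture("_NormalMap", normalMap);       // set the normal map texture
        block.SetFloat("_BumpIntensity", bumpIntensity); // set concave-convex strength
        propRenderer.SetPropertyBlock(block);            // write properties back
    }
}
```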
In response to the user confirming the character name, the terminal, in the three-dimensional virtual environment, attaches the normal map to the three-dimensional model of the virtual prop used by the virtual character and then sets a lighting effect for the prop, whereupon the prop displays the character name with a concave-convex effect.
Illustratively, setting the lighting effect for the virtual prop means the system sets a light vector; a vector operation between the light vector and the normal map on the prop's three-dimensional model yields the pixel values of the pixels on the model.
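A minimal sketch of such a vector operation, assuming simple Lambert diffuse shading (the patent does not specify the exact lighting model):

```csharp
using UnityEngine;

// Sketch of the lighting vector operation, assuming Lambert diffuse shading.
public static class BumpLighting
{
    // normal: per-pixel normal from the normal map; lightDir: direction
    // toward the light; baseColor: the prop's unlit pixel color.
    public static Color Shade(Vector3 normal, Vector3 lightDir, Color baseColor)
    {
        float intensity = Mathf.Max(0f, Vector3.Dot(normal.normalized,
                                                    lightDir.normalized));
        return baseColor * intensity;   // brighter where the normal faces the light
    }
}
```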
As shown in FIG. 10, the virtual prop displays the character name "Talma" with a concave-convex effect.
In summary, in the method provided by this embodiment, a name input control is first displayed on the character name creation interface, and the user inputs and confirms the character name of the virtual character through the control; the terminal then acquires a two-dimensional texture image of the character name and converts it into a normal map based on the pixel values of its pixels; finally, in the three-dimensional virtual environment, the terminal attaches the normal map to the three-dimensional model of the virtual prop used by the virtual character and displays the character name with a concave-convex effect. This embodiment provides an interaction mode in which a character name entered on the creation interface is mapped onto a virtual prop in the virtual environment, achieving an immersive display effect in which the character name passes from the two-dimensional name creation interface into the virtual world.
By converting a height map into a normal map, the method of this embodiment realizes the three-dimensional concave-convex effect of transferring a two-dimensional picture onto a virtual prop, providing a lightweight way to generate a three-dimensional effect.
FIG. 11 is a flowchart of a character name display method for a virtual character according to another exemplary embodiment of the present application. In this embodiment, the virtual environment is a three-dimensional virtual environment and the virtual prop is a three-dimensional virtual prop. Taking application of the method to the terminal shown in FIG. 1 as an example, the method includes:
Step 1101: displaying a character name creation interface for the virtual character, where a name input control is displayed on the interface.
The character name creation interface is the interface on which the user creates a name for the character. In this embodiment, it is set in a three-dimensional virtual environment; in one embodiment, the interface displays animated special effects, i.e., a dynamic background area and a dynamic name input control.
Step 1102: displaying a character name in the name input control in response to an input operation on the name input control.
The character name in the name input control is obtained through the user's input operation on the control. In this exemplary embodiment, the name input control is a three-dimensional text input box located in the virtual environment.
In one embodiment, in response to the input operation on the name input control, the character name creation interface displays the character name together with a confirm-name control. In response to the user touching the confirm-name control, the terminal receives the user's confirmation of the character name and proceeds to the next step of character name creation.
Step 1103: acquiring a two-dimensional texture image of the character name.
Based on the user's instruction confirming the character name, the terminal acquires a two-dimensional texture image of the character name of the virtual character. The two-dimensional texture image is a two-dimensional image that contains the character name information and shows the presentation features of the character name; it is composed of pixels, each with a pixel value. The name input control is a three-dimensional text input box located in the virtual environment.
The character name in the three-dimensional text input box is photographed by a camera model located in the virtual environment, yielding a two-dimensional texture image of the character name, which is an image with a single color channel or an image with multiple color channels.
Step 1104: converting the two-dimensional texture image into a normal map based on the pixel values of the pixels in the two-dimensional texture image.
A normal map is a special texture that can be applied to the surface of a three-dimensional model. In one embodiment, the pixel values of the pixels in the two-dimensional texture image are converted into the normal directions of the pixels through the height-to-normal formula, yielding the normal map.
The height-to-normal formula converts the height map of the two-dimensional texture image into a normal map. The height map is an image made from the differences in pixel values between color channels; the normal map is an image made from the normals of the pixels.
Step 1105: in the three-dimensional virtual environment, attaching the normal map to the three-dimensional model of the virtual prop used by the virtual character.
In response to the user confirming the character name, the terminal attaches the normal map to the three-dimensional model of the virtual prop used by the virtual character in the three-dimensional virtual environment.
Step 1106: displaying an imprinting animation in which the character name is gradually imprinted onto the virtual prop.
In response to the user inputting and confirming the character name, the terminal controls the display screen to play the imprinting animation, which shows the dynamic process of the character name being imprinted onto the virtual prop.
In one embodiment, the imprinting animation includes three screens.
Part (a) of FIG. 12 is a schematic diagram of the first screen of the imprinting animation according to an exemplary embodiment of the present application. Based on the terminal's instruction to play the imprinting animation, the display screen shows the character name and a background area. The character name shows the character name information entered by the user. In one embodiment, its presentation features (character type, font style, size, color, and weight) are the same as those of the name the user entered; in another embodiment, at least one of these features differs. Illustratively, the character name is displayed in a slanted artistic font while the name the user entered is displayed in an upright regular script.
The background area includes at least one of a pattern, text, and a decorative design. Illustratively, the background area shows the text "why we fight: for the dead, for the living, for a passion for life, for a peaceful future".
Part (b) of FIG. 12 is a schematic diagram of the second screen of the imprinting animation according to an exemplary embodiment of the present application. The second screen includes the character name and the virtual prop; based on the user-triggered instruction to play the imprinting animation, the display screen shows the virtual prop.
The virtual prop is a prop that can display the character name and indicate the character's identity. Illustratively, it includes at least one of a virtual nameplate, a virtual bracelet, a virtual brooch, and a virtual wristband, which is not limited in this application.
Illustratively, the terminal controls the display screen to show the virtual prop. Optionally, the terminal first controls the background area to darken gradually until it reaches a set brightness threshold, and then controls the virtual prop to brighten gradually until it reaches a set brightness threshold.
Illustratively, the terminal controls the display screen to remove the background area and show the virtual prop at the same time. Optionally, the background area darkens gradually to its threshold while the virtual prop simultaneously brightens gradually to its threshold.
In one embodiment, the terminal adjusts the size of the character name on the display screen until it matches the size of the virtual prop; in one embodiment, the terminal adjusts the size of the virtual prop until it matches the size of the character name; in one embodiment, the terminal adjusts both at once until they match. A sketch of this fade-and-match behavior is given after the screen descriptions below.
Part (c) of FIG. 12 is a schematic diagram of the third screen of the imprinting animation according to an exemplary embodiment of the present application. In part (c) of FIG. 12, the character name has been matched to the virtual prop.
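The gradual darkening, brightening, and size matching described above could be sketched as a Unity C# coroutine, assuming the animation screens are UI elements with CanvasGroup fading and simple linear interpolation; the patent does not disclose the animation code, so all names and thresholds here are assumptions.

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the imprinting animation: the background darkens, the prop
// brightens, and the character name is scaled until it matches the prop.
public class ImprintAnimation : MonoBehaviour
{
    public CanvasGroup background;     // background area of the first screen (assumed UI)
    public CanvasGroup virtualProp;    // the virtual prop being revealed (assumed UI)
    public Transform characterName;    // the rendered character name
    public Vector3 matchedScale = Vector3.one; // scale that matches the prop

    public IEnumerator Play(float duration)
    {
        Vector3 startScale = characterName.localScale;
        for (float t = 0f; t < 1f; t += Time.deltaTime / duration)
        {
            background.alpha = 1f - t;                      // darken the background
            virtualProp.alpha = t;                          // brighten the prop
            characterName.localScale =
                Vector3.Lerp(startScale, matchedScale, t);  // match the sizes
            yield return null;                              // wait one frame
        }
        background.alpha = 0f;
        virtualProp.alpha = 1f;
        characterName.localScale = matchedScale;
    }
}
```

A caller would start it with, for example, StartCoroutine(Play(1.5f)), where the duration is likewise an assumed value.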
Step 1107: displaying the character name with a concave-convex effect on the three-dimensional model of the virtual prop.
The three-dimensional model of the virtual prop is the prop's representation in the three-dimensional virtual environment.
The terminal can receive, process, and transmit data.
In response to the user confirming the character name, the terminal, in the three-dimensional virtual environment, attaches the normal map to the three-dimensional model of the virtual prop used by the virtual character, and the prop displays the character name with a concave-convex effect. FIG. 10 shows a virtual prop displaying the character name "Talma" with a concave-convex effect.
Step 1108: displaying an action animation of the virtual character using the virtual prop.
Illustratively, FIG. 13 shows a screen in which the virtual character touches the virtual prop. In various embodiments, the ways in which the virtual character uses the virtual prop include, but are not limited to: wearing the prop, driving the prop, attacking with the prop, and performing a specified action with the prop.
In summary, in the method provided by this embodiment, a name input control is displayed on the character name creation interface, and the user inputs and confirms the character name of the virtual character through the control; the terminal acquires a two-dimensional texture image of the character name and converts it into a normal map based on the pixel values of its pixels; the display screen then plays an imprinting animation in which the character name is gradually imprinted onto the virtual prop; the terminal attaches the normal map to the three-dimensional model of the virtual prop used by the virtual character in the three-dimensional virtual environment, and the prop displays the character name with a concave-convex effect; finally, the terminal controls the display screen to play an action animation of the virtual character using the virtual prop. This embodiment provides an interaction mode in which a character name entered on the creation interface is mapped onto a virtual prop in the virtual environment, achieving an immersive display effect in which the character name passes from the two-dimensional name creation interface into the virtual world.
By playing an imprinting animation that gradually imprints the character name onto the virtual prop, the method of this embodiment also provides a novel interaction mode, conveying to the user an intuitive process of the character name passing from the two-dimensional name creation interface into the three-dimensional virtual world.
FIG. 14 is a block diagram of a character name display apparatus for a virtual character according to an exemplary embodiment of the present application. As shown in FIG. 14, the apparatus includes:
a display module 1420 configured to display a character name creation interface for the virtual character, where the interface displays a name input control;
an interaction module 1440 configured to display a character name in the name input control in response to an input operation on the name input control;
the display module 1420 being further configured to display the character name on a virtual prop used by the virtual character located in the virtual environment.
In an optional embodiment, the display module 1420 is further configured to display the character name with a concave-convex effect on the virtual prop used by the virtual character located in the three-dimensional virtual environment.
In an optional embodiment, the display module 1420 includes an acquiring sub-module 1421, a converting sub-module 1422, and a displaying sub-module 1423:
the acquiring sub-module 1421 is configured to acquire a two-dimensional texture image of the character name;
the converting sub-module 1422 is configured to convert the two-dimensional texture image into a normal map based on the pixel values of the pixels in the two-dimensional texture image;
the displaying sub-module 1423 is configured to, in the three-dimensional virtual environment, attach the normal map to the three-dimensional model of the virtual prop used by the virtual character and display the character name with a concave-convex effect.
In an optional embodiment, the acquiring sub-module 1421 is further configured to photograph the character name in the three-dimensional text input box through a camera model located in the virtual environment to obtain the two-dimensional texture image of the character name;
the two-dimensional texture image being an image with a single color channel, or an image with multiple color channels.
In an optional embodiment, the converting sub-module 1422 is further configured to convert the pixel values of the pixels in the two-dimensional texture image into the normal directions of the pixels through the height-to-normal formula to obtain the normal map.
In an optional embodiment, the converting sub-module 1422 is further configured to determine, for a pixel in the two-dimensional texture image, the neighboring pixels above, below, to the left of, and to the right of that pixel.
In an optional embodiment, the converting sub-module 1422 is further configured to calculate the difference between the lower and upper neighboring pixels to obtain a first height difference, and the difference between the right and left neighboring pixels to obtain a second height difference.
In an optional embodiment, the converting sub-module 1422 is further configured to convert the first height difference and the second height difference into the normal of the pixel to obtain the normal map.
In an optional embodiment, the displaying sub-module 1423 is further configured to display an imprinting animation in which the character name is gradually imprinted onto the virtual prop, so that the character name appears on the virtual prop with a concave-convex effect.
In an optional embodiment, the displaying sub-module 1423 is further configured to display an action animation of the virtual character using the virtual prop.
It should be noted that the character name display apparatus provided by the above embodiment is illustrated only by the division of functional modules described above; in practical applications, these functions may be allocated to different functional modules as needed, i.e., the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiment above and the embodiments of the character name display method belong to the same concept; the details of its implementation are described in the method embodiments and are not repeated here.
FIG. 15 shows a block diagram of an electronic device 1500 provided by an exemplary embodiment of the present application. The electronic device 1500 may be a portable mobile terminal, such as a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 1500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, electronic device 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1502 is configured to store at least one instruction for execution by the processor 1501 to implement the character name display method of a virtual character provided by the method embodiments herein.
In some embodiments, the electronic device 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1504, a display 1505, a camera assembly 1506, an audio circuit 1507, a positioning assembly 1508, and a power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1505 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, it also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1505, disposed on the front panel of the electronic device 1500; in other embodiments, there may be at least two display screens 1505, each disposed on a different surface of the electronic device 1500 or in a folded design; in still other embodiments, the display screen 1505 may be a flexible display disposed on a curved or folded surface of the electronic device 1500. The display screen 1505 may even be configured in a non-rectangular irregular pattern, that is, an irregularly-shaped screen. The display screen 1505 may be an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode) display.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1506 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1507 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 1501 for processing or to the radio frequency circuit 1504 for voice communication. For stereo capture or noise reduction, multiple microphones may be provided, each at a different location on the electronic device 1500. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker may be a conventional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1507 may also include a headphone jack.
The positioning component 1508 is configured to determine the current geographic location of the electronic device 1500 to implement navigation or LBS (Location Based Service). The positioning component 1508 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1509 is used to supply power to the various components of the electronic device 1500. The power supply 1509 may use alternating current, direct current, disposable batteries, or rechargeable batteries. When the power supply 1509 includes a rechargeable battery, it may be a wired rechargeable battery charged through a wired line or a wireless rechargeable battery charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, the electronic device 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: an acceleration sensor 1511, a gyroscope sensor 1512, a pressure sensor 1513, a fingerprint sensor 1514, an optical sensor 1515, and a proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the electronic device 1500. For example, the acceleration sensor 1511 may be used to detect the components of gravitational acceleration along the three coordinate axes. The processor 1501 may control the display screen 1505 to display the user interface in a landscape view or a portrait view based on the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used to collect motion data for games or for the user.
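As a rough illustration of the landscape/portrait decision driven by gravity components (the axis convention and the absence of hysteresis here are simplifying assumptions, not details from this application):

```python
def view_orientation(gx: float, gy: float) -> str:
    """Pick the UI orientation from the gravity components along the
    device's x axis (short edge) and y axis (long edge)."""
    return "portrait" if abs(gy) >= abs(gx) else "landscape"

# Example: device held upright -> gravity mostly along y -> "portrait".
print(view_orientation(gx=0.5, gy=9.6))  # portrait
```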
The gyroscope sensor 1512 may detect a body direction and a rotation angle of the electronic device 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 may cooperate to collect a 3D motion of the user on the electronic device 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1513 may be disposed on a side bezel of the electronic device 1500 and/or underneath the display screen 1505. When the pressure sensor 1513 is disposed on the side bezel, it can detect the user's grip on the electronic device 1500, and the processor 1501 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed underneath the display screen 1505, the processor 1501 controls operability controls on the UI according to the user's pressure operation on the display screen 1505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1514 is configured to collect the user's fingerprint; the processor 1501 identifies the user's identity from the fingerprint collected by the fingerprint sensor 1514, or the fingerprint sensor 1514 itself identifies the user's identity from the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 1501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1514 may be disposed on the front, back, or side of the electronic device 1500. When a physical key or vendor logo is provided on the electronic device 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor logo.
The optical sensor 1515 is used to collect the ambient light intensity. In one embodiment, the processor 1501 may control the display brightness of the display screen 1505 based on the ambient light intensity collected by the optical sensor 1515: when the ambient light intensity is high, the display brightness is increased; when the ambient light intensity is low, the display brightness is decreased. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
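A minimal sketch of such ambient-light-driven brightness control, assuming a simple linear mapping; the lux range and brightness bounds are illustrative values, not from this application:

```python
def display_brightness(ambient_lux: float,
                       min_level: float = 0.1,
                       max_level: float = 1.0,
                       full_lux: float = 1000.0) -> float:
    """Map ambient light (lux) linearly onto a backlight level in
    [min_level, max_level]; brighter surroundings raise the level."""
    t = min(max(ambient_lux / full_lux, 0.0), 1.0)
    return min_level + t * (max_level - min_level)
```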
The proximity sensor 1516, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 1500 and is used to measure the distance between the user and the front of the device. In one embodiment, when the proximity sensor 1516 detects that this distance is gradually decreasing, the processor 1501 controls the display screen 1505 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1516 detects that the distance is gradually increasing, the processor 1501 controls the display screen 1505 to switch from the dark-screen state back to the bright-screen state.
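A minimal sketch of the proximity-driven screen-state switch; the 5 cm threshold is an assumed value for illustration:

```python
def next_screen_state(distance_cm: float, near_cm: float = 5.0) -> str:
    """Darken the screen when the user is close to the front panel
    (e.g. during a call) and brighten it again as they move away."""
    return "dark" if distance_cm < near_cm else "bright"
```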
Those skilled in the art will appreciate that the configuration shown in FIG. 15 is not intended to be limiting of electronic device 1500, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The present application further provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the role name display method for a virtual role provided by the above method embodiment.
A computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to enable the computer device to execute the role name display method of the virtual role provided by the above method embodiment.
The above serial numbers of the embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A character name display method of a virtual character, the method comprising:
displaying a character name creation interface of the virtual character, wherein a name input control is displayed on the character name creation interface;
displaying a character name in the name input control in response to an input operation on the name input control; and
displaying the character name on a virtual prop used by the virtual character located in a virtual environment.
2. The method of claim 1, wherein the displaying the character name on a virtual prop used by the virtual character located in a virtual environment comprises:
displaying the character name with a concave-convex effect on the virtual prop used by the virtual character located in a three-dimensional virtual environment.
3. The method of claim 2, wherein the displaying the character name with a concave-convex effect on the virtual prop used by the virtual character located in the three-dimensional virtual environment comprises:
acquiring a two-dimensional texture image of the character name;
converting the two-dimensional texture image into a normal map based on pixel values of pixel points in the two-dimensional texture image;
attaching, in the three-dimensional virtual environment, the normal map to a three-dimensional model of the virtual prop used by the virtual character, and displaying the character name with the concave-convex effect.
4. The method of claim 3, wherein the name input control is a three-dimensional text input box located in the virtual environment, and the acquiring the two-dimensional texture image of the character name comprises:
shooting the character name in the three-dimensional text input box through a camera model located in the virtual environment, to obtain the two-dimensional texture image of the character name;
wherein the two-dimensional texture image is an image having a single color channel, or an image having a plurality of color channels.
5. The method according to claim 3, wherein the converting the two-dimensional texture image into a normal map based on pixel values of pixel points in the two-dimensional texture image comprises:
converting the pixel values of the pixel points in the two-dimensional texture image into normal directions of the pixel points through a height-to-normal formula, to obtain the normal map.
6. The method of claim 5, wherein the converting the pixel value of the pixel point in the two-dimensional texture image into the normal direction of the pixel point by using a height-to-normal formula to obtain the normal map comprises:
determining, for a pixel point in the two-dimensional texture image, an upper pixel point, a lower pixel point, a left pixel point, and a right pixel point relative to that pixel point;
calculating a difference between the lower pixel point and the upper pixel point to obtain a first height difference, and calculating a difference between the right pixel point and the left pixel point to obtain a second height difference; and
converting the first height difference and the second height difference into a normal of the pixel point, to obtain the normal map.
7. The method of claim 2, wherein before the displaying the character name with a concave-convex effect on the virtual prop used by the virtual character located in the three-dimensional virtual environment, the method further comprises:
displaying an imprinting animation in which the character name is gradually imprinted on the virtual prop, so that the character name with a concave-convex effect appears on the virtual prop.
8. The method of any of claims 1 to 3, further comprising:
displaying an action animation of the virtual character using the virtual prop.
9. A character name display apparatus for a virtual character, the apparatus comprising:
a display module, configured to display a character name creation interface of the virtual character, wherein a name input control is displayed on the character name creation interface; and
an interaction module, configured to display a character name in the name input control in response to an input operation on the name input control;
wherein the display module is further configured to display the character name on a virtual prop used by the virtual character located in a virtual environment.
10. The apparatus of claim 9, wherein the display module is further configured to display the character name with a concave-convex effect on the virtual prop used by the virtual character located in a three-dimensional virtual environment.
11. The apparatus of claim 10, wherein the display module comprises: an acquisition submodule, a conversion submodule, and a display submodule;
the acquisition submodule is configured to acquire a two-dimensional texture image of the character name;
the conversion submodule is configured to convert the two-dimensional texture image into a normal map based on pixel values of pixel points in the two-dimensional texture image; and
the display submodule is configured to attach, in the three-dimensional virtual environment, the normal map to a three-dimensional model of the virtual prop used by the virtual character, and to display the character name with the concave-convex effect.
12. The apparatus of claim 11,
the acquisition submodule is configured to shoot the character name in a three-dimensional text input box through a camera model located in the virtual environment, to obtain the two-dimensional texture image of the character name;
wherein the two-dimensional texture image is an image having a single color channel, or an image having a plurality of color channels.
13. The apparatus of claim 12,
the conversion submodule is configured to convert the pixel values of the pixel points in the two-dimensional texture image into normal directions of the pixel points through a height-to-normal formula, to obtain the normal map.
14. A computer device, characterized in that the computer device comprises: a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the character name display method of a virtual character according to any one of claims 1 to 8.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is loaded and executed by a processor to implement the character name display method of a virtual character according to any one of claims 1 to 8.
CN202110082965.XA 2021-01-21 2021-01-21 Method, device, equipment and medium for displaying character names of virtual characters Active CN112717391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110082965.XA CN112717391B (en) 2021-01-21 2021-01-21 Method, device, equipment and medium for displaying character names of virtual characters

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110082965.XA CN112717391B (en) 2021-01-21 2021-01-21 Method, device, equipment and medium for displaying character names of virtual characters

Publications (2)

Publication Number Publication Date
CN112717391A true CN112717391A (en) 2021-04-30
CN112717391B CN112717391B (en) 2023-04-25

Family

ID=75594745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110082965.XA Active CN112717391B (en) 2021-01-21 2021-01-21 Method, device, equipment and medium for displaying character names of virtual characters

Country Status (1)

Country Link
CN (1) CN112717391B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359269A (en) * 2022-03-09 2022-04-15 广东工业大学 Virtual food box defect generation method and system based on neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015054005A (en) * 2013-09-11 2015-03-23 株式会社カプコン Game program and game device
CN106730846A (en) * 2016-11-10 2017-05-31 北京像素软件科技股份有限公司 The data processing method and device of one attribute stage property
CN110339570A (en) * 2019-07-17 2019-10-18 网易(杭州)网络有限公司 Exchange method, device, storage medium and the electronic device of information
US20200306633A1 (en) * 2018-03-23 2020-10-01 Tencent Technology (Shenzhen) Company Limited Equipment display method, apparatus, device and storage medium in virtual environment battle

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015054005A (en) * 2013-09-11 2015-03-23 株式会社カプコン Game program and game device
CN106730846A (en) * 2016-11-10 2017-05-31 北京像素软件科技股份有限公司 The data processing method and device of one attribute stage property
US20200306633A1 (en) * 2018-03-23 2020-10-01 Tencent Technology (Shenzhen) Company Limited Equipment display method, apparatus, device and storage medium in virtual environment battle
CN110339570A (en) * 2019-07-17 2019-10-18 网易(杭州)网络有限公司 Exchange method, device, storage medium and the electronic device of information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Yifu et al.: "Adobe Creative University Standard Textbook on Three-Dimensional Texture Design", 31 March 2012, Printing Industry Press *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359269A (en) * 2022-03-09 2022-04-15 广东工业大学 Virtual food box defect generation method and system based on neural network

Also Published As

Publication number Publication date
CN112717391B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
US11980814B2 (en) Method and apparatus for controlling virtual object to mark virtual item and medium
CN109529319B (en) Display method and device of interface control and storage medium
WO2019153750A1 (en) Method, apparatus and device for view switching of virtual environment, and storage medium
CN111589128B (en) Operation control display method and device based on virtual scene
CN111035918B (en) Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN111013142B (en) Interactive effect display method and device, computer equipment and storage medium
CN110917616B (en) Orientation prompting method, device, equipment and storage medium in virtual scene
CN108786110B (en) Method, device and storage medium for displaying sighting telescope in virtual environment
CN109917910B (en) Method, device and equipment for displaying linear skills and storage medium
CN111273780B (en) Animation playing method, device and equipment based on virtual environment and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN112156464A (en) Two-dimensional image display method, device and equipment of virtual object and storage medium
CN108744511B (en) Method, device and storage medium for displaying sighting telescope in virtual environment
CN111013137B (en) Movement control method, device, equipment and storage medium in virtual scene
CN111589141A (en) Virtual environment picture display method, device, equipment and medium
CN111325822B (en) Method, device and equipment for displaying hot spot diagram and readable storage medium
CN110738738B (en) Virtual object marking method, equipment and storage medium in three-dimensional virtual scene
CN112755517B (en) Virtual object control method, device, terminal and storage medium
CN110833695A (en) Service processing method, device, equipment and storage medium based on virtual scene
CN113289336A (en) Method, apparatus, device and medium for tagging items in a virtual environment
CN111035929B (en) Elimination information feedback method, device, equipment and medium based on virtual environment
CN112306332A (en) Method, device and equipment for determining selected target and storage medium
CN111672115A (en) Virtual object control method and device, computer equipment and storage medium
CN112717391B (en) Method, device, equipment and medium for displaying character names of virtual characters
CN113559494A (en) Virtual item display method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40042557

Country of ref document: HK

GR01 Patent grant