CN116328310A - Virtual model processing method, device, computer equipment and storage medium

Info

Publication number
CN116328310A
CN116328310A (application CN202111599971.9A)
Authority
CN
China
Prior art keywords
virtual
face
virtual model
character
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111599971.9A
Other languages
Chinese (zh)
Inventor
林�智
刘勇成
胡志鹏
袁思思
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202111599971.9A priority Critical patent/CN116328310A/en
Publication of CN116328310A publication Critical patent/CN116328310A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/57: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F 13/70: Game security or game management aspects
    • A63F 13/79: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F 13/80: Special adaptations for executing a specific game genre or game mode
    • A63F 13/822: Strategy games; Role-playing games
    • A63F 13/825: Fostering virtual characters
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F 2300/6615: Methods for processing data by generating or executing the game program for rendering three dimensional images using models with different levels of detail [LOD]
    • A63F 2300/80: Features of games using an electronically generated display having two or more dimensions, specially adapted for executing a specific type of game
    • A63F 2300/807: Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose a virtual model processing method and apparatus, a computer device, and a storage medium. In response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character among the second virtual characters, the face of a first virtual character is adjusted based on the face virtual model of the target virtual character, and the updated first virtual character is displayed on a game interface. Because the player can trigger the face virtual model acquisition event through a trigger operation on the game interface, the player can quickly obtain a face model that meets the player's own requirements during the game, which improves processing efficiency.

Description

Virtual model processing method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for processing a virtual model, a computer device, and a storage medium.
Background
With the continuous development of computer and communication technology, entertainment games that run on terminals have been developed to meet people's demand for leisure and entertainment, for example multiplayer online action games built on a client/server architecture. In such action games, a player can operate a virtual character on the screen and perform operations such as attacking in the game scene from a third-person view of the character the player controls, so that the player experiences the visual impact of the game in an immersive way, which greatly enhances the engagement and realism of the game.
At present, to meet the personalized customization demands of different players while a game application is running, a face-pinching (face customization) function is generally provided when the virtual game character corresponding to a player is created, so that players can build characters they like according to their own preferences. However, creating a desired character appearance requires a great deal of time, so the face-pinching process is cumbersome and time-consuming for the player.
Disclosure of Invention
The embodiments of the present application provide a virtual model processing method and apparatus, a computer device, and a storage medium, which allow the face of the virtual character currently controlled by a player to be changed quickly through a touch operation on the game interface, thereby improving processing efficiency.
The embodiment of the application provides a virtual model processing method, which is applied to a first terminal and comprises the following steps:
displaying a game interface, where the game interface includes a virtual scene and a first virtual character and at least one second virtual character located in the virtual scene, the first virtual character is a virtual character controlled by a first user, the second virtual character is a virtual character controlled by a second user, and the game interface is displayed on a terminal of the first user;
and in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character among the second virtual characters, adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
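Purely as a non-authoritative illustration of the two first-terminal steps above, the following Python sketch models the display step and the reaction to a face virtual model acquisition event; every name in it (FirstTerminal, FaceModel, Character) is a hypothetical stand-in and not part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class FaceModel:
    params: dict  # hypothetical face model parameters (face shape, skin color, ...)

@dataclass
class Character:
    name: str
    controlled_by: str
    face: FaceModel

class FirstTerminal:
    """Rough sketch of the first-terminal method steps."""

    def __init__(self, first_char: Character, second_chars: list):
        self.first_char = first_char
        self.second_chars = second_chars

    def display_game_interface(self) -> None:
        # Step 1: show the virtual scene with the first and second virtual characters.
        others = ", ".join(c.name for c in self.second_chars)
        print(f"Scene: {self.first_char.name} (first user) together with {others}")

    def on_face_model_acquisition_event(self, target: Character) -> Character:
        # Step 2: adjust the first character's face using the target's face virtual model.
        self.first_char.face = FaceModel(dict(target.face.params))
        print(f"{self.first_char.name} now uses the face model of {target.name}")
        return self.first_char  # the updated first virtual character

if __name__ == "__main__":
    p1 = Character("P1", "user_a", FaceModel({"face_shape": "oval"}))
    p2 = Character("P2", "user_b", FaceModel({"face_shape": "round"}))
    terminal = FirstTerminal(p1, [p2])
    terminal.display_game_interface()
    terminal.on_face_model_acquisition_event(p2)
```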
The embodiment of the application also provides a virtual model processing method, which is applied to the second terminal and comprises the following steps:
displaying a game interface, where the game interface includes a virtual scene and a first virtual character and a second virtual character located in the virtual scene, the first virtual character is a virtual character controlled by a first user, the second virtual character is a virtual character controlled by a second user, and the game interface is displayed on a terminal of the second user;
receiving a face virtual model authorization request for authorizing the face virtual model of the second virtual character to the first virtual character, and displaying face virtual model authorization prompt information in an information prompt area of the game interface based on the face virtual model authorization request;
determining an authorization condition in response to an authorization setting operation for the face virtual model authorization prompt information;
and authorizing the face virtual model of the second virtual character to the first virtual character based on the authorization condition, so that a first user corresponding to the first virtual character obtains an updated first virtual character, where the updated first virtual character is a virtual character obtained by adjusting the face of the first virtual character based on the face virtual model of the second virtual character.
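As a minimal sketch only, the second-terminal steps above might look roughly like this in Python; the class and field names are assumptions, and the returned dictionary merely stands in for whatever data format a real implementation would use.

```python
from dataclasses import dataclass

@dataclass
class AuthorizationRequest:
    requester_character: str   # the first virtual character asking for the face model
    owner_character: str       # the second virtual character whose face model is requested

class SecondTerminal:
    """Rough sketch of the second-terminal steps: prompt, then authorize on a condition."""

    def show_authorization_prompt(self, request: AuthorizationRequest) -> None:
        # Corresponds to displaying the authorization prompt in the information prompt area.
        print(f"{request.requester_character} requests the face virtual model of "
              f"{request.owner_character}; set an authorization condition.")

    def authorize_face_model(self, request: AuthorizationRequest, condition: dict) -> dict:
        # Package the data the first terminal needs to build the updated first character.
        return {
            "from": request.owner_character,
            "to": request.requester_character,
            "condition": condition,
            "face_model_data": {"face_shape": "round"},  # placeholder payload
        }

if __name__ == "__main__":
    term = SecondTerminal()
    req = AuthorizationRequest("P1", "P2")
    term.show_authorization_prompt(req)
    print(term.authorize_face_model(req, {"type": "paid", "amount": 9.9}))
```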
Correspondingly, the embodiment of the application also provides a virtual model processing device, which is applied to the first terminal and comprises:
the first display unit, configured to display a game interface, where the game interface includes a virtual scene and a first virtual character and at least one second virtual character located in the virtual scene, the first virtual character is a virtual character controlled by a first user, the second virtual character is a virtual character controlled by a second user, and the game interface is displayed on a terminal of the first user;
and the first response unit is used for responding to a face virtual model acquisition event of acquiring a face virtual model from a target virtual character in the second virtual character, and adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
In some embodiments, the apparatus further comprises:
and the second display unit is used for displaying the updated first virtual character on the game interface.
In some embodiments, the apparatus further comprises:
and the storage unit is used for storing the updated first virtual character into a face virtual model database so that the first user can acquire the updated first virtual character through the face virtual model database in a game.
In some embodiments, the apparatus further comprises:
a first response subunit, configured to generate a face virtual model acquisition instruction in response to a face virtual model acquisition operation for the target virtual character;
the first sending unit is used for sending the face virtual model obtaining instruction to a second terminal, wherein the face virtual model obtaining instruction is used for indicating the second terminal to confirm whether to return target face virtual model data of the face virtual model of the target virtual character to the first terminal, the first terminal is a terminal of a game account corresponding to the first virtual character, and the second terminal is a terminal of a game account corresponding to the target virtual character;
The first receiving subunit is used for receiving the target face virtual model data returned by the second terminal;
and the first adjusting unit is used for adjusting the face of the first virtual character based on the target face virtual model data to obtain an updated first virtual character.
In some embodiments, the apparatus further comprises:
and the first detection unit is used for taking the second virtual character as a target virtual character and generating a face virtual model acquisition instruction based on target face virtual model data of a face virtual model of the first virtual character when the relative distance between the first virtual character and the second virtual character is detected to be in a specified distance range and the duration time of the relative distance in the specified distance range accords with a preset time threshold value.
In some embodiments, the apparatus further comprises:
a second adjusting unit, configured to adjust the face of the target virtual character based on the face virtual model of the first virtual character, to obtain an updated target virtual character;
and the first display subunit is used for displaying the updated target virtual character on the game interface.
In some embodiments, the apparatus further comprises:
A second response subunit configured to respond to a drag operation for the target virtual character;
and the second detection unit is used for generating a facial virtual model acquisition instruction when the drag operation is detected to be released at the position where the first virtual character is located.
In some embodiments, the apparatus further comprises:
the second receiving subunit is used for displaying the payment information and the payment control in a preset information display area of the game interface when receiving the payment information returned by the second terminal;
the third response subunit is used for responding to the touch operation of the payment control and acquiring a payment result;
and the second sending unit is used for sending the payment result to the second terminal so that the second terminal returns the target face virtual model data to the first terminal based on the payment result.
In some embodiments, the apparatus further comprises:
a second display subunit, configured to, when receiving a face virtual model presentation request sent by a second terminal, display presentation request information in a preset information display area of the game interface based on the face virtual model presentation request, where the face virtual model presentation request carries target face virtual model data of a face virtual model of a target virtual character, and the presentation request information is used to indicate: confirming whether the target face virtual model data is received;
And a third receiving subunit operable to receive the target face virtual model data when a confirmation receiving operation in response to the presentation request information is detected.
In some embodiments, the apparatus further comprises:
the generating unit is used for generating prompt information based on the target face virtual model data, wherein the prompt information is used for indicating: confirm whether to update the face of the first virtual character based on the target face virtual model data;
a fourth response subunit, configured to adjust, in response to a confirmation operation for the prompt information, a face of the first virtual character based on the target face virtual model data, to obtain an updated first virtual character;
and a third display subunit, configured to display the updated first virtual character on the game interface.
In some embodiments, the apparatus further comprises:
a fifth response subunit, configured to acquire, in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character among the second virtual characters, a non-fungible token usage right corresponding to the face virtual model of the target virtual character, where the second user controlling the target virtual character has the right to transfer the non-fungible token usage right multiple times, and the non-fungible token usage right indicates a user's right to use the face virtual model on a virtual character;
a first processing subunit, configured to associate the non-fungible token usage right with the first virtual character, so that the first user has the right to use the face virtual model of the target virtual character on the first virtual character;
and the third adjusting unit is used for adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
In some embodiments, the apparatus further comprises:
a sixth response subunit, configured to acquire, in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character among the second virtual characters, ownership of a non-fungible token corresponding to the face virtual model of the target virtual character, where ownership of the non-fungible token indicates a user's right to trade or give the face virtual model to other users and the user's right to use the face virtual model on a virtual character;
a second processing subunit, configured to unbind ownership of the non-fungible token from the second user corresponding to the target virtual character and associate it with the first user corresponding to the first virtual character, so as to revoke the second user's right to trade or give the face virtual model of the target virtual character to other users and to use it, and so that the first user has the right to trade or give the face virtual model of the target virtual character to other users and the right to use the face virtual model of the target virtual character on the first virtual character;
And a fourth adjustment unit, configured to adjust the face of the first virtual character based on the face virtual model of the target virtual character, to obtain an updated first virtual character.
Correspondingly, the embodiment of the application also provides a virtual model processing device, which is applied to the second terminal and comprises:
the third display unit is used for displaying a game interface, wherein the game interface comprises a virtual scene, a first virtual role and a second virtual role, the first virtual role and the second virtual role are positioned in the virtual scene, the first virtual role is a virtual role controlled by a first user, the second virtual role is a virtual role controlled by a second user, and the game interface is displayed on a terminal of the second user;
the receiving unit is used for receiving a face virtual model authorization request for authorizing the face virtual model of the second virtual character to the first virtual character, and displaying face virtual model authorization prompt information in an information prompt area of the game interface based on the face virtual model authorization request;
a determining unit configured to determine an authorization condition in response to an authorization setting operation for the face virtual model authorization prompt information;
the processing unit, configured to authorize the face virtual model of the second virtual character to the first virtual character based on the authorization condition, so that a first user corresponding to the first virtual character obtains an updated first virtual character, where the updated first virtual character is a virtual character obtained by adjusting the face of the first virtual character based on the face virtual model of the second virtual character.
In some embodiments, the apparatus further comprises:
a fourth receiving subunit, configured to receive a face virtual model interchange request sent by a first terminal, where the face virtual model interchange request carries face virtual model data of a face virtual model of a first virtual role, and the first terminal is a terminal of a game account corresponding to the first virtual role;
a fourth display subunit, configured to display a face virtual model authorization prompt message in the information prompt area of the game interface based on the face virtual model interchange request;
a seventh response subunit, configured to determine that an authorization condition is unconditional consent in response to a confirmation operation for the face virtual model authorization prompt information;
an updating unit, configured to update a face of the second virtual character based on the face virtual model of the first virtual character, obtain an updated second virtual character, and display the updated second virtual character on the game interface;
a third sending unit, configured to send the face virtual model of the second virtual character to the first terminal, so that a first user corresponding to the first virtual character obtains an updated first virtual character, where the updated first virtual character is a virtual character obtained by adjusting the face of the first virtual character based on the face virtual model of the second virtual character.
In some embodiments, the apparatus further comprises:
and a third detection unit configured to take the payment information as an authorization condition when an input determination operation of the payment information input through the amount input control is detected.
In some embodiments, the apparatus further comprises:
an eighth response subunit configured to respond to a drag operation for the second virtual character;
and the fourth detection unit is used for generating a face virtual model giving instruction when the drag operation is detected to be released at the position where the first virtual character is located, wherein the face virtual model giving instruction is used for inquiring whether the first terminal accepts the face virtual model of the second virtual character.
Accordingly, embodiments of the present application further provide a computer device, including a processor, a memory, and a computer program stored on the memory and capable of running on the processor, where the computer program when executed by the processor implements the steps of any one of the virtual model processing methods.
Accordingly, embodiments of the present application further provide a storage medium having a computer program stored thereon, where the computer program when executed by a processor implements the steps of any of the virtual model processing methods.
The embodiments of the present application provide a virtual model processing method and apparatus, a computer device, and a storage medium. In a game, in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character among the second virtual characters, the face of the first virtual character is adjusted based on the face virtual model of the target virtual character, and the updated first virtual character is displayed on the game interface. Because the player can trigger the face virtual model acquisition event through a trigger operation on the game interface, the player can quickly obtain a face model that meets the player's own requirements during the game, which improves processing efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of a virtual model processing system according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a virtual model processing method according to an embodiment of the present application.
Fig. 3 is a schematic view of an application scenario of a virtual model processing method according to an embodiment of the present application.
Fig. 4 is another flow chart of a virtual model processing method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of another application scenario of the virtual model processing method provided in the embodiment of the present application.
Fig. 6 is a schematic diagram of another application scenario of the virtual model processing method provided in the embodiment of the present application.
Fig. 7 is a schematic diagram of another application scenario of the virtual model processing method provided in the embodiment of the present application.
Fig. 8 is a schematic structural diagram of a virtual model processing apparatus according to an embodiment of the present application.
Fig. 9 is a schematic diagram of another structure of a virtual model processing apparatus according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiment of the application provides a virtual model processing method, a virtual model processing device, computer equipment and a storage medium. Specifically, the virtual model processing method in the embodiment of the present application may be executed by a computer device, where the computer device may be a device such as a terminal or a server. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, personal Computer), a personal digital assistant (Personal Digital Assistant, PDA), and the like, and the terminal may further include a client, which may be a game application client, a browser client carrying a game program, or an instant messaging client, and the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligent platforms.
For example, when the virtual model processing method is run on a terminal, the terminal device stores a game application and is used to present a virtual scene in a game screen. The terminal device is used for interacting with a user through a graphical user interface, for example, the terminal device downloads and installs a game application program and runs the game application program. The way in which the terminal device presents the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device, or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including game screens and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for running the game, generating the graphical user interface, responding to the operation instructions, and controlling the display of the graphical user interface on the touch display screen.
For example, when the virtual model processing method runs on a server, the game may be a cloud game. Cloud gaming refers to a gaming mode based on cloud computing. In the cloud gaming mode, the entity that runs the game application is separated from the entity that presents the game picture, and the storage and execution of the virtual model processing method are completed on a cloud game server. The game picture is presented by a cloud game client, which is mainly used for receiving and sending game data and presenting game pictures; for example, the cloud game client may be a display device with a data transmission function near the user side, such as a mobile terminal, a television, a computer, a handheld computer, or a personal digital assistant, while the terminal device that processes the game data is the cloud game server in the cloud. When playing the game, the user operates the cloud game client to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as game pictures, and returns the data to the cloud game client through the network; the cloud game client finally decodes the data and outputs the game pictures.
Referring to fig. 1, fig. 1 is a schematic view of a virtual model processing system according to an embodiment of the present application. The system may include at least one terminal, at least one server, at least one database, and a network. The terminal held by the user can be connected to the server of different games through the network. A terminal is any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, the terminal has one or more multi-touch-sensitive screens for sensing and obtaining inputs of a user through touch or slide operations performed at a plurality of points of the one or more touch-sensitive display screens. In addition, when the system includes a plurality of terminals, a plurality of servers, and a plurality of networks, different terminals may be connected to each other through different networks, through different servers. The network may be a wireless network or a wired network, such as a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, etc. In addition, the different terminals may be connected to other terminals or to a server or the like using their own bluetooth network or hotspot network. For example, multiple users may be online through different terminals to connect and synchronize with each other through an appropriate network to support multiplayer games. In addition, the system may include multiple databases coupled to different servers and information related to the gaming environment may be continuously stored in the databases as different users play multiplayer games online.
The embodiment of the application provides a virtual model processing method which can be executed by a terminal or a server. The embodiments of the present application will be described with reference to a virtual model processing method executed by a terminal as an example. The terminal comprises a touch display screen and a processor, wherein the touch display screen is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface. When a user operates the graphical user interface through the touch display screen, the graphical user interface can control the local content of the terminal by responding to the received operation instruction, and can also control the content of the opposite-end server by responding to the received operation instruction. For example, the user-generated operational instructions for the graphical user interface include instructions for launching the gaming application, and the processor is configured to launch the gaming application after receiving the user-provided instructions for launching the gaming application. Further, the processor is configured to render and draw a graphical user interface associated with the game on the touch-sensitive display screen. A touch display screen is a multi-touch-sensitive screen capable of sensing touch or slide operations performed simultaneously by a plurality of points on the screen. The user performs touch operation on the graphical user interface by using a finger, and when the graphical user interface detects the touch operation, the graphical user interface controls different virtual objects in the graphical user interface of the game to perform actions corresponding to the touch operation. For example, the game may be any one of a leisure game, an action game, a role playing game, a strategy game, a sports game, a educational game, and the like. Wherein the game may comprise a virtual scene of the game drawn on a graphical user interface. Further, one or more virtual objects, such as virtual characters, controlled by a user (or player) may be included in the virtual scene of the game. In addition, one or more obstacles, such as rails, ravines, walls, etc., may also be included in the virtual scene of the game to limit movement of the virtual object, e.g., to limit movement of the one or more objects to a particular area within the virtual scene. Optionally, the virtual scene of the game also includes one or more elements, such as skills, scores, character health status, energy, etc., to provide assistance to the player, provide virtual services, increase scores related to the player's performance, etc. In addition, the graphical user interface may also present one or more indicators to provide indication information to the player. For example, a game may include a player controlled virtual object and one or more other virtual objects (such as enemy characters). In one embodiment, one or more other virtual objects are controlled by other players of the game. For example, one or more other virtual objects may be computer controlled, such as a robot using an Artificial Intelligence (AI) algorithm, implementing a human-machine engagement mode. For example, virtual objects possess various skills or capabilities that a game player uses to achieve a goal. For example, the virtual object may possess one or more weapons, props, tools, etc. that may be used to eliminate other objects from the game. 
Such skills or capabilities may be activated by the player of the game using one of a plurality of preset touch operations with the touch display screen of the terminal. The processor may be configured to present a corresponding game screen in response to an operation instruction generated by a touch operation of the user.
It should be noted that, the schematic view of the scenario of the virtual model processing system shown in fig. 1 is merely an example, and the virtual model processing system and scenario described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and those skilled in the art can know that the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
In view of the foregoing, embodiments of the present application provide a virtual model processing method, apparatus, computer device, and storage medium, which are described in detail below. The following description of the embodiments is not intended to limit the preferred embodiments.
The embodiment of the application provides a virtual model processing method, which can be executed by a terminal or a server, and the embodiment of the application is described by taking the virtual model processing method executed by the terminal as an example.
Referring to fig. 2, fig. 2 is a flow chart of a virtual model processing method provided in the embodiment of the present application, where the virtual model processing method is applied to a first terminal, and the first terminal is a terminal of a game account corresponding to a first virtual character, and the specific flow may be as follows steps 101 to 102:
101. Display a game interface, where the game interface includes a virtual scene, a first virtual character, and at least one second virtual character, the first virtual character is a virtual character controlled by a first user, the second virtual character is a virtual character controlled by a second user, and the game interface is displayed on a terminal of the first user.
In the embodiment of the application, the computer device may display a game interface on which a virtual scene is displayed, where the virtual scene is the virtual environment displayed (or provided) when the application runs on the terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual environment is used for combat between at least two virtual characters, and virtual resources available to the at least two virtual characters are provided in the virtual environment. One or more virtual characters are displayed in the virtual scene of the game interface; a virtual character may be the first virtual character or a second virtual character, and the second virtual characters include virtual characters in the same camp as the first virtual character as well as virtual characters in a hostile camp. A same-camp virtual character is a virtual character in the same camp as the first virtual character; a hostile virtual character is a virtual character that is not in the same camp as the first virtual character. For example, the first virtual character, same-camp virtual characters, and hostile-camp virtual characters may coexist in the virtual scene; this is merely an example and not a limitation.
Wherein the computer device may generate a user interface by rendering the game application on a touch display screen of the computer device having the touch display screen to display a game page on the user interface.
A virtual character (or hero) refers to a movable object in a virtual environment. A virtual character refers to a virtual object in a game that a user or player controls through a terminal. In the embodiment of the application, the virtual object in the game controlled by the current user through the terminal is called a first virtual character, namely a virtual character controlled by the local end user.
102. In response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character among the second virtual characters, adjust the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
The face virtual model provided in the embodiments of the application may consist of face model parameters corresponding to real face data of a reference face, and the face virtual model may be used to represent the appearance of the virtual character's face. For example, the face model parameters of the face virtual model may include the facial features (eyes, eyebrows, nose, mouth, and ears), facial parts other than the features, face shape, skin color, hairstyle, hair color, and the like. The terminal device can build the face virtual model by modeling from the face model parameters provided by the user. Optionally, the user may also adjust the face model parameters of the face virtual model, for example the size of the facial features and their position on the face, and may perform operations such as retouching or skin color adjustment, so that the face virtual model is more harmonious and attractive and meets the user's expected effect.
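To make the parameter description concrete, here is a small, assumed data structure for face model parameters; the field names and the adjust_feature helper are illustrative only and are not taken from the patent.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class FaceModelParameters:
    """Hypothetical parameter set for a face virtual model."""
    face_shape: str = "oval"
    skin_color: str = "#e8c39e"
    hairstyle: str = "short"
    hair_color: str = "black"
    # Per-feature size and position, e.g. {"eyes": {"scale": 1.1, "offset_y": -0.02}}
    features: dict = field(default_factory=dict)

    def adjust_feature(self, name: str, scale: float, offset_y: float) -> None:
        # Adjust the size and vertical position of one facial feature.
        self.features[name] = {"scale": scale, "offset_y": offset_y}

if __name__ == "__main__":
    params = FaceModelParameters()
    params.adjust_feature("eyes", scale=1.1, offset_y=-0.02)
    print(asdict(params))
```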
Specifically, the updated first virtual character may be the virtual character displayed while the user is creating a face virtual model, in which case the face virtual model of the updated first virtual character may be a preview image of the first virtual character; or it may be the virtual character rendered in the game scene of a game match after the user selects the first virtual character and enters the match, in which case the face virtual model of the updated first virtual character may be the actual image of the first virtual character displayed in the virtual scene.
In an embodiment, the step of "in response to a face virtual model acquisition event of acquiring a face virtual model from a target virtual character in the second virtual character, adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character", the method may include:
generating a face virtual model acquisition instruction in response to a face virtual model acquisition operation for the target virtual character;
the face virtual model obtaining instruction is sent to a second terminal, wherein the face virtual model obtaining instruction is used for indicating the second terminal to confirm whether to return target face virtual model data of the face virtual model of the target virtual role to the first terminal, the first terminal is a terminal of a game account corresponding to the first virtual role, and the second terminal is a terminal of a game account corresponding to the target virtual role;
Receiving the target face virtual model data returned by the second terminal;
and adjusting the face of the first virtual character based on the target face virtual model data to obtain an updated first virtual character.
To implement a face virtual model interchange operation in a game play, the step of "generating a face virtual model acquisition instruction in response to a face virtual model acquisition operation for the target virtual character" may include:
when the relative distance between the first virtual character and the second virtual character is detected to be in a specified distance range, and the duration time of the relative distance in the specified distance range accords with a preset time threshold, the second virtual character is taken as a target virtual character, and a face virtual model acquisition instruction is generated based on target face virtual model data of a face virtual model of the first virtual character.
Optionally, after the step of receiving the target face virtual model data returned by the second terminal, the method may include:
adjusting the face of the target virtual character based on the face virtual model of the first virtual character to obtain an updated target virtual character;
And displaying the updated target virtual character on the game interface.
For example, referring to fig. 3, in the embodiment of the present application, when a computer device detects that a relative distance between a first virtual character and a second virtual character is within a specified distance range, the first virtual character and the second virtual character are in a face-to-face state, and a duration time of the relative distance within the specified distance range meets a preset time threshold, the second virtual character is taken as a target virtual character, and a face of the first virtual character is adjusted based on a face virtual model of the target virtual character to obtain an updated first virtual character, and meanwhile, a face of the target virtual character is adjusted based on the face virtual model of the first virtual character to obtain an updated target virtual character; and finally, displaying the updated first virtual character and the updated target virtual character on the game interface.
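A rough sketch of the proximity-and-duration trigger described above follows; the distance range and time threshold values are arbitrary assumptions, and the face-to-face check from the example is omitted for brevity.

```python
import math
import time
from typing import Optional, Tuple

SPECIFIED_DISTANCE = 2.0      # assumed "specified distance range" (game units)
TIME_THRESHOLD_SECONDS = 3.0  # assumed "preset time threshold"

class ProximityExchangeDetector:
    """Treat the second character as the target once both characters have stayed
    within the specified distance long enough, then swap their face model data."""

    def __init__(self) -> None:
        self.in_range_since: Optional[float] = None

    def update(self, pos_a: Tuple[float, float], pos_b: Tuple[float, float],
               now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if math.dist(pos_a, pos_b) > SPECIFIED_DISTANCE:
            self.in_range_since = None        # left the range: reset the timer
            return False
        if self.in_range_since is None:
            self.in_range_since = now         # just entered the range: start the timer
        return (now - self.in_range_since) >= TIME_THRESHOLD_SECONDS

def exchange_faces(face_a: dict, face_b: dict) -> Tuple[dict, dict]:
    # Mutual exchange: each character receives the other's face model data.
    return dict(face_b), dict(face_a)

if __name__ == "__main__":
    detector = ProximityExchangeDetector()
    print(detector.update((0, 0), (1, 1), now=0.0))   # in range, timer starts -> False
    print(detector.update((0, 0), (1, 1), now=3.5))   # threshold met -> True
    print(exchange_faces({"owner": "P1"}, {"owner": "P2"}))
```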
In a specific embodiment, the step of generating a face virtual model acquisition instruction in response to the face virtual model acquisition operation for the target virtual character may include:
responding to a drag operation for the target virtual character;
And when the drag operation is detected to be released at the position of the first virtual character, generating a facial virtual model acquisition instruction.
For example, when the game application in the embodiment of the present application runs on a computer, the drag operation may be performed by the user moving the mouse over the game interface while holding down a function key (for example, the Ctrl key); when the game application runs on a mobile terminal, the drag operation may be generated by the user tapping or touching the game interface with a finger and then sliding.
Optionally, the terminal may respond to a click operation for the target virtual character, and may perform multiple click operations, for example, double click operations or multiple click operations, on the target virtual character to generate a face virtual model acquisition instruction; still alternatively, the terminal may respond to a long press operation for the target virtual character to generate the face virtual model acquisition instruction.
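The drag-and-release trigger could be sketched as below; Rect and the screen-space coordinates are assumptions used only to show when the acquisition instruction would be generated.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    """Assumed screen-space bounds of a character in the game interface."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def on_drag_released(release_x: float, release_y: float,
                     first_character_area: Rect, target_name: str) -> Optional[dict]:
    """Emit an acquisition instruction only if the drag that started on the target
    character is released on the position of the first virtual character."""
    if first_character_area.contains(release_x, release_y):
        return {"type": "face_model_acquisition", "target": target_name}
    return None

if __name__ == "__main__":
    area = Rect(100, 100, 80, 160)
    print(on_drag_released(130, 180, area, "P2"))  # released on the first character
    print(on_drag_released(10, 10, area, "P2"))    # released elsewhere -> None
```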
Optionally, before the step of receiving the target face virtual model data returned by the second terminal, the method may include:
when payment information returned by the second terminal is received, displaying the payment information and a payment control in a preset information display area of the game interface;
Responding to touch operation for the payment control, and acquiring a payment result;
and sending the payment result to the second terminal so that the second terminal returns the target face virtual model data to the first terminal based on the payment result.
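Under the assumption that the payment control simply yields a boolean result, the payment exchange described above might be sketched like this; PaymentInfo and the callback are illustrative, not an actual payment API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PaymentInfo:
    amount: float
    currency: str = "coins"   # assumed in-game currency label

def handle_payment_flow(payment_info: PaymentInfo,
                        confirm_payment: Callable[[], bool]) -> dict:
    """Show the payment info, obtain a payment result when the payment control is
    touched, and return the result that would be sent back to the second terminal."""
    print(f"Pay {payment_info.amount} {payment_info.currency} to obtain the face model")
    paid = confirm_payment()   # stands in for the touch operation on the payment control
    # On success the second terminal would return the target face virtual model data.
    return {"paid": paid, "amount": payment_info.amount}

if __name__ == "__main__":
    print(handle_payment_flow(PaymentInfo(9.9), confirm_payment=lambda: True))
```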
In order to ensure the autonomous selection rights of the user, the step of "responding to a face virtual model acquisition event of acquiring a face virtual model from a target virtual character in the second virtual character, and adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character", the method may include:
when a face virtual model presentation request sent by a second terminal is received, presentation request information is displayed in a preset information display area of the game interface based on the face virtual model presentation request, wherein the face virtual model presentation request carries target face virtual model data of a face virtual model of a target virtual character, and the presentation request information is used for indicating: confirming whether the target face virtual model data is received;
and the target face virtual model data is received when a confirmation operation for the presentation request information is detected.
Optionally, after the step of "receiving the target face virtual model data", the method may include:
generating prompt information based on the target face virtual model data, wherein the prompt information is used for indicating: confirm whether to update the face of the first virtual character based on the target face virtual model data;
when the confirmation operation for the prompt information is responded, the face of the first virtual character is adjusted based on the target face virtual model data, and an updated first virtual character is obtained;
and displaying the updated first virtual character on the game interface.
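A minimal sketch of the gifting flow above, with both confirmation prompts modeled as callbacks; the request dictionary layout is an assumption.

```python
from typing import Callable, Optional

def handle_gift_request(request: dict,
                        accept_gift: Callable[[], bool],
                        confirm_update: Callable[[], bool]) -> Optional[dict]:
    """Receive a face virtual model gift: confirm receipt, then confirm the face update."""
    print(f"{request['from']} wants to give you a face virtual model - accept?")
    if not accept_gift():                     # first confirmation: receive the data?
        return None
    face_data = request["face_model_data"]
    print("Apply this face virtual model to your character?")
    if not confirm_update():                  # second confirmation: update the face?
        return None
    return {"character": request["to"], "face": face_data}  # the updated first character

if __name__ == "__main__":
    req = {"from": "P2", "to": "P1", "face_model_data": {"face_shape": "round"}}
    print(handle_gift_request(req, accept_gift=lambda: True, confirm_update=lambda: True))
```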
It should be noted that, the touch operation in the embodiments of the present application may be an operation performed by a user on the game interface through the touch display screen, for example, a touch operation generated by the user clicking or touching the game interface with a finger. The user may also click on the game interface to generate a touch operation by controlling a mouse button, for example, the user clicks on the game interface by pressing a right mouse button to generate a touch operation.
In a specific embodiment, the step of "responding to a face virtual model acquisition event of acquiring a face virtual model from a target virtual character in the second virtual character, and adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character", the method may include:
in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character among the second virtual characters, acquiring a non-fungible token usage right corresponding to the face virtual model of the target virtual character, where the second user controlling the target virtual character has the right to transfer the non-fungible token usage right multiple times, and the non-fungible token usage right indicates a user's right to use the face virtual model on a virtual character;
associating the non-fungible token usage right with the first virtual character, so that the first user has the right to use the face virtual model of the target virtual character on the first virtual character;
and adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
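Ignoring any real blockchain interaction, the usage-right association above could be sketched as follows; Ledger and UsageRightToken are hypothetical in-memory stand-ins.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UsageRightToken:
    """One usage right authorizes exactly one character to use one face virtual model."""
    face_model_id: str
    bound_character: Optional[str] = None

@dataclass
class Ledger:
    tokens: List[UsageRightToken] = field(default_factory=list)

    def mint_usage_right(self, face_model_id: str) -> UsageRightToken:
        # The owner may mint and transfer many usage rights for the same face model.
        token = UsageRightToken(face_model_id)
        self.tokens.append(token)
        return token

    def associate(self, token: UsageRightToken, character: str) -> None:
        # Associating the usage right with the first character lets it use the face model.
        token.bound_character = character

if __name__ == "__main__":
    ledger = Ledger()
    right = ledger.mint_usage_right("face_model_of_P2")
    ledger.associate(right, "P1")
    print(right)
```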
In a specific embodiment, the step of "responding to a face virtual model acquisition event of acquiring a face virtual model from a target virtual character in the second virtual character, and adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character", the method may include:
in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character among the second virtual characters, acquiring ownership of a non-fungible token corresponding to the face virtual model of the target virtual character, where ownership of the non-fungible token indicates a user's right to trade or give the face virtual model to other users and the user's right to use the face virtual model on a virtual character;
unbinding ownership of the non-fungible token from the second user corresponding to the target virtual character and associating it with the first user corresponding to the first virtual character, so as to revoke the second user's right to trade or give the face virtual model of the target virtual character to other users and to use it, and so that the first user has the right to trade or give the face virtual model of the target virtual character to other users and the right to use the face virtual model of the target virtual character on the first virtual character;
And adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
A non-fungible token (NFT) is a unit of data stored on a blockchain (a digital ledger), and each token represents a unique digital item. Because they are not interchangeable, non-fungible tokens can represent digital works such as drawings, audio, videos, in-game items, or other forms of creative work. While the files (works) themselves can be copied without limit, the tokens representing them are tracked on their underlying blockchain and provide proof of ownership for the buyer. Cryptocurrencies such as Ethereum and Bitcoin have their own token standards that define their use of NFTs.
In this embodiment, an NFT carries both NFT ownership and NFT usage rights. Specifically, NFT ownership represents the copyright attribution of a given face virtual model, and the owner of the face virtual model can mint NFT usage rights from it; only one NFT ownership can be minted for a face virtual model in a game. NFT usage rights represent the right to use the data of the face-pinching work and are only for use in a game by players who like the work; a game face-pinching work can mint multiple NFT usage rights, and one NFT usage right can authorize only one virtual character to use the face virtual model corresponding to the NFT.
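The two kinds of rights described above can be illustrated with the assumed record below: a single ownership holder per face-pinching work and any number of per-character usage rights; nothing here reflects a real NFT standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FaceModelNFT:
    """One ownership record per face-pinching work, plus usage rights minted from it."""
    work_id: str
    owner: str                                              # holder of the single NFT ownership
    usage_rights: List[str] = field(default_factory=list)   # characters authorized to use it

    def transfer_ownership(self, new_owner: str) -> None:
        # Transferring ownership ends the previous owner's right to trade or gift the work.
        self.owner = new_owner

    def grant_usage_right(self, character: str) -> None:
        # Many usage rights may exist, but each authorizes exactly one virtual character.
        if character not in self.usage_rights:
            self.usage_rights.append(character)

if __name__ == "__main__":
    nft = FaceModelNFT("face_work_001", owner="user_b")
    nft.grant_usage_right("P1")        # user_b keeps ownership, P1 may use the face model
    nft.transfer_ownership("user_a")   # ownership itself changes hands
    print(nft)
```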
Optionally, after the step of "obtaining the updated first virtual character based on the face virtual model of the target virtual character, in response to the face virtual model obtaining event of the face virtual model obtained from the target virtual character in the second virtual character, the method may include:
and displaying the updated first virtual character on a game interface.
In an embodiment, after the step of "in response to a face virtual model acquisition event of acquiring a face virtual model from a target virtual character in the second virtual character, adjusting the face of the first virtual character based on the face virtual model of the target virtual character, resulting in an updated first virtual character", the method may include:
and storing the updated first virtual character into a face virtual model database, so that the first user obtains the updated first virtual character through the face virtual model database in a game.
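A very small persistence sketch for the step above, using a local JSON file as an assumed stand-in for the face virtual model database:

```python
import json
from pathlib import Path
from typing import Optional

DB_PATH = Path("face_model_db.json")   # assumed local stand-in for the database

def save_updated_character(user_id: str, character: dict) -> None:
    """Store the updated first virtual character so the user can fetch it again in a game."""
    db = json.loads(DB_PATH.read_text()) if DB_PATH.exists() else {}
    db[user_id] = character
    DB_PATH.write_text(json.dumps(db, indent=2))

def load_updated_character(user_id: str) -> Optional[dict]:
    db = json.loads(DB_PATH.read_text()) if DB_PATH.exists() else {}
    return db.get(user_id)

if __name__ == "__main__":
    save_updated_character("user_a", {"name": "P1", "face": {"face_shape": "round"}})
    print(load_updated_character("user_a"))
```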
According to the above method, the computer device adjusts the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character, and displays the updated first virtual character on the game interface. Because the game account corresponding to the first virtual character holds only the NFT usage right of the face virtual model of the target virtual character, it can use the face virtual model of the target virtual character but cannot modify it.
Referring to fig. 4, fig. 4 is a flow chart of a virtual model processing method provided in the embodiment of the present application, where the virtual model processing method is applied to a second terminal, and the second terminal is a terminal of a game account corresponding to a second virtual character, and the specific flow may be as follows steps 201 to 204:
201. Display a game interface, where the game interface includes a virtual scene and a first virtual character and a second virtual character located in the virtual scene, the first virtual character is a virtual character controlled by a first user, the second virtual character is a virtual character controlled by a second user, and the game interface is displayed on a terminal of the second user.
202. Receive a face virtual model authorization request for authorizing the face virtual model of the second virtual character to the first virtual character, and display face virtual model authorization prompt information in an information prompt area of the game interface based on the face virtual model authorization request.
In an embodiment, the step of receiving a face virtual model authorization request for authorizing the face virtual model of the second virtual character to the first virtual character and displaying face virtual model authorization prompt information in the information prompt area of the game interface based on the face virtual model authorization request may include:
Receiving a face virtual model interchange request sent by a first terminal, wherein the face virtual model interchange request carries face virtual model data of a face virtual model of a first virtual character, and the first terminal is a terminal of a game account corresponding to the first virtual character;
displaying face virtual model authorization prompt information in an information prompt area of the game interface based on the face virtual model interchange request;
responding to the confirmation operation aiming at the face virtual model authorization prompt information, and determining that the authorization condition is unconditional agreement;
updating the face of the second virtual character based on the face virtual model of the first virtual character to obtain an updated second virtual character, and displaying the updated second virtual character on the game interface;
sending the face virtual model of the second virtual character to the first terminal, so that the first terminal displays an updated first virtual character on the game interface of the first user operating the first virtual character, wherein the updated first virtual character is: a virtual character obtained by updating the face of the first virtual character based on the face virtual model of the second virtual character.
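A minimal sketch of the second-terminal side of this interchange flow is given below. It is not the patent's implementation; the `request`, `ui`, `network`, and character objects are placeholders for whatever the game engine actually provides.

```python
def handle_interchange_request(request, ui, local_character, network):
    """Second-terminal handling of a face virtual model interchange request
    (illustrative; all interfaces are assumptions)."""
    # Step 1: show the authorization prompt in the information prompt area.
    confirmed = ui.show_authorization_prompt(
        "The first player requests to exchange face models with you. Agree?")
    if not confirmed:
        return

    # Step 2: confirmation means the authorization condition is "unconditional agreement".
    authorization_condition = "unconditional"

    # Step 3: keep a copy of our own face model, then apply the incoming one.
    own_face_model = local_character.face_model
    local_character.apply_face_model(request.face_model)
    ui.refresh_character(local_character)          # display the updated second virtual character

    # Step 4: return our original face model so the first terminal can update
    # the first virtual character in the same way.
    network.send(request.sender, {
        "type": "interchange_response",
        "condition": authorization_condition,
        "face_model": own_face_model,
    })
```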
Optionally, the step of "receiving a face virtual model authorization request for authorizing the face virtual model of the second virtual character to the first virtual character" may include:
responding to a drag operation for the second virtual character;
and when the drag operation is detected to be released at the position of the first virtual character, generating a face virtual model giving instruction, wherein the face virtual model giving instruction is used for inquiring whether the first terminal accepts the face virtual model of the second virtual character.
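As a purely illustrative sketch (the patent does not define the engine API; all object and message names are assumptions), the drag-release detection that generates the giving instruction might look like this:

```python
def on_drag_release(dragged_character, release_position, first_character, network):
    """Drag-to-give gesture on the second terminal: releasing the dragged second
    virtual character on top of the first virtual character generates a face
    virtual model giving instruction (illustrative only)."""
    if first_character.bounds.contains(release_position):
        network.send(first_character.terminal_id, {
            "type": "face_model_giving_instruction",
            "face_model": dragged_character.face_model,
            # The instruction asks the first terminal whether it accepts
            # the face virtual model of the second virtual character.
            "question": "accept_face_model",
        })
```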
203. In response to an authorization setup operation for the facial virtual model authorization hint information, an authorization condition is determined.
In order to support player-customized transactions, an amount input control is also displayed on the game interface, and the step of responding to the authorization setting operation for the face virtual model authorization prompt information and determining the authorization condition may include the following step:
and when detecting the input determination operation of the payment information input through the amount input control, taking the payment information as an authorization condition.
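A minimal sketch of this step, assuming the payment information is just an amount typed into the control (field names and the currency unit are assumptions, not from the patent):

```python
def on_amount_input_confirmed(amount_text: str, currency: str = "game_coins") -> dict:
    """When the input-determination operation on the amount input control is
    detected, the entered payment information itself becomes the authorization
    condition (illustrative only)."""
    amount = int(amount_text)                 # basic validation of the entered amount
    if amount < 0:
        raise ValueError("the amount must be non-negative")
    return {"type": "payment", "amount": amount, "currency": currency}
```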
204. Authorizing the face virtual model of the second virtual role to the first virtual role based on an authorization condition, so that the first user corresponding to the first virtual role obtains an updated first virtual role, wherein the updated first virtual role is: a virtual character obtained by adjusting the face of the first virtual character based on the face virtual model of the second virtual character.
In a specific embodiment, the face virtual model of the second virtual character is authorized to the first virtual character based on the authorization condition, so that the updated first virtual character is displayed on the game interface of the first user operating the first virtual character, wherein the updated first virtual character is: a virtual character obtained by updating the face of the first virtual character based on the face virtual model of the second virtual character.
In light of the foregoing, the virtual model processing method of the present application will be further described below by way of example. For example, as shown in fig. 5, taking an example of a specific implementation scenario in which a face virtual model is exchanged between a first virtual character and a second virtual character, specific embodiments of the scenario are as follows:
(1) After a user logs in to the game at a terminal (a mobile phone) and enters the game, the terminal displays a game page on a game interface, where the game interface displays a virtual scene and a first virtual character and a second virtual character located in the virtual scene; the first virtual character is a virtual character controlled by the first user, the second virtual character is a virtual character controlled by the second user, and the game interface is the game interface of the first virtual character.
(2) After entering the game, a preparation stage before game play starts. When the computer device detects that the relative distance between the first virtual character and the second virtual character is within a specified distance range, that the first virtual character and the second virtual character are in a face-to-face state, and that the duration for which the relative distance stays within the specified distance range meets a preset time threshold, the second virtual character is taken as the target virtual character; the face of the first virtual character is adjusted based on the face virtual model of the second virtual character to obtain an updated first virtual character, and at the same time the face of the target virtual character is adjusted based on the face virtual model of the first virtual character to obtain an updated target virtual character (a trigger sketch follows this scenario).
(3) And displaying the updated first virtual character and the updated target virtual character on the game interface.
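The proximity trigger described in step (2) above can be sketched as follows. The distance, facing, and time thresholds, and the character interface (`x`, `y`, `facing`), are assumptions for illustration, not values from the patent:

```python
import math
import time


class FaceSwapTrigger:
    """Fires once two characters stay within a given distance, roughly face to
    face, for a minimum duration (illustrative sketch only)."""

    def __init__(self, max_distance=2.0, hold_seconds=3.0, facing_dot=-0.8):
        self.max_distance = max_distance
        self.hold_seconds = hold_seconds
        self.facing_dot = facing_dot       # dot product near -1 means facing each other
        self._since = None

    def update(self, char_a, char_b) -> bool:
        distance = math.hypot(char_b.x - char_a.x, char_b.y - char_a.y)
        # "Face to face": the two unit facing vectors point roughly toward each other.
        facing = (char_a.facing[0] * char_b.facing[0] +
                  char_a.facing[1] * char_b.facing[1]) <= self.facing_dot

        if distance <= self.max_distance and facing:
            if self._since is None:
                self._since = time.monotonic()
            if time.monotonic() - self._since >= self.hold_seconds:
                self._since = None
                return True                # duration threshold met: exchange the face models
        else:
            self._since = None
        return False
```

Calling `update()` once per frame with the two character objects returns `True` exactly when the exchange described in step (2) should be performed.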
In light of the foregoing, the virtual model processing method of the present application will be further described below by way of example. For example, as shown in fig. 5, taking as an example a specific implementation scenario in which a user operates, with a mouse, on a virtual character displayed on the terminal, the scenario proceeds as follows:
(1) After a user logs in to the game at a terminal (a mobile phone), the terminal displays a game page on a game interface, where the game interface displays a virtual scene and a first virtual character and a second virtual character located in the virtual scene; the first virtual character is a virtual character controlled by the first user, the second virtual character is a virtual character controlled by the second user, and the game interface is the game interface of the first virtual character.
(2) And when the computer equipment detects that the drag operation of the second virtual character by the user is released at the position of the first virtual character, the face of the first virtual character is adjusted based on the face virtual model of the second virtual character, so that the updated first virtual character is obtained.
(3) And displaying the updated first virtual character on the game interface.
In light of the foregoing, the virtual model processing method of the present application will be further described below by way of example. For example, as shown in fig. 7, taking as an example a specific implementation scenario in which a user performs a touch operation with a finger on a virtual character displayed on the terminal, the scenario proceeds as follows:
(1) After a user logs in to the game at a terminal (a mobile phone), the terminal displays a game page on a game interface, where the game interface displays a virtual scene and a first virtual character and a second virtual character located in the virtual scene; the first virtual character is a virtual character controlled by the first user, the second virtual character is a virtual character controlled by the second user, and the game interface is the game interface of the first virtual character.
(2) And when the computer equipment detects that the drag operation of the user on the first virtual character is released at the position of the second virtual character, the face of the second virtual character is adjusted based on the face virtual model of the first virtual character, so that the updated second virtual character is obtained.
(3) And displaying the updated second virtual character on the game interface.
In order to facilitate better implementation of the virtual model processing method provided by the embodiments of the present application, the embodiments of the present application further provide a virtual model processing apparatus based on the virtual model processing method. The meanings of the terms are the same as those in the virtual model processing method above, and specific implementation details may refer to the description in the method embodiments.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a virtual model processing apparatus according to an embodiment of the present application, which is applied to a first terminal, and the apparatus includes:
a first display unit 301, configured to display a game interface, where the game interface includes a virtual scene, and a first virtual character and at least one second virtual character that are located in the virtual scene, where the first virtual character is a virtual character controlled by a first user, and the second virtual character is a virtual character controlled by a second user, and the game interface is displayed on a terminal of the first user;
a first response unit 302, configured to, in response to a face virtual model acquisition event of acquiring a face virtual model from a target virtual character in the second virtual character, adjust the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
In some embodiments, the apparatus further comprises:
and the second display unit is used for displaying the updated first virtual character on the game interface.
In some embodiments, the apparatus further comprises:
and the storage unit is used for storing the updated first virtual character into a face virtual model database so that the first user can acquire the updated first virtual character through the face virtual model database in a game.
In some embodiments, the apparatus further comprises:
a first response subunit, configured to generate a face virtual model acquisition instruction in response to a face virtual model acquisition operation for the target virtual character;
the first sending unit is used for sending the face virtual model obtaining instruction to a second terminal, wherein the face virtual model obtaining instruction is used for indicating the second terminal to confirm whether to return target face virtual model data of the face virtual model of the target virtual character to the first terminal, the first terminal is a terminal of a game account corresponding to the first virtual character, and the second terminal is a terminal of a game account corresponding to the target virtual character;
The first receiving subunit is used for receiving the target face virtual model data returned by the second terminal;
and the first adjusting unit is used for adjusting the face of the first virtual character based on the target face virtual model data to obtain an updated first virtual character.
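The four units above describe a request/response exchange between the two terminals. A minimal sketch of the first-terminal side, assuming a simple message-passing interface (`network`, `ui`, and the character object are placeholders, not the patent's API), could be:

```python
def acquire_face_model(second_terminal_id: str, first_character, network, ui) -> None:
    """First-terminal acquisition flow: send the acquisition instruction, wait
    for the second terminal's reply, and apply the returned target face virtual
    model data (illustrative only)."""
    network.send(second_terminal_id, {"type": "face_model_acquisition_instruction"})
    reply = network.receive(timeout=10.0)          # second terminal confirms or refuses
    if reply is not None and reply.get("type") == "target_face_model_data":
        first_character.apply_face_model(reply["face_model"])
        ui.refresh_character(first_character)      # show the updated first virtual character
```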
In some embodiments, the apparatus further comprises:
and the first detection unit is used for taking the second virtual character as a target virtual character and generating a face virtual model acquisition instruction based on target face virtual model data of a face virtual model of the first virtual character when the relative distance between the first virtual character and the second virtual character is detected to be in a specified distance range and the duration time of the relative distance in the specified distance range accords with a preset time threshold value.
In some embodiments, the apparatus further comprises:
a second adjusting unit, configured to adjust the face of the target virtual character based on the face virtual model of the first virtual character, to obtain an updated target virtual character;
and the first display subunit is used for displaying the updated target virtual character on the game interface.
In some embodiments, the apparatus further comprises:
A second response subunit configured to respond to a drag operation for the target virtual character;
and the second detection unit is used for generating a facial virtual model acquisition instruction when the drag operation is detected to be released at the position where the first virtual character is located.
In some embodiments, the apparatus further comprises:
the second receiving subunit is used for displaying the payment information and the payment control in a preset information display area of the game interface when receiving the payment information returned by the second terminal;
the third response subunit is used for responding to the touch operation of the payment control and acquiring a payment result;
and the second sending unit is used for sending the payment result to the second terminal so that the second terminal returns the target face virtual model data to the first terminal based on the payment result.
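The optional payment step on the first terminal can be sketched as follows; the message layout and the `ui`/`network` helpers are assumptions used only for illustration:

```python
def handle_payment_info(payment_info: dict, second_terminal_id: str, ui, network) -> None:
    """Show the payment information returned by the second terminal together with a
    payment control, then forward the payment result so the second terminal releases
    the target face virtual model data (illustrative only)."""
    ui.show_in_info_area(f"Price: {payment_info['amount']} {payment_info.get('currency', '')}")
    if ui.wait_for_payment_control_tap():          # touch operation on the payment control
        payment_result = {"paid": True, "amount": payment_info["amount"]}
        network.send(second_terminal_id, {"type": "payment_result",
                                          "result": payment_result})
```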
In some embodiments, the apparatus further comprises:
a second display subunit, configured to, when receiving a face virtual model presentation request sent by a second terminal, display presentation request information in a preset information display area of the game interface based on the face virtual model presentation request, where the face virtual model presentation request carries target face virtual model data of a face virtual model of a target virtual character, and the presentation request information is used to indicate: confirming whether the target face virtual model data is received;
And a third receiving subunit operable to receive the target face virtual model data when a confirmation receiving operation in response to the presentation request information is detected.
In some embodiments, the apparatus further comprises:
the generating unit is used for generating prompt information based on the target face virtual model data, wherein the prompt information is used for indicating: confirm whether to update the face of the first virtual character based on the target face virtual model data;
a fourth response subunit, configured to adjust, in response to a confirmation operation for the prompt information, a face of the first virtual character based on the target face virtual model data, to obtain an updated first virtual character;
and a third display subunit, configured to display the updated first virtual character on the game interface.
In some embodiments, the apparatus further comprises:
a fifth response subunit, configured to obtain, in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character in the second virtual character, a non-fungible token usage right corresponding to the face virtual model of the target virtual character, where the second user controlling the target virtual character has the right to transfer the non-fungible token usage right multiple times, and the non-fungible token usage right is used to indicate a user's right to use the face virtual model on a virtual character;
a first processing subunit, configured to associate the non-fungible token usage right with the first virtual character, so that the first user has the right to use the face virtual model of the target virtual character on the first virtual character;
and the third adjusting unit is used for adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
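A minimal sketch of the usage-right association, assuming a simple dictionary-based ledger (the ledger layout and field names are assumptions, not from the patent):

```python
def associate_usage_right(usage_rights: dict, right_id: str,
                          first_character_id: str, first_user_id: str) -> str:
    """Associate an acquired NFT usage right with the first virtual character.
    The right remains transferable by its holder, but it now authorizes exactly
    one character to wear the face model (illustrative only)."""
    right = usage_rights[right_id]      # e.g. {"model_id": ..., "bound_character": None}
    if right.get("bound_character") is not None:
        raise ValueError("this usage right already authorizes another character")
    right["bound_character"] = first_character_id
    right["authorized_user"] = first_user_id
    return right["model_id"]            # the face model the first character may now use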
In some embodiments, the apparatus further comprises:
a sixth response subunit, configured to obtain, in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character in the second virtual character, ownership of a non-fungible token corresponding to the face virtual model of the target virtual character, where the ownership of the non-fungible token is used to indicate a user's right to trade or give the face virtual model to other users and the user's right to use the face virtual model on a virtual character;
a second processing subunit, configured to unbind the ownership of the non-fungible token from the second user corresponding to the target virtual character and associate the ownership of the non-fungible token with the first user corresponding to the first virtual character, so as to cancel the second user's right to trade or give the face virtual model of the target virtual character to other users and to use the face virtual model of the target virtual character on the first virtual character, and so that the first user has the right to trade or give the face virtual model of the target virtual character to other users and to use the face virtual model of the target virtual character on the first virtual character;
And a fourth adjustment unit, configured to adjust the face of the first virtual character based on the face virtual model of the target virtual character, to obtain an updated first virtual character.
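The ownership hand-over can be sketched in the same illustrative ledger style (again, the ledger layout is an assumption, not the patent's data model):

```python
def transfer_ownership(ownership_ledger: dict, model_id: str,
                       second_user: str, first_user: str) -> dict:
    """Unbind the NFT ownership from the second user and associate it with the
    first user, so that only the new owner may trade, give away, or wear the
    face virtual model (illustrative only)."""
    record = ownership_ledger[model_id]
    if record["owner"] != second_user:
        raise PermissionError("only the current owner can transfer ownership")
    record["owner"] = first_user        # the second user's rights lapse, the first user's begin
    return record
```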
Referring to fig. 9, fig. 9 is another schematic structural diagram of a virtual model processing apparatus according to an embodiment of the present application, which is applied to a second terminal, and the apparatus includes:
a third display unit 401, configured to display a game interface, where the game interface includes a virtual scene, and a first virtual character and a second virtual character that are located in the virtual scene, where the first virtual character is a virtual character controlled by a first user, and the second virtual character is a virtual character controlled by a second user, and the game interface is displayed on a terminal of the second user;
a receiving unit 402, configured to receive a face virtual model authorization request for authorizing a face virtual model of the second virtual character to the first virtual character, and display face virtual model authorization prompt information in an information prompt area of the game interface based on the face virtual model authorization request;
a determining unit 403 configured to determine an authorization condition in response to an authorization setting operation for the face virtual model authorization prompt information;
A processing unit 404, configured to authorize, based on the authorization condition, a face virtual model of the second virtual character to the first virtual character, so that a first user corresponding to the first virtual character obtains an updated first virtual character, where the updated first virtual character is: a virtual character obtained by adjusting the face of the first virtual character based on the face virtual model of the second virtual character.
In some embodiments, the apparatus further comprises:
a fourth receiving subunit, configured to receive a face virtual model interchange request sent by a first terminal, where the face virtual model interchange request carries face virtual model data of a face virtual model of a first virtual role, and the first terminal is a terminal of a game account corresponding to the first virtual role;
a fourth display subunit, configured to display a face virtual model authorization prompt message in the information prompt area of the game interface based on the face virtual model interchange request;
a seventh response subunit, configured to determine that an authorization condition is unconditional consent in response to a confirmation operation for the face virtual model authorization prompt information;
an updating unit, configured to update a face of the second virtual character based on the face virtual model of the first virtual character, obtain an updated second virtual character, and display the updated second virtual character on the game interface;
A third sending unit, configured to send, to the first terminal, the face virtual model of the second virtual role, so that a first user corresponding to the first virtual role obtains an updated first virtual role, where the updated first virtual role is: a virtual character obtained by adjusting the face of the first virtual character based on the face virtual model of the second virtual character.
In some embodiments, the apparatus further comprises:
and a third detection unit configured to take the payment information as an authorization condition when an input determination operation of the payment information input through the amount input control is detected.
In some embodiments, the apparatus further comprises:
an eighth response subunit configured to respond to a drag operation for the second virtual character;
and the fourth detection unit is used for generating a face virtual model giving instruction when the drag operation is detected to be released at the position where the first virtual character is located, wherein the face virtual model giving instruction is used for inquiring whether the first terminal accepts the face virtual model of the second virtual character.
The embodiment of the application provides a virtual model processing apparatus. A game interface is displayed through the first display unit 301, where the game interface includes a virtual scene, and a first virtual character and at least one second virtual character located in the virtual scene; the first virtual character is a virtual character controlled by a first user, the second virtual character is a virtual character controlled by a second user, and the game interface is displayed on the terminal of the first user. The first response unit 302, in response to a face virtual model acquisition event of acquiring a face virtual model from a target virtual character in the second virtual character, adjusts the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character. In this way, a player can trigger the face virtual model acquisition event through a trigger operation on the game interface, quickly obtain a face model meeting his or her own needs during the game, and thereby improve processing efficiency.
Correspondingly, the embodiment of the present application also provides a computer device, which may be a terminal or a server, where the terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, Personal Computer), or a personal digital assistant (Personal Digital Assistant, PDA). As shown in fig. 10, fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 500 includes a processor 501 having one or more processing cores, a memory 502 having one or more storage media, and a computer program stored on the memory 502 and executable on the processor. The processor 501 is electrically connected to the memory 502. Those skilled in the art will appreciate that the computer device structure shown in the figure does not limit the computer device; the computer device may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The processor 501 is a control center of the computer device 500, connects various parts of the entire computer device 500 using various interfaces and lines, and performs various functions of the computer device 500 and processes data by running or loading software programs and/or modules stored in the memory 502, and calling data stored in the memory 502, thereby performing overall monitoring of the computer device 500.
In the embodiment of the present application, the processor 501 in the computer device 500 loads instructions corresponding to the processes of one or more application programs into the memory 502, and the processor 501 runs the application programs stored in the memory 502, thereby implementing various functions as in the following steps:
displaying a game interface, wherein the game interface comprises a virtual scene, a first virtual role and at least one second virtual role, wherein the first virtual role and the at least one second virtual role are positioned in the virtual scene, the first virtual role is a virtual role controlled by a first user, the second virtual role is a virtual role controlled by a second user, and the game interface is displayed on a terminal of the first user;
and responding to a face virtual model acquisition event of acquiring a face virtual model from a target virtual role in the second virtual role, and adjusting the face of the first virtual role based on the face virtual model of the target virtual role to obtain an updated first virtual role.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 10, the computer device 500 further includes: a touch display screen 503, a radio frequency circuit 504, an audio circuit 505, an input unit 506, and a power supply 507. The processor 501 is electrically connected to the touch display 503, the radio frequency circuit 504, the audio circuit 505, the input unit 506, and the power supply 507, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 10 is not limiting of the computer device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components.
The touch display screen 503 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 503 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), or the like. The touch panel may be used to collect touch operations performed by the user on or near it (such as operations performed by the user on or near the touch panel with any suitable object or accessory such as a finger or a stylus) and to generate corresponding operation instructions, according to which the corresponding program is executed. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 501, and it can also receive commands from the processor 501 and execute them. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the touch operation is passed to the processor 501 to determine the type of touch event, and the processor 501 then provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 503 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display screen 503 may also serve as part of the input unit 506 to implement an input function.
In the present embodiment, a graphical user interface is generated on touch-sensitive display screen 503 by processor 501 executing a gaming application. The touch display screen 503 is used for presenting a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface.
The radio frequency circuitry 504 may be used to transceive radio frequency signals to establish wireless communications with a network device or other computer device via wireless communications.
The audio circuit 505 may be used to provide an audio interface between the user and the computer device through a speaker, a microphone, and so on. The audio circuit 505 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit 505 and converted into audio data; the audio data are then processed by the processor 501 and sent, for example, to another computer device via the radio frequency circuit 504, or output to the memory 502 for further processing. The audio circuit 505 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 507 is used to power the various components of the computer device 500. Alternatively, the power supply 507 may be logically connected to the processor 501 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system. The power supply 507 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 10, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which will not be described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
As can be seen from the foregoing, in the computer device provided in this embodiment, by displaying a game interface, where the game interface includes a virtual scene, and a first virtual character and at least one second virtual character that are located in the virtual scene, where the first virtual character is a virtual character controlled by a first user, and the second virtual character is a virtual character controlled by a second user, and the game interface is displayed on a terminal of the first user; and responding to a face virtual model acquisition event of acquiring a face virtual model from a target virtual role in the second virtual role, and adjusting the face of the first virtual role based on the face virtual model of the target virtual role to obtain an updated first virtual role. According to the method and the device for obtaining the face virtual model, the player can trigger the face virtual model to obtain the event through triggering operation on the game interface, so that the player can quickly obtain the face model meeting the self requirements in the game process, and the processing efficiency is improved.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods in the above embodiments may be completed by instructions, or by instructions controlling the associated hardware; the instructions may be stored in a storage medium (e.g., a computer-readable storage medium) and loaded and executed by a processor.
To this end, the embodiments of the present application provide a storage medium in which a plurality of computer programs are stored, which are capable of being loaded by a processor to perform the steps of any of the virtual model processing methods provided in the embodiments of the present application. For example, the computer program may perform the steps of:
displaying a game interface, wherein the game interface comprises a virtual scene, a first virtual role and at least one second virtual role, wherein the first virtual role and the at least one second virtual role are positioned in the virtual scene, the first virtual role is a virtual role controlled by a first user, the second virtual role is a virtual role controlled by a second user, and the game interface is displayed on a terminal of the first user;
and responding to a face virtual model acquisition event of acquiring a face virtual model from a target virtual role in the second virtual role, and adjusting the face of the first virtual role based on the face virtual model of the target virtual role to obtain an updated first virtual role.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: read Only Memory (ROM), random access Memory (RAM, random Access Memory), magnetic or optical disk, and the like.
The steps in any virtual model processing method provided in the embodiments of the present application may be executed by the computer program stored in the storage medium, so that the beneficial effects that any virtual model processing method provided in the embodiments of the present application may be achieved are detailed in the previous embodiments, and are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
The foregoing describes in detail a virtual model processing method, apparatus, computer device, and storage medium provided in the embodiments of the present application. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only intended to help understand the technical solutions and core ideas of the present application. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

1. The virtual model processing method is applied to a first terminal and is characterized by comprising the following steps:
displaying a game interface, wherein the game interface comprises a virtual scene, a first virtual role and at least one second virtual role, wherein the first virtual role and the at least one second virtual role are positioned in the virtual scene, the first virtual role is a virtual role controlled by a first user, the second virtual role is a virtual role controlled by a second user, and the game interface is displayed on a terminal of the first user;
and responding to a face virtual model acquisition event of acquiring a face virtual model from a target virtual role in the second virtual role, and adjusting the face of the first virtual role based on the face virtual model of the target virtual role to obtain an updated first virtual role.
2. The virtual model processing method according to claim 1, wherein after adjusting the face of the first virtual character based on the face virtual model of the target virtual character in response to a face virtual model acquisition event of acquiring the face virtual model from the target virtual character in the second virtual character, obtaining the updated first virtual character, further comprising:
And displaying the updated first virtual character on the game interface.
3. The virtual model processing method according to claim 1, wherein after adjusting the face of the first virtual character based on the face virtual model of the target virtual character in response to a face virtual model acquisition event of acquiring the face virtual model from the target virtual character in the second virtual character, obtaining the updated first virtual character, further comprising:
and storing the updated face virtual model of the first virtual character into a face virtual model database, so that the first user can acquire the updated face virtual model of the first virtual character through the face virtual model database in a game.
4. The virtual model processing method of claim 1, wherein the adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain the updated first virtual character in response to a face virtual model acquisition event that acquires the face virtual model from the target virtual character in the second virtual character, comprises:
generating a face virtual model acquisition instruction in response to a face virtual model acquisition operation for the target virtual character;
The face virtual model obtaining instruction is sent to a second terminal, wherein the face virtual model obtaining instruction is used for indicating the second terminal to confirm whether to return target face virtual model data of the face virtual model of the target virtual role to the first terminal, the first terminal is a terminal of a game account corresponding to the first virtual role, and the second terminal is a terminal of a game account corresponding to the target virtual role;
receiving the target face virtual model data returned by the second terminal;
and adjusting the face of the first virtual character based on the target face virtual model data to obtain an updated first virtual character.
5. The virtual model processing method of claim 4, wherein generating facial virtual model acquisition instructions in response to facial virtual model acquisition operations for the target virtual character comprises:
when the relative distance between the first virtual character and the second virtual character is detected to be in a specified distance range, and the duration time of the relative distance in the specified distance range accords with a preset time threshold, the second virtual character is taken as a target virtual character, and a face virtual model acquisition instruction is generated based on target face virtual model data of a face virtual model of the first virtual character.
6. The virtual model processing method of claim 5, further comprising, after receiving the target face virtual model data returned by the second terminal:
adjusting the face of the target virtual character based on the face virtual model of the first virtual character to obtain an updated target virtual character;
and displaying the updated target virtual character on the game interface.
7. The virtual model processing method of claim 4, wherein generating facial virtual model acquisition instructions in response to facial virtual model acquisition operations for the target virtual character comprises:
responding to a drag operation for the target virtual character;
and when the drag operation is detected to be released at the position of the first virtual character, generating a facial virtual model acquisition instruction.
8. The virtual model processing method of claim 4, further comprising, prior to receiving the target face virtual model data returned by the second terminal:
when payment information returned by the second terminal is received, displaying the payment information and a payment control in a preset information display area of the game interface;
Responding to touch operation for the payment control, and acquiring a payment result;
and sending the payment result to the second terminal so that the second terminal returns the target face virtual model data to the first terminal based on the payment result.
9. The virtual model processing method of claim 1, wherein the adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain the updated first virtual character in response to a face virtual model acquisition event that acquires the face virtual model from the target virtual character in the second virtual character, comprises:
when a face virtual model presentation request sent by a second terminal is received, presentation request information is displayed in a preset information display area of the game interface based on the face virtual model presentation request, wherein the face virtual model presentation request carries target face virtual model data of a face virtual model of a target virtual character, and the presentation request information is used for indicating: confirming whether the target face virtual model data is received;
the target face virtual model data is received when a confirmation receiving operation in response to the presentation request information is detected.
10. The virtual model processing method according to claim 9, further comprising, after receiving the target face virtual model data:
generating prompt information based on the target face virtual model data, wherein the prompt information is used for indicating: confirm whether to update the face of the first virtual character based on the target face virtual model data;
when the confirmation operation for the prompt information is responded, the face of the first virtual character is adjusted based on the target face virtual model data, and an updated first virtual character is obtained;
and displaying the updated first virtual character on the game interface.
11. The virtual model processing method according to any one of claims 1 to 10, wherein the adjusting the face of the first virtual character based on the face virtual model of the target virtual character in response to a face virtual model acquisition event that acquires the face virtual model from the target virtual character in the second virtual character, to obtain the updated first virtual character, comprises:
in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character in the second virtual character, acquiring a non-fungible token usage right corresponding to the face virtual model of the target virtual character, wherein a second user controlling the target virtual character has the right to transfer the non-fungible token usage right multiple times, and the non-fungible token usage right is used to indicate a user's right to use the face virtual model on a virtual character;
associating the non-fungible token usage right with the first virtual character, so that the first user has the right to use the face virtual model of the target virtual character on the first virtual character;
and adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
12. The virtual model processing method according to any one of claims 1 to 10, wherein the adjusting the face of the first virtual character based on the face virtual model of the target virtual character in response to a face virtual model acquisition event that acquires the face virtual model from the target virtual character in the second virtual character, to obtain the updated first virtual character, comprises:
in response to a face virtual model acquisition event that acquires a face virtual model from a target virtual character in the second virtual character, acquiring ownership of a non-fungible token corresponding to the face virtual model of the target virtual character, wherein the ownership of the non-fungible token is used to indicate a user's right to trade or give the face virtual model to other users and the user's right to use the face virtual model on a virtual character;
unbinding the ownership of the non-fungible token from the second user corresponding to the target virtual character, and associating the ownership of the non-fungible token with the first user corresponding to the first virtual character, so as to cancel the second user's right to trade or give the face virtual model of the target virtual character to other users and to use the face virtual model of the target virtual character on the first virtual character, and so that the first user has the right to trade or give the face virtual model of the target virtual character to other users and to use the face virtual model of the target virtual character on the first virtual character;
and adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
13. A virtual model processing method applied to a second terminal, comprising:
displaying a game interface, wherein the game interface comprises a virtual scene, a first virtual role and a second virtual role, wherein the first virtual role and the second virtual role are positioned in the virtual scene, the first virtual role is a virtual role controlled by a first user, the second virtual role is a virtual role controlled by a second user, and the game interface is displayed on a terminal of the second user;
Receiving a face virtual model authorization request for authorizing the face virtual model of the second virtual character to the first virtual character, and displaying face virtual model authorization prompt information in an information prompt area of the game interface based on the face virtual model authorization request;
determining an authorization condition in response to an authorization setup operation for the facial virtual model authorization hint information;
and authorizing the face virtual model of the second virtual role to the first virtual role based on the authorization condition so that a first user corresponding to the first virtual role obtains an updated first virtual role, wherein the updated first virtual role is: a virtual character obtained by adjusting the face of the first virtual character based on the face virtual model of the second virtual character.
14. The virtual model processing method of claim 13, wherein the receiving a face virtual model authorization request to authorize the face virtual model of the second virtual character to the first virtual character, displaying face virtual model authorization prompt information in an information prompt area of the game interface based on the face virtual model authorization request, comprises:
Receiving a face virtual model interchange request sent by a first terminal, wherein the face virtual model interchange request carries face virtual model data of a face virtual model of a first virtual character, and the first terminal is a terminal of a game account corresponding to the first virtual character;
displaying face virtual model authorization prompt information in an information prompt area of the game interface based on the face virtual model interchange request;
responding to the confirmation operation aiming at the face virtual model authorization prompt information, and determining that the authorization condition is unconditional agreement;
updating the face of the second virtual character based on the face virtual model of the first virtual character to obtain an updated second virtual character, and displaying the updated second virtual character on the game interface;
transmitting the face virtual model of the second virtual role to the first terminal so that a first user corresponding to the first virtual role obtains an updated first virtual role, wherein the updated first virtual role is: a virtual character obtained by adjusting the face of the first virtual character based on the face virtual model of the second virtual character.
15. The virtual model processing method of claim 13, wherein an amount input control is also displayed on the game interface;
the determining, in response to an authorization setup operation for the facial virtual model authorization hint information, an authorization condition includes:
and when detecting the input determination operation of the payment information input through the amount input control, taking the payment information as an authorization condition.
16. The virtual model processing method of claim 13, wherein the receiving a facial virtual model authorization request to authorize the facial virtual model of the second virtual character to the first virtual character comprises:
responding to a drag operation for the second virtual character;
and when the drag operation is detected to be released at the position of the first virtual character, generating a face virtual model giving instruction, wherein the face virtual model giving instruction is used for inquiring whether the first terminal accepts the face virtual model of the second virtual character.
17. A virtual model processing apparatus applied to a first terminal, comprising:
the game device comprises a first display unit, a second display unit and a control unit, wherein the first display unit is used for displaying a game interface, the game interface comprises a virtual scene, a first virtual role and at least one second virtual role, the first virtual role and the at least one second virtual role are positioned in the virtual scene, the first virtual role is a virtual role controlled by a first user, the second virtual role is a virtual role controlled by a second user, and the game interface is displayed on a terminal of the first user;
And the first response unit is used for responding to a face virtual model acquisition event of acquiring a face virtual model from a target virtual character in the second virtual character, and adjusting the face of the first virtual character based on the face virtual model of the target virtual character to obtain an updated first virtual character.
18. A virtual model processing apparatus applied to a second terminal, comprising:
the third display unit is used for displaying a game interface, wherein the game interface comprises a virtual scene, a first virtual role and a second virtual role, the first virtual role and the second virtual role are positioned in the virtual scene, the first virtual role is a virtual role controlled by a first user, the second virtual role is a virtual role controlled by a second user, and the game interface is displayed on a terminal of the second user;
the receiving unit is used for receiving a face virtual model authorization request for authorizing the face virtual model of the second virtual character to the first virtual character, and displaying face virtual model authorization prompt information in an information prompt area of the game interface based on the face virtual model authorization request;
a determining unit configured to determine an authorization condition in response to an authorization setting operation for the face virtual model authorization prompt information;
The processing unit is configured to authorize the face virtual model of the second virtual role to the first virtual role based on the authorization condition, so that a first user corresponding to the first virtual role obtains an updated first virtual role, where the updated first virtual role is: a virtual character obtained by adjusting the face of the first virtual character based on the face virtual model of the second virtual character.
19. A computer device comprising a memory in which a computer program is stored and a processor that performs the steps in the virtual model processing method of any of claims 1 to 16 by invoking the computer program stored in the memory.
20. A storage medium storing a computer program adapted to be loaded by a processor to perform the steps of the virtual model processing method according to any one of claims 1 to 16.
CN202111599971.9A 2021-12-24 2021-12-24 Virtual model processing method, device, computer equipment and storage medium Pending CN116328310A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111599971.9A CN116328310A (en) 2021-12-24 2021-12-24 Virtual model processing method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111599971.9A CN116328310A (en) 2021-12-24 2021-12-24 Virtual model processing method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116328310A true CN116328310A (en) 2023-06-27

Family

ID=86879459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111599971.9A Pending CN116328310A (en) 2021-12-24 2021-12-24 Virtual model processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116328310A (en)

Similar Documents

Publication Publication Date Title
CN111760274B (en) Skill control method, skill control device, storage medium and computer equipment
CN113101652A (en) Information display method and device, computer equipment and storage medium
CN113398590B (en) Sound processing method, device, computer equipment and storage medium
CN112206517B (en) Rendering method, rendering device, storage medium and computer equipment
CN112516589A (en) Game commodity interaction method and device in live broadcast, computer equipment and storage medium
CN113426124A (en) Display control method and device in game, storage medium and computer equipment
CN112870718A (en) Prop using method and device, storage medium and computer equipment
CN114159789A (en) Game interaction method and device, computer equipment and storage medium
CN113332716A (en) Virtual article processing method and device, computer equipment and storage medium
CN114189731B (en) Feedback method, device, equipment and storage medium after giving virtual gift
CN114225412A (en) Information processing method, information processing device, computer equipment and storage medium
CN113350801A (en) Model processing method and device, storage medium and computer equipment
CN116328310A (en) Virtual model processing method, device, computer equipment and storage medium
CN113413600A (en) Information processing method, information processing device, computer equipment and storage medium
CN113398564B (en) Virtual character control method, device, storage medium and computer equipment
CN116328315A (en) Virtual model processing method, device, terminal and storage medium based on block chain
CN117815670A (en) Scene element collaborative construction method, device, computer equipment and storage medium
CN116271791A (en) Game control method, game control device, computer equipment and storage medium
CN117160031A (en) Game skill processing method, game skill processing device, computer equipment and storage medium
CN116966544A (en) Region prompting method, device, storage medium and computer equipment
CN116999825A (en) Game control method, game control device, computer equipment and storage medium
CN116999835A (en) Game control method, game control device, computer equipment and storage medium
CN117771678A (en) Method and device for adjusting virtual component, computer equipment and storage medium
CN117482516A (en) Game interaction method, game interaction device, computer equipment and computer readable storage medium
CN115212566A (en) Virtual object display method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination