CN113350801B - Model processing method, device, storage medium and computer equipment - Google Patents


Info

Publication number
CN113350801B
CN113350801B (application CN202110819172.1A)
Authority
CN
China
Prior art keywords
character model
model
character
user interface
adjusted
Prior art date
Legal status
Active
Application number
CN202110819172.1A
Other languages
Chinese (zh)
Other versions
CN113350801A (en)
Inventor
罗书翰
林�智
胡志鹏
程龙
刘勇成
袁思思
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110819172.1A
Publication of CN113350801A
Application granted
Publication of CN113350801B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/825 Fostering virtual characters
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/807 Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a model processing method, a device, a storage medium and computer equipment, wherein the method comprises the following steps: displaying a first character model of a virtual character in a user interface; in response to a first copy operation for the first character model, copying the first character model to obtain a second character model; simultaneously displaying the first character model and the second character model in the user interface; and adjusting the first character model and/or the second character model in response to model adjustment operations for the first character model and/or the second character model. The application makes it possible to obtain an ideal character model.

Description

Model processing method, device, storage medium and computer equipment
Technical Field
The present application relates to the field of computers, and in particular, to a model processing method, apparatus, computer readable storage medium, and computer device.
Background
In recent years, with the development and popularization of computer equipment technology, more and more game applications have emerged, such as casual games, action games, role-playing games, strategy games, sports games, educational games, and shooting games. Taking a role-playing game as an example, in order to meet the personalized customization needs of different players, a face-pinching function is generally provided for players when creating the virtual characters corresponding to them, in the expectation of obtaining an ideal virtual character model. However, with the pinch function provided by the related art, it is difficult to obtain an ideal virtual character model.
Disclosure of Invention
The embodiment of the application provides a model processing method, a model processing device, a computer readable storage medium and computer equipment, which can obtain an ideal character model.
In order to solve the above technical problems, the embodiment of the application provides the following technical solutions:
A model processing method, comprising:
Displaying a first character model of the virtual character in a user interface;
Responsive to a first copy operation for the first character model, copying the first character model to obtain a second character model;
Simultaneously displaying the first character model and the second character model in the user interface;
The first character model and/or the second character model are adjusted in response to model adjustment operations for the first character model and/or the second character model.
A model processing apparatus comprising:
A first display module for displaying a first character model of the virtual character in the user interface;
A replication module for replicating the first character model to obtain a second character model in response to a first replication operation for the first character model;
A second display module for simultaneously displaying the first character model and the second character model in the user interface;
an adjustment module for adjusting the first character model and/or the second character model in response to a model adjustment operation for the first character model and/or the second character model.
A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in the model processing method described above.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the model processing method as described above when the program is executed.
In the embodiment of the application, a first character model of a virtual character is displayed in a user interface; in response to a first copy operation for the first character model, the first character model is copied to obtain a second character model; the first character model and the second character model are displayed simultaneously in the user interface; and the first character model and/or the second character model are adjusted in response to model adjustment operations for them. In this way, at least two different character models of the virtual character can be presented to the player simultaneously, enabling the player to select a favorite character model, that is, an ideal character model, from the at least two different character models.
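The claimed flow can be illustrated with a minimal, hypothetical sketch. The patent specifies no implementation; the class names (`CharacterModel`, `ModelEditor`) and the adjustable attributes are assumptions made for illustration only.

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass
class CharacterModel:
    # Illustrative adjustable attributes; not specified by the patent.
    name: str
    eye_size: float = 1.0
    nose_length: float = 1.0

class ModelEditor:
    def __init__(self, first_model: CharacterModel):
        # Step 101: display the first character model in the user interface.
        self.displayed: List[CharacterModel] = [first_model]

    def copy_first_model(self) -> CharacterModel:
        # Step 102: copy the first model to obtain a second model.
        second = replace(self.displayed[0], name=self.displayed[0].name + "_copy")
        # Step 103: both models are now displayed simultaneously.
        self.displayed.append(second)
        return second

    def adjust(self, model: CharacterModel, **params: float) -> None:
        # Step 104: adjust whichever model the player selected.
        for key, value in params.items():
            setattr(model, key, value)
```

With this sketch, adjusting the copy leaves the original untouched, so the player ends up with two different candidate models to choose between.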
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1a is a schematic diagram of a model processing system according to an embodiment of the present application.
Fig. 1b is a schematic flow chart of a model processing method according to an embodiment of the present application.
Fig. 1c is a first schematic diagram of a user interface according to an embodiment of the present application.
Fig. 2a is a second schematic flow chart of a model processing method according to an embodiment of the present application.
Fig. 2b is a second schematic diagram of a user interface according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a model processing device according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The embodiment of the application provides a model processing method, a model processing device, a storage medium and computer equipment. Specifically, the model processing method of the embodiment of the application may be executed by a computer device, where the computer device may be a terminal or a server. The terminal may be a device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC), or a personal digital assistant (PDA), and the terminal device may further include a client, where the client may be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms.
For example, when the model processing method is run on the terminal, the terminal device stores a game application program and presents a part of a game scene in a game through a display component. The terminal device is used for interacting with a user through a graphical user interface, for example, the terminal device downloads and installs a game application program and runs the game application program. The way in which the terminal device presents the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device, or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including game screens and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for running the game, generating the graphical user interface, responding to the operation instructions, and controlling the display of the graphical user interface on the touch display screen.
For example, when the model processing method runs on a server, the game may be a cloud game. Cloud gaming refers to a game mode based on cloud computing. In the running mode of a cloud game, the running body of the game application program is separated from the body that presents the game pictures, and the storage and running of the model processing method are completed on a cloud game server. The game picture presentation is completed at a cloud game client, which is mainly used for receiving and sending game data and presenting game pictures; for example, the cloud game client may be a display device with a data transmission function near the user side, such as a mobile terminal, a television, a computer, a palm computer, or a personal digital assistant, but the terminal device executing the model processing method is the cloud game server in the cloud. When playing the game, the user operates the cloud game client to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as game pictures, and returns the data to the cloud game client through a network; finally, the cloud game client decodes the data and outputs the game pictures.
Referring to Fig. 1a, Fig. 1a is a schematic diagram of a model processing system according to an embodiment of the application. The system may include at least one terminal 1000, at least one server 2000, at least one database 3000, and a network 4000. A terminal 1000 held by a user may be connected to the servers of different games through the network 4000. Terminal 1000 can be any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, terminal 1000 can have one or more multi-touch-sensitive screens for sensing and obtaining input from a user through touch or slide operations performed at multiple points of the one or more touch-sensitive display screens. In addition, when the system includes a plurality of terminals 1000, a plurality of servers 2000, and a plurality of networks 4000, different terminals 1000 may be connected to each other through different networks 4000 and different servers 2000. The network 4000 may be a wireless network or a wired network, such as a wireless local area network (WLAN), a local area network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, or a 5G network. In addition, different terminals 1000 may be connected to other terminals or to a server using their own Bluetooth network or hotspot network. For example, multiple users may be online through different terminals 1000 so as to be connected via an appropriate network and synchronized with each other to support multiplayer games. In addition, the system may include a plurality of databases 3000 coupled to different servers 2000, and information related to the game environment may be continuously stored in the databases 3000 while different users play the multiplayer game online.
The embodiment of the application provides a model processing method which can be executed by a terminal or a server. The embodiment of the application is described by taking the case where the model processing method is executed by a terminal as an example. The terminal comprises a display component and a processor, where the display component is used for presenting a graphical user interface and receiving operation instructions generated by a user acting on the display component. When a user operates the graphical user interface through the display component, the graphical user interface can control the local content of the terminal by responding to the received operation instructions, and can also control the content of the opposite-end server by responding to the received operation instructions. For example, the user-generated operation instructions for the graphical user interface include an instruction for launching the game application, and the processor is configured to launch the game application after receiving the user-provided instruction. Further, the processor is configured to render and draw a graphical user interface associated with the game on a touch display screen. The touch display screen is a multi-touch-sensitive screen capable of sensing touch or slide operations performed simultaneously at a plurality of points on the screen. The user performs touch operations on the graphical user interface with a finger, and when the graphical user interface detects a touch operation, it controls different virtual objects in the graphical user interface of the game to perform actions corresponding to the touch operation. For example, the game may be any of a casual game, an action game, a role-playing game, a strategy game, a sports game, an educational game, a first-person shooter (FPS) game, and the like.
Wherein the game may comprise a virtual scene of the game drawn on a graphical user interface. Further, one or more virtual objects, such as virtual characters, controlled by a user (or player) may be included in the virtual scene of the game. In addition, one or more obstacles, such as rails, ravines, walls, etc., may also be included in the virtual scene of the game to limit movement of the virtual object, e.g., to limit movement of the one or more objects to a particular area within the virtual scene. Optionally, the virtual scene of the game also includes one or more elements, such as skills, scores, character health status, energy, etc., to provide assistance to the player, provide virtual services, increase scores related to the player's performance, etc. In addition, the graphical user interface may also present one or more indicators to provide indication information to the player. For example, a game may include a player controlled virtual object and one or more other virtual objects (such as enemy characters). In one embodiment, one or more other virtual objects are controlled by other players of the game. For example, one or more other virtual objects may be computer controlled, such as a robot using an Artificial Intelligence (AI) algorithm, implementing a human-machine engagement mode. For example, virtual objects possess various skills or capabilities that a game player uses to achieve a goal. For example, the virtual object may possess one or more weapons, props, tools, etc. that may be used to eliminate other objects from the game. Such skills or capabilities may be activated by the player of the game using one of a plurality of preset touch operations with the touch display screen of the terminal. The processor may be configured to present a corresponding game screen in response to an operation instruction generated by a touch operation of the user.
It should be noted that the schematic diagram of the model processing system shown in Fig. 1a is only an example. The model processing system and scenario described in the embodiment of the present application are intended to describe the technical solution of the embodiment more clearly and do not constitute a limitation on it; those skilled in the art will appreciate that, with the evolution of model processing systems and the appearance of new service scenarios, the technical solution provided by the embodiment of the present application is equally applicable to similar technical problems.
In the present embodiment, the description is given from the viewpoint of a model processing apparatus, which may be integrated in a computer device that has a storage unit, is equipped with a microprocessor, and has computing capability.
Referring to fig. 1b, fig. 1b is a schematic flow chart of a model processing method according to an embodiment of the application. The model processing method comprises the following steps:
101. A first character model of the virtual character is displayed in a user interface.
In recent years, with the development and popularization of computer equipment technology, more and more game applications have emerged, such as casual games, action games, role-playing games, strategy games, sports games, educational games, and shooting games. Taking a role-playing game as an example, in order to meet the personalized customization needs of different players, a face-pinching function is generally provided for players when creating the virtual characters corresponding to them. The pinch function refers to a function that allows a player to freely adjust the character model of a virtual character, such as the head, the upper body, or the lower body of the character model.
Taking a role-playing game as an example, when a player launches the role-playing game for the first time, the terminal displays a graphical user interface for creating a virtual character. The graphical user interface for creating the virtual character may be as shown in Fig. 1c. As shown in Fig. 1c, the graphical user interface includes a character model 100 of the virtual character to be created, a shaping control 200, a makeup control 300, a hairstyle control 400, and the like.
Wherein the character model 100 includes a head and an upper body. It is to be appreciated that the character model 100 is not limited to the form shown in Fig. 1c; for example, the character model 100 may include only a head, or may include a head, an upper body, and a lower body. In response to a touch operation on the shaping control 200, controls for adjusting the facial structure of the character model 100, such as controls for adjusting the size of the eyes or the length of the nose, may be displayed on the graphical user interface. In response to a touch operation on the makeup control 300, controls for adjusting the facial appearance of the character model 100, such as controls 301 and 302 for changing the eyebrows, the lips, the eye color, and the like, may be displayed on the graphical user interface. In response to a touch operation on the hairstyle control 400, controls for adjusting the hairstyle of the character model 100 may be displayed on the graphical user interface.
In some embodiments, the graphical user interface may also display adjustment controls corresponding to the upper body so that the upper body of the character model 100 may be adjusted. When the character model 100 further includes a lower body, the graphical user interface may also display adjustment controls corresponding to the lower body to adjust the lower body of the character model 100.
In this embodiment, the first character model may be displayed in the user interface. The user interface may be a graphical user interface as shown in FIG. 1c and the first character model may be character model 100 as shown in FIG. 1 c.
It will be appreciated that the graphical user interface and the character model 100 shown in Fig. 1c are merely one example of the user interface and the first character model in this embodiment and are not intended to limit the application.
It should be noted that, in the embodiment of the present application, the display position of the first character model is not limited, and the first character model may be displayed at any position of the user interface. Generally, the first character model is displayed in an intermediate position of the user interface.
102. In response to a first copy operation for the first character model, the first character model is copied to obtain a second character model.
For example, the first copy operation may be a touch operation or a non-contact operation. When the first copy operation is a touch operation, it may be a sliding operation or a clicking operation; for example, a long press operation, a heavy press operation, a long press combined with a slide (e.g., a long press followed by a slide in a certain direction), a slide operation, or the like. A long press operation is a press operation whose press duration is longer than a preset duration; a heavy press operation is a press operation whose press pressure is greater than a preset pressure; and a long press combined with a slide is a press operation whose press duration is longer than the preset duration combined with a slide operation that takes the press point as its slide start point. The user may perform the first copy operation at any location of the user interface; terminal 1000 then receives a first copy instruction for the first character model generated by the user acting on the user interface, and may copy the first character model to obtain the second character model in response to the first copy operation. The preset duration and the preset pressure may be set according to actual conditions and are not particularly limited herein.
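The touch-gesture distinctions above (long press, heavy press, long press combined with slide, plain slide) can be sketched as a simple classifier. The threshold values `PRESS_TIME_S` and `PRESS_FORCE` below are assumed for illustration, since the patent leaves the preset duration and pressure to the actual design:

```python
PRESS_TIME_S = 0.8   # assumed "preset duration" for a long press, in seconds
PRESS_FORCE = 0.6    # assumed "preset pressure" for a heavy press (normalized 0..1)

def classify_touch(duration_s: float, force: float, slide_distance: float) -> str:
    """Map raw touch attributes to one of the gestures described in the text."""
    long_press = duration_s > PRESS_TIME_S
    heavy_press = force > PRESS_FORCE
    if long_press and slide_distance > 0:
        return "long-press + slide"   # press longer than preset, then slide
    if heavy_press:
        return "heavy press"          # pressure greater than preset
    if long_press:
        return "long press"           # duration longer than preset
    if slide_distance > 0:
        return "slide"
    return "tap"
```

Any of the non-"tap" gestures could then be mapped to the first copy operation, depending on the actual design.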
For example, assuming a first character model is character model 100, a player may press the character model 100 long, and terminal 1000 copies the character model 100 to obtain a second character model.
In some embodiments, when the first character model is copied, only the head of the first character model may be copied; the head and the upper body may be copied; or the head, the upper body and the lower body may be copied, and so on. This is not particularly limited herein, and the actual design shall prevail.
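The part-wise copying described above can be sketched as follows; representing the model as a dictionary of named parts is an assumption made for illustration, and the set of parts to duplicate is left as a parameter since the patent defers it to the actual design:

```python
import copy

def copy_model_parts(model: dict, parts: tuple = ("head",)) -> dict:
    """Deep-copy only the requested parts of a character model.

    A deep copy is used so that later adjustments to the second model
    do not affect the first model.
    """
    return {part: copy.deepcopy(model[part]) for part in parts if part in model}
```

For example, passing `("head", "upper_body")` duplicates only those two parts, leaving the lower body out of the second model.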
When the first copy operation is a non-contact operation, it may be a voice control operation, a gesture control operation, or the like. For example, terminal 1000 may receive the first copy operation when the player speaks a preset phrase, such as "copy character model", or when the player makes a preset gesture, such as a thumbs-up gesture.
103. The first character model and the second character model are displayed simultaneously in the user interface.
It will be appreciated that after the second character model is obtained, the second character model can be displayed in the user interface such that the first character model and the second character model can be displayed simultaneously in the user interface.
For example, assuming that the second character model is a duplicate of character model 100, two identical character models 100 are displayed simultaneously in the user interface.
In some embodiments, the second character model may be displayed in a user interface with a preset transparency. The preset transparency may be set according to practical situations, and is not particularly limited herein.
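A minimal sketch of displaying the copy at a preset transparency follows; the default alpha of 0.5 and the render-entry dictionaries are assumed values for illustration, as the patent leaves the preset transparency to the actual design:

```python
def build_display_list(first_id: str, second_id: str, preset_alpha: float = 0.5):
    """Return render entries for both models; the copy uses preset_alpha."""
    if not 0.0 <= preset_alpha <= 1.0:
        raise ValueError("transparency must be within [0, 1]")
    return [
        {"model": first_id, "alpha": 1.0},           # original, fully opaque
        {"model": second_id, "alpha": preset_alpha}, # copy, semi-transparent
    ]
```

Rendering the copy semi-transparent gives the player a visual cue as to which model is the duplicate before any adjustment is made.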
104. The first character model and/or the second character model are adjusted in response to model adjustment operations for the first character model and/or the second character model.
For example, after the second character model is obtained, a model to be adjusted may be determined from the first character model and the second character model in response to a model selection operation. When the model to be adjusted is the first character model, the first character model may be adjusted in response to a model adjustment operation for the first character model. When the model to be adjusted is the second character model, the second character model may be adjusted in response to a model adjustment operation for the second character model.
For example, if the player wants to adjust the first character model, the player may click on the first character model; the terminal 1000 then receives a model selection operation selecting the first character model as the model to be adjusted, and subsequently received model adjustment operations are all adjustment operations for the first character model.
It will be appreciated that the player may adjust only the first character model, or only the second character model, or both the first and second character models, as desired by the player.
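The select-then-adjust routing described above can be sketched as follows; the `SelectionState` class and the dictionary-based models are illustrative assumptions, not part of the patent:

```python
class SelectionState:
    """Routes model adjustment operations to the currently selected model."""

    def __init__(self):
        self.target = None  # no model selected yet

    def select(self, model_id: str) -> None:
        # A click on a model selects it as the model to be adjusted.
        self.target = model_id

    def adjust(self, adjustments: dict, models: dict) -> None:
        # Subsequent adjustment operations apply only to the selected model.
        if self.target is None:
            raise RuntimeError("no model selected")
        models[self.target].update(adjustments)
```

With this sketch, adjusting after selecting the first model leaves the second model untouched, matching the behaviour described for the terminal.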
In the embodiment of the application, a first character model of a virtual character is displayed in a user interface; in response to a first copy operation for the first character model, the first character model is copied to obtain a second character model; the first character model and the second character model are displayed simultaneously in the user interface; and the first character model and/or the second character model are adjusted in response to model adjustment operations for them. In this way, at least two different character models of the virtual character can be presented to the player simultaneously, enabling the player to select a favorite character model, that is, an ideal character model, from the at least two different character models.
In some embodiments, copying the first character model to obtain the second character model in response to the first copy operation may further comprise:
determining a first display position of the second character model in the user interface based on the first copy operation;
and simultaneously displaying the first character model and the second character model in the user interface may include:
displaying the first character model in the user interface while displaying the second character model at the first display position.
For example, when the user performs the first copy operation, the terminal 1000 may receive two operation instructions simultaneously: the first is a copy instruction and the second is a position determination instruction. After executing the copy instruction, the terminal copies the first character model to obtain the second character model; after executing the position determination instruction, the terminal determines the display position of the second character model, that is, the first display position.
After determining the first display position, the second character model may be displayed at the first display position while the first character model is displayed. For example, the first display position may be a position on the right side of the first character model, so that the second character model is displayed on the right side of the first character model. For another example, the first display position may be a position on the left side of the first character model, so that the second character model is displayed on the left side of the first character model.
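Placing the second model to the left or right of the first can be sketched as a simple offset computation; the pixel offset is an assumed value, not specified by the patent:

```python
OFFSET_X = 300  # assumed horizontal spacing between the two models, in UI pixels

def first_display_position(first_pos: tuple, side: str = "right") -> tuple:
    """Place the second model beside the first, on the given side."""
    x, y = first_pos
    return (x + OFFSET_X, y) if side == "right" else (x - OFFSET_X, y)
```

For instance, with the first model centered at (500, 400), the copy would land at (800, 400) on the right or (200, 400) on the left.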
In some embodiments, the first copy operation includes a copy sub-operation and a position determination sub-operation, and copying the first character model to obtain the second character model in response to the first copy operation may include:
in response to the copy sub-operation, copying the first character model to obtain the second character model;
Determining the first display position of the second character model in the user interface according to the first copy operation includes:
determining the first display position of the second character model in the user interface according to the position determination sub-operation.
For example, the first copy operation may include two sub-operations: a copy sub-operation and a position determination sub-operation. The copy sub-operation may be used to copy the first character model to obtain the second character model, and the position determination sub-operation may be used to determine the display position of the second character model in the user interface, i.e., the first display position.
For example, the copy sub-operation may include a press operation whose pressing force is greater than a preset pressure; when such a press operation is received, it may be determined that the copy sub-operation has been received. The position determination sub-operation may include a slide operation that takes the press point of the copy sub-operation as its slide start point, and the position of the slide end point of the position determination sub-operation in the user interface may be determined as the first display position of the second character model in the user interface.
For another example, a two-finger click operation by the user may be received, where the click operation of one finger is used to copy the first character model and the click position of the other finger's click operation may serve as the display position of the second character model, that is, the first display position. In some embodiments, to avoid misoperation, the corresponding copy processing and position-determination processing may be performed only when a two-finger click operation is received in which the pressing force of at least one finger is greater than a preset pressure and/or the pressing time is longer than a preset duration. To make it easy to distinguish which finger's click copies the first character model and which finger's click determines the first display position, it may be specified that one finger's click acts on the first character model while the other finger's click acts on a position of the user interface other than the position of the first character model; the first character model may then be copied based on the click acting on the first character model, and the first display position may be determined according to the other finger's click.
The preset pressure and the preset duration may be set according to actual conditions and are not particularly limited herein.
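As a rough sketch, the two sub-operations described above can be modeled as simple predicates over touch input. The function names, the normalized force scale, and the threshold value are illustrative assumptions, not values disclosed by the patent.

```python
PRESET_PRESSURE = 0.6  # normalized force threshold (assumed value)

def is_copy_sub_operation(press_force):
    """A press whose force exceeds the preset pressure counts as the copy sub-operation."""
    return press_force > PRESET_PRESSURE

def first_display_position(press_point, slide_start, slide_end):
    """A slide starting at the copy sub-operation's press point is the
    position-determination sub-operation; its end point becomes the
    first display position of the second character model."""
    if slide_start != press_point:
        return None  # not the position-determination sub-operation
    return slide_end
```

For instance, a press of force 0.8 at (120, 300) followed by a slide ending at (260, 300) would copy the model and place the copy at (260, 300).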
In some embodiments, the model processing method may further include:
when a pressing operation whose pressing duration is longer than a first preset duration is received, determining that the copy sub-operation has been received;
when a sliding operation taking the pressing point of the copy sub-operation as its sliding start point is received, determining that the position-determination sub-operation has been received;
Determining a first display position of the second character model in the user interface based on the position-determination sub-operation may include:
determining the position of the sliding end point of the position-determination sub-operation in the user interface as the first display position.
For example, the player may long-press the first character model for the first preset duration and then, taking the pressing point of the long-press operation as the sliding start point, slide a certain distance to the right of the first character model. After receiving the copy sub-operation and the position-determination sub-operation, the terminal may copy the first character model to obtain the second character model and display the second character model at the position of the sliding end point of the position-determination sub-operation.
In some embodiments, when the player long-presses the first character model for the first preset duration and slides a distance to the right of the first character model with the pressing point of the long-press operation as the sliding start point, the second character model may be displayed at an arbitrary position on the right side of the first character model. Likewise, when the player long-presses the first character model for the first preset duration and slides a distance to the left, the second character model may be displayed at an arbitrary position on the left side of the first character model.
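The side-selection rule just described reduces to comparing the slide end point with the press point along the horizontal axis. This is a minimal sketch under that assumption; the coordinate convention (x increasing rightward) is assumed.

```python
def placement_side(press_x, slide_end_x):
    """Sliding to the right of the pressed model displays the copy on the
    right side; sliding to the left displays it on the left side."""
    return "right" if slide_end_x > press_x else "left"
```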
In some embodiments, the model processing method may further include:
when a second pressing operation whose pressing duration is longer than a second preset duration is received, determining that the first copy operation has been received;
Determining a first display position of the second character model in the user interface according to the first copy operation may include:
determining the pressing duration of the first copy operation;
and determining the first display position of the second character model in the user interface according to the pressing duration of the first copy operation.
For example, the player may press the first character model for longer than the second preset duration; the terminal may then receive the first copy operation, copy the first character model to obtain the second character model in response, and determine the first display position of the second character model in the user interface according to the pressing duration of the first copy operation. The longer the press, the farther the second character model is placed from the first character model; conversely, the shorter the press, the closer the second character model is to the first character model. The player can therefore control the pressing duration of the first copy operation according to his or her own needs, so that the second character model is displayed at the desired position.
It will be appreciated that the second character model may be displayed to the left or right of the first character model for convenience in comparing the character models by the player.
In some embodiments, after the first copy operation is received, a real-time display position may also be determined according to the real-time pressing duration of the first copy operation, and the second character model may be displayed at that real-time display position. This provides the player with a visual reference, making it easier to decide where the second character model should be displayed.
The real-time pressing duration is a duration that changes continuously as the pressing time of the first copy operation increases, and the real-time display position is a position that changes in real time with the real-time pressing duration.
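One way to realize the duration-to-distance relationship above is a simple linear mapping. The threshold, the scale factor, and the linearity itself are assumptions for illustration; the patent only requires that a longer press yield a larger distance. Calling the same function repeatedly with the growing press time also produces the real-time display position.

```python
SECOND_PRESET_DURATION = 0.5   # seconds before a press counts as the first copy operation (assumed)
PIXELS_PER_SECOND = 120.0      # assumed linear scale from press time to distance

def copy_offset(press_duration):
    """Longer press -> second character model placed farther from the first.
    Re-evaluating with the real-time press duration gives the real-time position."""
    if press_duration < SECOND_PRESET_DURATION:
        return 0.0  # not yet recognized as the first copy operation
    return (press_duration - SECOND_PRESET_DURATION) * PIXELS_PER_SECOND
```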
In some embodiments, the model processing method may further include:
when a third pressing operation whose pressing force is greater than the preset pressure is received, determining that the first copy operation has been received;
Determining a first display position of the second character model in the user interface according to the first copy operation may include:
determining the pressing force of the first copy operation;
and determining the first display position of the second character model in the user interface according to the pressing force of the first copy operation.
For example, the player may press the first character model with a force greater than the preset pressure; the terminal may then receive the first copy operation, copy the first character model to obtain the second character model in response, and determine the first display position of the second character model in the user interface according to the pressing force of the first copy operation. The greater the pressing force, the farther the second character model is placed from the first character model; conversely, the smaller the pressing force, the closer the second character model is to the first character model. The player can therefore control the pressing force of the first copy operation according to his or her own needs, so that the second character model is displayed at the desired position.
It will be appreciated that the second character model may be displayed to the left or right of the first character model for convenience in comparing the character models by the player.
In some embodiments, upon receipt of the first copy operation, a real-time display position may also be determined based on the real-time pressing force of the first copy operation, and the second character model may be displayed at the real-time display position, thereby providing a visual reference to the player to facilitate the player in better determining where to display the second character model.
The real-time pressing force is a force that changes continuously as the pressing force of the first copy operation changes, and the real-time display position is a position that changes in real time with the real-time pressing force.
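Analogously to the duration-based variant, the force-to-distance rule can be sketched as a linear mapping above a recognition threshold. Threshold, scale, and linearity are illustrative assumptions; the patent only specifies that greater force yields greater distance.

```python
PRESET_PRESSURE = 0.5   # assumed recognition threshold (normalized force)
OFFSET_SCALE = 300.0    # assumed pixels per unit of excess force

def force_offset(press_force):
    """Greater pressing force -> second character model displayed farther
    from the first; re-evaluating as the force changes gives the
    real-time display position."""
    if press_force <= PRESET_PRESSURE:
        return 0.0
    return (press_force - PRESET_PRESSURE) * OFFSET_SCALE
```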
It should be noted that the first preset duration, the second preset duration, and the preset pressure may be set according to actual situations, and are not particularly limited herein.
In some embodiments, after the first character model and/or the second character model are adjusted in response to model adjustment operations for the first character model and/or the second character model, the method further includes:
Responsive to a second copy operation for the adjusted first character model and/or the adjusted second character model, copying the adjusted first character model and/or the adjusted second character model to obtain a third character model and/or a fourth character model;
And adjusting the adjusted first, second, third, and/or fourth character model in response to an adjustment operation for the adjusted first, second, third, and/or fourth character model.
For example, in response to an adjustment operation for the first character model, an adjusted first character model may be obtained and displayed, and in response to an adjustment operation for the second character model, an adjusted second character model may be obtained and displayed. A third character model may be obtained in response to a second copy operation for the adjusted first character model, and a fourth character model may be obtained in response to a second copy operation for the adjusted second character model. In this way, the player can copy the adjusted character model at any time while adjusting the first character model or the second character model, and can continue adjusting after obtaining the corresponding copied character model. The user interface can thus display a plurality of different character models, allowing the player to compare them and select the favorite one, that is, the ideal character model.
In some embodiments, after the adjusted first, second, third, and/or fourth character model is adjusted in response to the adjustment operation for it, the method may further include:
determining a target character model from character models displayed in the user interface in response to the model determination operation;
the displayed target character model is retained and character models other than the target character model among the character models displayed in the user interface are deleted.
For example, adjusting the adjusted first, second, third, and fourth character models in response to an adjustment operation for them; determining a target character model from the character models displayed in the user interface in response to the model determination operation; and retaining the displayed target character model while deleting the other character models displayed in the user interface may proceed as follows:
After the adjusted first, second, third, and fourth character models are adjusted to obtain and display a fifth, sixth, seventh, and eighth character model, the player may click on any one of them, and the terminal receives a model determination operation indicating that the clicked character model is to be determined as the target character model. In response to the model determination operation, the character model indicated by it may remain displayed while the other character models are deleted.
For example, assuming that the user interface displays the fifth, sixth, seventh, and eighth character models: when the player clicks on the fifth character model, the fifth character model may remain displayed and the other character models are deleted; when the player clicks on the sixth character model, the sixth character model may remain displayed and the other character models are deleted; similarly for the seventh character model; and when the player clicks on the eighth character model, the eighth character model may remain displayed and the other character models are deleted.
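The keep-one-delete-the-rest behavior above can be sketched in a few lines. The function name and the string labels for models are illustrative assumptions.

```python
def confirm_target(displayed_models, target):
    """Keep only the model chosen by the model-determination operation;
    every other displayed character model is deleted."""
    if target not in displayed_models:
        raise ValueError("target must be one of the displayed models")
    return [m for m in displayed_models if m == target]
```

For example, clicking the sixth model in a display of four candidates leaves only the sixth model on screen.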
It will be appreciated that the click operation is only one example of a model determination operation and is not intended to limit the present application. To avoid an erroneous response by terminal 1000, the various operations (such as the first copy operation, the second copy operation, the adjustment operation for a character model, the model selection operation, the model determination operation, and the deletion operation) need to differ from one another. For example, the first copy operation may be a long-press operation while the model determination operation is a click operation.
In some embodiments, if two operations actually correspond to the same input, for example, the model selection operation and the model determination operation both correspond to a click operation, then when the player performs a click operation, corresponding prompt information may be generated to let the player choose whether to perform the model selection operation or the model determination operation. When the player chooses the model selection operation, the model selection operation is executed; when the player chooses the model determination operation, the model determination operation is executed, thereby avoiding an erroneous response by terminal 1000.
It is to be appreciated that the adjusted first, second, third, or fourth character model is adjusted in response to an adjustment operation for the adjusted first, second, third, or fourth character model; determining a target character model from character models displayed in the user interface in response to the model determination operation; the specific process of retaining the displayed target character model and deleting the character model other than the target character model among the character models displayed in the user interface can be referred to the above embodiments, and will not be described herein.
In some embodiments, after the adjusted first, second, third, and/or fourth character model is adjusted in response to the adjustment operation for it, the method may further include:
responsive to the model deletion operation, deleting a corresponding one of the character models displayed by the user interface.
For example, certain inputs may be designated as deletion operations; for instance, clicking on a character model and sliding upward may be set to correspond to deleting that character model. When the player wants to delete a character model, he or she can click on it and slide upward, and terminal 1000 can delete that character model.
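A gesture-to-operation table makes the two safeguards above concrete: distinct gestures map to distinct operations, and an ambiguous binding falls back to prompting the player. The binding names and the table itself are illustrative assumptions built from the examples in the text.

```python
# Hypothetical gesture bindings; "click" is deliberately ambiguous to
# mirror the prompt-the-player safeguard described above.
GESTURE_BINDINGS = {
    "long_press": ["first_copy"],
    "click_swipe_up": ["delete_model"],
    "click": ["model_select", "model_determine"],
}

def resolve_gesture(gesture, player_choice=None):
    """Return the operation bound to a gesture; when two operations share
    a gesture, ask the player to disambiguate."""
    candidates = GESTURE_BINDINGS[gesture]
    if len(candidates) == 1:
        return candidates[0]
    if player_choice in candidates:
        return player_choice
    return "prompt_player"
```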
It should be noted that, after the first character model or the second character model is adjusted in response to the model adjustment operation for the first character model or the second character model, the corresponding copied model may be obtained in response to the corresponding copying operation, and the corresponding model may be adjusted, deleted, etc., which may be referred to the above embodiments and will not be described herein.
Referring to fig. 2a, fig. 2a is a schematic flow chart of a model processing method according to an embodiment of the application. The process may include:
201. A first character model of the virtual character is displayed in a user interface.
Taking a role-playing game as an example, when a player initiates the role-playing game for the first time, the terminal displays a graphical user interface for creating a virtual role. The graphical user interface for creating the virtual character may be as shown in fig. 1 c. As shown in fig. 1c, the graphical user interface for creating a virtual character includes a character model 100, a shaping control 200, a makeup control 300, a hairstyle control 400, and the like for the virtual character to be created.
Wherein the character model 100 includes a head and an upper body. It is to be appreciated that the character model 100 may not be limited to the form shown in FIG. 1c, e.g., the character model 100 may also include only a head, the character model 100 may also include a head, an upper body, a lower body, etc. In response to a touch operation on the shaping control 200, controls for adjusting the facial construction of the character model 100, such as controls for adjusting the size of eyes, controls for the length of the nose, etc., may be displayed on the graphical user interface. In response to a touch operation on cosmetic control 300, controls for adjusting the facial state of character model 100, such as controls 301, 302, etc., for changing the eyebrows, changing the lips, changing the eye colors, etc., may be displayed on the graphical user interface. In response to a touch operation on the hairstyle control 400, a control for adjusting the hairstyle of the character model 100 may be displayed on the graphical user interface.
In some embodiments, the graphical user interface may also display adjustment controls corresponding to the upper body so that the upper body of the character model 100 may be adjusted. When the character model 100 further includes a lower body, the graphical user interface may also display adjustment controls corresponding to the lower body to adjust the lower body of the character model 100.
In this embodiment, the first character model may be displayed in the user interface. The user interface may be a graphical user interface as shown in FIG. 1c and the first character model may be character model 100 as shown in FIG. 1 c.
It will be appreciated that the graphical user interface and character model 100 shown in FIG. 1c is merely one example of a user interface and first character model in this embodiment and is not intended to limit the application.
202. In response to a first copy operation for the first character model, the first character model is copied to obtain a second character model.
For example, the first copy operation may be a touch operation or a non-contact operation. When the first copy operation is a touch operation, it may be a sliding operation or a clicking operation, for example, a long-press operation, a heavy-press operation, a long press combined with a slide (e.g., long-press and slide in a certain direction), a sliding operation, or the like. A long-press operation is a pressing operation whose pressing time exceeds a preset duration; a heavy-press operation is a pressing operation whose pressing force exceeds a preset pressure; and a long press combined with a slide is a pressing operation whose pressing time exceeds a preset duration combined with a sliding operation that takes the pressing point as its sliding start point. The user may perform the first copy operation at any location of the user interface; terminal 1000 may then receive the first copy instruction for the first character model generated by the user acting on the user interface, and may copy the first character model to obtain the second character model in response to the first copy operation. The preset duration and the preset pressure may be set according to actual conditions and are not particularly limited herein.
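A minimal classifier for the touch variants just listed might look as follows. The thresholds, units, and the priority among variants are assumptions for illustration, not choices the patent prescribes.

```python
PRESET_DURATION = 0.8   # seconds (assumed)
PRESET_PRESSURE = 0.6   # normalized force (assumed)

def classify_first_copy(press_duration, press_force, slid=False):
    """Recognize the first copy operation from the touch variants above:
    long press, heavy press, or long press combined with a slide."""
    if press_duration > PRESET_DURATION and slid:
        return "long_press_with_slide"
    if press_duration > PRESET_DURATION:
        return "long_press"
    if press_force > PRESET_PRESSURE:
        return "heavy_press"
    return "none"
```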
For example, assuming a first character model is character model 100, a player may press the character model 100 long, and terminal 1000 copies the character model 100 to obtain a second character model.
In some embodiments, when the first character model is copied, only the head of the first character model may be copied; alternatively, the head and the upper body may be copied, or the head, the upper body, and the lower body may be copied, and so on, depending on actual requirements, which are not limited herein.
When the first copy operation is a non-contact operation, it may be a voice control operation, a gesture control operation, or the like. For example, terminal 1000 can receive the first copy operation when the player speaks a preset phrase such as "copy character model", or when the player makes a preset gesture such as a thumbs-up gesture.
203. A first display location of the second character model in the user interface is determined in accordance with the first copy operation.
204. The first character model is displayed at the user interface while the second character model is displayed at the first display position.
For example, when the user performs the first copy operation, the terminal 1000 may receive two operation instructions simultaneously: a copy instruction and a position-determination instruction. After executing the copy instruction, the terminal may copy the first character model to obtain the second character model; after executing the position-determination instruction, the terminal may determine the display position of the second character model, that is, the first display position.
After the first display position is determined, the second character model may be displayed at the first display position while the first character model remains displayed. For example, the first display position may be a position on the right side of the first character model, so that the second character model is displayed on the right side of the first character model. For another example, the first display position may be a position on the left side of the first character model, so that the second character model is displayed on the left side of the first character model.
205. The first character model is adjusted in response to a model adjustment operation for the first character model.
For example, when the user interface displays the first character model and the second character model simultaneously, the player may select any character model from the first character model and the second character model as the model to be adjusted, and in this embodiment, it is assumed that the player selects the first character model as the model to be adjusted.
In this embodiment, the player may click on control 301 or control 302, and terminal 1000 may receive a model adjustment operation for the first character model, and may adjust the eyebrow shape of the first character model in response to the model adjustment operation for the first character model, to obtain an adjusted first character model.
206. And in response to a second copying operation for the adjusted first character model, copying the adjusted first character model to obtain a third character model.
207. The third character model is adjusted in response to an adjustment operation for the third character model.
In this embodiment, in order to have more character models that can be compared, after the adjusted first character model is obtained, the player may perform a second copy operation to copy the adjusted first character model to obtain a third character model, so that the user interface may display the second character model, the adjusted first character model, and the third character model simultaneously. The player can also perform an adjustment operation to adjust the third character model to obtain an adjusted third character model, so that the user interface can simultaneously display three different character models (the second character model, the adjusted first character model and the adjusted third character model), and further the player can compare the three different character models to select a favorite character model, namely an ideal character model.
It will be appreciated that by continually performing the copy operation and the adjustment operation, the user interface may display more different character models, such as ten different character models, so that more character models are available for selection by the player, and so that the player may choose a preferred character model.
208. A target character model is determined from the character models displayed in the user interface in response to the model determination operation.
209. The displayed target character model is retained and character models other than the target character model among the character models displayed in the user interface are deleted.
For example, assuming the user interface simultaneously displays the second character model, the adjusted first character model, and the adjusted third character model, when the player clicks on the second character model, the second character model may remain displayed and the other character models may be deleted.
In some embodiments, a plurality of first slots are provided at intervals on the left side of the first character model adjacent to its head, and a plurality of second slots are provided at intervals on the right side of the first character model adjacent to its head, the first slots and the second slots being configured to receive copied head models. For example, the head of the first character model may be copied in response to a first copy operation on the first character model to obtain a first head model, a display position of the first head model in the user interface may be determined according to the first copy operation, and the first head model may be displayed at that position. For example, the player may long-press the head of the first character model and drag to the right; the first head model is then obtained and displayed in any of the plurality of first slots, for example, in the first slot closest to the first character model. The player may click on the first head model, which may then replace the head model of the first character model and be displayed as its new head model. Whichever head model the first character model currently has can be adjusted. After the head model of the first character model has been adjusted, if the player presses the adjusted head model and slides leftward, the adjusted head model may be copied and displayed in any of the plurality of second slots, for example, in the second slot closest to the first character model. By continually performing copy and adjustment operations in this way, the user interface may simultaneously display a plurality of head models together with the character model containing the current head model; such a user interface may be as shown in fig. 2b.
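The slot behavior described above, where a copied head lands in the closest available slot, can be sketched as a nearest-first search. The list representation (ordered nearest-first, with None marking an empty slot) is an assumption for illustration.

```python
def place_head_in_slot(slots, head_model):
    """Display a copied head model in the empty slot closest to the
    character (slots ordered nearest-first; None marks an empty slot).
    Returns the slot index used, or -1 if every slot is occupied."""
    for index, occupant in enumerate(slots):
        if occupant is None:
            slots[index] = head_model
            return index
    return -1
```

Successive copies fill slots outward from the character, matching the "closest to the first character model" example in the text.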
Wherein, in the embodiments of the present application, "a plurality" means "two" or "more than two".
In the embodiment of the application, a first character model of a virtual character is displayed in a user interface; responsive to a first copy operation for the first character model, copying the first character model to obtain a second character model; simultaneously displaying the first character model and the second character model at the user interface; in response to model adjustment operations for the first character model and/or the second character model, the first character model and/or the second character model are adjusted so that at least two different character models of the virtual character can be presented to the player simultaneously, thereby enabling the player to select a favorite character model, namely an ideal character model, from the at least two different character models.
To facilitate implementation of the model processing method provided by the embodiments of the application, an apparatus based on the model processing method is also provided. Terms herein have the same meanings as in the model processing method described above; for implementation details, refer to the description of the method embodiments.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a model processing device according to an embodiment of the application, wherein the model processing device 300 may include a first display module 301, a copy module 302, a second display module 303, an adjustment module 304, and the like.
A first display module 301 for displaying a first character model of the virtual character in the user interface.
A replication module 302 is configured to replicate the first character model to obtain a second character model in response to a first replication operation for the first character model.
A second display module 303 for simultaneously displaying the first character model and the second character model in the user interface.
An adjustment module 304 for adjusting the first character model and/or the second character model in response to a model adjustment operation for the first character model and/or the second character model.
In some embodiments, the replication module 302 may be configured to: determining a first display location of the second character model in the user interface in accordance with the first copy operation;
The second display module 303 may be configured to: the first character model is displayed at the user interface while the second character model is displayed at the first display position.
In some embodiments, the first copy operation includes a copy sub-operation and a position-determination sub-operation, and the replication module 302 may be configured to: copy the first character model to obtain the second character model in response to the copy sub-operation; and determine the first display position of the second character model in the user interface according to the position-determination sub-operation.
In some embodiments, the replication module 302 may be configured to: determine that the copy sub-operation has been received when a pressing operation whose pressing duration is longer than a first preset duration is received; determine that the position-determination sub-operation has been received when a sliding operation taking the pressing point of the copy sub-operation as its sliding start point is received; and determine the position of the sliding end point of the position-determination sub-operation in the user interface as the first display position.
In some embodiments, the replication module 302 may be configured to: when a second pressing operation with the pressing time longer than a second preset time is received, determining that the first copying operation is received; determining a pressing duration of the first copy operation; and determining a first display position of the second character model in the user interface according to the pressing time of the first copying operation.
In some embodiments, the copying module 302 may be further configured to: in response to a second copy operation for the adjusted first character model and/or the adjusted second character model, copy the adjusted first character model and/or the adjusted second character model to obtain a third character model and/or a fourth character model.
The adjustment module 304 may be further configured to: adjust the adjusted first character model, the adjusted second character model, the third character model, and/or the fourth character model in response to an adjustment operation for the corresponding character model.
In some embodiments, the adjustment module 304 may be configured to: determine a target character model from the character models displayed in the user interface in response to a model determination operation; and retain and display the target character model while deleting the other character models displayed in the user interface.
In some embodiments, the adjustment module 304 may be configured to: delete the corresponding character model from the character models displayed in the user interface in response to a model deletion operation.
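The retain-target and delete behaviors of the adjustment module can be sketched over a simple model registry. The registry and its method names are illustrative assumptions; the patent only describes the observable behavior (keep the chosen target, remove the rest, or remove individually selected models):

```python
class CharacterModelRegistry:
    """Sketch of the displayed character models and the adjustment
    module's retain/delete operations described above."""

    def __init__(self):
        self.models = {}  # model_id -> model data currently shown in the UI

    def add(self, model_id, model):
        self.models[model_id] = model

    def confirm_target(self, target_id):
        # Model determination operation: keep only the target model
        # and delete every other displayed character model.
        target = self.models[target_id]
        self.models = {target_id: target}
        return target

    def delete(self, model_id):
        # Model deletion operation: remove one displayed model.
        self.models.pop(model_id, None)
```

After the player confirms a target, only that model remains on screen, which matches the "ideal character model" selection described in the summary below.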
As can be seen from the foregoing, in the embodiments of the present application, the first display module 301 displays a first character model of the virtual character in the user interface; the copying module 302 copies the first character model to obtain a second character model in response to a first copy operation for the first character model; the second display module 303 displays the first character model and the second character model simultaneously in the user interface; and the adjustment module 304 adjusts the first character model and/or the second character model in response to a model adjustment operation for the first character model and/or the second character model. In this way, at least two different character models of the virtual character can be presented to the player simultaneously, allowing the player to select a favorite character model, i.e., the ideal character model, from among them.
For the specific implementation of each of the above operations, reference may be made to the foregoing embodiments, which are not repeated herein.
Correspondingly, an embodiment of the present application further provides a computer device, which may be a terminal or a server. The terminal may be a smartphone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC), a personal digital assistant (PDA), or another terminal device. Fig. 4 is a schematic structural diagram of the computer device according to an embodiment of the present application. The computer device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the computer device structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The processor 401 is the control center of the computer device 400. It connects the various parts of the computer device 400 using various interfaces and lines, and performs the functions of the computer device 400 and processes data by running or loading the software programs and/or modules stored in the memory 402 and invoking the data stored in the memory 402, thereby monitoring the computer device 400 as a whole.
In the embodiment of the present application, the processor 401 in the computer device 400 loads the instructions corresponding to the processes of one or more application programs into the memory 402, and executes the application programs stored in the memory 402, so as to implement the following functions:
Displaying a first character model of the virtual character in a user interface;
Responsive to a first copy operation for the first character model, copying the first character model to obtain a second character model;
Simultaneously displaying the first character model and the second character model in the user interface;
Adjusting the first character model and/or the second character model in response to a model adjustment operation for the first character model and/or the second character model.
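The four functions above can be sketched end to end as follows. This is a minimal illustration only, not the disclosed implementation; the class and method names, and the use of dictionaries as stand-ins for character models, are assumptions of this sketch:

```python
import copy

class ModelProcessor:
    """Illustrative sketch: display a model, copy it, show both,
    and adjust either independently."""

    def __init__(self, first_model):
        # Display the first character model of the virtual character.
        self.displayed = [first_model]

    def copy_model(self, index=0):
        # First copy operation: duplicate an existing model so that
        # the original and the copy are displayed side by side.
        duplicate = copy.deepcopy(self.displayed[index])
        self.displayed.append(duplicate)
        return duplicate

    def adjust(self, index, **changes):
        # Model adjustment operation: tweak one displayed model
        # without affecting the others.
        self.displayed[index].update(changes)
        return self.displayed[index]
```

Deep-copying the model is what lets the two variants diverge: after `copy_model`, adjusting the copy leaves the original character model untouched, so the player can compare them directly.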
For the specific implementation of each of the above operations, reference may be made to the foregoing embodiments, which are not repeated herein.
Optionally, as shown in fig. 4, the computer device 400 further includes: a touch display screen 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406, and a power supply 407. The processor 401 is electrically connected to the touch display screen 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407, respectively. Those skilled in the art will appreciate that the computer device structure shown in fig. 4 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The touch display screen 403 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations performed by the user on or near it (such as operations performed on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions, which in turn execute the corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 401, and can also receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 401 to determine the type of touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to realize the input and output functions.
In other embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively; that is, the touch display screen 403 may also implement an input function as part of the input unit 406.
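The touch pipeline described above, in which the detection device reports raw signals, the controller converts them into touch point coordinates, and the processor-side handler decides the event type, can be sketched as follows. All names and the row/column signal format are illustrative assumptions, not part of the disclosed hardware:

```python
class TouchController:
    """Sketch of the touch controller: convert raw detection signals
    into touch point coordinates and forward them to the
    processor-side handler."""

    def __init__(self, handler):
        self.handler = handler  # processor-side touch event handler

    def on_raw_signal(self, raw):
        # Convert the raw sensor grid reading into screen coordinates.
        coords = (raw["col"] * raw["pitch_x"], raw["row"] * raw["pitch_y"])
        # The handler determines the touch event type and drives the
        # corresponding visual output on the display panel.
        return self.handler(coords)
```

A handler registered with this controller would, for instance, classify the coordinates as a press, slide, or release and feed them into the copy-gesture logic described in the earlier embodiments.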
In an embodiment of the present application, the game application executed by the processor 401 generates a user interface, i.e., a graphical user interface, on the touch display screen 403, where the virtual environment contains scene resource objects. The touch display 403 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.
The radio frequency circuit 404 may be used to transmit and receive radio frequency signals, so as to establish wireless communication with a network device or another computer device.
The audio circuit 405 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On one hand, the audio circuit 405 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 405 receives and converts into audio data. The audio data is processed by the processor 401 and then sent, for example, via the radio frequency circuit 404 to another computer device, or output to the memory 402 for further processing. The audio circuit 405 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the computer device 400. Alternatively, the power supply 407 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system. The power supply 407 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 4, the computer device 400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., and will not be described herein.
Each of the foregoing embodiments is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
As can be seen from the above, the computer device provided in this embodiment displays a first character model of a virtual character in a user interface; copies the first character model to obtain a second character model in response to a first copy operation for the first character model; displays the first character model and the second character model simultaneously in the user interface; and adjusts the first character model and/or the second character model in response to a model adjustment operation for the first character model and/or the second character model. In this way, at least two different character models of the virtual character can be presented to the player simultaneously, allowing the player to select a favorite character model, i.e., the ideal character model, from the at least two different character models.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium storing a plurality of computer programs that can be loaded by a processor to perform the steps in any of the model processing methods provided by the embodiments of the present application. For example, the computer program may perform the following steps:
Displaying a first character model of the virtual character in a user interface; in response to a first copy operation for the first character model, copying the first character model to obtain a second character model; simultaneously displaying the first character model and the second character model in the user interface; and adjusting the first character model and/or the second character model in response to a model adjustment operation for the first character model and/or the second character model.
For the specific implementation of each of the above operations, reference may be made to the foregoing embodiments, which are not repeated herein.
The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Since the computer program stored in the storage medium can execute the steps of any model processing method provided by the embodiments of the present application, it can achieve the beneficial effects of any model processing method provided by the embodiments of the present application; for details, see the foregoing embodiments, which are not repeated herein.
The model processing method, apparatus, storage medium, and computer device provided by the embodiments of the present application have been described in detail above. Specific examples have been used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope in light of the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (10)

1. A model processing method, comprising:
Displaying a first character model of the virtual character in a user interface;
Responsive to a first copy operation for the first character model, copying the first character model to obtain a second character model;
Simultaneously displaying the first character model and the second character model in the user interface;
adjusting the first character model and/or the second character model in response to a model adjustment operation for the first character model and/or the second character model;
responsive to a second copying operation for the adjusted first character model and/or the adjusted second character model, copying the adjusted first character model and/or the adjusted second character model to obtain a third character model and/or a fourth character model;
adjusting the adjusted first character model, the adjusted second character model, the third character model, and/or the fourth character model in response to an adjustment operation for the adjusted first character model, the adjusted second character model, the third character model, and/or the fourth character model.
2. The model processing method of claim 1, wherein after the copying the first character model to obtain the second character model in response to the first copy operation for the first character model, the method further comprises:
determining a first display position of the second character model in the user interface according to the first copy operation;
and the simultaneously displaying the first character model and the second character model in the user interface comprises:
displaying the first character model in the user interface while displaying the second character model at the first display position.
3. The model processing method of claim 2, wherein the first copy operation includes a copy sub-operation and a position determination sub-operation, and the copying the first character model to obtain the second character model in response to the first copy operation for the first character model comprises:
in response to the copy sub-operation, copying the first character model to obtain the second character model;
and the determining a first display position of the second character model in the user interface according to the first copy operation comprises:
determining the first display position of the second character model in the user interface according to the position determination sub-operation.
4. A model processing method according to claim 3, characterized in that the method further comprises:
when a pressing operation whose pressing duration exceeds a first preset duration is received, determining that the copy sub-operation is received;
when a sliding operation taking the pressing point of the copy sub-operation as its sliding start point is received, determining that the position determination sub-operation is received;
and the determining the first display position of the second character model in the user interface according to the position determination sub-operation comprises:
determining the position of the sliding end point of the position determination sub-operation in the user interface as the first display position.
5. The model processing method according to claim 2, characterized in that the method further comprises:
when a second pressing operation whose pressing duration exceeds a second preset duration is received, determining that the first copy operation is received;
The determining a first display position of the second character model in the user interface according to the first copy operation includes:
determining the pressing duration of the first copy operation;
and determining the first display position of the second character model in the user interface according to the pressing duration of the first copy operation.
6. The model processing method according to claim 1, wherein after the adjusting the adjusted first character model, the adjusted second character model, the third character model, and/or the fourth character model in response to the adjustment operation therefor, the method further comprises:
Determining a target character model from character models displayed in the user interface in response to a model determination operation;
And displaying the target character model in a reserved manner, and deleting the character models except the target character model in the character models displayed in the user interface.
7. The model processing method according to claim 1, wherein after the adjusting the adjusted first character model, the adjusted second character model, the third character model, and/or the fourth character model in response to the adjustment operation therefor, the method further comprises:
And deleting the corresponding character model in the character models displayed by the user interface in response to the model deleting operation.
8. A model processing apparatus, comprising:
A first display module for displaying a first character model of the virtual character in the user interface;
a copying module for copying the first character model to obtain a second character model in response to a first copy operation for the first character model;
A second display module for simultaneously displaying the first character model and the second character model in the user interface;
An adjustment module for adjusting the first character model and/or the second character model in response to a model adjustment operation for the first character model and/or the second character model;
the copying module being further configured to copy the adjusted first character model and/or the adjusted second character model to obtain a third character model and/or a fourth character model in response to a second copy operation for the adjusted first character model and/or the adjusted second character model;
and the adjustment module being further configured to adjust the adjusted first character model, the adjusted second character model, the third character model, and/or the fourth character model in response to an adjustment operation for the adjusted first character model, the adjusted second character model, the third character model, and/or the fourth character model.
9. A computer readable storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor for executing the steps in the model processing method according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the model processing method according to any one of claims 1 to 7 when the program is executed.
CN202110819172.1A 2021-07-20 2021-07-20 Model processing method, device, storage medium and computer equipment Active CN113350801B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110819172.1A CN113350801B (en) 2021-07-20 2021-07-20 Model processing method, device, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110819172.1A CN113350801B (en) 2021-07-20 2021-07-20 Model processing method, device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN113350801A CN113350801A (en) 2021-09-07
CN113350801B true CN113350801B (en) 2024-07-02

Family

ID=77539955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110819172.1A Active CN113350801B (en) 2021-07-20 2021-07-20 Model processing method, device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN113350801B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117008769A (en) * 2022-08-23 2023-11-07 腾讯科技(成都)有限公司 Virtual character state setting method and device, storage medium and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136228A (en) * 2019-05-16 2019-08-16 腾讯科技(深圳)有限公司 Face replacement method, apparatus, terminal and the storage medium of virtual role

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110170166B (en) * 2015-08-24 2023-04-07 鲸彩在线科技(大连)有限公司 Game data generating and uploading method and device
US10953334B2 (en) * 2019-03-27 2021-03-23 Electronic Arts Inc. Virtual character generation from image or video data
CN110689604B (en) * 2019-05-10 2023-03-10 腾讯科技(深圳)有限公司 Personalized face model display method, device, equipment and storage medium
CN110766777B (en) * 2019-10-31 2023-09-29 北京字节跳动网络技术有限公司 Method and device for generating virtual image, electronic equipment and storage medium
CN110991325A (en) * 2019-11-29 2020-04-10 腾讯科技(深圳)有限公司 Model training method, image recognition method and related device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136228A (en) * 2019-05-16 2019-08-16 腾讯科技(深圳)有限公司 Face replacement method, apparatus, terminal and the storage medium of virtual role

Also Published As

Publication number Publication date
CN113350801A (en) 2021-09-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant