WO2023077965A1 - Method, apparatus, terminal and storage medium for editing the appearance of a virtual pet - Google Patents
Method, apparatus, terminal and storage medium for editing the appearance of a virtual pet
- Publication number
- WO2023077965A1 (PCT/CN2022/118479)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mask
- target
- appearance
- texture
- virtual pet
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/63—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/30—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
- A63F2300/308—Details of the user interface
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
Definitions
- the embodiments of the present application relate to the field of human-computer interaction, and in particular to a method, device, terminal and storage medium for editing the appearance of a virtual pet.
- the embodiment of the present application provides a method, device, terminal and storage medium for editing the appearance of a virtual pet. Since the target texture map is formed by superimposing a texture map and target mask maps of different levels, different target texture maps can be generated by adjusting the target mask maps, which improves the freedom of editing the appearance of virtual pets and enriches the appearance characteristics of virtual pets. The technical scheme is as follows:
- the embodiment of the present application provides a method for editing the appearance of a virtual pet, the method comprising:
- the terminal obtains a texture of the virtual pet, and the texture represents the appearance characteristics of the virtual pet;
- the terminal generates, in response to an editing operation on a target appearance feature, an edited target mask map corresponding to the target appearance feature, where the target appearance feature is an appearance feature other than the appearance feature represented by the texture map;
- the terminal superimposes the texture map and at least one layer of the target mask texture to obtain a target texture map, wherein target mask textures of different levels correspond to different target appearance features;
- the terminal applies the target map to the three-dimensional model of the virtual pet.
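The four claimed steps (obtain texture map, generate edited mask maps, superimpose in level order, apply the result) can be sketched as a tiny compositing pipeline. This is a minimal illustration only, not the patent's actual implementation; all names such as `superimpose` and the pixel-dict representation are hypothetical.

```python
# Illustrative sketch of the claimed pipeline: textures are modeled as
# dicts mapping pixel coordinates to colors; each mask map carries only
# its masked-area pixels, so non-masked pixels let lower levels show.
def superimpose(base, mask_maps):
    """Overlay mask maps onto the base texture in ascending level order."""
    result = dict(base)
    for _, mask in sorted(mask_maps, key=lambda m: m[0]):
        result.update(mask)  # masked area hides the lower levels
    return result

# Hypothetical data: a white base coat plus two edited mask maps.
base = {(x, y): "white" for x in range(4) for y in range(4)}
spots = (1, {(0, 0): "black", (1, 0): "black"})   # level-1 mask map
gloves = (2, {(3, 3): "gray"})                    # level-2 mask map

target = superimpose(base, [spots, gloves])       # the target texture map
assert target[(0, 0)] == "black"   # masked area shows the mask color
assert target[(2, 2)] == "white"   # non-masked area shows the base texture
```

In a real engine the final `target` dict would correspond to the target map applied to the 3D model's UV surface.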
- an embodiment of the present application provides a method for editing the appearance of a virtual pet, the method comprising:
- the terminal displays an appearance editing interface, and the appearance editing interface includes a first appearance editing control and a second appearance editing control;
- the terminal updates the appearance characteristics of the virtual pet in response to a trigger operation on the first appearance editing control;
- the terminal updates the target appearance feature of the virtual pet in response to the trigger operation on the second appearance editing control, and the target appearance feature is an appearance feature other than the appearance feature controlled by the first appearance editing control;
- the terminal displays the edited virtual pet in the appearance editing interface, wherein the edited virtual pet is a virtual pet model obtained by combining the appearance feature and the target appearance feature.
- the embodiment of the present application provides a device for editing the appearance of a virtual pet, and the device includes:
- An acquisition module configured to acquire a texture of a virtual pet, the texture representing the appearance characteristics of the virtual pet
- an editing module configured to generate, in response to an editing operation on the target appearance feature, a target mask map corresponding to the edited target appearance feature, the target appearance feature being an appearance feature other than the appearance feature represented by the texture map;
- an overlay module configured to overlay the texture and at least one layer of the target mask texture to obtain a target texture, wherein target mask textures of different levels correspond to different target appearance features
- An application module configured to apply the target texture to the 3D model of the virtual pet.
- the embodiment of the present application provides a device for editing the appearance of a virtual pet, and the device includes:
- a display module configured to display an appearance editing interface, where the appearance editing interface includes a first appearance editing control and a second appearance editing control;
- a first updating module configured to update the appearance characteristics of the virtual pet in response to a trigger operation on the first appearance editing control
- a second updating module configured to update the target appearance feature of the virtual pet in response to a trigger operation on the second appearance editing control, the target appearance feature being an appearance feature other than the appearance feature controlled by the first appearance editing control;
- a display module configured to display the edited virtual pet in the appearance editing interface, wherein the edited virtual pet is a virtual pet model obtained by combining the appearance features and the target appearance features.
- an embodiment of the present application provides a terminal, the terminal includes a processor and a memory, the memory stores at least one instruction, at least one program, a code set or an instruction set, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to implement the method for editing the appearance of a virtual pet described in the above aspect.
- an embodiment of the present application provides a computer-readable storage medium, where at least one instruction, at least one program, a code set or an instruction set is stored in the computer-readable storage medium, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement the method for editing the appearance of a virtual pet described in the above aspects.
- an embodiment of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
- the processor of the terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal executes the method for editing the appearance of a virtual pet provided in various optional implementation manners of the above aspect.
- the texture represents the appearance characteristics of the virtual pet
- the target mask texture represents the target appearance characteristics
- the target appearance characteristics are appearance characteristics other than the appearance characteristics represented by the texture
- the target texture map is formed by superimposing the texture map and target mask maps of different levels
- different target textures can be generated by adjusting the target mask texture.
- this avoids creating a matching texture map for each additional virtual pet.
- a variety of virtual pets with different appearance characteristics can be generated through limited art resources, which improves the freedom of editing the appearance of virtual pets and enriches the appearance characteristics of virtual pets.
- FIG. 1 shows a schematic diagram of a masked area and a non-masked area provided by an exemplary embodiment of the present application
- Fig. 2 shows a schematic diagram of the virtual pet texture map generation process provided by an exemplary embodiment of the present application
- FIG. 3 shows a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application
- Fig. 4 shows the flowchart of the method for editing the appearance of a virtual pet provided by an exemplary embodiment of the present application
- Fig. 5 shows a schematic interface diagram of a method for editing the appearance of a virtual pet provided by an exemplary embodiment of the present application
- FIG. 6 shows a schematic diagram of overlaying a texture map and a target mask texture provided by an exemplary embodiment of the present application
- FIG. 7 shows a flow chart of a method for editing the appearance of a virtual pet provided in another exemplary embodiment of the present application.
- FIG. 8 shows a schematic diagram of a key color mask texture provided by an exemplary embodiment of the present application.
- Fig. 9 shows a schematic diagram of a pattern color mask map provided by an exemplary embodiment of the present application.
- Fig. 10 shows a schematic diagram of a glove color mask map provided by an exemplary embodiment of the present application
- FIG. 11 shows a schematic diagram of superposition of target mask textures at different levels provided by an exemplary embodiment of the present application
- Fig. 12 shows a flowchart of a method for adjusting mask color provided by an exemplary embodiment of the present application
- FIG. 13 shows a flow chart of a method for adjusting a mask range provided by an exemplary embodiment of the present application
- Fig. 14 shows a schematic diagram of the process of adjusting the mask range provided by an exemplary embodiment of the present application
- Fig. 15 shows a flowchart of a method for adjusting the gradient range provided by an exemplary embodiment of the present application
- Fig. 16 shows a schematic diagram of the process of adjusting the gradient range provided by an exemplary embodiment of the present application
- Fig. 17 shows a schematic diagram of the process of adjusting the mask position provided by an exemplary embodiment of the present application
- Fig. 18 shows a flowchart of a method for editing the appearance of a virtual pet provided by another exemplary embodiment of the present application
- Fig. 19 shows a schematic diagram of a virtual pet editing interface provided by an exemplary embodiment of the present application.
- Fig. 20 shows a schematic diagram of an interface for adjusting the nose and mouth of a virtual pet provided by an exemplary embodiment of the present application
- Fig. 21 shows a schematic diagram of a virtual pet ear model provided by an exemplary embodiment of the present application
- Fig. 22 shows a schematic diagram of adjusting the eyes of a virtual pet provided by an exemplary embodiment of the present application
- Fig. 23 shows a schematic diagram of a virtual pet decorative decal map provided by an exemplary embodiment of the present application.
- Fig. 24 shows a structural block diagram of a virtual pet appearance editing device provided by an exemplary embodiment of the present application
- Fig. 25 shows a structural block diagram of an apparatus for editing the appearance of a virtual pet provided by another exemplary embodiment of the present application.
- Fig. 26 shows a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
- the "plurality" mentioned herein means two or more.
- "And/or" describes the association relationship of associated objects, indicating that three types of relationships may exist; for example, "A and/or B" may indicate: A exists alone, A and B exist simultaneously, or B exists alone.
- the character "/" generally indicates an "or" relationship between the contextual objects.
- Virtual pet: a digital pet presented as a pet image in the form of a cartoon and/or animal.
- the virtual pet is a two-dimensional digital pet or a three-dimensional digital pet.
- the virtual pet is a three-dimensional virtual pet presented in the image of a cat.
- some pet images of virtual pets are randomly generated, and optionally, some are generated according to the rules of genetic inheritance from the pet images of the parents' virtual pets and/or the grandparents' virtual pets, which is not limited in the embodiments of the present application.
- the virtual pet is a digital pet displayed by an application program running on the terminal.
- the application includes at least one of the following functions: catching virtual pets, generating virtual pets, breeding virtual pets, trading virtual pets, fighting with virtual pets, using virtual pets for augmented reality (AR) interaction, using virtual pets for socializing, and AR education with virtual pets.
- the application is an application for acquiring, breeding and/or trading virtual pets based on the blockchain system.
- the application program is a geographic location-based social game program, and the social game program provides at least one function of collecting, growing and/or fighting with virtual pets.
- Appearance features refer to the features that embody the pet image of the virtual pet.
- the appearance characteristics of the virtual pet involve different body parts such as hair, markings, eyes, nose, mouth, whiskers and ears, and each body part may have many different appearance characteristics.
- the above-mentioned appearance features may also include visible features such as color, shape, and texture.
- for example, the hair can be white, gray, black, yellow, orange, etc., and the ears can have different shapes such as rolled ears, folded ears and normal ears.
- the size, height and position of some appearance features can also be adjusted to present different appearance characteristics; for example, the size and height of the nose can be adjusted, the height of the mouth and the opening and closing of the mouth flap can be adjusted, and the size of the ears can be adjusted, which is not limited in this embodiment of the present application.
- Texture refers to the layer attached to the surface of the virtual pet model to form the appearance characteristics of the virtual pet. Different textures represent different appearance characteristics, and multiple textures are superimposed in hierarchical order and applied to the virtual pet model to generate virtual pets with different appearance characteristics.
- the texture may be a basic texture or a mask texture.
- the base texture refers to the texture with the lowest level
- the mask texture is a texture with a higher level than the base texture, which can be used to hide or display the content of the layer lower than its level.
- the basic texture 10 and the mask texture 11 are superimposed in hierarchical order and then applied to the virtual pet model to generate the virtual pet cat 12 .
- Mask map: a map used to mask part of the underlying map (either a base map or another mask map) during the stacking process, including a masked area and a non-masked area. After the mask map is superimposed on top of other maps, the areas of those maps corresponding to the masked area of the mask map are blocked, while the areas corresponding to the non-masked area are not blocked and can be observed through the mask map.
- the mask map 11 includes a masked area 112 and a non-masked area 111 .
- the level of the mask texture 11 is higher than that of the basic texture 10, so the content of the basic texture 10 corresponding to the masked area 112 is blocked, while the content corresponding to the non-masked area 111 is not blocked and can be observed through the mask texture 11, finally presenting the appearance characteristics of the virtual pet cat 12.
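The masked/non-masked behavior described above can be expressed per pixel: where the mask is set, the mask color is shown; elsewhere the lower-level texture shows through. The following is an illustrative sketch under that assumption, not the patent's implementation.

```python
# A binary mask row: 1 = masked area (mask color shown),
# 0 = non-masked area (base texture visible through the mask).
def composite(base_row, mask_row, mask_color):
    return [mask_color if m == 1 else b
            for b, m in zip(base_row, mask_row)]

base_row = ["white", "white", "white", "white"]
mask_row = [0, 1, 1, 0]   # the middle two pixels fall in the masked area

result = composite(base_row, mask_row, "black")
assert result == ["white", "black", "black", "white"]
```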
- in the related art, the user can often only select the virtual pet from among multiple preset models in the application, or select individual appearance characteristics of the virtual pet.
- the reason is that the model of the virtual pet is produced by replacing the model or texture map as a whole, that is, one texture map includes multiple appearance characteristics of the virtual pet. Therefore, for each additional appearance feature of a virtual pet, a texture map needs to be created. For example, if 20 kinds of virtual pets with different appearance characteristics need to be presented in the application, 20 corresponding texture maps need to be made, and users can only choose among these 20 kinds of virtual pets. That is to say, there is a 1:1 quantitative relationship between the freedom of customizing the appearance of virtual pets in the game and the amount of art resources (i.e. texture maps), and the appearance characteristics of virtual pets that users can choose are limited.
- the target map is formed by superimposing one texture map and at least one mask map; some appearance features are controlled through the texture map, while appearance features other than those controlled by the texture map are adjusted through different mask maps, so as to realize a variety of virtual pet appearances.
- target mask maps of different levels represent different appearance characteristics of virtual pets, and users can customize the appearance characteristics of virtual pets by editing the target mask maps. Therefore, it is not necessary to make a new map every time an appearance feature of a virtual pet is added.
- multiple target mask maps can be superimposed on one texture map, and the appearance characteristics of the virtual pet can be changed by changing the target mask maps, which enriches the appearance characteristics of virtual pets.
- if the application program needs to present the white cat 201, the black and white cat 202 and the black and white cat 203, in the related art it is necessary to create the white cat texture map 204, the black and white cat texture map 205 and the black and white cat texture map 206 respectively. If cats with other appearance characteristics are needed, such as orange cats, a corresponding orange cat texture map still needs to be created.
- the white cat 201 , the black and white cat 202 and the black and white cat 203 can be presented through the combination and superposition of the texture map 207 , the first target mask texture 208 and the second target mask texture 209 .
- the white cat 201 is generated by the texture 207
- the black and white cat 202 is formed by superimposing the texture 207 and the first target mask texture 208
- the black and white cat 203 is formed by the texture 207, the first target mask texture 208 and the second target mask texture 209 superimposed.
- the texture 207 can be a basic texture, that is, the texture with the lowest level.
- the appearance characteristic represented by the texture 207 is the common appearance characteristic of any virtual pet cat, so the texture 207 can be used to make any virtual pet cat.
- the appearance features represented by the first target mask map 208 are different from the appearance features represented by the second target mask map.
- the appearance characteristic represented by the texture map 207 is white hair
- the appearance characteristic represented by the first target mask map 208 is black hair
- the appearance characteristic represented by the second target mask map 209 is pattern-colored hair
- the white cat 201 only has the appearance features represented by the texture map 207, and does not have the appearance characteristics represented by the first target mask map 208 or the second target mask map 209, so the white cat 201 is generated by the texture map 207 alone.
- the black and white cat 202 has the appearance characteristics represented by the texture map 207 and the first target mask texture 208 , so the black and white cat 202 is formed by superimposing the texture map 207 and the first target mask texture 208 .
- the black-and-white cat 203 has the appearance characteristics represented by the texture map 207, the first target mask map 208 and the second target mask map 209, so the black-and-white cat 203 is formed by superimposing the texture map 207, the first target mask map 208 and the second target mask map 209.
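The example of Fig. 2 shows why layered masks save art resources: one base texture plus k optional mask maps can yield up to 2**k distinct appearances, instead of one hand-made texture map per appearance. A hypothetical sketch (the string labels stand in for the actual map assets):

```python
from itertools import combinations

texture_207 = "white coat"            # base texture map
mask_208 = "black patches"            # first target mask map
mask_209 = "pattern-colored patches"  # second target mask map

def appearances(base, masks):
    """Every appearance reachable by toggling each mask map on or off."""
    combos = []
    for r in range(len(masks) + 1):
        for subset in combinations(masks, r):
            combos.append((base,) + subset)
    return combos

all_cats = appearances(texture_207, [mask_208, mask_209])
assert len(all_cats) == 4           # 2**2 combinations of two mask maps
assert (texture_207,) in all_cats   # white cat 201: base texture alone
```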
- the virtual pet generated by the method for editing the appearance of the virtual pet provided in the embodiment of the present application can be applied to different application programs, and the application programs support editing operations on the virtual pet.
- the application program may be a game application program, a social application program, etc., which are not limited in this embodiment of the present application.
- the implementation environment of the embodiment of the present application will be described below by taking a game application program as an example. Please refer to Fig. 3, which shows a schematic diagram of an implementation environment provided by an embodiment of the present application.
- the implementation environment may include: a terminal 310 and a server 320 .
- the terminal 310 has an application program 311 that supports virtual pet editing installed and running. When the terminal runs the application program 311 , the user interface of the application program 311 is displayed on the screen of the terminal 310 .
- the application program 311 may be any one of a game application program, a social application program, and the like. In the embodiment of the present application, a pet raising game is taken as an example of the application program 311 for illustration.
- Terminal 310 is the terminal used by user 312.
- User 312 uses terminal 310 to customize and generate virtual pets with different appearance characteristics, and controls the virtual pets to breed, complete tasks, fight, etc.
- terminal 310 generally refers to one of multiple terminals.
- the terminal 310 can be at least one of a smart phone, a tablet computer, an e-book reader, a Moving Picture Experts Group Audio Layer III (MP3) player, a Moving Picture Experts Group Audio Layer IV (MP4) player, a laptop computer, and a desktop computer.
- the terminal 310 is connected to the server 320 through a wireless network or a wired network.
- the server 320 includes at least one of a server, a server cluster composed of multiple servers, a cloud computing platform, and a virtualization center.
- the server 320 is used to provide background services for applications supporting virtual pet editing.
- the server 320 undertakes the main calculation work and the terminal 310 undertakes the secondary calculation work; or, the server 320 undertakes the secondary calculation work and the terminal 310 undertakes the main calculation work; or, a distributed computing architecture is adopted between the server 320 and the terminal 310 for collaborative computing.
- the server 320 includes a memory 321, a processor 322, a user account database 323, a material service module 324, and a user-oriented input/output interface (Input/Output Interface, I/O interface) 325.
- the processor 322 is used to load the instructions stored in the server 320, and process the data in the user account database 323 and the material service module 324;
- the user account database 323 is used to store the data of the user account of the terminal 310, such as the avatar of the user account, the nickname of the user account, the level of the user account, and the service area where the user account is located;
- the material service module 324 is used to provide textures and models of different types of virtual pets to support the editing of virtual pets;
- the user-oriented I/O interface 325 is used to establish communication and exchange data with the terminal 310 through a wireless network or a wired network.
- the editing of the virtual pet's appearance can be completed independently by the terminal, by the server, or by cooperation between the terminal and the server, which is not limited in this embodiment of the present application.
- the following embodiments are described by taking the terminal as an example to edit the appearance of a virtual pet.
- FIG. 4 shows a flow chart of a method for editing the appearance of a virtual pet provided by an exemplary embodiment of the present application.
- the method is described by taking its execution by the terminal 310 in the implementation environment shown in FIG. 3 as an example.
- the method includes the following steps:
- Step 410 the terminal obtains the texture of the virtual pet, and the texture represents the appearance characteristics of the virtual pet.
- the terminal displays the appearance editing interface of the virtual pet, and acquires the texture of the virtual pet.
- the texture is a basic texture.
- the basic texture refers to the texture with the lowest level. Other textures (such as mask textures) can be superimposed on the basic texture, and the basic texture cannot be superimposed on other textures.
- the appearance features represented by the basic textures are the basic appearance features.
- the basic appearance features refer to the common appearance features of the same type of virtual pets. For example, the basic appearance features are the hair background color of the virtual pet.
- the appearance characteristics of the virtual pet include body shape characteristics, facial features and hair characteristics.
- the body shape features are the virtual pet's build (fat or thin), head size, cheek width, paw size, tail thickness and length, etc., which are not limited in this embodiment of the present application.
- the facial features are the size and shape of the virtual pet's ears, eyes, nose and mouth, etc., which is not limited in this embodiment of the present application.
- the hair feature is the base color of the hair and the color blocks of the base color of the hair, etc., which is not limited in this embodiment of the present application.
- the texture map of the virtual pet is used to represent the base color of the virtual pet's hair.
- the base color of the hair refers to the basic fur color of the virtual pet.
- the basic coat color is a common coat color of virtual pets of the same type.
- different types of virtual pets may have the same or different hair base colors.
- the types of hair background colors of a virtual pet can be summarized from the hair traits of the corresponding animals in nature. For example, analysis of the coat traits of 15 different breeds of cats, including solid-color cats, blue cats, orange cats, leopard cats, raccoon cats, cheese cats, and Garfield cats, shows that cat coat colors are mainly white, gray, black, and yellow; for details, refer to Table 1, which lists the coat traits of solid-color cats. Therefore, the texture of a virtual pet cat can be of four types: white, gray, yellow, and black.
- since the color types of the texture maps are summarized based on the hair characteristics of corresponding animals in nature, the color types of the texture maps are fixed; that is, the hue of a texture map is fixed, and users obtain different texture variants by adjusting the brightness and saturation of the texture. For example, there are four texture types for a virtual pet cat: white, gray, yellow, and black; by adjusting the brightness and saturation of the textures, users can obtain variants such as milky white, chocolate, light yellow, and dark brown, thereby enriching the appearance characteristics of virtual pets.
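The hue-fixed adjustment described above can be sketched as follows. This is a minimal illustration, not the application's actual implementation; the helper name `adjust_texture_color` and the sample color values are hypothetical, and the standard HSV color model is assumed.

```python
import colorsys

def adjust_texture_color(rgb, saturation_scale, value_scale):
    """Derive a texture color variant by scaling saturation and
    brightness while keeping the hue fixed (hypothetical helper;
    RGB components in the 0..1 range)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = min(max(s * saturation_scale, 0.0), 1.0)  # saturation, clamped
    v = min(max(v * value_scale, 0.0), 1.0)       # brightness, clamped
    return colorsys.hsv_to_rgb(h, s, v)           # hue h is unchanged

# e.g. a yellow base coat desaturated and lightened toward "light yellow"
base_yellow = (0.9, 0.7, 0.2)
light_yellow = adjust_texture_color(base_yellow, 0.5, 1.1)
```

Because only saturation and brightness are scaled, every variant keeps the fixed hue of its base texture, matching the constraint stated above.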
- step 420 the terminal generates a target mask map corresponding to the edited target appearance feature in response to the editing operation on the target appearance feature, and the target appearance feature is an appearance feature other than the appearance feature represented by the map.
- the target appearance feature is an additional feature of the virtual pet, which is other appearance features other than the appearance features represented by the texture map.
- the appearance characteristic represented by the map is the basic appearance characteristic of the virtual pet
- the target appearance characteristic is an appearance characteristic other than the basic appearance characteristic. For example, if the appearance characteristic represented by the texture is the base color of the hair of the virtual pet, then the target appearance feature is the part of the virtual pet that is different from the base color of the hair.
- the target appearance feature is a key color block of the base color of the virtual pet hair, or a pattern color block, a glove color block, etc., which is not limited in this embodiment of the present application.
- the target appearance features are distributed in areas such as the face, back, chest, legs, tail, and limbs of the virtual pet, which is not limited in this embodiment of the present application.
- the target appearance characteristics of different virtual pets may be the same or different.
- the target appearance characteristics of the virtual pet may be summarized based on the hair characteristics of corresponding animals in the natural world.
- the target appearance characteristics of cats can be obtained by analyzing the hair shapes of 15 different breeds of cats, including solid-color cats, blue cats, orange cats, leopard cats, raccoon cats, cheese, and Garfield.
- Table 1 lists the coat traits of solid-color cats. It can be seen from Table 1 that the target appearance characteristics of cats can be accent colors and pattern colors.
- the key color is mainly distributed on the cat's face, ears, body, chest and tail.
- the shape of the key color can be dots or flakes.
- the pattern color is mainly distributed on the cat's body or face, and the shape of the pattern color can be herringbone pattern, classic spot, spot spot, fine line spot and so on.
- the target appearance feature of the virtual pet cat can be a key color block on the hair background color, such as a dot-shaped or flake-shaped block.
- the key color blocks can be distributed on the face, body, ears, tail and other parts of the virtual pet cat.
- the feature can also be a pattern color block of the hair background color, such as herringbone pattern, spots, etc.
- the pattern color blocks can be distributed on the face, back, chest, and other parts of the virtual pet cat; the target appearance feature can also be a glove color block on the hair background color, such as high gloves or low gloves, and the glove color blocks are mainly distributed on the limbs of the virtual pet cat.
- the user performs an editing operation on the target appearance feature on the appearance editing interface of the virtual pet, and the terminal generates a target mask map corresponding to the edited target appearance feature.
- the editing operation on the appearance feature of the target means that the user operates through the editing control on the appearance editing interface of the virtual pet to adjust the type, color, shape, size, etc. of the appearance feature of the target.
- the editing control is a button control, a slider control, or a palette control, etc., which is not limited in this embodiment of the present application.
- the type of the target appearance feature can be selected through the button control 501, the color of the target appearance feature can be selected through the palette control 502, the brightness and saturation of the color of the target appearance feature can be adjusted through the color slider control 503, the size of the target appearance feature can be adjusted through the color block slider control 504, and the edge gradient effect of the target appearance feature can be adjusted through the edge gradient slider control 505.
- the target mask map represents the target appearance characteristics of the virtual pet.
- the target mask map is, for example, a mask (MASK) texture or a matte texture, etc., which is not limited in this embodiment of the present application.
- the number of target mask textures is one or more, which is not limited in this embodiment of the present application.
- step 430 the terminal superimposes the texture map and at least one layer of the target mask texture to obtain the target texture, wherein different layers of the target mask texture correspond to different target appearance features.
- a target texture is divided into multiple layers; that is, the target texture is formed by superimposing the base texture and at least one layer of target mask map. Since target mask maps at different levels correspond to different types of target appearance characteristics, users can select different target mask maps and textures to superimpose according to their preferences, forming different target textures and thus virtual pets with different appearance characteristics.
- in some embodiments, the lower-level target mask maps are superimposed first, and the higher-level target mask maps are superimposed last.
- in other embodiments, the higher-level target mask maps are superimposed first, and the lower-level target mask maps are superimposed last.
- one or more layers of target mask maps can be superimposed on one texture map
- target mask maps at different levels correspond to different target appearance characteristics
- target mask maps at the same level correspond to the same target appearance characteristic.
- the same target appearance feature may have different types, thus corresponding to different target mask maps.
- for example, when the target appearance feature is a pattern color, the shape of the pattern color can be a herringbone pattern, classic spots, dot spots, fine-line spots, etc., and pattern colors of different shapes correspond to different target mask maps.
- the user can perform corresponding editing operations on the target mask textures of different levels, and the target mask textures of different levels do not affect each other.
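The layer-by-layer superposition described above can be sketched as a per-pixel "over" operation. This is a simplified sketch rather than the application's actual renderer; the function name `composite` is hypothetical, each mask layer is assumed to carry a color and a per-pixel alpha (1 = opaque masked area, 0 = transparent non-masked area), and layers are applied lowest level first (one of the two orderings mentioned above).

```python
def composite(base_pixel, mask_layers):
    """Superimpose mask layers onto one base-texture pixel, lowest
    level first. Each layer is ((r, g, b), alpha): opaque mask
    regions cover the layers below, transparent regions let them
    show through."""
    r, g, b = base_pixel
    for (mr, mg, mb), alpha in mask_layers:  # lowest level first
        r = mr * alpha + r * (1 - alpha)
        g = mg * alpha + g * (1 - alpha)
        b = mb * alpha + b * (1 - alpha)
    return (r, g, b)

# a white base coat with a fully opaque black pattern block on top:
white = (1.0, 1.0, 1.0)
result = composite(white, [((0.0, 0.0, 0.0), 1.0)])
```

Because each level only affects pixels where its own alpha is non-zero, editing one level's mask map leaves the other levels unchanged, which matches the independence of levels stated above.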
- exemplarily, the texture map 601, the first target mask map 602, the second target mask map 603, and the third target mask map 604 are superimposed to form the target texture (not shown).
- the appearance feature represented by the texture map 601 is the hair background color
- the appearance feature represented by the first target mask map 602 is a key color block
- the appearance feature represented by the second target mask map 603 is a pattern color block
- the appearance feature represented by the third target mask map 604 is a glove color block.
- Step 440 the terminal applies the target texture to the three-dimensional model of the virtual pet.
- the terminal applies the target texture formed by superimposing the texture 601, the first target mask texture 602, the second target mask texture 603, and the third target mask texture 604 to the virtual pet cat 605.
- the texture represents the appearance characteristics of the virtual pet
- the target mask texture represents the appearance characteristics of the target
- the target appearance characteristics are appearance characteristics other than the appearance characteristics represented by the texture.
- superimposing mask maps generates different target textures for the virtual pet model, and thus virtual pets with different appearance characteristics. Since the target texture is formed by superimposing the base texture and target mask maps at different levels, different target textures can be generated simply by adjusting the target mask maps. On the one hand, this avoids creating a dedicated texture for each additional virtual pet; on the other hand, a variety of virtual pets with different appearance characteristics can be generated from limited art resources, which improves the freedom of editing the appearance of virtual pets and enriches their appearance characteristics.
- the mask color, mask range, gradient range, and mask position of the mask map are adjusted through editing operations, so as to realize different types of target mask maps, thereby enriching the appearance characteristics of the virtual pet.
- if the size of the target mask map is inconsistent with that of the texture map, the size of the target mask map needs to be adjusted to fit the texture map before the target texture is obtained.
- FIG. 7 shows a flowchart of a method for editing the appearance of a virtual pet provided by another exemplary embodiment of the present application.
- Step 710 the terminal obtains the texture of the virtual pet, and the texture represents the appearance characteristics of the virtual pet.
- Step 710 is the same as step 410, and will not be described in detail in this embodiment of the present application.
- step 720 the terminal acquires a mask map corresponding to the appearance feature of the target in response to an editing operation on the appearance feature of the target.
- the terminal acquires a mask map corresponding to the target appearance feature.
- the mask map refers to the mask map corresponding to the appearance characteristics of the current target.
- the terminal when the terminal displays the appearance editing interface of the virtual pet for the first time, and the user edits the target appearance feature on the appearance editing interface for the first time, the terminal obtains the mask texture corresponding to the target appearance feature that has not been edited.
- the terminal when the user finishes editing the appearance feature of the target and exits the application program halfway, the terminal automatically saves the editing state of the appearance feature of the target and the corresponding mask texture before exiting the application program.
- when the user enters the application program again, the user continues editing based on the state of the target appearance feature at the time of last exiting the game, and the terminal obtains the mask map corresponding to the target appearance feature as edited before the last exit.
- Step 730 the terminal adjusts the mask map based on the editing operation to obtain the target mask map, wherein the adjustment of the mask map includes at least one of adjusting the mask color, adjusting the mask range, adjusting the gradient range, or adjusting the mask position.
- the mask map includes a mask area and a non-masking area.
- the map content below the masked area is blocked, and the map content below the non-masked area is not blocked, that is, it can be observed through the target mask map.
- the masked area is an opaque area
- the non-masked area is a transparent area.
- the terminal obtains the target mask texture by adjusting the color of the mask region of the mask texture.
- the user selects a color through the editing control on the appearance editing interface of the virtual pet, and adjusts the color of the target appearance feature. Based on the editing operation, the terminal adjusts the color of the mask texture to obtain the target mask texture.
- the terminal determines the hue selected by the user through the palette control, that is, the color category, such as red, white, orange, etc., and then adjusts the color of the mask texture to obtain the target mask texture.
- the terminal determines the brightness and saturation of the color adjusted by the user through the slider control, that is, the depth of the color, for example, pure red, dark red, light red, etc., and then adjusts the color of the mask map to obtain the target mask map.
- the user selects the hue of the glove color block through the palette control 502 , and selects the saturation and brightness of the glove color block through the color slider control 503 , the terminal adjusts the mask texture based on the editing operation, and then obtains the target mask texture.
- the terminal obtains the target mask texture by adjusting the range of the mask area of the mask texture.
- the user adjusts the size of the target appearance feature through the editing control on the appearance editing interface of the virtual pet, and the terminal adjusts the mask range of the mask texture based on the editing operation, thereby obtaining the target mask texture.
- the user adjusts the size of the glove color block through the color block slider control 504, and the terminal adjusts the mask map based on the editing operation, and then obtains the target mask map.
- the terminal obtains the target mask texture by adjusting the gradient range of the mask area of the mask texture.
- the user adjusts the edge gradient of the target appearance feature through the editing control on the appearance editing interface of the virtual pet. Based on the editing operation, the terminal adjusts the gradient range of the mask texture to obtain the target mask texture.
- the user adjusts the edge gradient effect of the glove color block through the edge gradient slider control 505, and the terminal adjusts the mask map based on the editing operation, and then Get the target mask map.
- the terminal obtains the target mask texture by adjusting a position of a mask area of the mask texture.
- the user adjusts the position of the target appearance feature through the editing control on the appearance editing interface of the virtual pet.
- the terminal adjusts the mask position of the mask map to obtain the target mask map.
- Step 740 if the size of the target mask texture is different from that of the texture, the terminal scales the target mask texture based on the size of the texture, wherein the size of the target mask texture after scaling matches the size of the texture.
- in some embodiments, matching the size of the target mask map with the size of the texture map means that the size of the target mask map is the same as the size of the texture map.
- in other embodiments, matching the size of the target mask map with the size of the texture map means that the size of the target mask map is the same as the size of the area of the texture map on which it is to be superimposed.
- if the size of the target mask map is the same as that of the texture map, the size of the target mask map does not need to be adjusted before superposition.
- if the size of the target mask map is different from that of the texture map, the terminal automatically adjusts the size of the target mask map to fit the texture map before superposition.
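The automatic scaling step could be implemented, for example, with nearest-neighbour resampling. This is an illustrative sketch under the assumption that a mask is a 2-D grid of per-pixel alpha values; the function name `scale_mask` is hypothetical, and a production renderer would more likely use filtered (bilinear) resampling.

```python
def scale_mask(mask, target_w, target_h):
    """Nearest-neighbour rescale of a mask (2-D list of alpha values)
    so that its size matches the texture map before superposition."""
    src_h, src_w = len(mask), len(mask[0])
    return [
        [mask[y * src_h // target_h][x * src_w // target_w]
         for x in range(target_w)]
        for y in range(target_h)
    ]

small = [[0.0, 1.0],
         [1.0, 0.0]]
scaled = scale_mask(small, 4, 4)  # now matches a 4x4 texture map
```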
- step 750 the terminal superimposes at least one layer of target mask textures on the texture based on the texture levels of the target mask textures of each layer to obtain the target texture.
- the target mask textures of each layer have a texture hierarchical relationship, that is to say, the target mask textures of each layer need to be superimposed in a certain order.
- the hierarchical order of the texture is fixed, and the target mask textures of other layers are superimposed on the texture in sequence to form the target texture.
- the target mask map is a key color mask map, a pattern color mask map, or a glove color mask map, etc., which is not limited in this embodiment of the present application.
- the key color mask map refers to a mask map representing a key color block different from the background color of the virtual pet's hair.
- the type of the color block is dot shape, block shape or flake shape, etc., which is not limited in this embodiment of the present application.
- FIG. 8 it shows that different types of key color mask textures 801 are superimposed with the textures to form a target texture (not shown), and the target texture is applied to the effect presented by the virtual pet cat 802 .
- the pattern color mask map refers to a mask map representing a pattern color block different from the background color of the virtual pet's hair.
- the pattern color block type is spot-like, stripe-like, or streak-like, etc., which is not limited in this embodiment of the present application.
- FIG. 9 it shows the effect of applying the target texture to the virtual pet cat 902 by superimposing different kinds of pattern color mask textures 901 and textures to form a target texture (not shown).
- the glove color mask map refers to a mask map representing a foot color block or hand color block on the hair background color.
- the type of the color block is block, dot, or flake, which is not limited in this embodiment of the present application.
- FIG. 10 it shows the effect that different types of glove color mask textures 1001 and textures are superimposed to form a target texture (not shown), and the target texture is applied to a virtual pet cat 1002 .
- the target mask maps have a hierarchical relationship with each other
- the texture level of the glove color mask map is higher than that of the pattern color mask map
- the texture level of the pattern color mask map is higher than that of the key color mask map.
- the texture map 1101 , key color mask texture 1102 , pattern color mask texture 1103 and glove color mask texture 1104 are superimposed to form a target texture, and the target texture is applied to a virtual pet cat 1105 .
- the hierarchical relationship of the aforementioned maps is that the first layer is a map 1101 , the second layer is a key color mask map 1102 , the third layer is a pattern color mask map 1103 , and the fourth layer is a glove color mask map 1104 .
- the focus here is to describe the hierarchical relationship of the accent color mask map 1102 , pattern color mask map 1103 , and glove color mask map 1104 with examples, so the contents of the layers are not shown in the figure.
- the numbers of types of target mask maps at the various levels may differ; for example, suppose there are m types of texture maps, the target mask maps have i levels, and the numbers of target mask map types at the levels are M1 to Mi respectively; then the number N of target textures that can be generated is:
- N = m + (m × M1 + m × M2 + … + m × Mi) + (m × M1 × M2 + m × M1 × M3 + … + m × Mi-1 × Mi) + … + m × M1 × M2 × … × Mi.
- N types of virtual pets with different appearance characteristics can be presented.
- m, i, M1 to Mi and N are all positive integers.
- for example, there are 6 types of key color mask maps, 6 types of pattern color mask maps, and 2 types of glove color mask maps.
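The sum above counts, for each of the m base textures, every subset of mask levels (each level contributing either no mask or one of its Mj variants), so it is equivalent to the closed form N = m × (1 + M1) × (1 + M2) × … × (1 + Mi). The sketch below verifies this equivalence numerically; the function names are hypothetical, and the sample counts (m = 4 base coat colors with 6, 6, and 2 mask types) combine the four cat texture types with the mask counts from the example above.

```python
from itertools import combinations
from math import prod

def texture_combinations(m, mask_counts):
    """Number of distinct target textures: each of the m base
    textures combines, per mask level j, with either no mask or
    one of its M_j variants, i.e. N = m * prod(1 + M_j)."""
    return m * prod(1 + M for M in mask_counts)

def expansion(m, mask_counts):
    """Term-by-term expansion as written in the text: sum of
    m times the product of every subset of the M_j counts."""
    total = 0
    for k in range(len(mask_counts) + 1):
        for subset in combinations(mask_counts, k):
            total += m * prod(subset)  # prod(()) == 1 covers k == 0
    return total

# 4 base coat colors; 6 key-color, 6 pattern, 2 glove mask types:
n = texture_combinations(4, [6, 6, 2])  # 4 * 7 * 7 * 3
```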
- Step 760 the terminal applies the target texture to the three-dimensional model of the virtual pet.
- Step 760 is the same as step 440, which will not be described in detail in this embodiment of the present application.
- the mask map is divided into a masked area and a non-masked area, and the shape of the masked area corresponds to the shape of the target appearance feature of the virtual pet.
- the target mask map is obtained by dyeing the mask area.
- FIG. 12 shows a flowchart of a method for adjusting mask color provided by an exemplary embodiment of the present application.
- Step 1201 the terminal determines the target mask color of the mask map based on the editing operation.
- the user selects a color through the edit control on the appearance editing interface of the virtual pet, and the terminal determines the target mask color based on the color selected by the user.
- the editing operation is that the user selects the hue of the target mask through the palette control, that is, the category of the color, such as black, white, yellow, etc., which is not limited in this embodiment of the present application.
- the editing operation is that the user adjusts the brightness and saturation of the target mask color, that is, the depth of the color, through the slider control, for example, pure red, dark red, light red, etc., which is not limited in this embodiment of the present application.
- for example, the user selects yellow as the hue of the target mask color through the palette control 502 and adjusts the saturation and brightness through the color slider control 503 so that the color becomes orange, and the terminal determines that the target mask color is orange.
- Step 1202 the terminal dyes the mask area in the mask map based on the target mask color to obtain the target mask map.
- the terminal dyes the mask area in the mask map based on the determined target mask color to obtain the target mask map.
- the terminal determines that the color of the target mask is orange, and the terminal dyes the mask area of the mask texture orange, and then obtains the target mask texture.
- the glove color block of the virtual pet 507 appears orange.
- the color diversification of the target mask map is realized, thereby enriching the appearance characteristics of the virtual pet and improving the freedom of editing the appearance of the virtual pet.
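The dyeing of the masked area in steps 1201-1202 can be sketched as assigning the determined target mask color to every pixel of the mask, weighted by the pixel's mask value. This is an illustrative sketch; `dye_mask` is a hypothetical helper, and representing the dyed mask as (color, alpha) pairs ready for superposition is an assumption, not the application's actual data layout.

```python
def dye_mask(mask_alphas, target_color):
    """Dye the mask region: each pixel takes the target mask color,
    weighted by its mask value (0 = non-masked area, which stays
    transparent when superimposed)."""
    return [[(target_color, a) for a in row] for row in mask_alphas]

orange = (1.0, 0.5, 0.0)   # target mask color determined in step 1201
mask = [[0.0, 1.0],
        [1.0, 0.5]]
dyed = dye_mask(mask, orange)  # target mask map, ready to superimpose
```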
- the size of the shape of the target appearance feature is changed by adjusting the size of the mask range.
- FIG. 13 shows a flowchart of a method for adjusting a mask range provided by an exemplary embodiment of the present application.
- Step 1301 the terminal determines a mask range threshold based on an editing operation.
- the mask area of the mask map is composed of several pixels with different color values, and the color values of the pixels decrease sequentially from the center to the edge of the mask area.
- the color value of the pixel is used to control the coloring degree of the pixel.
- the color value is positively correlated with the dyeing degree of the pixel: the larger the color value, the higher the dyeing degree of the pixel and the darker the color of the target appearance feature the user sees on the appearance editing interface of the virtual pet; the smaller the color value, the lower the dyeing degree of the pixel and the lighter the color of the target appearance feature the user sees. Therefore, in the embodiment of the present application, the mask range threshold refers to a color value threshold of the pixels.
- exemplarily, the color value ranges from 1 to 0, the color value at the center of the masked area is 1, and the closer a pixel is to the edge of the masked area, the smaller its color value; that is, the dyeing degree of the pixel at the center of the masked area is the largest, and the dyeing degree of edge pixels is the smallest.
- the user adjusts the size of the target appearance feature through the editing control on the appearance editing interface of the virtual pet, and the terminal determines the mask range threshold based on the size adjusted by the user.
- the size of the target appearance feature is different, and the corresponding mask range threshold is also different.
- the user adjusts the size of the glove color block through the color block slider control 504; each time the user slides the color block slider control 504, the terminal adjusts the mask range threshold once, and the size of the glove color block presented is also different.
- for example, the current glove color block size is 9, and the terminal determines the corresponding mask range threshold; when the user moves the color block slider control 504 to the right so that the glove color block size becomes 15, the terminal continues to adjust the threshold based on the user's editing operation.
- the user can see that the glove color block of the virtual pet cat gradually becomes larger.
- Step 1302 when the color value of the pixel point is greater than or equal to the mask range threshold, the terminal determines that the pixel point belongs to the target mask range.
- the terminal determines the target mask range based on the mask range threshold and the color values of the pixels corresponding to the mask map within the maximum mask range.
- if the color value of a pixel is greater than or equal to the mask range threshold, the pixel belongs to the target mask range; the area formed by all pixels whose color values are greater than or equal to the mask range threshold is the target mask range.
- the process of determining the target mask range is described with reference to FIG. 14. Suppose the color value threshold corresponding to the mask map 1401 within the maximum mask range is 0.3; the mask map 1401 and the texture map 1405 are superimposed to generate a target texture that is applied to the virtual pet cat 1403.
- the user adjusts the size of the target appearance feature of the virtual pet cat 1403 through the editing control.
- the terminal determines that the mask range threshold is 0.5; at this time, the pixels whose color values are greater than or equal to 0.5 within the maximum mask range belong to the target mask range, thereby determining the target mask map 1402; the target mask map 1402 and the texture map 1405 are superimposed to generate a target texture, and the target texture is applied to the virtual pet cat 1404.
- Step 1303 when the color value of the pixel is smaller than the mask range threshold, the terminal determines that the pixel does not belong to the target mask range.
- the pixel does not belong to the target mask range, that is to say, the pixels smaller than the mask range threshold are not used to display the target appearance feature.
- the pixel points less than 0.5 in the mask map 1401 do not belong to the target mask range.
- the user can see that the target appearance feature range of the virtual pet cat 1404 becomes smaller.
- the change of the target appearance feature size of the virtual pet is realized, and the appearance features of the virtual pet are enriched.
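Steps 1301-1303 amount to thresholding the mask's color values. The sketch below illustrates this under the assumptions already stated in the text (values fall off from 1.0 at the center toward the edge); the function name `apply_mask_range` is hypothetical, and excluded pixels are represented by setting their value to 0.

```python
def apply_mask_range(color_values, threshold):
    """Keep only pixels whose color value is >= the mask range
    threshold; pixels below it fall outside the target mask range
    (set to 0.0). Raising the threshold shrinks the color block."""
    return [[v if v >= threshold else 0.0 for v in row]
            for row in color_values]

# color values fall off from 1.0 at the center toward the edge
mask = [[0.2, 0.4, 0.2],
        [0.4, 1.0, 0.4],
        [0.2, 0.4, 0.2]]
shrunk = apply_mask_range(mask, 0.5)  # only the center pixel remains
```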
- the effect of edge gradation of the target appearance feature is achieved by adjusting the gradation range of the mask.
- FIG. 15 shows a flowchart of a method for adjusting the gradient range provided by an exemplary embodiment of the present application.
- step 1501 the terminal determines a gradient range threshold based on an editing operation.
- the user adjusts the edge gradation of the target appearance feature through the editing control on the appearance editing interface of the virtual pet, and the terminal determines the threshold of the gradation range based on the editing operation.
- the effect of the edge gradient of the target appearance feature is different, and the corresponding gradient range threshold is also different.
- for example, the user adjusts the edge gradient effect of the glove color block through the edge gradient slider control 505; each time the user slides the edge gradient slider control 505, the terminal adjusts the gradient range threshold once, and the edge gradient effect of the glove color block presented is also different.
- the edge gradient of the current glove color block is -15, and the terminal determines the corresponding gradient range threshold.
- Step 1502 when the color value of the pixel point is smaller than the threshold value of the gradient range, the terminal modifies the color value of the pixel point to the threshold value of the gradient range.
- the terminal adjusts the gradient range within the mask range based on the gradient range threshold and the color values of the pixels within the mask range corresponding to the mask map.
- if the color value of a pixel is smaller than the gradient range threshold, the terminal modifies the color value of the pixel to the gradient range threshold. Since target mask maps at different levels have a hierarchical relationship, the gradient range within the mask range is modified after the size of the mask range is adjusted, so the terminal adjusts the gradient range area based on the color values of the pixels within the mask range corresponding to the mask map.
- exemplarily, the process of adjusting the gradient range of the mask range is described with reference to FIG. 16.
- the terminal determines that the color value of the pixels within the mask range corresponding to the mask map 1601 is 0.5, and the mask map 1601 and the map 1605 are superimposed to generate a target map that is applied to the virtual pet cat 1603 .
- the user adjusts the edge gradient of the target appearance feature of the virtual pet cat 1603 through the editing control.
- the terminal determines the threshold of the gradient range to be 0.7.
- the color values of pixels in the range of 0.5 to 0.7 are adjusted to 0.7, that is, the dyeing degree corresponding to those pixels is changed; the target mask map 1602 is then determined, the target mask map 1602 and the texture map 1605 are superimposed to generate a target texture, and the target texture is applied to the virtual pet cat 1604.
- Step 1503 when the color value of the pixel point is greater than or equal to the threshold of the gradient range, the terminal keeps the color value of the pixel point unchanged.
- the color value of the pixel point is greater than or equal to the gradient range threshold, and the color value of the pixel point remains unchanged, that is, the coloring degree corresponding to the pixel point remains unchanged.
- the color values of pixels greater than or equal to 0.7 are kept unchanged, so that in the appearance editing interface of the virtual pet, the user can see the edge gradient effect of the target appearance feature of the virtual pet cat 1604 .
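Steps 1501-1503 clamp low color values up to the gradient range threshold while leaving higher values untouched. The sketch below assumes, as a simplification, that pixels outside the mask range carry a value of 0 and are skipped; the function name `apply_gradient_range` is hypothetical.

```python
def apply_gradient_range(color_values, threshold):
    """Within the mask range (pixels with value > 0, a simplifying
    assumption), raise color values below the gradient range
    threshold up to the threshold and keep larger values unchanged.
    A higher threshold flattens the fall-off, giving a harder,
    less gradual edge."""
    return [[max(v, threshold) if v > 0.0 else 0.0 for v in row]
            for row in color_values]

# dyeing degree falls off from the center (1.0) toward the edge
row = [[0.0, 0.5, 1.0, 0.5, 0.0]]
hardened = apply_gradient_range(row, 0.7)
```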
- the positions of the masked area and the non-masked area of the mask map are not fixed; based on the user's editing operation, the terminal exchanges the positions of the masked area and the non-masked area of the mask map to obtain the target mask map.
- the masked area of the mask map is 1701 and the non-masked area is 1702 .
- before the masked area 1701 and the non-masked area 1702 are exchanged, the mask map is superimposed with the texture map 1705 to generate the target texture, and the target texture is applied to the virtual pet cat 1703 .
- the user changes the position of the target appearance feature through the editing control.
- the terminal changes the masked area 1701 into a non-masked area and the non-masked area 1702 into a masked area, and then determines the target mask texture.
- the target mask texture and the texture map 1705 are superimposed to generate a target texture, and the target texture is applied to the virtual pet cat 1704 .
- the position of the target appearance feature of the virtual pet cat 1704 has changed.
- the appearance characteristics of the virtual pet are enriched by exchanging the positions of the masked area and the non-masked area of the mask map.
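The exchange of masked and non-masked areas amounts to inverting the mask. A minimal sketch, assuming a grayscale mask in [0, 1] held in a NumPy array (the function name and representation are illustrative, not the patent's):

```python
import numpy as np

def swap_mask_areas(mask):
    """Exchange the masked and non-masked areas of a mask map.

    With the color value acting as the dyeing degree, inverting the
    mask turns every masked pixel into a non-masked one and vice
    versa, which moves the appearance feature to the opposite region.
    """
    return 1.0 - mask

mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
print(swap_mask_areas(mask))  # masked and non-masked pixels swapped
```

Applied before the superposition step, the inverted mask yields the repositioned patch seen on virtual pet cat 1704 without requiring any additional art resources.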
- FIG. 18 shows a flowchart of a method for editing the appearance of a virtual pet provided by another exemplary embodiment of the present application.
- Step 1801: the terminal displays an appearance editing interface.
- the user starts the game on the terminal, and the terminal displays the appearance editing interface of the virtual pet.
- the appearance editing interface includes a virtual pet, a first appearance editing control and a second appearance editing control.
- the first appearance editing control is used to control the appearance characteristics of the virtual pet
- the second appearance editing control is used to control the target appearance characteristics of the virtual pet
- the target appearance characteristics are appearance characteristics other than the appearance characteristics controlled by the first appearance editing control.
- the first appearance editing control is used to control the basic appearance characteristics of the virtual pet
- the second appearance editing control is used to control appearance characteristics other than the basic appearance characteristics.
- the appearance editing interface 1901 of a virtual pet includes a virtual pet 1902 , a first appearance editing control 1903 and a second appearance editing control 1904 .
- Step 1802: the terminal updates the appearance characteristics of the virtual pet in response to the trigger operation on the first appearance editing control.
- the first appearance editing control is used to control the basic appearance characteristics of the virtual pet, and the basic appearance characteristics refer to the hair base color of the virtual pet.
- the first appearance editing control is a button control, a slider control, or a palette control, etc., which is not limited in this embodiment of the present application.
- the terminal adjusts the appearance features of the virtual pet based on the user's operations on the button controls, slider controls, and palette controls in the appearance editing interface, such as adjusting the base color of the virtual pet's hair.
- Step 1803: the terminal updates the target appearance feature of the virtual pet in response to the trigger operation on the second appearance editing control, and the target appearance feature is an appearance feature other than the appearance feature controlled by the first appearance editing control.
- the first appearance editing control is used to control the basic appearance feature of the virtual pet
- the basic appearance feature is the hair base color of the virtual pet
- the target appearance feature refers to the part of the virtual pet different from the hair base color.
- the terminal adjusts the part of the virtual pet that is different from the base color of the hair.
- the second appearance editing control is a button control, a slider control, or a palette control, etc., which is not limited in this embodiment of the present application.
- the target appearance feature is a part of the virtual pet that is different from the base color of the hair.
- the target appearance feature is the key color block of the virtual pet's hair, or it may be a pattern color block, a glove color block, etc., which is not limited in this embodiment of the present application.
- Step 1804: the terminal displays the edited virtual pet in the appearance editing interface, wherein the edited virtual pet is a virtual pet model obtained by combining the appearance features controlled by the first appearance editing control and the target appearance features controlled by the second appearance editing control.
- after the virtual pet model is obtained, the terminal displays the virtual pet model on the appearance editing interface.
- a virtual pet cat is taken as an example.
- the terminal determines that the base color of the hair of the virtual pet cat is white, and the terminal displays the base color of the hair of the virtual pet cat 1902 in the appearance editing interface 1901 of the virtual pet as white.
- the terminal determines the color, size, and edge gradient of the glove color block of the virtual pet cat, and displays the glove color block of the virtual pet cat 1902 in the appearance editing interface 1901 of the virtual pet: the color is black, the color block size is 9, and the edge gradient is -15.
- the terminal presents a virtual pet 1902 in the virtual pet appearance editing interface.
- the appearance features controlled by the first appearance editing control and the target appearance features controlled by the second appearance editing control are combined to obtain the virtual pet model; by adjusting these features, virtual pets with different appearance characteristics are generated, which improves the freedom of editing the appearance of virtual pets and enriches their appearance characteristics.
- the terminal replaces the facial features models and textures of the virtual pet based on the user's editing operations, enriching the facial features of the virtual pet and improving the freedom of editing the appearance of the virtual pet.
- the terminal adjusts the face, nose, mouth, lips, etc. of the virtual pet based on the user's editing operations to determine different types of virtual pets.
- the style of the nose map reflects the differences between breeds of the same virtual pet.
- the virtual pet is a cat as an example.
- the user selects a nose and mouth texture through the button control 2002 in the appearance editing interface 2001 of the virtual pet, and the terminal changes the nose and mouth type of the virtual pet cat 2003 based on the editing operation, so as to realize different appearance characteristics of the virtual pet cat 2003.
- the user adjusts the size of the nose, the height of the nose, the height of the mouth, the vertical position of the mouth, and the opening and closing of the mouth through sliding operations on the slider control 2004.
- based on these operations, the terminal determines the positions and shapes of the nose and mouth of the virtual pet cat 2003, achieving different appearance features of the virtual pet cat 2003.
- the terminal adjusts the virtual pet ear model and the size of the ear based on the user's editing operation on the virtual pet ear model.
- the user selects a different ear model 2101 through the button control, and the terminal replaces the ear model of the virtual pet based on the editing operation, then presenting a virtual pet cat 2102 with different appearance characteristics.
- the terminal adjusts the size of the virtual pet's eyes, the size and color of the pupils based on the editing operation of the user, and generates virtual pets with different appearance characteristics.
- by adjusting the rotation of the corners of the eyes, the terminal can show innocent drooping eyes, fierce upturned eyes, ordinary almond eyes, and other eye shapes, presenting different expressions of the virtual pet.
- the user selects the pupil color of the virtual pet cat through the color palette control 2201 and the color slider control 2202, and adjusts the eye size, eye rotation, and pupil size of the virtual pet cat through the slider control 2203.
- based on the editing operations, the terminal determines the pupil color and size and the eye size and rotation of the virtual pet cat, finally presenting a virtual pet cat 2204 with different appearance characteristics.
- the terminal adjusts the decorative decals of the virtual pet to reflect different styles of the virtual pet.
- the pattern of the decorative decals is a patch, spot, or strip shape, which is not limited in this embodiment of the present application.
- the decorative decals are located in the eyebrow area, cheek, mouth, or chin area, etc., which is not limited in this embodiment of the present application.
- the user selects different decorative decals 2301 through the button control, and the terminal adjusts the decorative decals of the virtual pet based on the editing operation, thereby generating virtual pet cats with different styles.
- FIG. 24 is a structural block diagram of a device for editing the appearance of a virtual pet provided by an exemplary embodiment of the present application.
- the device includes:
- the obtaining module 2401 is used to obtain the texture of the virtual pet, and the texture represents the appearance characteristics of the virtual pet;
- the editing module 2402 is configured to generate a target mask map corresponding to the edited target appearance feature in response to an editing operation on the target appearance feature, where the target appearance feature is an appearance feature other than the appearance feature represented by the map;
- the overlay module 2403 is configured to overlay the texture map and at least one layer of the target mask texture to obtain the target texture, wherein the target mask textures of different levels correspond to different target appearance features;
- the application module 2404 is used for applying the target texture to the three-dimensional model of the virtual pet.
- the editing module 2402 includes:
- An acquisition unit configured to acquire a mask map corresponding to the target appearance feature in response to an editing operation on the target appearance feature
- An adjustment unit configured to adjust the mask texture based on an editing operation to obtain a target mask texture, wherein the adjustment method of the mask texture includes at least one of adjusting the mask color, adjusting the mask range, adjusting the gradient range, or adjusting the mask position.
- the adjustment method of the mask map is to adjust the mask color; the adjustment unit is used for:
- the adjustment method of the mask texture is to adjust the mask range; the adjustment unit includes:
- a first determining subunit configured to determine a mask range threshold based on an editing operation
- the second determining subunit is used to determine the target mask range based on the mask range threshold and the color values of the pixels to obtain the target mask texture;
- the mask range of the target mask texture is the target mask range;
- a pixel refers to a pixel within the maximum mask range corresponding to the mask map, and the color value of a pixel is used to control its coloring degree.
- the second determining subunit is used for:
- when the color value of a pixel is smaller than the mask range threshold, it is determined that the pixel does not belong to the target mask range.
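The threshold test above can be sketched as follows. Assumed here: the mask is a grayscale NumPy array in [0, 1], and a pixel excluded from the target mask range has its coloring degree zeroed; both the data layout and the function name are illustrative choices rather than the patent's stated implementation:

```python
import numpy as np

def apply_mask_range(mask_values, range_threshold):
    """Derive the target mask range from a mask range threshold.

    Pixels whose color value falls below the threshold are dropped
    from the mask range (coloring degree set to 0); the rest keep
    their coloring degree, so raising the threshold shrinks the
    dyed patch.
    """
    return np.where(mask_values >= range_threshold, mask_values, 0.0)

values = np.array([0.2, 0.5, 0.8])
print(apply_mask_range(values, 0.5))  # [0.  0.5 0.8]
```

With a soft-edged grayscale mask, the threshold effectively slides the patch boundary inward or outward, which is how one slider can resize the color block.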
- the adjustment method of the mask map is to adjust the gradient range; the adjustment unit includes:
- the adjustment sub-unit is used to adjust the gradient range within the mask range based on the gradient range threshold and the color value of the pixel point to obtain the target mask map.
- a pixel refers to a pixel within the mask range corresponding to the mask map, and the color value of the pixel is used to control its coloring degree.
- in the case that the color value of the pixel is greater than or equal to the gradient range threshold, the color value of the pixel remains unchanged.
- the adjustment method of the mask map is to adjust the position of the mask; the adjustment unit is used for:
- the masked area and the non-masked area in the masked map are exchanged to obtain a target masked map.
- the superposition module 2403 is used for:
- At least one layer of target mask texture is superimposed on the texture map to obtain the target texture.
- the texture map represents the hair base color of the virtual pet
- At least one layer of target mask maps includes at least one of key color mask maps, pattern color mask maps or glove color mask maps;
- the key color mask map represents the key color block that is different from the hair background color
- the pattern color mask map represents the pattern color block that is different from the hair background color
- the glove color mask map represents the foot color patch or hand color patch different from the hair background color
- the texture level of the glove color mask texture is higher than that of the pattern color mask texture, and the texture level of the pattern color mask texture is higher than that of the key color mask texture.
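The level-ordered superposition described above can be sketched as ordinary alpha compositing. The data layout, dye colors, and function names here are illustrative assumptions, since the patent does not prescribe a blending formula:

```python
import numpy as np

def overlay_layers(base, layers):
    """Composite a base coat texture with stacked mask layers.

    base: (H, W, 3) RGB hair base color. Each layer is a (mask, color)
    pair, ordered from lowest to highest level (key color, then
    pattern color, then glove color); the mask's color value in
    [0, 1] serves as the per-pixel dyeing degree, so higher levels
    paint over lower ones.
    """
    out = base.astype(float)
    for mask, color in layers:
        alpha = mask[..., None]  # (H, W, 1) dyeing degree per pixel
        out = out * (1.0 - alpha) + np.asarray(color, float) * alpha
    return out

# White base coat; the "glove" layer fully dyes the first pixel black.
base = np.ones((1, 2, 3))
glove_mask = np.array([[1.0, 0.0]])
result = overlay_layers(base, [(glove_mask, (0.0, 0.0, 0.0))])
print(result)  # pixel 0 black, pixel 1 still white
```

Because each layer only carries a mask and a dye color, swapping or re-tinting a layer regenerates the whole target texture without authoring a new texture map.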
- the device also includes a scaling module for:
- the target mask texture is scaled based on the size of the texture, wherein the size of the target mask texture matches the size of the texture after scaling.
- the texture represents the appearance characteristics of the virtual pet
- the target mask texture represents the target appearance characteristics
- the target appearance characteristics are appearance characteristics other than the appearance characteristics represented by the texture.
- the superposition of textures generates different target textures for the virtual pet model, and in turn virtual pets with different appearance characteristics. Since the target texture is formed by superimposing the texture map with target mask textures at different levels, different target textures can be generated simply by adjusting the target mask textures. On the one hand, this avoids creating a matching texture for each additional virtual pet; on the other hand, a variety of virtual pets with different appearance characteristics can be generated from a limited amount of art resources, which improves the freedom of editing the appearance of virtual pets and enriches their appearance characteristics.
- Fig. 25 is a structural block diagram of a device for editing the appearance of a virtual pet provided in another exemplary embodiment of the present application, the device comprising:
- a display module 2501 configured to display an appearance editing interface, where the appearance editing interface includes a first appearance editing control and a second appearance editing control;
- the first updating module 2502 is configured to update the appearance characteristics of the virtual pet in response to a trigger operation on the first appearance editing control
- the second updating module 2503 is configured to update the target appearance feature of the virtual pet in response to the trigger operation on the second appearance editing control, where the target appearance feature is an appearance feature other than the appearance feature controlled by the first appearance editing control;
- the display module 2504 is used to display the edited virtual pet in the appearance editing interface, wherein the edited virtual pet is a virtual pet model obtained by combining the appearance features controlled by the first appearance editing control and the target appearance features.
- the appearance features controlled by the first appearance editing control and the target appearance features controlled by the second appearance editing control are combined to obtain the virtual pet model; by adjusting these features, virtual pets with different appearance characteristics can be generated, which improves the freedom of editing the appearance of virtual pets and enriches their appearance characteristics.
- FIG. 26 shows a structural block diagram of a terminal 2600 provided by an exemplary embodiment of the present application.
- the terminal 2600 can be a portable mobile terminal, such as a smartphone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, or a Moving Picture Experts Group Audio Layer IV (MP4) player.
- Terminal 2600 may also be called user equipment, portable terminal and other names.
- the terminal 2600 includes: a processor 2601 and a memory 2602 .
- the processor 2601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
- the processor 2601 can be implemented in at least one hardware form among Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
- the processor 2601 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the wake-up state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in the standby state.
- the processor 2601 may be integrated with a graphics processor (Graphics Processing Unit, GPU), and the GPU is used to render and draw the content that needs to be displayed on the display screen.
- the processor 2601 may also include an artificial intelligence (Artificial Intelligence, AI) processor, and the AI processor is used to process computing operations related to machine learning.
- Memory 2602 may include one or more computer-readable storage media, which may be tangible and non-transitory.
- the memory 2602 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
- non-transitory computer-readable storage medium in the memory 2602 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 2601 to implement the method provided by the embodiment of the present application.
- the terminal 2600 may optionally further include: a peripheral device interface 2603 and at least one peripheral device.
- the peripheral device includes: at least one of a radio frequency circuit 2604 or a touch screen 2605 .
- the peripheral device interface 2603 may be used to connect at least one peripheral device related to input/output (Input/Output, I/O) to the processor 2601 and the memory 2602 .
- in some embodiments, the processor 2601, memory 2602 and peripheral device interface 2603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2601, the memory 2602 and the peripheral device interface 2603 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
- the radio frequency circuit 2604 is used to receive and transmit radio frequency (Radio Frequency, RF) signals, also called electromagnetic signals.
- the radio frequency circuit 2604 communicates with the communication network and other communication devices through electromagnetic signals.
- the radio frequency circuit 2604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
- the radio frequency circuit 2604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
- the radio frequency circuit 2604 can communicate with other terminals through at least one wireless communication protocol.
- the touch screen 2605 is used to display the UI.
- the UI can include graphics, text, icons, video, and any combination thereof.
- the touch display 2605 also has the ability to collect touch signals on or over the surface of the touch display 2605.
- the touch signal can be input to the processor 2601 as a control signal for processing.
- the touch screen 2605 is used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
- terminal 2600 also includes one or more sensors 2606 .
- the one or more sensors 2606 include but are not limited to: an acceleration sensor 2607 , a gyro sensor 2608 and a pressure sensor 2609 .
- the acceleration sensor 2607 can detect the acceleration on the three coordinate axes of the coordinate system established by the terminal 2600 .
- the acceleration sensor 2607 can be used to detect the components of the acceleration of gravity on the three coordinate axes.
- the processor 2601 may control the touch display screen 2605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 2607 .
- the acceleration sensor 2607 can also be used for collecting game or user motion data.
- the gyro sensor 2608 can detect the body direction and rotation angle of the terminal 2600 , and the gyro sensor 2608 can cooperate with the acceleration sensor 2607 to collect 3D actions of the user on the terminal 2600 .
- the processor 2601 can realize the following functions: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control and inertial navigation.
- the pressure sensor 2609 may be disposed on the side frame of the terminal 2600 and/or the lower layer of the touch screen 2605 .
- when the pressure sensor 2609 is arranged on the side frame of the terminal 2600, it can detect the user's grip signal on the terminal 2600 and perform left/right hand recognition or shortcut operations according to the grip signal.
- the operable controls on the UI interface can be controlled according to the user's pressure operation on the touch display screen 2605.
- the operable controls include at least one of button controls, scroll bar controls, icon controls, and menu controls.
- the structure shown in FIG. 26 does not constitute a limitation on the terminal 2600, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
- the embodiment of the present application also provides a terminal, the terminal including a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to perform the following operations:
- obtain the texture of the virtual pet, wherein the texture represents the appearance characteristics of the virtual pet;
- in response to an editing operation on the target appearance feature, generate a target mask map corresponding to the edited target appearance feature, the target appearance feature being an appearance feature other than the appearance features represented by the texture;
- the at least one instruction, at least one section of program, code set or instruction set is loaded and executed by the processor to achieve the following operations:
- the mask texture is adjusted based on the editing operation to obtain a target mask texture, wherein the method of adjusting the mask texture includes at least one of adjusting mask color, adjusting mask range, adjusting gradient range, or adjusting mask position.
- the method of adjusting the mask map is to adjust the color of the mask; the at least one instruction, at least one section of program, code set or instruction set is loaded and executed by the processor to achieve the following operations:
- the mask map is adjusted by adjusting the mask range; the at least one instruction, at least one program, code set or instruction set is loaded and executed by the processor to achieve the following operations:
- the mask range of the target mask map is the target mask range, and a pixel refers to a pixel within the maximum mask range corresponding to the mask map; the color value of the pixel is used to control the coloring degree of the pixel.
- the at least one instruction, at least one section of program, code set or instruction set is loaded and executed by the processor to achieve the following operations:
- the color value of the pixel is smaller than the mask range threshold, it is determined that the pixel does not belong to the target mask range.
- the mask map is adjusted by adjusting the gradient range; the at least one instruction, at least one section of program, code set or instruction set is loaded and executed by the processor to achieve the following operations:
- a pixel refers to a pixel within the mask range corresponding to the mask map, and the color value of the pixel is used to control the coloring degree of the pixel.
- the at least one instruction, at least one section of program, code set or instruction set is loaded and executed by the processor to achieve the following operations:
- in the case that the color value of the pixel is greater than or equal to the gradient range threshold, the color value of the pixel remains unchanged.
- the mask map is adjusted by adjusting the position of the mask; the at least one instruction, at least one section of program, code set or instruction set is loaded and executed by the processor to achieve the following operations:
- the masked area and the non-masked area in the masked map are exchanged to obtain a target masked map.
- the at least one instruction, at least one section of program, code set or instruction set is loaded and executed by the processor to achieve the following operations:
- At least one layer of target mask texture is superimposed on the texture map to obtain the target texture.
- the texture map represents the hair base color of the virtual pet
- At least one layer of target mask maps includes at least one of key color mask maps, pattern color mask maps or glove color mask maps;
- the key color mask map represents the key color block that is different from the hair background color
- the pattern color mask map represents the pattern color block that is different from the hair background color
- the glove color mask map represents the foot color patch or hand color patch different from the hair background color
- the texture level of the glove color mask texture is higher than that of the pattern color mask texture, and the texture level of the pattern color mask texture is higher than that of the key color mask texture.
- the at least one instruction, at least one section of program, code set or instruction set is loaded and executed by the processor to achieve the following operations:
- the target mask texture is scaled based on the size of the texture, wherein the size of the target mask texture matches the size of the texture after scaling.
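The scaling step above can be sketched with simple nearest-neighbour resampling. A real pipeline would rely on the engine's texture sampler, so this NumPy version, including the function name, is purely illustrative:

```python
import numpy as np

def scale_mask_to_texture(mask, texture_shape):
    """Nearest-neighbour scale a mask map to the texture's size so the
    two can be superimposed pixel-for-pixel."""
    h, w = texture_shape
    # Map each target pixel back to its source pixel by integer ratio.
    rows = np.arange(h) * mask.shape[0] // h
    cols = np.arange(w) * mask.shape[1] // w
    return mask[np.ix_(rows, cols)]

small = np.array([[0.0, 1.0]])
print(scale_mask_to_texture(small, (2, 4)).shape)  # (2, 4)
```

After scaling, the mask and the texture map share the same dimensions, so the per-pixel superposition applies directly.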
- the embodiment of the present application also provides a terminal, the terminal including a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to perform the following operations:
- display an appearance editing interface, wherein the appearance editing interface includes the first appearance editing control and the second appearance editing control;
- the edited virtual pet is displayed in the appearance editing interface, wherein the edited virtual pet is a virtual pet model obtained by combining the appearance features controlled by the first appearance editing control and the target appearance features.
- the embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium storing at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the method for editing the appearance of a virtual pet described above.
- the embodiment of the present application also provides a computer program product or computer program, the computer program product or computer program comprising computer instructions stored in a computer-readable storage medium.
- the processor of the terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal executes the method for editing the appearance of a virtual pet provided in various optional implementation manners of the above aspect.
- the functions described in the embodiments of the present application may be implemented by hardware, software, firmware or any combination thereof.
- the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable storage medium.
- computer-readable storage media include both computer storage media and communication media, including any medium that facilitates the transfer of a computer program from one place to another.
- a storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
A method, apparatus, terminal and storage medium for editing the appearance of a virtual pet, belonging to the field of human-computer interaction. The method includes: obtaining a texture of a virtual pet, the texture representing appearance features of the virtual pet (410); in response to an editing operation on a target appearance feature, generating a target mask texture corresponding to the edited target appearance feature, the target appearance feature being an appearance feature other than the appearance features represented by the texture (420); superimposing the texture and at least one layer of the target mask texture to obtain a target texture, wherein target mask textures at different levels correspond to different target appearance features (430); and applying the target texture to the three-dimensional model of the virtual pet (440).
Description
This application claims priority to Chinese Patent Application No. 202111308492.7, filed on November 5, 2021 and entitled "Method, Apparatus, Terminal and Storage Medium for Editing the Appearance of a Virtual Pet", the entire contents of which are incorporated herein by reference.
The embodiments of this application relate to the field of human-computer interaction, and in particular to a method, apparatus, terminal and storage medium for editing the appearance of a virtual pet.
Many virtual pets exist in game applications; users can generate virtual pets with different appearance features by changing a virtual pet's body model, facial features, hair features, and the like.
Summary
The embodiments of this application provide a method, apparatus, terminal and storage medium for editing the appearance of a virtual pet. Since the target texture is formed by superimposing a texture with target mask textures at different levels, different target textures can be generated by adjusting the target mask textures, improving the freedom of editing the appearance of virtual pets and enriching their appearance features. The technical solution is as follows:
In one aspect, an embodiment of this application provides a method for editing the appearance of a virtual pet, the method comprising:
a terminal obtains a texture of a virtual pet, the texture representing appearance features of the virtual pet;
in response to an editing operation on a target appearance feature, the terminal generates a target mask texture corresponding to the edited target appearance feature, the target appearance feature being an appearance feature other than the appearance features represented by the texture;
the terminal superimposes the texture and at least one layer of the target mask texture to obtain a target texture, wherein target mask textures at different levels correspond to different target appearance features;
the terminal applies the target texture to the three-dimensional model of the virtual pet.
In another aspect, an embodiment of this application provides a method for editing the appearance of a virtual pet, the method comprising:
a terminal displays an appearance editing interface, the appearance editing interface including a first appearance editing control and a second appearance editing control;
in response to a trigger operation on the first appearance editing control, the terminal updates appearance features of a virtual pet;
in response to a trigger operation on the second appearance editing control, the terminal updates a target appearance feature of the virtual pet, the target appearance feature being an appearance feature other than the appearance features controlled by the first appearance editing control;
the terminal displays the edited virtual pet in the appearance editing interface, wherein the edited virtual pet is a virtual pet model obtained by combining the appearance features and the target appearance feature.
In another aspect, an embodiment of this application provides an apparatus for editing the appearance of a virtual pet, the apparatus comprising:
an obtaining module, configured to obtain a texture of a virtual pet, the texture representing appearance features of the virtual pet;
an editing module, configured to, in response to an editing operation on a target appearance feature, generate a target mask texture corresponding to the edited target appearance feature, the target appearance feature being an appearance feature other than the appearance features represented by the texture;
a superimposing module, configured to superimpose the texture and at least one layer of the target mask texture to obtain a target texture, wherein target mask textures at different levels correspond to different target appearance features;
an application module, configured to apply the target texture to the three-dimensional model of the virtual pet.
In another aspect, an embodiment of this application provides an apparatus for editing the appearance of a virtual pet, the apparatus comprising:
a display module, configured to display an appearance editing interface, the appearance editing interface including a first appearance editing control and a second appearance editing control;
a first updating module, configured to update appearance features of a virtual pet in response to a trigger operation on the first appearance editing control;
a second updating module, configured to update a target appearance feature of the virtual pet in response to a trigger operation on the second appearance editing control, the target appearance feature being an appearance feature other than the appearance features controlled by the first appearance editing control;
a presentation module, configured to display the edited virtual pet in the appearance editing interface, wherein the edited virtual pet is a virtual pet model obtained by combining the appearance features and the target appearance feature.
In another aspect, an embodiment of this application provides a terminal, the terminal including a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to implement the method for editing the appearance of a virtual pet described in the above aspects.
In another aspect, an embodiment of this application provides a computer-readable storage medium, the computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the method for editing the appearance of a virtual pet described in the above aspects.
In another aspect, an embodiment of this application provides a computer program product or computer program, the computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a terminal reads the computer instructions from the computer-readable storage medium and executes them, causing the terminal to perform the method for editing the appearance of a virtual pet provided in the various optional implementations of the above aspects.
The beneficial effects of the technical solutions provided by the embodiments of this application include at least the following:
In the embodiments of this application, the texture represents the appearance features of the virtual pet, the target mask texture represents the target appearance feature, and the target appearance feature is an appearance feature other than the appearance features represented by the texture. By superimposing the texture with target mask textures at different levels, different target textures are generated for the virtual pet model, and in turn virtual pets with different appearance features are generated. Since the target texture is formed by superimposing the texture with target mask textures at different levels, different target textures can be generated simply by adjusting the target mask textures. On the one hand, this avoids creating a matching texture for each additional virtual pet; on the other hand, a variety of virtual pets with different appearance features can be generated from a limited amount of art resources, improving the freedom of editing the appearance of virtual pets and enriching their appearance features.
图1示出了本申请一个示例性实施例提供的遮罩区域和非遮罩区域的示意图;
图2示出了本申请一个示例性实施例提供的虚拟宠物贴图生成过程的原理图;
图3示出了本申请一个示例性实施例提供的实施环境的示意图;
图4示出了本申请一个示例性实施例提供的虚拟宠物的外观编辑方法的流程图;
图5示出了本申请一个示例性实施例提供的虚拟宠物的外观编辑方法的界面示意图;
图6示出了本申请一个示例性实施例提供的贴图与目标蒙版贴图叠加示意图;
图7示出了本申请另一个示例性实施例提供的虚拟宠物的外观编辑方法的流程图;
图8示出了本申请一个示例性实施例提供的重点色蒙版贴图的示意图;
图9示出了本申请一个示例性实施例提供的花纹色蒙版贴图的示意图;
图10示出了本申请一个示例性实施例提供的手套色蒙版贴图的示意图;
图11示出了本申请一个示例性实施例提供的不同层级的目标蒙版贴图叠加的示意图;
图12示出了本申请一个示例性实施例提供的调整蒙版颜色的方法流程图;
图13示出了本申请一个示例性实施例提供的调整遮罩范围的方法流程图;
图14示出了本申请一个示例性实施例提供的调整遮罩范围的过程示意图;
图15示出了本申请一个示例性实施例提供的调整渐变范围的方法流程图;
图16示出了本申请一个示例性实施例提供的调整渐变范围的过程示意图;
图17示出了本申请一个示例性实施例提供的调整遮罩位置的过程示意图;
图18示出了本申请另一个示例性实施例提供的虚拟宠物的外观编辑方法的流程图;
图19示出了本申请一个示例性实施例提供的虚拟宠物编辑界面示意图;
图20示出了本申请一个示例性实施例提供的调整虚拟宠物鼻嘴的界面示意图;
图21示出了本申请一个示例性实施例提供的虚拟宠物耳朵模型的示意图;
图22示出了本申请一个示例性实施例提供的调整虚拟宠物眼睛的示意图;
图23示出了本申请一个示例性实施例提供的虚拟宠物装饰性贴花贴图的示意图;
图24示出了本申请一个示例性实施例提供的虚拟宠物的外观编辑装置的结构框图;
图25示出了本申请另一个示例性实施例提供的虚拟宠物的外观编辑装置的结构框图;
图26示出了本申请一个示例性实施例提供的终端的结构框图。
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
首先,对本申请实施例中涉及的名词进行介绍:
虚拟宠物:是以卡通形象和/或动物形式的宠物形象进行呈现的数字宠物。该虚拟宠物是二维数字宠物或三维数字宠物,比如,虚拟宠物是以猫的形象呈现的三维虚拟宠物。可选地,存在一部分虚拟宠物的宠物形象是随机生成的,可选地,存在一部分虚拟宠物的宠物形象是根据父母亲虚拟宠物和/或其他祖辈虚拟宠物的宠物形象按照基因遗传规则生成的,本申请实施例对此不作限定。
在一些实施例中,虚拟宠物是终端运行的应用程序所展示的数字宠物。该应用程序包括如下功能中的至少一项:抓取虚拟宠物、生成虚拟宠物、繁育虚拟宠物、交易虚拟宠物、使用虚拟宠物进行战斗、使用虚拟宠物进行增强现实(Augmented Reality,AR)互动、使用虚拟宠物进行社交、使用虚拟宠物进行AR教育。在另一些实施例中,该应用程序是基于区块链系统进行虚拟宠物的获取、繁育和/或交易的应用程序。在另一些实施例中,该应用程序是基于地理位置的社交游戏程序,该社交游戏程序提供有利用虚拟宠物进行收藏、成长和/或战斗中的至少一种功能。
外观特征:外观特征是指体现虚拟宠物的宠物形象的特征。可选地,虚拟宠物的外观特征包括毛发、斑纹、眼睛、鼻子、嘴巴、胡须以及耳朵等不同身体部位,每一个身体部位均可以有很多种不同的外观特征。上述外观特征也可以包括颜色、形状、纹理等可见特征。例如,毛发的外观特征可以是白色、灰色、黑色、黄色、橙色等;斑纹的外观特征可以是点状、块状、片状、斑状、条状等;耳朵的外观特征可以是长耳、短耳、卷耳、折耳、正常耳等不同形状。另外有些外观特征的大小、高低以及位置也能进行调整以呈现不同的外观特征,例如鼻子的大小、高低可以进行调整、嘴巴的高低、嘴瓣的开合也可以进行调整、耳朵的大小也可以进行调整等,本申请实施例对此不作限定。
贴图:是指附着于虚拟宠物模型表面,用于形成虚拟宠物外观特征的图层。不同的贴图表征不同的外观特征,多个贴图按照层级顺序叠加后应用于虚拟宠物模型可生成外观特征不同的虚拟宠物。在本申请实施例中,贴图可以是基础贴图,也可以是蒙版贴图。基础贴图是指层级最低的贴图,蒙版贴图是层级高于基础贴图的贴图,可以用来隐藏或者显示比其层级低的图层中的内容。示例性的,如图1所示,基础贴图10与蒙版贴图11按照层级顺序叠加后应用于虚拟宠物模型生成虚拟宠物猫12。
蒙版贴图:用于对层叠过程中,对下层贴图(可以为基础贴图,也可以是蒙版贴图)的部分区域进行遮罩的贴图,包括遮罩区域和非遮罩区域。其中,蒙版贴图叠加在其他贴图上层后,其他贴图中与该蒙版贴图中遮罩区域对应的区域被遮挡,而与蒙版贴图中非遮罩区域对应的区域则未被遮挡,能够透过蒙版贴图被观察到。
示例性的,如图1所示,蒙版贴图11包括遮罩区域112和非遮罩区域111。基础贴图10与蒙版贴图11叠加时,蒙版贴图11的层级高于基础贴图10,因此基础贴图10中遮罩区域112对应的内容被遮挡,非遮罩区域111对应的内容未被遮挡,可以透过蒙版贴图11被观察到,从而最终呈现出虚拟宠物猫12的外观特征。
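为便于理解上述遮罩机制,下面给出一个示意性的Python片段(仅为示意,并非本申请实施例的实际实现;假设用0/1二值数组近似表示蒙版贴图的遮罩区域与非遮罩区域,函数名overlay_mask为说明而设):

```python
# 示意:mask 中 1 表示遮罩区域,0 表示非遮罩区域。
# 叠加后,遮罩区域显示蒙版颜色,非遮罩区域透出下层贴图的颜色。
def overlay_mask(base, mask, mask_color):
    """base: 二维颜色数组; mask: 同尺寸的 0/1 数组; mask_color: 遮罩区域的颜色"""
    return [
        [mask_color if mask[y][x] == 1 else base[y][x]
         for x in range(len(base[0]))]
        for y in range(len(base))
    ]

base = [["白", "白"], ["白", "白"]]      # 基础贴图:白色毛发底色
mask = [[1, 0], [0, 0]]                 # 蒙版:左上角为遮罩区域
print(overlay_mask(base, mask, "黑"))   # [['黑', '白'], ['白', '白']]
```

遮罩区域对应的像素显示蒙版颜色,非遮罩区域透出下层贴图,与图1所示的叠加效果一致。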
相关技术中,虚拟宠物的不同外观特征对应不同的贴图,一张贴图中包括多种外观特征,例如底色、花纹色以及手套色等。因此开发人员需要预先绘制多张具有不同外观特征的贴图以供选择。由此可知,制作虚拟宠物的美术资源量与虚拟宠物的外观自由度成一比一的定量关系。但是虚拟宠物贴图的制作需要耗费大量时间,另外由于虚拟宠物的外观特征与相应的贴图一一对应,因此虚拟宠物的外观相似度较高。
相关技术中,在涉及虚拟宠物的应用程序(Application,App)中,用户所持有的虚拟宠物往往从应用程序中已有的多个模型中选择其中一个,或者可以对虚拟宠物的个别外观特征进行修改,原因在于虚拟宠物的模型制作方式为整体替换模型或者贴图,即一张贴图中包括了虚拟宠物的多种外观特征。因此每增加一种虚拟宠物的外观特征,需要制作一张贴图。例如在应用程序中需呈现20种外观特征不同的虚拟宠物,需要制作20张贴图与之相对应,与此同时,用户也只能在这20种虚拟宠物中进行选择。也就是说,游戏中虚拟宠物外观定制的自由度和美术资源量即贴图呈1比1的定量关系,用户可选择的虚拟宠物的外观特征被限制。
在本申请实施例中,目标贴图由一层贴图和至少一层蒙版贴图叠加而成,通过贴图控制外观特征、通过不同的蒙版贴图调整贴图控制的外观特征以外的其他外观特征,从而实现虚拟宠物外观的多样化。另外,不同层级的目标蒙版贴图代表虚拟宠物不同的外观特征,用户可以通过对目标蒙版贴图的编辑操作,实现对虚拟宠物外观特征的个性化定制,因此并不需要每增加一种虚拟宠物的外观特征,制作一张贴图。在本申请实施例中,多个目标蒙版贴图能够与一张贴图进行叠加,通过改变目标蒙版贴图改变虚拟宠物的外观特征,一方面减少美术资源量,节约制作贴图的时间和成本,同时丰富了虚拟宠物的外观特征。
示例性的,请参考图2,以虚拟宠物为猫为例。若应用程序中需要呈现白猫201、黑白猫202以及黑白花猫203,相关技术中需要分别制作白猫贴图204、黑白猫贴图205以及黑白花猫贴图206。如果还需要呈现其他外观特征的猫,例如橘猫,仍需要制作一张对应的橘猫贴图。而在本申请实施例中,通过贴图207、第一目标蒙版贴图208以及第二目标蒙版贴图209的组合叠加即可呈现白猫201、黑白猫202以及黑白花猫203。比如,白猫201由贴图207生成,黑白猫202由贴图207以及第一目标蒙版贴图208叠加而成,黑白花猫203由贴图207、第一目标蒙版贴图208以及第二目标蒙版贴图209叠加而成。相比于相关技术,并不需要针对每一种猫的外观特征制作一张对应的贴图,通过贴图207、第一目标蒙版贴图208以及第二目标蒙版贴图209叠加即可实现多种不同的猫的外观特征。
其中,贴图207可以为基础贴图,也即是层级最低的贴图,该贴图207表征的外观特征是任意虚拟宠物猫共有的外观特征,因此该贴图207能够用于制作任意虚拟宠物猫。第一目标蒙版贴图208所表征的外观特征与第二目标蒙版贴图209所表征的外观特征不同。例如,贴图207所表征的外观特征为白色毛发,第一目标蒙版贴图208所表征的外观特征为黑色毛发,第二目标蒙版贴图209所表征的外观特征为花色毛发,白猫201仅具有贴图207所表征的外观特征,不具有第一目标蒙版贴图208和第二目标蒙版贴图209所表征的外观特征,因此白猫201仅由贴图207生成。黑白猫202具有贴图207和第一目标蒙版贴图208所表征的外观特征,因此黑白猫202由贴图207以及第一目标蒙版贴图208叠加而成。黑白花猫203具有贴图207、第一目标蒙版贴图208以及第二目标蒙版贴图209所表征的外观特征,因此黑白花猫203由贴图207、第一目标蒙版贴图208以及第二目标蒙版贴图209叠加而成。
另外,采用本申请实施例中提供的虚拟宠物的外观编辑方法生成的虚拟宠物可应用于不同的应用程序中,该应用程序支持对虚拟宠物的编辑操作。可选地,应用程序可以是游戏类应用程序、社交类应用程序等,本申请实施例对此不作限定。下面以游戏类应用程序为例,对本申请实施例的实施环境予以说明。请参考图3,其示出了本申请一个实施例提供的实施环境的示意图。该实施环境可以包括:终端310、服务器320。
终端310安装和运行有支持虚拟宠物编辑的应用程序311。当终端运行应用程序311时,终端310的屏幕上显示应用程序311的用户界面。应用程序311可以是游戏类应用程序、社交类应用程序等任意一种。在本申请实施例中,以该应用程序311是宠物养成类游戏来举例说明。终端310是用户312使用的终端,用户312使用终端310定制生成外观特征不同的虚拟宠物,并控制该虚拟宠物进行繁育、完成任务、战斗等。
可选地,终端310泛指多个终端中的一个。可选地,终端310可以是智能手机、平板电脑、电子书阅读器、动态影像专家压缩标准音频层面3(Moving Picture Experts Group Audio Layer III,MP3)播放器、动态影像专家压缩标准音频层面4(Moving Picture Experts Group Audio Layer IV,MP4)播放器、膝上型便携计算机和台式计算机中的至少一种。
终端310通过无线网络或有线网络与服务器320相连。
服务器320包括一台服务器、多台服务器组成的服务器集群、云计算平台和虚拟化中心中的至少一种。服务器320用于为支持虚拟宠物编辑的应用程序提供后台服务。可选地,服务器320承担主要计算工作,终端310承担次要计算工作;或者,服务器320承担次要计算工作,终端310承担主要计算工作;或者,服务器320和终端310之间采用分布式计算架构进行协同计算。
在一个示意性的例子中,服务器320包括存储器321、处理器322、用户账号数据库323、素材服务模块324、面向用户的输入/输出接口(Input/Output Interface,I/O接口)325。其中,处理器322用于加载服务器320中存储的指令,处理用户账号数据库323和素材服务模块324中的数据;用户账号数据库323用于存储终端310的用户账号的数据,比如用户账号的头像、用户账号的昵称、用户账号的等级,用户账号所在的服务区;素材服务模块324用于提供不同种类的虚拟宠物的贴图以及模型,以支持对虚拟宠物的编辑;面向用户的I/O接口325用于通过无线网络或有线网络和终端310建立通信交换数据。另外需要说明的是,下述各个实施例中,虚拟宠物的外观编辑可以由终端独立完成、由服务器独立完成或者由终端和服务器配合完成,本申请实施例对此不作限定。为了方便表述,下述实施例均以终端完成虚拟宠物的外观编辑为例进行说明。
请参考图4,其示出了本申请一个示例性实施例提供的虚拟宠物的外观编辑方法的流程图。本申请实施例以该方法用于图3所示实施环境中的终端310为例进行说明,该方法包括如下步骤:
步骤410,终端获取虚拟宠物的贴图,贴图表征虚拟宠物的外观特征。
在一种可能的实施方式中,当用户触发安装在终端上的应用程序,终端显示虚拟宠物的外观编辑界面,并获取虚拟宠物的贴图。可选地,该贴图为基础贴图,基础贴图是指层级最低的贴图,其他贴图(例如蒙版贴图)可以叠加在基础贴图之上,基础贴图不能叠加在其他贴图之上。基础贴图表征的外观特征为基础外观特征,基础外观特征是指同一类型的虚拟宠物共有的外观特征,例如基础外观特征为虚拟宠物的毛发底色等。
在一种可能的实施方式中,虚拟宠物的外观特征包括体型特征、五官特征以及毛发特征。可选地,体型特征是虚拟宠物的身体胖瘦、头部大小、脸颊宽窄、爪子大小、尾巴粗细及长短等,本申请实施例对此不作限定。可选地,五官特征是虚拟宠物的耳朵、眼睛、鼻嘴的大小、形状等,本申请实施例对此不作限定。可选地,毛发特征是毛发底色以及毛发底色的色块等,本申请实施例对此不作限定。
可选地,虚拟宠物的贴图用来表示虚拟宠物的毛发底色,毛发底色是指虚拟宠物的基础毛色,基础毛色是指同一类型的虚拟宠物的毛发中普遍性最高的毛色,是虚拟宠物的全身毛发的主要颜色。可选地,该基础毛色是同一类型的虚拟宠物共有的毛色。
可选地,不同类型的虚拟宠物的毛发底色可能相同也可能不同。
在一种可能的实施方式中,为了使得虚拟宠物的外观特征更接近自然界的动物,虚拟宠物的毛发底色的类型可以是根据自然界中相对应的动物的毛发性状总结得出的。例如,通过对纯色猫、蓝猫、橘猫、豹猫、狸花、起司、加菲等15种不同品种的猫的毛发性状进行分析可知,猫的毛色主要是白色、灰色、黑色、黄色。具体请参考表一,其中列出了纯色猫的毛发性状。因此虚拟宠物猫的贴图可以是白色、灰色、黄色、黑色四种类型。
在一种可能的实施方式中,由于贴图的颜色种类是根据自然界中相应的动物的毛发性状总结得出的,因此贴图的颜色种类是固定的,也就是说贴图的色相是固定不变的,用户可以通过调整贴图的亮度和饱和度实现不同种类的贴图。例如虚拟宠物猫的贴图为白色、灰色、黄色、黑色四种类型,用户通过调整贴图的亮度和饱和度,从而获得乳白色、巧克力色、浅黄色、黑褐色等不同类型的贴图,进而丰富虚拟宠物的外观特征。
步骤420,终端响应于对目标外观特征的编辑操作,生成编辑后目标外观特征对应的目标蒙版贴图,目标外观特征为贴图表征的外观特征以外的外观特征。
在本申请实施例中,目标外观特征为虚拟宠物的附加特征,是贴图表征的外观特征以外的其他外观特征。可选地,贴图表征的外观特征是虚拟宠物的基础外观特征,目标外观特征是基础外观特征以外的外观特征。例如,贴图表征的外观特征为虚拟宠物的毛发底色,则目标外观特征是虚拟宠物异于毛发底色的部分。
可选地,目标外观特征是虚拟宠物毛发底色的重点色块,或者是花纹色块、手套色色块等,本申请实施例对此不作限定。
可选地,目标外观特征分布在虚拟宠物的脸、背、胸、腿、尾巴、四肢等区域,本申请实施例对此不作限定。
可选地,不同的虚拟宠物的目标外观特征可能相同也可能不同。
在一种可能的实施方式中,为了使得虚拟宠物的外观特征更接近自然界的动物,因此虚拟宠物的目标外观特征可以是根据自然界中相对应的动物的毛发性状总结得出的。例如,通过对纯色猫、蓝猫、橘猫、豹猫、狸花、起司、加菲等15种不同品种的猫的毛发形状进行分析得出猫的目标外观特征,具体请参考表一,其中列出了纯色猫的毛发形状。由表一可知,猫的目标外观特征可以是重点色以及花纹色。重点色主要分布在猫的脸、耳朵、身体、胸以及尾巴,重点色的形状可以是圆点或者片状。花纹色主要分布在猫的身体或者脸,花纹色的形状可以是鱼骨纹、古典斑、点斑、细纹斑等。由此可知,虚拟宠物猫的目标外观特征可以是毛发底色的重点色块,例如原点、片状等,重点色块可以分布在虚拟宠物猫的脸、身体、耳朵、尾巴等部位,目标外观特征也可以是毛发底色的花纹色块,例如鱼骨纹、点斑等,花纹色块可以分布在虚拟宠物猫的脸部、背部、胸部等部位,目标外观特征也可以是毛发底色的手套色块,例如高手套、低手套,手套色块主要分布在虚拟宠物猫的四肢。
表一
在一种可能的实施方式中,用户在虚拟宠物的外观编辑界面对目标外观特征进行编辑操作,终端生成编辑后的目标外观特征对应的目标蒙版贴图。
对目标外观特征的编辑操作是指用户在虚拟宠物的外观编辑界面通过编辑控件进行操作,调整目标外观特征的种类、颜色、形状、大小等。
可选地,编辑控件是按钮控件、滑块控件或调色盘控件等,本申请实施例对此不作限定。
示例性的,如图5所示,在虚拟宠物编辑界面506中,通过按钮控件501可选择目标外观特征的种类,通过调色盘控件502可选择目标外观特征的颜色,通过颜色滑块控件503可调整目标外观特征颜色的亮度以及饱和度,通过色块滑块控件504可调整目标外观特征的大小,通过边缘渐变滑块控件505可调整目标外观特征的渐变效果。
在本申请实施例中,目标蒙版贴图表征虚拟宠物的目标外观特征。
可选地,目标蒙版贴图是遮罩(MASK)贴图、蒙版贴图等,本申请实施例对此不作限定。
可选地,目标蒙版贴图的数量是一张或者多张,本申请实施例对此不作限定。
步骤430,终端对贴图和至少一层目标蒙版贴图进行叠加,得到目标贴图,其中,不同层级的目标蒙版贴图对应不同的目标外观特征。
在本申请实施例中,将一层目标贴图分成多层,即目标贴图由贴图和至少一层目标蒙版贴图叠加而成。由于不同层级的目标蒙版贴图对应不同类型的目标外观特征,用户可以根据自己的喜好选择不同的目标蒙版贴图与贴图进行叠加,形成不同的目标贴图,进而形成外观特征不同的虚拟宠物。
可选地,在目标蒙版贴图有多层的情况下,则多层目标蒙版贴图之间存在层级,也就是说不同的目标蒙版贴图叠加时存在先后的顺序。可选地,层级低的目标蒙版贴图先叠加,层级高的目标蒙版贴图后叠加。或者,层级高的目标蒙版贴图先叠加,层级低的目标蒙版贴图后叠加。
可选地,多个目标蒙版贴图能够与一张贴图进行叠加,不同层级的目标蒙版贴图对应不同的目标外观特征,同一层级的目标蒙版贴图对应相同的目标外观特征。对于同一目标外观特征可能存在不同的种类,因而对应不同的目标蒙版贴图。例如目标外观特征为花纹色,花纹色的形状可以是鱼骨纹、古典斑、点斑或细纹斑等,则不同形状的花纹色对应不同的目标蒙版贴图。
可选地,用户可以对不同层级的目标蒙版贴图分别进行相应的编辑操作,不同层级的目标蒙版贴图之间互不影响。
示例性的,如图6所示,以虚拟宠物猫为例,通过贴图601、第一目标蒙版贴图602、第二目标蒙版贴图603以及第三目标蒙版贴图604叠加,得到目标贴图(未示出)。其中,贴图601表征的外观特征为毛发底色,第一目标蒙版贴图602表征的外观特征为重点色块,第二目标蒙版贴图603表征的外观特征为花纹色块,第三目标蒙版贴图604表征的外观特征为手套色块。
步骤440,终端将目标贴图应用于虚拟宠物的三维模型。
将不同的目标贴图应用于虚拟宠物的三维模型,即可得到外观特征不同的虚拟宠物。
示例性的,如图6所示,终端将贴图601、第一目标蒙版贴图602、第二目标蒙版贴图603、以及第三目标蒙版贴图604叠加形成的目标贴图应用于虚拟宠物猫605的三维模型。
综上所述,在本申请实施例中,贴图表征虚拟宠物的外观特征,目标蒙版贴图表征目标外观特征,目标外观特征为贴图表征的外观特征以外的外观特征,通过贴图和不同层级的目标蒙版贴图的叠加,生成不同的目标贴图用于虚拟宠物的模型,进而生成外观特征不同的虚拟宠物。由于目标贴图由贴图和不同层级的目标蒙版贴图叠加而成,因此调整目标蒙版贴图即可生成不同的目标贴图,一方面避免每增加一种虚拟宠物,制作一张与之相匹配的贴图,另一方面通过有限的美术资源量可生成多种外观特征不同的虚拟宠物,提高虚拟宠物外观编辑的自由度,丰富虚拟宠物的外观特征。
在本申请实施例中,通过编辑操作,调整蒙版贴图的蒙版颜色、遮罩范围、渐变范围以及遮罩位置,从而实现不同种类的目标蒙版贴图,进而丰富了虚拟宠物的外观特征。另外,在目标蒙版贴图与贴图的尺寸大小不一致的情况下,在得到目标贴图之前,需要对目标蒙版贴图的大小进行调整,以适配贴图。请参考图7,其示出了本申请另一个示例性实施例提供的虚拟宠物的外观编辑方法的流程图。
步骤710,终端获取虚拟宠物的贴图,贴图表征虚拟宠物的外观特征。
步骤710同步骤410,本申请实施例对此不进行赘述。
步骤720,终端响应于对目标外观特征的编辑操作,获取目标外观特征对应的蒙版贴图。
当用户在虚拟宠物的外观编辑界面对目标外观特征进行编辑操作时,终端获取目标外观特征对应的蒙版贴图。蒙版贴图是指当前目标外观特征对应的蒙版贴图。
在一种可能的实施方式中,当终端首次显示虚拟宠物的外观编辑界面,用户首次在外观编辑界面编辑目标外观特征,终端获取还未编辑的目标外观特征对应的蒙版贴图。
在另一种可能的实施方式中,当用户已完成对目标外观特征的编辑操作,中途退出应用程序,终端会自动保存退出应用程序前目标外观特征的编辑状态以及对应的蒙版贴图。当终端再次进入应用程序,用户基于上次退出游戏时的目标外观特征继续进行编辑操作,终端获取上次退出游戏前编辑的目标外观特征对应的蒙版贴图。
步骤730,终端基于编辑操作调整蒙版贴图,得到目标蒙版贴图,其中,蒙版贴图的调整方式包括调整蒙版颜色、调整遮罩范围、调整渐变范围或调整遮罩位置中的至少一种。
在本申请实施例中,蒙版贴图包括遮罩区域和非遮罩区域,在蒙版贴图的下层贴图中,该遮罩区域下方的贴图内容被遮挡,非遮罩区域下方的贴图内容未被遮挡,即可以透过目标蒙版贴图被观察到。可选地,遮罩区域为不透明的区域,非遮罩区域为透明的区域。
在一种可能的实施方式中,终端通过调整蒙版贴图的遮罩区域的颜色得到目标蒙版贴图。用户在虚拟宠物的外观编辑界面通过编辑控件选择颜色,调整目标外观特征的颜色,终端基于该编辑操作,调整蒙版贴图的颜色,从而得到目标蒙版贴图。
可选地,终端确定用户通过调色盘控件选择的色相,也就是颜色的类别,例如红色、白色、橘色等,进而调整蒙版贴图的颜色,得到目标蒙版贴图。
可选地,终端确定用户通过滑块控件调整的颜色的亮度以及饱和度,也就是颜色的深浅,例如,正红色、深红色、浅红色等,进而调整蒙版贴图的颜色,得到目标蒙版贴图。
示例性的,如图5所示,以目标外观特征为手套色色块为例,用户通过调色盘控件502选择手套色色块的色相,通过颜色滑块控件503选择手套色色块的饱和度以及亮度,终端基于该编辑操作,调整蒙版贴图,进而得到目标蒙版贴图。
在一种可能的实施方式中,终端通过调整蒙版贴图的遮罩区域的范围得到目标蒙版贴图。用户在虚拟宠物的外观编辑界面通过编辑控件调整目标外观特征的大小,终端基于该编辑操作,调整蒙版贴图的遮罩范围,从而得到目标蒙版贴图。
示例性的,如图5所示,以目标外观特征为手套色色块为例,用户通过色块滑块控件504调整手套色色块的大小,终端基于该编辑操作,调整蒙版贴图,进而得到目标蒙版贴图。
在一种可能的实施方式中,终端通过调整蒙版贴图的遮罩区域的渐变范围得到目标蒙版贴图。用户在虚拟宠物的外观编辑界面通过编辑控件调整目标外观特征的边缘渐变,终端基于该编辑操作,调整蒙版贴图的渐变范围,从而得到目标蒙版贴图。
示例性的,如图5所示,以目标外观特征为手套色色块为例,用户通过边缘渐变滑块控件505调整手套色色块的边缘渐变效果,终端基于该编辑操作,调整蒙版贴图,进而得到目标蒙版贴图。
在一种可能的实施方式中,终端通过调整蒙版贴图的遮罩区域的位置得到目标蒙版贴图。用户在虚拟宠物的外观编辑界面通过编辑控件调整目标外观特征的位置,终端基于该编辑操作,调整蒙版贴图的遮罩位置,从而得到目标蒙版贴图。
步骤740,在目标蒙版贴图与贴图的尺寸不同的情况下,终端基于贴图的尺寸对目标蒙版贴图进行缩放,其中,缩放后目标蒙版贴图的尺寸与贴图的尺寸匹配。
可选地,目标蒙版贴图的尺寸与贴图的尺寸匹配是指目标蒙版贴图的尺寸与贴图的尺寸相同。可选地,目标蒙版贴图的尺寸与贴图的尺寸匹配是指目标蒙版贴图的尺寸与贴图中需要叠加的区域的尺寸相同。
在一种可能的实施方式中,目标蒙版贴图的尺寸与贴图的尺寸相同,贴图与目标蒙版贴图叠加之前,不需要调整目标蒙版贴图的尺寸。
在另一种可能的实施方式中,目标蒙版贴图的尺寸与贴图的尺寸不同,贴图与目标蒙版贴图叠加之前,终端自动调整目标蒙版贴图的尺寸大小以适配贴图。
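上述自动适配可以用最近邻缩放来示意。以下Python片段为一个假设性的简化实现(函数名scale_mask为说明而设,实际实现可能采用更高质量的插值算法):

```python
def scale_mask(mask, target_w, target_h):
    """最近邻缩放:将蒙版贴图缩放到与贴图匹配的尺寸(示意实现)"""
    src_h, src_w = len(mask), len(mask[0])
    return [
        [mask[y * src_h // target_h][x * src_w // target_w]  # 映射回源像素
         for x in range(target_w)]
        for y in range(target_h)
    ]

small = [[1, 0], [0, 1]]           # 2×2 蒙版
print(scale_mask(small, 4, 4))     # 放大为 4×4,与贴图尺寸匹配后再叠加
```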
步骤750,终端基于各层目标蒙版贴图的贴图层级,在贴图上叠加至少一层目标蒙版贴图,得到目标贴图。
在一种可能的实施方式中,各层目标蒙版贴图之间具有贴图层级关系,也就是说各层目标蒙版贴图之间需按照一定的顺序叠加。贴图的层级顺序固定不变,其他各层目标蒙版贴图在贴图上依次按照顺序叠加,从而形成目标贴图。
在一种可能的实施方式中,目标蒙版贴图是重点色蒙版贴图、花纹色蒙版贴图或者手套色蒙版贴图等,本申请实施例对此不作限定。
其中重点色蒙版贴图是指不同于虚拟宠物毛发底色的重点色块。
可选地,该色块种类是点状、块状或片状等,本申请实施例对此不作限定。
示例性的,如图8所示,其中示出了不同种类的重点色蒙版贴图801与贴图叠加形成目标贴图(未示出),并将目标贴图应用于虚拟宠物猫802呈现的效果。
其中花纹色蒙版贴图是指不同于虚拟宠物毛发底色的花纹色块。
可选地,该花纹色块种类是斑状、纹状或条状等,本申请实施例对此不作限定。
示例性的,如图9所示,其中示出了不同种类的花纹色蒙版贴图901与贴图叠加形成目标贴图(未示出),并将目标贴图应用于虚拟宠物猫902呈现的效果。
其中手套色蒙版贴图是指毛发底色上的足部色块或手部色块。
可选地,该色块的种类是块状、点状或片状等,本申请实施例对此不作限定。
示例性的,如图10所示,其中示出了不同种类的手套色蒙版贴图1001与贴图叠加形成目标贴图(未示出),并将目标贴图应用于虚拟宠物猫1002呈现的效果。
在一种可能的实施方式中,由于各个目标蒙版贴图存在层级关系,手套色蒙版贴图的贴图层级高于花纹色蒙版贴图的贴图层级,花纹色蒙版贴图的贴图层级高于重点色蒙版贴图的贴图层级。
示例性的,如图11所示,其中贴图1101、重点色蒙版贴图1102、花纹色蒙版贴图1103以及手套色蒙版贴图1104叠加形成目标贴图,将目标贴图应用于虚拟宠物猫1105。前述贴图的层级关系为,第一层为贴图1101、第二层为重点色蒙版贴图1102、第三层为花纹色蒙版贴图1103、第四层为手套色蒙版贴图1104。此处重点在于举例描述重点色蒙版贴图1102、花纹色蒙版贴图1103以及手套色蒙版贴图1104的层级关系,因此图中并未示出图层的内容。
另外,在本申请实施例中,通过不同种类的贴图与不同层级、不同种类的目标蒙版贴图的排列组合,能够以有限的美术资源量生成多种外观特征不同的虚拟宠物,提高虚拟宠物外观编辑的自由度。
在一种可能的实施方式中,每层目标蒙版贴图的种类相同,例如贴图有m种,目标蒙版贴图为i层,不同层级的目标蒙版贴图数量为M1至Mi,生成目标贴图的数量N为:
N=m+(m×M1+m×M2+……+m×Mi)+(m×M1×M2+m×M1×M3+……+m×M(i-1)×Mi)+……+m×M1×M2×……×Mi,
即N=m×(1+M1)×(1+M2)×……×(1+Mi)。
也就是说,可以呈现N种外观特征不同的虚拟宠物。其中,m、i、M1至Mi以及N均为正整数。
示例性的,贴图有4种类型,目标蒙版贴图分别为重点色蒙版贴图,有6种类型;花纹色蒙版贴图,有6种类型;手套色蒙版贴图,有2种类型。将上述4种贴图以及14种目标蒙版贴图排列组合叠加,能够得到的目标贴图的个数为:4+4×6+4×6+4×2+4×6×2+4×6×2+4×6×6+4×6×6×2=588,也就是说可以呈现588种外观特征不同的虚拟宠物。相关技术中,若需要呈现588种外观特征不同的虚拟宠物,需要制作588种与之相匹配的贴图,而在本申请实施例中,通过4种贴图以及14种目标蒙版贴图即可实现,减少了美术资源量的同时丰富了虚拟宠物的外观特征。
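上述排列组合的计算可以用如下示意性Python片段验证(仅为示意,函数名与变量名为说明而设):

```python
from itertools import combinations
from math import prod

def num_target_textures(m, mask_counts):
    """m 种贴图与各层级目标蒙版贴图(各层数量为 mask_counts)的排列组合总数"""
    total = 0
    for r in range(len(mask_counts) + 1):       # 选用 0 层到全部 i 层蒙版贴图
        for combo in combinations(mask_counts, r):
            total += m * prod(combo)            # 空组合时 prod 为 1,对应只用贴图
    return total

# 对应正文示例:4 种贴图,重点色 6 种、花纹色 6 种、手套色 2 种
print(num_target_textures(4, [6, 6, 2]))        # 588
```

由于每一层蒙版贴图可以选用其M种类型之一或不选用,总数也可写成闭式N=m×(1+M1)×……×(1+Mi),例如4×(1+6)×(1+6)×(1+2)=588,与正文结果一致。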
步骤760,终端将目标贴图应用于虚拟宠物的三维模型。
步骤760同步骤440,本申请实施例对此不进行赘述。
在本申请实施例中,通过调整蒙版贴图的蒙版颜色、遮罩范围、渐变范围以及遮罩位置,实现不同种类的目标蒙版贴图,进而丰富虚拟宠物的外观特征,通过有限的美术资源量生成多种外观特征不同的虚拟宠物,提高虚拟宠物外观编辑的自由度。另外,通过调整目标蒙版贴图的尺寸以适配贴图,以便得到与虚拟宠物三维模型相适配的目标贴图。
在本申请实施例中,蒙版贴图分为遮罩区域和非遮罩区域,遮罩区域的形态与虚拟宠物的目标外观特征的形态相对应。在一种可能的实施方式中,通过对遮罩区域进行染色得到目标蒙版贴图。请参考图12,其中示出了本申请一个示例性实施例提供的调整蒙版颜色的方法流程图。
步骤1201,终端基于编辑操作确定蒙版贴图的目标蒙版颜色。
用户在虚拟宠物的外观编辑界面通过编辑控件选择颜色,终端基于用户选择的颜色,确定目标蒙版颜色。
可选地,编辑操作是用户通过调色盘控件选择目标蒙版颜色的色相,也就是颜色的类别,例如黑色、白色、黄色等,本申请实施例对此不作限定。
可选地,编辑操作是用户通过滑块控件调整目标蒙版颜色的亮度以及饱和度,也就是颜色的深浅。例如正红色、深红色、浅红色等,本申请实施例对此不作限定。
示例性的,请参考图5,以目标外观特征为手套色色块为例,用户通过调色盘控件502选择目标蒙版颜色的色相为黄色,通过调整颜色滑块控件503调整饱和度和亮度,将颜色调整为橘黄色,终端确定目标蒙版颜色为橘黄色。
步骤1202,终端基于目标蒙版颜色对蒙版贴图中的遮罩区域进行染色,得到目标蒙版贴图。
终端基于确定的目标蒙版颜色对蒙版贴图中遮罩区域染色,得到目标蒙版贴图。
示例性的,图5中终端基于用户对目标外观特征的编辑操作,确定目标蒙版颜色为橘黄色,终端对蒙版贴图的遮罩区域染成橘黄色,进而得到目标蒙版贴图,此时在虚拟宠物外观的编辑界面,虚拟宠物507的手套色色块即呈现橘黄色。
通过对蒙版贴图的遮罩区域进行染色,实现目标蒙版贴图颜色的多样化,从而丰富虚拟宠物的外观特征,提高虚拟宠物外观编辑的自由度。
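遮罩区域染色可以理解为按像素色值将目标蒙版颜色与底色混合。以下为一个示意性Python片段(假设颜色以RGB三元组表示、色值取0~1,并非本申请实施例的实际染色算法):

```python
def dye_mask(mask_values, base_color, target_color):
    """按像素色值(0~1)将目标蒙版颜色与底色混合,色值越大染色程度越高(示意)"""
    def blend(v):
        return tuple(round(b * (1 - v) + t * v)
                     for b, t in zip(base_color, target_color))
    return [[blend(v) for v in row] for row in mask_values]

mask = [[1.0, 0.5], [0.0, 0.0]]                        # 遮罩中心色值最大
print(dye_mask(mask, (255, 255, 255), (255, 165, 0)))  # 白底上的橘黄色手套色示意
```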
在另一种可能的实施方式中,通过调整遮罩范围的大小实现目标外观特征形状大小的变化。请参考图13,其中示出了本申请一个示例性实施例提供的调整遮罩范围的方法流程图。
步骤1301,终端基于编辑操作确定遮罩范围阈值。
在本申请实施例中,蒙版贴图的遮罩区域由若干色值不同的像素点组成,像素点的色值从遮罩区域的中心到边缘依次减小。其中像素点的色值用于控制像素点的染色程度。色值与像素点的染色程度呈正相关,也就是说,色值越大,像素点的染色程度越高,用户在虚拟宠物的外观编辑界面看到的目标外观特征的颜色越深,色值越小,像素点的染色程度越低,用户在虚拟宠物的外观编辑界面看到的目标外观特征的颜色越浅。因此,在本申请实施例中,遮罩范围阈值也就是指像素点的色值阈值。
在一种可能的实施方式中,色值的范围为0到1,遮罩区域中心的色值为1,越靠近遮罩区域边缘色值越小,也就是说遮罩区域中心像素点的染色程度最大,边缘像素点的染色程度最小。
在一种可能的实施方式中,用户在虚拟宠物的外观编辑界面通过编辑控件调整目标外观特征的大小,终端基于用户调整的大小,确定遮罩范围阈值。目标外观特征的大小不同,所对应的遮罩范围阈值也不同。
示例性的,如图5所示,以目标外观特征为手套色色块为例,用户通过色块滑块控件504调整手套色色块的大小,用户每滑动一次色块滑块控件504,则终端调整一次遮罩范围阈值,所呈现的手套色色块的大小也不同。例如,当前手套色色块大小为9,终端确定与之相对应的遮罩范围阈值,当用户向右移动色块滑块控件504使得手套色色块大小变为15,终端不断基于用户的编辑操作调整与之相对应的遮罩范围阈值,用户可以看到虚拟宠物猫的手套色色块逐渐变大。
步骤1302,在像素点的色值大于或等于遮罩范围阈值的情况下,终端确定像素点属于目标遮罩范围。
终端基于遮罩范围阈值以及蒙版贴图对应最大遮罩范围内的像素点的色值,确定目标遮罩范围。在一种可能的实施方式中,在像素点的色值大于或者等于遮罩范围阈值的情况下,则大于或者等于遮罩范围阈值的像素点构成的区域为目标遮罩范围。示例性的,结合图14对确定目标遮罩范围的过程予以说明。确定蒙版贴图1401对应最大遮罩范围内的像素点的色值为0.3,蒙版贴图1401与贴图1405叠加生成目标贴图应用于虚拟宠物猫1403。用户通过编辑控件对虚拟宠物猫1403的目标外观特征的大小进行调整,终端基于该编辑操作,确定遮罩范围阈值为0.5,此时在最大遮罩范围内色值大于等于0.5的像素点属于目标遮罩范围,进而确定目标蒙版贴图1402,目标蒙版贴图1402与贴图1405叠加生成目标贴图,将该目标贴图应用于虚拟宠物猫1404。
步骤1303,在像素点的色值小于遮罩范围阈值的情况下,终端确定像素点不属于目标遮罩范围。
在另一种可能的实施方式中,像素点的色值小于遮罩范围阈值,则像素点不属于目标遮罩范围,也就是说小于遮罩范围阈值的像素点不用于显示目标外观特征。示例性的,如图14所示,蒙版贴图1401中小于0.5的像素点不属于目标遮罩范围,在虚拟宠物的外观编辑界面,用户可以看到虚拟宠物猫1404的目标外观特征范围变小。
在本申请实施例中,通过确定遮罩范围阈值与像素点的色值之间的关系,进而确定目标遮罩范围,实现了虚拟宠物的目标外观特征大小的变化,丰富了虚拟宠物的外观特征。
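上述目标遮罩范围的确定可以示意为一次阈值化操作(以下Python片段仅为示意,假设色值以0~1的浮点数表示,函数名为说明而设):

```python
def target_mask_region(mask_values, threshold):
    """色值大于或等于遮罩范围阈值的像素属于目标遮罩范围(返回 0/1 掩码)"""
    return [[1 if v >= threshold else 0 for v in row] for row in mask_values]

# 最大遮罩范围内,色值从中心(1.0)向边缘递减
mask = [[0.2, 0.5, 0.2],
        [0.5, 1.0, 0.5],
        [0.2, 0.5, 0.2]]
print(target_mask_region(mask, 0.5))   # 阈值升高时,目标遮罩范围随之缩小
```

阈值越大,属于目标遮罩范围的像素越少,目标外观特征的范围越小,与图14所示的效果一致。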
在另一种可能的实施方式中,通过调整遮罩渐变范围实现目标外观特征边缘渐变的效果,请参考图15,其中示出了本申请一个示例性实施例提供的调整渐变范围的方法流程图。
步骤1501,终端基于编辑操作确定渐变范围阈值。
在一种可能的实施方式中,用户在虚拟宠物的外观编辑界面通过编辑控件调整目标外观特征的边缘渐变,终端基于该编辑操作,确定渐变范围阈值。目标外观特征的边缘渐变的效果不同,所对应的渐变范围阈值也不同。
示例性的,如图5所示,以目标外观特征为手套色色块为例,用户通过边缘渐变滑块控件505调整手套色色块的边缘渐变效果,用户每滑动一次边缘渐变滑块控件505,则终端调整一次渐变范围阈值,所呈现的手套色色块的边缘渐变效果也不同。例如,当前手套色色块边缘渐变为-15,终端确定与之相对应的渐变范围阈值,当用户向右移动边缘渐变滑块控件505使得手套色色块边缘渐变效果增加,终端不断基于用户的编辑操作调整与之相对应的渐变范围阈值,用户可以看到虚拟宠物猫507的手套色色块边缘渐变效果增大。
步骤1502,在像素点的色值小于渐变范围阈值的情况下,终端将像素点的色值修改为渐变范围阈值。
终端基于渐变范围阈值以及蒙版贴图对应遮罩范围内像素点的色值,调整遮罩范围内的渐变范围。在一种可能的实施方式中,在蒙版贴图对应遮罩范围内像素点的色值小于渐变范围阈值的情况下,终端将像素点的色值修改为渐变范围阈值。由于不同层级的目标蒙版贴图存在层级关系,修改遮罩范围内的渐变范围是在调整遮罩范围的大小之上完成的,因此终端基于蒙版贴图对应遮罩范围内的像素点的色值调整渐变范围区域。示例性的,结合图16,对调整遮罩范围的渐变范围过程进行说明。终端确定蒙版贴图1601对应遮罩范围内的像素点的色值为0.5,蒙版贴图1601与贴图1605叠加生成目标贴图应用于虚拟宠物猫1603。用户通过编辑控件对虚拟宠物猫1603的目标外观特征的边缘渐变进行调整,终端基于该编辑操作,确定渐变范围阈值为0.7,此时将0.5-0.7范围内的像素点的色值调整为0.7,即改变其像素点对应的染色程度,进而确定目标蒙版贴图1602,目标蒙版贴图1602与贴图1605叠加生成目标贴图,将该目标贴图应用于虚拟宠物猫1604。
步骤1503,在像素点的色值大于或等于渐变范围阈值的情况下,终端保持像素点的色值不变。
在另一种可能的实施方式中,像素点的色值大于或者等于渐变范围阈值,保持像素点的色值不变,即保持其像素点对应的染色程度不变。示例性的,如图16,保持大于或等于0.7的像素点的色值不变,由此,在虚拟宠物的外观编辑界面,用户可以看到虚拟宠物猫1604的目标外观特征的边缘渐变效果。
在本申请实施例中,通过确定渐变范围阈值与像素点的色值之间的关系,实现了虚拟宠物的目标外观特征的边缘渐变的不同效果,丰富虚拟宠物的外观特征。
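上述渐变范围的调整可以示意为对遮罩范围内色值的下限截断(以下Python片段仅为示意,假设输入均为遮罩范围内像素点的色值):

```python
def adjust_gradient(mask_values, threshold):
    """色值小于渐变范围阈值的像素被提升至阈值,大于或等于阈值的保持不变(示意)"""
    return [[max(v, threshold) for v in row] for row in mask_values]

# 对应正文示例:阈值 0.7 时,0.5~0.7 的色值被调整为 0.7
print(adjust_gradient([[0.5, 0.7], [0.9, 1.0]], 0.7))   # [[0.7, 0.7], [0.9, 1.0]]
```

色值被统一抬高后,遮罩边缘的染色程度差异变小,从而呈现出边缘渐变效果。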
另外,在一种可能的实施方式中,由于虚拟宠物的一些外观特征只存在于固定的部位,例如虚拟宠物猫的手套色色块,因此一些蒙版贴图的遮罩区域和非遮罩区域的位置是固定不变的。
在另一种可能的实施方式中,蒙版贴图的遮罩区域和非遮罩区域的位置不是固定不变的,终端基于用户的编辑操作,将蒙版贴图的遮罩区域和非遮罩区域的位置进行交换,得到目标蒙版贴图。
示例性的,如图17所示,蒙版贴图包括遮罩区域1701和非遮罩区域1702,二者未交换时,该蒙版贴图与贴图1705叠加生成目标贴图,将该目标贴图应用于虚拟宠物猫1703。用户通过编辑控件改变目标外观特征的位置,终端基于该编辑操作,将遮罩区域1701变为非遮罩区域,将非遮罩区域1702变为遮罩区域,进而确定目标蒙版贴图,该目标蒙版贴图与贴图1705叠加生成目标贴图,将该目标贴图应用于虚拟宠物猫1704。和虚拟宠物猫1703相比,虚拟宠物猫1704的目标外观特征的位置发生了变化。
本申请实施例中,通过将蒙版贴图的遮罩区域和非遮罩区域的位置进行交换,丰富虚拟宠物的外观特征。
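遮罩区域与非遮罩区域的交换可以示意为对二值蒙版取反(以下Python片段仅为示意,函数名为说明而设):

```python
def swap_mask_regions(mask):
    """交换蒙版贴图中的遮罩区域(1)与非遮罩区域(0),改变目标外观特征的位置"""
    return [[1 - v for v in row] for row in mask]

print(swap_mask_regions([[1, 0], [0, 1]]))   # [[0, 1], [1, 0]]
```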
请参考图18,其示出了本申请另一个示例性实施例提供的虚拟宠物的外观编辑方法的流程图。
步骤1801,终端显示外观编辑界面。
用户在终端上开启游戏,终端显示虚拟宠物的外观编辑界面。该外观编辑界面中包括虚拟宠物、第一外观编辑控件以及第二外观编辑控件。第一外观编辑控件用于控制虚拟宠物的外观特征,第二外观编辑控件用于控制虚拟宠物的目标外观特征,目标外观特征是第一外观编辑控件控制的外观特征以外的外观特征。例如第一外观编辑控件用于控制虚拟宠物的基础外观特征,第二外观编辑控件用于控制基础外观特征以外的外观特征。
示例性的,如图19所示,虚拟宠物的外观编辑界面1901包括虚拟宠物1902、第一外观编辑控件1903以及第二外观编辑控件1904。
步骤1802,终端响应于对第一外观编辑控件的触发操作,更新虚拟宠物的外观特征。
可选地,第一外观编辑控件用于控制虚拟宠物的基础外观特征,基础外观特征是指虚拟宠物的毛发底色。
在一种可能的实施方式中,第一外观编辑控件是按钮控件、滑块控件或调色盘控件等,本申请实施例对此不作限定。终端基于用户对外观编辑界面中的按钮控件、滑块控件、调色盘控件的操作,调整虚拟宠物的外观特征,例如调整虚拟宠物的毛发底色。
步骤1803,终端响应于对第二外观编辑控件的触发操作,更新虚拟宠物的目标外观特征,目标外观特征为第一外观编辑控件控制的外观特征以外的外观特征。
可选地,第一外观编辑控件用于控制虚拟宠物的基础外观特征,基础外观特征为虚拟宠物的毛发底色,目标外观特征是指虚拟宠物异于毛发底色的部分。终端基于用户对外观编辑界面中的按钮控件、滑块控件、调色盘控件的操作,调整虚拟宠物异于毛发底色的部分。
在一种可能的实施方式中,第二外观编辑控件是按钮控件、滑块控件或调色盘控件等,本申请实施例对此不作限定。
在一种可能的实施方式中,目标外观特征是虚拟宠物异于毛发底色的部分。
可选地,目标外观特征是虚拟宠物毛发的重点色块,也可以是花纹色块、手套色块等,本申请实施例对此不作限定。
步骤1804,终端在外观编辑界面中展示编辑后的虚拟宠物,其中,编辑后的虚拟宠物是由第一外观编辑控件控制的外观特征和第二外观编辑控件控制的目标外观特征结合后得到的虚拟宠物模型。
基于第一外观编辑控件控制的外观特征和第二外观编辑控件控制的目标外观特征结合后得到虚拟宠物模型,终端在外观编辑界面中展示该虚拟宠物模型。示例性的,如图19所示,以虚拟宠物猫为例。终端基于用户对第一外观编辑控件1903的操作,确定虚拟宠物猫的毛发底色为白色,终端在虚拟宠物的外观编辑界面1901中展示虚拟宠物猫1902的毛发底色为白色。终端基于用户对第二外观编辑控件1904的操作,确定虚拟宠物猫的手套色色块的颜色、色块大小、边缘渐变,终端在虚拟宠物的外观编辑界面1901中展示虚拟宠物猫1902的手套色色块颜色为黑色、色块大小为9、边缘渐变为-15。基于上述外观特征,终端在虚拟宠物外观编辑界面中呈现虚拟宠物1902。
综上所述,在本申请实施例中,由第一外观编辑控件控制的外观特征和第二外观编辑控件控制的目标外观特征得到虚拟宠物模型,通过调整第一外观编辑控件控制的外观特征以及第二外观编辑控件控制的目标外观特征进而生成外观特征不同的虚拟宠物,提高虚拟宠物外观编辑的自由度,丰富虚拟宠物的外观特征。
另外,在本申请实施例中,终端基于用户的编辑操作,替换虚拟宠物的五官模型、贴图等,丰富虚拟宠物的五官特征,提高虚拟宠物外观编辑的自由度。
在一种可能的实施方式中,终端基于用户的编辑操作,调整虚拟宠物的面部鼻子、嘴巴、唇瓣等,确定出不同种类的虚拟宠物。另外通过更换鼻头贴图的款式,体现同一虚拟宠物不同品种的差异。
示例性的,如图20所示,以虚拟宠物为猫为例。用户通过虚拟宠物的外观编辑界面2001中的按钮控件2002选择一个鼻嘴贴图,终端基于该编辑操作,变更虚拟宠物猫2003的鼻嘴类型,从而实现虚拟宠物猫2003的不同的外观特征。另外用户通过对滑块控件2004的滑动操作调整鼻子大小、鼻子高低、嘴巴高低、嘴瓣上下、嘴瓣开合,终端基于该编辑操作,确定虚拟宠物猫2003的鼻子、嘴巴以及嘴瓣的形状,以实现虚拟宠物猫2003的不同的外观特征。
在一种可能的实施方式中,终端基于用户对虚拟宠物耳朵模型的编辑操作,调整虚拟宠物耳朵模型以及耳朵的大小。
示例性的,如图21,以虚拟宠物猫为例,用户通过按钮控件选择不同的耳朵模型2101,终端基于该编辑操作对虚拟宠物的耳朵模型进行替换,进而呈现不同外观特征的虚拟宠物猫2102。
在另一种可能的实施方式中,终端基于用户的编辑操作,调整虚拟宠物眼睛的大小、瞳孔的大小和颜色,生成外观特征不同的虚拟宠物。另外,也可以通过调整眼睛眼角的旋转,呈现出无辜下垂眼、凶悍吊眼、普通杏眼等眼睛形态,进而呈现出虚拟宠物不同的表情。
示例性的,如图22,以虚拟宠物猫为例,用户通过调色盘控件2201以及颜色滑块控件2202选择虚拟宠物猫瞳孔的颜色,通过滑块控件2203调整虚拟宠物猫的眼睛大小、眼睛旋转以及瞳孔大小,终端基于该编辑操作,确定虚拟宠物猫的瞳孔颜色、大小以及眼睛的大小、旋转,最终呈现出不同外观特征的虚拟宠物猫2204。
可选地,终端基于用户的编辑操作,调整虚拟宠物装饰性贴花贴图,体现虚拟宠物的不同风格。
可选地,装饰性贴花贴图的样式是斑块、斑点或条形状等,本申请实施例对此不作限定。
可选地,装饰性贴花贴图的位置位于眉眼区域、腮部、嘴巴或者下巴区域等,本申请实施例对此不作限定。
示例性的,如图23,以虚拟宠物猫为例,用户通过按钮控件,选择不同的装饰性贴花贴图2301,终端基于该编辑操作,调整虚拟宠物的装饰性贴花贴图,进而生成风格不同的虚拟宠物猫。
请参考图24,其是本申请一个示例性实施例提供的虚拟宠物的外观编辑装置的结构框图,该装置包括:
获取模块2401,用于获取虚拟宠物的贴图,贴图表征虚拟宠物的外观特征;
编辑模块2402,用于响应于对目标外观特征的编辑操作,生成编辑后目标外观特征对应的目标蒙版贴图,目标外观特征为贴图表征的外观特征以外的外观特征;
叠加模块2403,用于对贴图和至少一层目标蒙版贴图进行叠加,得到目标贴图,其中,不同层级的目标蒙版贴图对应不同的目标外观特征;
应用模块2404,用于将目标贴图应用于虚拟宠物的三维模型。
可选地,编辑模块2402,包括:
获取单元,用于响应于对目标外观特征的编辑操作,获取目标外观特征对应的蒙版贴图;
调整单元,用于基于编辑操作调整蒙版贴图,得到目标蒙版贴图,其中,蒙版贴图的调整方式包括调整蒙版颜色、调整遮罩范围、调整渐变范围或调整遮罩位置中的至少一种。
可选地,蒙版贴图的调整方式为调整蒙版颜色;调整单元,用于:
基于编辑操作确定蒙版贴图的目标蒙版颜色;
基于目标蒙版颜色对蒙版贴图中的遮罩区域进行染色,得到目标蒙版贴图。
可选地,蒙版贴图的调整方式为调整遮罩范围;调整单元,包括:
第一确定子单元,用于基于编辑操作确定遮罩范围阈值;
第二确定子单元,用于基于遮罩范围阈值以及像素点的色值,确定目标遮罩范围,得到目标蒙版贴图,目标蒙版贴图的遮罩范围为目标遮罩范围,像素点是指蒙版贴图对应的最大遮罩范围内的像素点,像素点的色值用于控制像素点的染色程度。
可选地,第二确定子单元,用于:
在像素点的色值大于或等于遮罩范围阈值的情况下,确定像素点属于目标遮罩范围;
在像素点的色值小于遮罩范围阈值的情况下,确定像素点不属于目标遮罩范围。
可选地,蒙版贴图的调整方式为调整渐变范围;调整单元,包括:
第三确定子单元,用于基于编辑操作确定渐变范围阈值;
调整子单元,用于基于渐变范围阈值以及像素点的色值,调整遮罩范围内的渐变范围,得到目标蒙版贴图,像素点是指蒙版贴图对应的遮罩范围内的像素点,像素点的色值用于控制像素点的染色程度。
可选地,调整子单元,用于:
在像素点的色值小于渐变范围阈值的情况下,将像素点的色值修改为渐变范围阈值;
在像素点的色值大于或等于渐变范围阈值的情况下,保持像素点的色值不变。
可选地,蒙版贴图的调整方式为调整遮罩位置;调整单元,用于:
基于编辑操作交换蒙版贴图中的遮罩区域以及非遮罩区域,得到目标蒙版贴图。
可选地,叠加模块2403,用于:
基于各层目标蒙版贴图的贴图层级,在贴图上叠加至少一层目标蒙版贴图,得到目标贴图。
可选地,贴图表征虚拟宠物的毛发底色;
至少一层目标蒙版贴图包括重点色蒙版贴图、花纹色蒙版贴图或手套色蒙版贴图中的至少一种;
重点色蒙版贴图表征不同于毛发底色的重点色块;
花纹色蒙版贴图表征不同于毛发底色的花纹色块;
手套色蒙版贴图表征不同于毛发底色的足部色块或手部色块;
手套色蒙版贴图的贴图层级高于花纹色蒙版贴图的贴图层级,花纹色蒙版贴图的贴图层级高于重点色蒙版贴图的贴图层级。
可选地,装置还包括缩放模块,用于:
在目标蒙版贴图与贴图的尺寸不同的情况下,基于贴图的尺寸对目标蒙版贴图进行缩放,其中,缩放后目标蒙版贴图的尺寸与贴图的尺寸匹配。
综上,在本申请实施例中,贴图表征虚拟宠物的外观特征,目标蒙版贴图表征目标外观特征,目标外观特征为贴图表征的外观特征以外的外观特征,通过贴图和不同层级的目标蒙版贴图的叠加,生成不同的目标贴图用于虚拟宠物的模型,进而生成外观特征不同的虚拟宠物。由于目标贴图由贴图和不同层级的目标蒙版贴图叠加而成,因此调整目标蒙版贴图即可生成不同的目标贴图,一方面避免每增加一种虚拟宠物,制作一张与之相匹配的贴图,另一方面通过有限的美术资源量可生成多种外观特征不同的虚拟宠物,提高虚拟宠物外观编辑的自由度,丰富虚拟宠物的外观特征。
图25是本申请另一个示例性实施例提供的虚拟宠物的外观编辑装置的结构框图,该装置包括:
显示模块2501,用于显示外观编辑界面,外观编辑界面包括第一外观编辑控件和第二外观编辑控件;
第一更新模块2502,用于响应于对第一外观编辑控件的触发操作,更新虚拟宠物的外观特征;
第二更新模块2503,用于响应于对第二外观编辑控件的触发操作,更新虚拟宠物的目标外观特征,目标外观特征为第一外观编辑控件控制的外观特征以外的外观特征;
展示模块2504,用于在外观编辑界面中展示编辑后的虚拟宠物,其中,编辑后的虚拟宠物是由第一外观编辑控件控制的外观特征和目标外观特征结合后得到的虚拟宠物模型。
综上,在本申请实施例中,由第一外观编辑控件控制的外观特征和第二外观编辑控件控制的目标外观特征得到虚拟宠物模型,通过调整第一外观编辑控件控制的外观特征以及第二外观编辑控件控制的目标外观特征进而生成外观特征不同的虚拟宠物,提高虚拟宠物外观编辑的自由度,丰富虚拟宠物的外观特征。
请参考图26,其示出了本申请一个示例性实施例提供的终端2600的结构框图。该终端2600可以是便携式移动终端,比如:智能手机、平板电脑、动态影像专家压缩标准音频层面3(Moving Picture Experts Group Audio Layer III,MP3)播放器、动态影像专家压缩标准音频层面4(Moving Picture Experts Group Audio Layer IV,MP4)播放器。终端2600还可能被称为用户设备、便携式终端等其他名称。
通常,终端2600包括有:处理器2601和存储器2602。
处理器2601可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器2601可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器2601也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称中央处理器(Central Processing Unit,CPU);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器2601中可以集成有图像处理器(Graphics Processing Unit,GPU),GPU用于负责显示屏所需要显示的内容的渲染和绘制。在一些实施例中,处理器2601还可以包括人工智能(Artificial Intelligence,AI)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器2602可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是有形的和非暂态的。存储器2602还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器2602中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器2601所执行以实现本申请实施例提供的方法。
在一些实施例中,终端2600还可选包括有:外围设备接口2603和至少一个外围设备。具体地,外围设备包括:射频电路2604或触摸显示屏2605中的至少一种。
外围设备接口2603可被用于将输入/输出(Input/Output,I/O)相关的至少一个外围设备连接到处理器2601和存储器2602。在一些实施例中,处理器2601、存储器2602和外围设备接口2603被集成在同一芯片或电路板上;在一些其他实施例中,处理器2601、存储器2602和外围设备接口2603中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
射频电路2604用于接收和发射射频(Radio Frequency,RF)信号,也称电磁信号。射频电路2604通过电磁信号与通信网络以及其他通信设备进行通信。射频电路2604将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。可选地,射频电路2604包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路2604可以通过至少一种无线通信协议来与其它终端进行通信。
触摸显示屏2605用于显示UI。该UI可以包括图形、文本、图标、视频及其它们的任意组合。触摸显示屏2605还具有采集在触摸显示屏2605的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器2601进行处理。触摸显示屏2605用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,触摸显示屏2605可以为一个,设置终端2600的前面板;在另一些实施例中,触摸显示屏2605可以为至少两个,分别设置在终端2600的不同表面或呈折叠设计。
在一些实施例中,终端2600还包括有一个或多个传感器2606。该一个或多个传感器2606包括但不限于:加速度传感器2607、陀螺仪传感器2608以及压力传感器2609。
加速度传感器2607可以检测以终端2600建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器2607可以用于检测重力加速度在三个坐标轴上的分量。处理器2601可以根据加速度传感器2607采集的重力加速度信号,控制触摸显示屏2605以横向视图或纵向视图进行用户界面的显示。加速度传感器2607还可以用于游戏或者用户的运动数据的采集。
陀螺仪传感器2608可以检测终端2600的机体方向及转动角度,陀螺仪传感器2608可以与加速度传感器2607协同采集用户对终端2600的3D动作。处理器2601根据陀螺仪传感器2608采集的数据,可以实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。
压力传感器2609可以设置在终端2600的侧边框和/或触摸显示屏2605的下层。当压力传感器2609设置在终端2600的侧边框时,可以检测用户对终端2600的握持信号,根据该握持信号进行左右手识别或快捷操作。当压力传感器2609设置在触摸显示屏2605的下层时,可以根据用户对触摸显示屏2605的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。
本领域技术人员可以理解,图26中示出的结构并不构成对终端2600的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
本申请实施例还提供了一种终端,该终端包括处理器和存储器,该存储器中存储有至少一条指令、至少一段程序、代码集或指令集,该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
获取虚拟宠物的贴图,贴图表征虚拟宠物的外观特征;
响应于对目标外观特征的编辑操作,生成编辑后目标外观特征对应的目标蒙版贴图,目标外观特征为贴图表征的外观特征以外的外观特征;
对贴图和至少一层目标蒙版贴图进行叠加,得到目标贴图,其中,不同层级的目标蒙版贴图对应不同的目标外观特征;
将目标贴图应用于虚拟宠物的三维模型。
可选地,该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
响应于编辑操作,获取目标外观特征对应的蒙版贴图;
基于编辑操作调整蒙版贴图,得到目标蒙版贴图,其中,蒙版贴图的调整方式包括调整蒙版颜色、调整遮罩范围、调整渐变范围或调整遮罩位置中的至少一种。
可选地,蒙版贴图的调整方式为调整蒙版颜色;该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
基于编辑操作确定蒙版贴图的目标蒙版颜色;
基于目标蒙版颜色对蒙版贴图中的遮罩区域进行染色,得到目标蒙版贴图。
可选地,蒙版贴图的调整方式为调整遮罩范围;该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
基于编辑操作确定遮罩范围阈值;
基于遮罩范围阈值以及像素点的色值,确定目标遮罩范围,得到目标蒙版贴图,目标蒙版贴图的遮罩范围为目标遮罩范围,像素点是指蒙版贴图对应的最大遮罩范围内的像素点,像素点的色值用于控制像素点的染色程度。
可选地,该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
在像素点的色值大于或等于遮罩范围阈值的情况下,确定像素点属于目标遮罩范围;
在像素点的色值小于遮罩范围阈值的情况下,确定像素点不属于目标遮罩范围。
可选地,蒙版贴图的调整方式为调整渐变范围;该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
基于编辑操作确定渐变范围阈值;
基于渐变范围阈值以及像素点的色值,调整遮罩范围内的渐变范围,得到目标蒙版贴图,像素点是指蒙版贴图对应的遮罩范围内的像素点,像素点的色值用于控制像素点的染色程度。
可选地,该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
在像素点的色值小于渐变范围阈值的情况下,将像素点的色值修改为渐变范围阈值;
在像素点的色值大于或等于渐变范围阈值的情况下,保持像素点的色值不变。
可选地,蒙版贴图的调整方式为调整遮罩位置;该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
基于编辑操作交换蒙版贴图中的遮罩区域以及非遮罩区域,得到目标蒙版贴图。
可选地,该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
基于各层目标蒙版贴图的贴图层级,在贴图上叠加至少一层目标蒙版贴图,得到目标贴图。
可选地,贴图表征虚拟宠物的毛发底色;
至少一层目标蒙版贴图包括重点色蒙版贴图、花纹色蒙版贴图或手套色蒙版贴图中的至少一种;
重点色蒙版贴图表征不同于毛发底色的重点色块;
花纹色蒙版贴图表征不同于毛发底色的花纹色块;
手套色蒙版贴图表征不同于毛发底色的足部色块或手部色块;
手套色蒙版贴图的贴图层级高于花纹色蒙版贴图的贴图层级,花纹色蒙版贴图的贴图层级高于重点色蒙版贴图的贴图层级。
可选地,该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
在目标蒙版贴图与贴图的尺寸不同的情况下,基于贴图的尺寸对目标蒙版贴图进行缩放,其中,缩放后目标蒙版贴图的尺寸与贴图的尺寸匹配。
本申请实施例还提供了一种终端,该终端包括处理器和存储器,该存储器中存储有至少一条指令、至少一段程序、代码集或指令集,该至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如下操作:
显示外观编辑界面,外观编辑界面包括第一外观编辑控件和第二外观编辑控件;
响应于对第一外观编辑控件的触发操作,更新虚拟宠物的外观特征;
响应于对第二外观编辑控件的触发操作,更新虚拟宠物的目标外观特征,目标外观特征为第一外观编辑控件控制的外观特征以外的外观特征;
在外观编辑界面中展示编辑后的虚拟宠物,其中,编辑后的虚拟宠物是由第一外观编辑控件控制的外观特征和目标外观特征结合后得到的虚拟宠物模型。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质存储有至少一条指令,该至少一条指令由处理器加载并执行以实现如上各个实施例所述的虚拟宠物的外观编辑方法。
根据本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。终端的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该终端执行上述方面的各种可选实现方式中提供的虚拟宠物的外观编辑方法。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本申请实施例所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读存储介质中或者作为计算机可读存储介质上的一个或多个指令或代码进行传输。计算机可读存储介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。
Claims (17)
- 一种虚拟宠物的外观编辑方法,所述方法包括:终端获取虚拟宠物的贴图,所述贴图表征所述虚拟宠物的外观特征;所述终端响应于对目标外观特征的编辑操作,生成编辑后所述目标外观特征对应的目标蒙版贴图,所述目标外观特征为所述贴图表征的外观特征以外的外观特征;所述终端对所述贴图和至少一层所述目标蒙版贴图进行叠加,得到目标贴图,其中,不同层级的目标蒙版贴图对应不同的目标外观特征;所述终端将所述目标贴图应用于所述虚拟宠物的三维模型。
- 根据权利要求1所述的方法,其中,所述终端响应于对目标外观特征的编辑操作,生成编辑后所述目标外观特征对应的目标蒙版贴图,包括:所述终端响应于所述编辑操作,获取所述目标外观特征对应的蒙版贴图;所述终端基于所述编辑操作调整所述蒙版贴图,得到所述目标蒙版贴图,其中,所述蒙版贴图的调整方式包括调整蒙版颜色、调整遮罩范围、调整渐变范围或调整遮罩位置中的至少一种。
- 根据权利要求2所述的方法,其中,所述蒙版贴图的调整方式为调整蒙版颜色;所述终端基于所述编辑操作调整所述蒙版贴图,得到所述目标蒙版贴图,包括:所述终端基于所述编辑操作确定所述蒙版贴图的目标蒙版颜色;所述终端基于所述目标蒙版颜色对所述蒙版贴图中的遮罩区域进行染色,得到所述目标蒙版贴图。
- 根据权利要求2所述的方法,其中,所述蒙版贴图的调整方式为调整遮罩范围;所述终端基于所述编辑操作调整所述蒙版贴图,得到所述目标蒙版贴图,包括:所述终端基于所述编辑操作确定遮罩范围阈值;所述终端基于所述遮罩范围阈值以及像素点的色值,确定目标遮罩范围,得到所述目标蒙版贴图,所述目标蒙版贴图的遮罩范围为所述目标遮罩范围,所述像素点是指所述蒙版贴图对应的最大遮罩范围内的像素点,所述像素点的色值用于控制所述像素点的染色程度。
- 根据权利要求4所述的方法,其中,所述终端基于所述遮罩范围阈值以及像素点的色值,确定目标遮罩范围,包括:在所述像素点的色值大于或等于所述遮罩范围阈值的情况下,所述终端确定所述像素点属于所述目标遮罩范围;在所述像素点的色值小于所述遮罩范围阈值的情况下,所述终端确定所述像素点不属于所述目标遮罩范围。
- 根据权利要求2所述的方法,其中,所述蒙版贴图的调整方式为调整渐变范围;所述终端基于所述编辑操作调整所述蒙版贴图,得到所述目标蒙版贴图,包括:所述终端基于所述编辑操作确定渐变范围阈值;所述终端基于所述渐变范围阈值以及像素点的色值,调整所述遮罩范围内的渐变范围,得到所述目标蒙版贴图,所述像素点是指所述蒙版贴图对应的遮罩范围内的像素点,所述像素点的色值用于控制所述像素点的染色程度。
- 根据权利要求6所述的方法,其中,所述终端基于所述渐变范围阈值以及像素点的色值,调整所述遮罩范围内的渐变范围,得到所述目标蒙版贴图,包括:在所述像素点的色值小于所述渐变范围阈值的情况下,所述终端将所述像素点的色值修改为所述渐变范围阈值;在所述像素点的色值大于或等于所述渐变范围阈值的情况下,所述终端保持所述像素点的色值不变。
- 根据权利要求2所述的方法,其中,所述蒙版贴图的调整方式为调整遮罩位置;所述终端基于所述编辑操作调整所述蒙版贴图,得到所述目标蒙版贴图,包括:所述终端基于所述编辑操作交换所述蒙版贴图中的遮罩区域以及非遮罩区域,得到所述目标蒙版贴图。
- 根据权利要求1至8任一所述的方法,其中,所述终端对所述贴图和至少一层所述目标蒙版贴图进行叠加,得到目标贴图,包括:所述终端基于各层所述目标蒙版贴图的贴图层级,在所述贴图上叠加至少一层所述目标蒙版贴图,得到所述目标贴图。
- 根据权利要求9所述的方法,其中,所述贴图表征所述虚拟宠物的毛发底色;至少一层所述目标蒙版贴图包括重点色蒙版贴图、花纹色蒙版贴图或手套色蒙版贴图中的至少一种;所述重点色蒙版贴图表征不同于所述毛发底色的重点色块;所述花纹色蒙版贴图表征不同于所述毛发底色的花纹色块;所述手套色蒙版贴图表征不同于所述毛发底色的足部色块或手部色块;所述手套色蒙版贴图的贴图层级高于所述花纹色蒙版贴图的贴图层级,所述花纹色蒙版贴图的贴图层级高于所述重点色蒙版贴图的贴图层级。
- 根据权利要求1至8任一所述的方法,其中,所述方法还包括:在所述目标蒙版贴图与所述贴图的尺寸不同的情况下,所述终端基于所述贴图的尺寸对所述目标蒙版贴图进行缩放,其中,缩放后所述目标蒙版贴图的尺寸与所述贴图的尺寸匹配。
- 一种虚拟宠物的外观编辑方法,所述方法包括:终端显示外观编辑界面,所述外观编辑界面包括第一外观编辑控件和第二外观编辑控件;所述终端响应于对所述第一外观编辑控件的触发操作,更新虚拟宠物的外观特征;所述终端响应于对所述第二外观编辑控件的触发操作,更新所述虚拟宠物的目标外观特征,所述目标外观特征为所述第一外观编辑控件控制的外观特征以外的外观特征;所述终端在所述外观编辑界面中展示编辑后的所述虚拟宠物,其中,编辑后的所述虚拟宠物是由所述第一外观编辑控件控制的外观特征和所述目标外观特征结合后得到的虚拟宠物模型。
- 一种虚拟宠物的外观编辑装置,所述装置包括:获取模块,用于获取虚拟宠物的贴图,所述贴图表征所述虚拟宠物的外观特征;编辑模块,用于响应于对目标外观特征的编辑操作,生成编辑后所述目标外观特征对应的目标蒙版贴图,所述目标外观特征为所述贴图表征的外观特征以外的外观特征;叠加模块,用于对所述贴图和至少一层所述目标蒙版贴图进行叠加,得到目标贴图,其中,不同层级的目标蒙版贴图对应不同的目标外观特征;应用模块,用于将所述目标贴图应用于所述虚拟宠物的三维模型。
- 一种虚拟宠物的外观编辑装置,所述装置包括:显示模块,用于显示外观编辑界面,所述外观编辑界面包括第一外观编辑控件和第二外观编辑控件;第一更新模块,用于响应于对所述第一外观编辑控件的触发操作,更新虚拟宠物的外观特征;第二更新模块,用于响应于对所述第二外观编辑控件的触发操作,更新所述虚拟宠物的目标外观特征,所述目标外观特征为所述第一外观编辑控件控制的外观特征以外的外观特征;展示模块,用于在所述外观编辑界面中展示编辑后的所述虚拟宠物,其中,编辑后的所述虚拟宠物是由所述第一外观编辑控件控制的外观特征和所述目标外观特征结合后得到的虚拟宠物模型。
- 一种终端,所述终端包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如权利要求1至11任一所述的虚拟宠物的外观编辑方法,或,实现如权利要求12所述的虚拟宠物的外观编辑方法。
- 一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如权利要求1至11任一所述的虚拟宠物的外观编辑方法,或,实现如权利要求12所述的虚拟宠物的外观编辑方法。
- 一种计算机程序产品,所述计算机程序产品包括计算机指令,所述计算机指令被处理器执行时实现如权利要求1至11任一所述的虚拟宠物的外观编辑方法,或,实现如权利要求12所述的虚拟宠物的外观编辑方法。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/325,617 US20230298253A1 (en) | 2021-11-05 | 2023-05-30 | Appearance editing method and apparatus for virtual pet, terminal, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111308492.7 | 2021-11-05 | ||
CN202111308492.7A CN114028808A (zh) | 2021-11-05 | 2021-11-05 | 虚拟宠物的外观编辑方法、装置、终端及存储介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/325,617 Continuation US20230298253A1 (en) | 2021-11-05 | 2023-05-30 | Appearance editing method and apparatus for virtual pet, terminal, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023077965A1 true WO2023077965A1 (zh) | 2023-05-11 |
Family
ID=80136479
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/118479 WO2023077965A1 (zh) | 2021-11-05 | 2022-09-13 | 虚拟宠物的外观编辑方法、装置、终端及存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230298253A1 (zh) |
CN (1) | CN114028808A (zh) |
WO (1) | WO2023077965A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114028808A (zh) * | 2021-11-05 | 2022-02-11 | 腾讯科技(深圳)有限公司 | 虚拟宠物的外观编辑方法、装置、终端及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105233498A (zh) * | 2015-09-23 | 2016-01-13 | 网易(杭州)网络有限公司 | 游戏角色染色方法、装置、用户终端及游戏系统 |
CN110322535A (zh) * | 2019-06-25 | 2019-10-11 | 深圳市迷你玩科技有限公司 | 自定义三维虚拟角色贴图的方法、终端及存储介质 |
US10777010B1 (en) * | 2018-03-16 | 2020-09-15 | Amazon Technologies, Inc. | Dynamic environment mapping for augmented reality |
CN113546411A (zh) * | 2021-07-22 | 2021-10-26 | 网易(杭州)网络有限公司 | 游戏模型的渲染方法、装置、终端和存储介质 |
CN114028808A (zh) * | 2021-11-05 | 2022-02-11 | 腾讯科技(深圳)有限公司 | 虚拟宠物的外观编辑方法、装置、终端及存储介质 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109126136B (zh) * | 2018-07-27 | 2020-09-15 | 腾讯科技(深圳)有限公司 | 三维虚拟宠物的生成方法、装置、设备及存储介质 |
CN111951408B (zh) * | 2020-06-30 | 2024-03-29 | 重庆灵翎互娱科技有限公司 | 一种基于三维人脸的图像融合方法和设备 |
CN112634416B (zh) * | 2020-12-23 | 2023-07-28 | 北京达佳互联信息技术有限公司 | 虚拟形象模型的生成方法、装置、电子设备及存储介质 |
CN112598785B (zh) * | 2020-12-25 | 2022-03-25 | 游艺星际(北京)科技有限公司 | 虚拟形象的三维模型生成方法、装置、设备及存储介质 |
CN112755533B (zh) * | 2021-02-02 | 2022-12-13 | 腾讯科技(深圳)有限公司 | 虚拟载具涂装方法、装置、设备及存储介质 |
CN113223133A (zh) * | 2021-04-21 | 2021-08-06 | 深圳市腾讯网域计算机网络有限公司 | 三维模型换色方法和装置 |
CN113240760B (zh) * | 2021-06-29 | 2023-11-24 | 北京市商汤科技开发有限公司 | 一种图像处理方法、装置、计算机设备和存储介质 |
- 2021-11-05: CN application CN202111308492.7A filed (publication CN114028808A, pending)
- 2022-09-13: PCT application PCT/CN2022/118479 filed (publication WO2023077965A1)
- 2023-05-30: US application US 18/325,617 filed (publication US20230298253A1, pending)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105233498A (zh) * | 2015-09-23 | 2016-01-13 | 网易(杭州)网络有限公司 | 游戏角色染色方法、装置、用户终端及游戏系统 |
US10777010B1 (en) * | 2018-03-16 | 2020-09-15 | Amazon Technologies, Inc. | Dynamic environment mapping for augmented reality |
CN110322535A (zh) * | 2019-06-25 | 2019-10-11 | 深圳市迷你玩科技有限公司 | 自定义三维虚拟角色贴图的方法、终端及存储介质 |
CN113546411A (zh) * | 2021-07-22 | 2021-10-26 | 网易(杭州)网络有限公司 | 游戏模型的渲染方法、装置、终端和存储介质 |
CN114028808A (zh) * | 2021-11-05 | 2022-02-11 | 腾讯科技(深圳)有限公司 | 虚拟宠物的外观编辑方法、装置、终端及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US20230298253A1 (en) | 2023-09-21 |
CN114028808A (zh) | 2022-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11798246B2 (en) | Electronic device for generating image including 3D avatar reflecting face motion through 3D avatar corresponding to face and method of operating same | |
US12056832B2 (en) | Controlling interactive fashion based on body gestures | |
US20230377189A1 (en) | Mirror-based augmented reality experience | |
WO2020233403A1 (zh) | 三维角色的个性化脸部显示方法、装置、设备及存储介质 | |
US11651572B2 (en) | Light and rendering of garments | |
US12067804B2 (en) | True size eyewear experience in real time | |
US11645805B2 (en) | Animated faces using texture manipulation | |
US20230120037A1 (en) | True size eyewear in real time | |
US20240305878A1 (en) | Inclusive camera | |
CN109908587A (zh) | 可繁殖的虚拟角色的形象参数生成方法、装置及存储介质 | |
WO2023077965A1 (zh) | 虚拟宠物的外观编辑方法、装置、终端及存储介质 | |
US20240248546A1 (en) | Controlling augmented reality effects through multi-modal human interaction | |
US20240282015A1 (en) | Augmented reality experience with lighting adjustment | |
US20240371197A1 (en) | True size eyewear experience in real time | |
US20240290043A1 (en) | Real-time fashion item transfer system | |
US20230316665A1 (en) | Surface normals for pixel-aligned object | |
US20240303904A1 (en) | Ray tracing between ar and real objects | |
US20230316666A1 (en) | Pixel depth determination for object | |
CN116983655A (zh) | 纹样的显示方法、装置、设备、介质及程序产品 | |
WO2024168175A1 (en) | Browsing-based augmented reality try-on experience | |
WO2024158717A1 (en) | Image generation from text and 3d object | |
CN116246310A (zh) | 生成目标会话表情的方法和装置 | |
CN117101146A (zh) | 装饰道具的生成方法、装置、存储介质及终端设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22888993 Country of ref document: EP Kind code of ref document: A1 |