CN113769393A - Method and device for generating character image, storage medium and electronic device - Google Patents

Method and device for generating character image, storage medium and electronic device

Info

Publication number
CN113769393A
Authority
CN
China
Prior art keywords
role
type
virtual
library
image
Prior art date
Legal status
Pending
Application number
CN202111136167.7A
Other languages
Chinese (zh)
Inventor
邹存佳
Current Assignee
Shanghai Perfect Time And Space Software Co ltd
Original Assignee
Shanghai Perfect Time And Space Software Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Perfect Time And Space Software Co ltd filed Critical Shanghai Perfect Time And Space Software Co ltd
Priority to CN202111136167.7A
Publication of CN113769393A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/53: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/63: Generating or modifying game content before or while executing the game program, by the player, e.g. authoring using a level editor
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F 2300/6607: Methods for processing data by generating or executing the game program for rendering three dimensional images, for animating game characters, e.g. skeleton kinematics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and device for generating a character image, a storage medium and an electronic device. The method comprises the following steps: determining the role type of a first virtual role; searching a target feature library in a first material library according to the role type, wherein the first material library comprises a plurality of sets of feature libraries and each set of feature libraries corresponds to one role type; and calling a basic body of the role model and adjusting the basic body with the target feature library to generate a first role image of the first virtual role, wherein the basic body is common to all role types. The invention solves the technical problem of low configuration efficiency of character images in the related art, reduces the amount of model resources loaded by the terminal, and reduces memory usage.

Description

Method and device for generating character image, storage medium and electronic device
Technical Field
The invention relates to the field of computer technology, and in particular to a method and device for generating a character image, a storage medium and an electronic device.
Background
The online game industry is flourishing, and games have become a popular form of recreation and entertainment outside work and daily life.
In the related art, a game usually provides only one face-pinching (face customization) system and one huge universal feature library. To guarantee visual differences between the images of different character types, the system has to build a dedicated character model for every race and profession: for example, if a game contains 10 races, 10 separate models must be built, and when a user selects a race to customize a face, the customization is performed on that race's character model. Because every character type must be modeled separately, and the character models of all types must be stored and loaded, the art workload is heavy, the art resources are large, and the memory footprint is high.
In view of the above problems in the related art, no effective solution has been found at present.
Disclosure of Invention
The embodiment of the invention provides a method and a device for generating a role image, a storage medium and an electronic device.
According to an embodiment of the present invention, a character image generation method is provided, including: determining the role type of a first virtual role; searching a target feature library in a first material library according to the role type, wherein the first material library comprises a plurality of sets of feature libraries and each set of feature libraries corresponds to one role type; and calling a basic body of the role model and adjusting the basic body with the target feature library to generate a first role image of the first virtual role, wherein the basic body is common to all role types.
Optionally, a first configuration instruction for a first type of material is received, where the target feature library includes the first type of material, and the first type of material is a material dedicated to the role type; calling first sub-models matched with the first type of materials based on the first configuration instruction, wherein each first type of material corresponds to one sub-model; and mounting the first sub-model to the basic body according to a preset mounting point.
Optionally, the method further includes: determining a bone hanging point corresponding to a first bone of the first sub-model, wherein the bone hanging point is a second bone of the basic body; mounting a first bone of the first sub-model onto a second bone of the base body.
Optionally, adjusting the basic body by using the target feature library includes: searching a target range parameter matched with the role type in the target feature library, wherein the target range parameter is used for indicating the adjustment range of a second type of material, the target range parameter is a special range parameter of the role type, the target feature library comprises the target range parameter of the second type of material, and the second type of material is a general material of all the role types; receiving a second configuration instruction aiming at the second type of materials; and adjusting the second type of materials within the adjusting range based on the second configuration instruction.
Optionally, before searching the target feature library from the first material library according to the role type, the method further includes: receiving a third configuration instruction, wherein the third configuration instruction is used for indicating the selected material library type; selecting the first material library from a plurality of types of material libraries based on the third configuration instructions, wherein each type of material library corresponds to one rendering style; and locally loading the first material library.
Optionally, after generating the first character image of the first virtual character, the method further includes: rendering and displaying a first role image of the first virtual role in a virtual scene, and adding a first rendering identifier in attribute information of the first virtual role, wherein the first rendering identifier is used for indicating a material library type corresponding to the first role image of the first virtual role; acquiring a second rendering identifier of a second virtual role in the virtual scene, wherein the second rendering identifier is used for indicating a material library type corresponding to a first role image of the second virtual role; and if the second rendering identifier is inconsistent with the first rendering identifier, requesting a second role image of the second virtual role from the server, wherein the second role image is generated based on the first material library and corresponds to the first role image of the second virtual role, and displaying the received second role image of the second virtual role in the virtual scene.
Optionally, after generating the first character image of the first virtual character, the method further includes: selecting a second material library; determining image configuration parameters of the first character image, wherein the image configuration parameters are used for representing materials adopted by the first character image and adjustment parameters of corresponding materials; generating a second character image of the first virtual character based on the image configuration parameters and the second material library.
Optionally, calling the basic body of the role model includes: acquiring gender attribute information of the first virtual role; if the gender attribute information indicates that the first virtual role is of a first gender, calling a first basic body matched with the first gender; and if the gender attribute information indicates that the first virtual role is of a second gender, calling a second basic body matched with the second gender, wherein the role model comprises the first basic body and the second basic body.
According to another embodiment of the present invention, there is provided a character image generating apparatus including: the determining module is used for determining the role type of the first virtual role; the searching module is used for searching a target feature library from a first material library according to the role types, wherein the first material library comprises a plurality of sets of feature libraries, and each set of feature library corresponds to one role type; and the configuration module is used for calling a basic body of the role model and adjusting the basic body by adopting the target feature library to generate a first role image of the first virtual role, wherein the basic body is a basic body which is universal for all role types.
Optionally, the configuration module includes: the receiving unit is used for receiving a first configuration instruction aiming at first-class materials, wherein the target feature library comprises the first-class materials, and the first-class materials are special materials of the role type; the calling unit is used for calling a first sub-model matched with the first type of materials based on the first configuration instruction, wherein each first type of material corresponds to one sub-model; and the mounting unit is used for mounting the first sub-model to the basic body according to a preset mounting point.
Optionally, the mounting unit includes: the mounting subunit determines a bone mounting point corresponding to a first bone of the first submodel, wherein the bone mounting point is a second bone of the basic body; mounting a first bone of the first sub-model onto a second bone of the base body.
Optionally, the configuration module includes: the searching unit is used for searching a target range parameter matched with the role type in the target feature library, wherein the target range parameter is used for indicating the adjustment range of a second type of material, the target range parameter is a special range parameter of the role type, the target feature library comprises the target range parameter of the second type of material, and the second type of material is a general material of all the role types; the receiving subunit is used for receiving a second configuration instruction aiming at the second type of materials; and the adjusting subunit is used for adjusting the second type of materials in the adjusting range based on the second configuration instruction.
Optionally, the apparatus further comprises: the receiving module is used for receiving a third configuration instruction before the searching module searches a target feature library from the first material library according to the role type, wherein the third configuration instruction is used for indicating the type of the selected material library; a selection module, configured to select the first material library from multiple types of material libraries based on the third configuration instruction, where each type of material library corresponds to one rendering style; and the loading module is used for locally loading the first material library.
Optionally, the apparatus further comprises: the display module is used for rendering and displaying a first role image of the first virtual role in a virtual scene after the configuration module generates the first role image of the first virtual role, and adding a first rendering identifier in the attribute information of the first virtual role, wherein the first rendering identifier is used for indicating a material library type corresponding to the first role image of the first virtual role; the obtaining module is used for obtaining a second rendering identifier of a second virtual role in the virtual scene, wherein the second rendering identifier is used for indicating a material library type corresponding to a first role image of the second virtual role; and the reconfiguration module is used for requesting the server for a second role image of the second virtual role if the second rendering identifier is inconsistent with the first rendering identifier, wherein the second role image is generated based on the first material library and corresponds to the first role image of the second virtual role, and the received second role image of the second virtual role is displayed in the virtual scene.
Optionally, the configuration module includes: an obtaining unit, configured to obtain gender attribute information of the first virtual character; the calling unit is used for calling a first basic body matched with the first gender if the gender attribute information indicates that the first virtual role is the first gender; if the gender attribute information indicates that the first virtual role is of a second gender, a second basic body matched with the second gender is called, wherein the role model comprises the first basic body and the second basic body.
Optionally, the apparatus further comprises: the selection module is used for selecting the second material library; the determining module is used for determining image configuration parameters of the first character image, wherein the image configuration parameters are used for representing materials adopted by the first character image and adjustment parameters of the corresponding materials; and the generating module is used for generating a second role image of the first virtual role based on the image configuration parameters and the second material library.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
With the above method, the role type of the first virtual role is determined; a target feature library is searched in a first material library according to the role type, where the first material library comprises a plurality of sets of feature libraries and each set of feature libraries corresponds to one role type; and a basic body of the role model is called and adjusted with the target feature library to generate the first role image of the first virtual role, the basic body being common to all role types. Because the target feature library is looked up by role type and a different feature library is configured for each role type, character images of different types can all be configured on the same basic body, and the differences between role types are expressed through the different feature libraries. This solves the technical problem of low configuration efficiency of character images in the related art, reduces the amount of model resources the terminal has to load, and reduces memory usage.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a character image generation computer according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for generating a character image according to an embodiment of the present invention;
FIG. 3 is a schematic view of a hair style being mounted according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a character image configured based on two feature libraries according to an embodiment of the present invention;
FIG. 5 is a block diagram of an apparatus for generating a character image according to an embodiment of the present invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method provided by the first embodiment of the present application may be executed in a mobile phone, a tablet, a server, a computer, or a similar electronic terminal. Taking the example of running on a computer, fig. 1 is a hardware structure block diagram of a character image generation computer according to an embodiment of the present invention. As shown in fig. 1, the computer may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those of ordinary skill in the art that the configuration shown in FIG. 1 is illustrative only and is not intended to limit the configuration of the computer described above. For example, a computer may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the character image generation method in the embodiment of the present invention; the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. In this embodiment, the processor 102 is configured to control the first virtual character to perform designated operations to complete game tasks in response to human-machine interaction instructions and the game policy. The memory 104 is used for storing program scripts of the electronic game, configuration information, attribute information of the virtual characters, and the like.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Optionally, the input/output device 108 further includes a human-computer interaction screen, which acquires human-computer interaction instructions through a human-computer interaction interface and presents pictures of the virtual scene.
in this embodiment, a method for generating a character image is provided, and fig. 2 is a schematic flow chart of a method for generating a character image according to an embodiment of the present invention, as shown in fig. 2, the flow chart includes the following steps:
step S202, determining the role type of the first virtual role;
optionally, the virtual Character of this embodiment is a control Character displayed and presented in a virtual scene, where the virtual scene may be a virtual game scene, a virtual teaching scene, a virtual demonstration scene, and the like, where the virtual scene includes a plurality of virtual characters, and the virtual characters may be Controlled by user operations or system AI, and move and perform operations in the virtual scene such as a game scene, where the virtual Character may be Controlled by a user, for example, a PCC (Player-Controlled Character) in a virtual game Controlled by a master Player. In this embodiment, a virtual scene is taken as an example of a game scene.
Step S204, searching a target feature library from a first material library according to the role types, wherein the first material library comprises a plurality of sets of feature libraries, and each set of feature library corresponds to one role type;
before searching the target feature library from the first material library according to the role type, the scheme of the embodiment further includes: a feature library is configured for each character type in the virtual game, and the mapping relation between the character type and the feature library can be configured through an index table and the like.
Optionally, in the virtual game scene, the role type represents the classification to which the role belongs and may be an attribute type of the first virtual role, such as species, race, occupation, level or group; in a virtual teaching scene, for example, the role type is an attribute type (or identity type) of the first virtual role, such as student, teacher or parent. Each feature library, including the target feature library, contains a plurality of feature materials that form a feature material set; a feature material may be an organ of the virtual character, a part of an organ, an ornament or any other element that can change the first character's image, and a feature material is the minimum unit of machine rendering. A minimal sketch of the feature library lookup follows.
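The sketch below illustrates the index-table lookup described for step S204: each role type maps to one feature library inside a material library. It is only an illustrative reading of the patent text, not its implementation; all class names, fields and the example role type are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FeatureLibrary:
    role_type: str
    materials: List[str] = field(default_factory=list)  # feature materials (organs, ornaments, ...)


@dataclass
class MaterialLibrary:
    name: str  # e.g. "high-poly" or "low-poly" rendering style
    feature_libraries: Dict[str, FeatureLibrary] = field(default_factory=dict)  # index: role type -> feature library

    def find_target_feature_library(self, role_type: str) -> FeatureLibrary:
        # Step S204: look up the feature library configured for this role type.
        try:
            return self.feature_libraries[role_type]
        except KeyError:
            raise ValueError(f"no feature library configured for role type {role_type!r}")


# Usage: the mapping is configured once per material library, then queried at face-pinching time.
first_library = MaterialLibrary("high-poly")
first_library.feature_libraries["monkey"] = FeatureLibrary("monkey", ["tail", "hairstyle", "horn"])
target = first_library.find_target_feature_library("monkey")
```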
And step S206, calling a basic body of the role model, and adjusting the basic body by adopting the target feature library to generate a first role image of the first virtual role, wherein the basic body is a basic body which is universal for all role types.
In this embodiment, the basic body may be a model common to character types, and may have a binding common skeleton, and have corresponding common materials, such as a head, a torso, and the like that each character type may have.
Through the above steps, the role type of the first virtual role is determined; a target feature library is searched in a first material library according to the role type, where the first material library comprises a plurality of sets of feature libraries and each set of feature libraries corresponds to one role type; and a basic body of the role model is called and adjusted with the target feature library to generate the first role image of the first virtual role, the basic body being common to all role types. Because the target feature library is looked up by role type and a different feature library is configured for each role type, character images of different types can all be configured on the same basic body, and the differences between role types are expressed through the different feature libraries. This solves the technical problem of low configuration efficiency of character images in the related art, reduces the amount of model resources the terminal has to load, and reduces memory usage.
Optionally, the role classification to which the first virtual role belongs may be determined according to the role identity corresponding to the first virtual role, and the role classification is used as the role type of the first virtual role, where there is a difference in the roles under different role classifications, for example, the classifications of race, occupation, and the like.
In some scenarios, determining the role type of the first virtual role comprises: acquiring race attribute information of the first virtual role, determining the race type of the first virtual role according to the race attribute information, and taking the race type as the role type. The occupation type under a race type can also be obtained: occupation attribute information of the first virtual role is acquired, the occupation type of the first virtual role is determined according to that information, and the occupation type is taken as the role type. Besides race and occupation, this embodiment may define several major classes and minor classes, so that role types are classified at different levels; a simple sketch follows.
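A minimal sketch of deriving the role type from attribute information, assuming a two-level race/occupation hierarchy; the attribute names and return format are illustrative, not prescribed by the patent.

```python
def determine_role_type(attributes: dict) -> str:
    race = attributes.get("race")              # e.g. "human", "demon", "spirit"
    occupation = attributes.get("occupation")  # e.g. "soldier", "mage"
    if race and occupation:
        return f"{race}/{occupation}"          # finer-grained type: major class plus minor class
    if race:
        return race                            # race alone can serve as the role type
    raise ValueError("virtual role has no race attribute")


determine_role_type({"race": "demon", "occupation": "mage"})  # -> "demon/mage"
```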
Alternatively, species types and the like may be used in addition to occupational types and ethnic types.
In one embodiment of this embodiment, the adjusting the basic body using the target feature library includes: receiving a first configuration instruction aiming at first-class materials, wherein the target feature library comprises the first-class materials which are special materials of the role type; calling first sub-models matched with the first type of materials based on a first configuration instruction, wherein each first type of material corresponds to one sub-model; and mounting the first sub-model on the basic body according to the preset mounting points.
In some examples, in the process of generating the first character image of the first virtual character, in order to reduce the amount of configuration required from the user, the server may automatically load the materials that every character image needs, for example eyes and mouths, which are common to all character types. After calling the basic body of the character model, the method further includes: searching for second-type materials, the second-type materials being materials common to all role types, and loading the second-type materials. Optionally, the second sub-models corresponding to the second-type materials are mounted on the basic body, the basic body with the second sub-models mounted is displayed on the image configuration interface, and the user then selects the materials dedicated to the role type to adjust the basic body and generate the character image.
Optionally, the first configuration instruction carries a first material identifier selected by the user.
In one example, the role type has corresponding default materials, for example, all roles under the role type have tails, and the tails can be the default materials of the role type, that is, the materials can be directly adopted without being selected by a user, so that when the user selects the role type, the sub-model corresponding to the default materials of the role type is directly mounted on the basic body.
In one embodiment, the first-type materials are materials dedicated to the role type of the first virtual role and the second-type materials are materials common to all role types. The first-type materials include material 1, material 2 and material 3, which are respectively a tail, a hairstyle and horns; the second-type materials include material 4 and material 5, which are respectively eyes and a mouth. After the role type (which has a default material 6) is selected, the basic body is displayed on the image configuration interface with the eyes and mouth corresponding to material 4 and material 5, and with the sub-model corresponding to the role type's default material 6 already mounted. For example, a monkey race has a tail, the tail is the default material, and when the user selects that race the tail sub-model is mounted automatically. The user can then select material 2 on the configuration interface, triggering a first configuration instruction; the hairstyle (i.e., the first sub-model) corresponding to the selected material 2 is called and mounted on the top of the basic body, the preset mounting point for hair being the highest point of the basic body's head. Other first-type and second-type materials can be mounted on the basic body in the same way, so that the first character image of the "monkey" role type is generated.
Optionally, the mounting the first sub-model to the base body according to the preset mounting point includes: determining a bone hanging point corresponding to a first bone of the first sub-model, wherein the bone hanging point is a second bone of the basic body; mounting a first bone of the first sub-model onto a second bone of the base body.
The skeleton of the sub-model is also mounted on the skeleton of the basic body, so that the user can continue to adjust the part corresponding to the sub-model. In one example, the sub-model is the horns of the target role type: after the user selects the target role type, the image configuration interface can offer the choice of adding horns or not. If horns are added, the horn sub-model is mounted on the basic body and its skeleton is mounted on the basic body's skeleton, so that the user can further adjust the horns through the sub-model's bones, as in the sketch below.
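A possible shape for the bone-to-bone mounting described above: the sub-model's root bone is parented to a named bone of the shared skeleton so that later adjustments follow the basic body. The Bone class, bone names and helper are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Bone:
    name: str
    children: List["Bone"] = field(default_factory=list)


def mount_sub_model(base_bones: Dict[str, Bone], sub_model_root: Bone, mount_point: str) -> None:
    """Hang the first bone of a sub-model onto the named second bone of the basic body."""
    target = base_bones.get(mount_point)
    if target is None:
        raise KeyError(f"basic body has no bone named {mount_point!r}")
    target.children.append(sub_model_root)  # the sub-model now follows the basic body's skeleton


# e.g. hanging a horn sub-model on the head bone so it can still be adjusted afterwards:
head = Bone("head_top")
base_bones = {"head_top": head}
mount_sub_model(base_bones, Bone("horn_root"), "head_top")
```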
The scheme of this embodiment can be applied to the face-pinching (face customization) system of a virtual game, where different feature libraries are configured for different types of characters. For example, a separate race feature library is configured for each race (human, demon, spirit, etc.), and the feature materials of a race feature library include the face, torso, limbs, tail, ornaments and so on. Likewise, different feature libraries can be configured for different occupations (such as soldiers, craftsmen, fishermen and mages), for instance a dedicated hairstyle library for each occupation.
Fig. 3 is a schematic view of a hairstyle being mounted according to an embodiment of the present invention, wherein the left side is a schematic view of the basic body and the right side is a schematic view of the hairstyle being mounted.
In one embodiment, adjusting the base body using the target feature library includes: searching a target range parameter matched with the role type in a target feature library, wherein the target range parameter is used for indicating the adjustment range of a second type of material, the target range parameter is a special range parameter of the role type, the target feature library comprises the target range parameter of the second type of material, and the second type of material is a general material of all the role types; receiving a second configuration instruction aiming at the second type of materials; and adjusting the second type of materials within the adjusting range based on the second configuration instruction.
Optionally, the adjustment range may be, but is not limited to: an amplitude (size) range, the selectable shapes, the orientation (e.g., the spatial relationship of a material to the basic body or to other materials, such as whether the nostrils face up or down relative to the mouth), the position of the bone hanging points, and so on. Taking the amplitude range as an example, for a second-type material common to all character types, the adjustable amplitude range differs from one character type to another.
In this embodiment, a different scaling range is configured for each second-type material per role type. For example, for the first role type the scaling range of material 4 is 1 cm to 10 cm and that of material 5 is 3 cm to 12 cm, while for the second role type the scaling range of material 4 is 1 cm to 5 cm and that of material 5 is 5 cm to 20 cm.
Optionally, the second configuration instruction carries a second material identifier selected by the user, and a range value of the range parameter.
In one example, the first-type materials include material 1, material 2 and material 3 (tail, hairstyle and horns, dedicated to the role type of the first virtual role), and material 2 has an adjustment range of 1 cm to 10 cm, meaning its size can be adjusted anywhere within 1 cm to 10 cm; the second-type materials include material 4 and material 5 (eyes and mouth, common to all role types). The user selects material 4 on the configuration interface, inputs a scaling parameter of 5 cm and triggers a second configuration instruction that carries the user's adjustment parameter for material 4. Because this adjustment parameter lies within the target range corresponding to material 4 in the target feature library, the system locates the eyes on the basic body based on the second configuration instruction and then scales their size accordingly, as in the sketch below.
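The sketch below checks a second-type-material adjustment against the role-type-specific target range parameter, using the centimetre ranges listed above; the table layout, function name and rejection behaviour are assumptions.

```python
# role type -> material -> (min, max) adjustable size in centimetres
TARGET_RANGE_PARAMS = {
    "first_role_type":  {"material_4": (1.0, 10.0), "material_5": (3.0, 12.0)},
    "second_role_type": {"material_4": (1.0, 5.0),  "material_5": (5.0, 20.0)},
}


def apply_adjustment(role_type: str, material: str, requested_size_cm: float) -> float:
    lo, hi = TARGET_RANGE_PARAMS[role_type][material]
    if not lo <= requested_size_cm <= hi:
        raise ValueError(
            f"{material} size {requested_size_cm} cm is outside [{lo}, {hi}] cm for {role_type}")
    return requested_size_cm  # within range: the corresponding part of the basic body is scaled


apply_adjustment("first_role_type", "material_4", 5.0)     # accepted: eyes scaled to 5 cm
# apply_adjustment("second_role_type", "material_4", 8.0)  # would be rejected: maximum is 5 cm
```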
In one case, the target feature library comprises a first type of material and a target range parameter corresponding to the first type of material, after a first configuration instruction is received, a first sub-model corresponding to the selected first type of material is called, then the first sub-model is mounted on the basic body, a fourth configuration instruction is further received, the fourth configuration instruction carries an adjusting parameter of a user for the first sub-model, and when the adjusting parameter is located in a target parameter range corresponding to the first type of material in the target feature library, the first sub-model is adjusted according to the adjusting parameter.
In some other examples, more than the scaling size can be configured per character type: the selectable shapes of a material may differ (e.g., a monster may choose triangular or rectangular eyes while a human may only choose elliptical eyes), the angle and orientation of a material can be adjusted relative to a reference position, the hanging point of the sub-model corresponding to a material may be selectable (e.g., a monster may hang eyes on the top of the head or on the neck while a human can only hang them on the face), the fill colour of a material may be selectable, and so on.
When this embodiment is applied to a face-pinching system, the corresponding feature library can be loaded according to the race, occupation and so on of the current role, and the user can only adjust the basic body with the materials configured in that feature library (for example adding certain organs or ornaments) to pinch out the desired first character image. For the ordinary organs every role has, such as eyes and nose, a code is configured for each race or occupation, and the code corresponds to the adjustable range of a given organ: if the adjustable eye size of a human is 2 to 10 while that of a demon is 0 to 20, the human's eyes can at most reach size 10 whereas the demon's can reach 20. In the face-pinching system, many sub-models, such as the character's hairstyle, are hung on the basic body; to present a hairstyle, a hair sub-model is hung on the basic body, so adjusting the hairstyle does not affect the structure of the basic body.
In some embodiments, before searching the target feature library from the first material library according to the role type, the method further comprises: receiving a third configuration instruction, wherein the third configuration instruction is used for indicating the selected material library type; selecting a first material library from a plurality of types of material libraries based on a third configuration instruction, wherein each type of material library corresponds to one rendering style; the first material library is loaded locally.
In this embodiment, in the initial configuration stage, the material library preferred by the user may be selected as the target material library from a plurality of material libraries, each of which contains multiple sets of feature libraries corresponding to multiple role types. For example, the system includes a low-poly material library and a high-poly material library (the two can correspond to different rendering styles); whenever a new role type is created in the game, the art staff produce feature materials for both the low-poly and the high-poly library. When a user enters the game, the user ticks the library version they want, selects the feature library matching their role type from that material library, and then pinches out the desired first character image, roughly as sketched below.
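An illustrative sketch of the third configuration instruction: the player picks one library type and only that library is loaded locally. The registry keys, asset paths and loader stub are assumptions, not part of the patent.

```python
# library type (rendering style) -> asset path
LIBRARY_REGISTRY = {
    "low_poly":  "assets/material_library_low_poly.bin",
    "high_poly": "assets/material_library_high_poly.bin",
}


def load_library_from_disk(path: str) -> dict:
    # placeholder loader; a real client would deserialize meshes, textures and feature libraries here
    return {"path": path, "feature_libraries": {}}


def select_and_load_material_library(selected_type: str) -> dict:
    """Third configuration instruction: pick one library type and load only that library locally."""
    path = LIBRARY_REGISTRY.get(selected_type)
    if path is None:
        raise ValueError(f"unknown material library type: {selected_type!r}")
    return load_library_from_disk(path)


first_material_library = select_and_load_material_library("high_poly")
```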
In one embodiment of this embodiment, invoking the basic body of the role model includes: acquiring gender attribute information of a first virtual role; if the gender attribute information indicates that the first virtual role is the first gender, calling a first basic body matched with the first gender; and if the gender attribute information indicates that the first virtual role is of the second gender, calling a second basic body matched with the second gender, wherein the role model comprises the first basic body and the second basic body.
The first gender and the second gender are, for example, male and female, and the character model contains two general basic bodies; alternatively, a single unified basic body may be configured for male, female and neutral characters. A minimal sketch follows.
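A minimal sketch of choosing the basic body by gender attribute; the constants and the neutral fallback are assumptions used only for illustration.

```python
FIRST_GENDER, SECOND_GENDER = "male", "female"
BASE_BODIES = {FIRST_GENDER: "base_body_male", SECOND_GENDER: "base_body_female"}


def invoke_basic_body(gender_attribute: str) -> str:
    """Return the basic body matching the gender attribute of the first virtual role."""
    try:
        return BASE_BODIES[gender_attribute]
    except KeyError:
        # as noted above, a single unified basic body may also be configured
        return "base_body_neutral"
```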
Based on the above embodiment, after generating the first character image of the first virtual character, the method further includes:
s11, rendering and displaying a first role image of a first virtual role in a virtual scene, and adding a first rendering identifier in attribute information of the first virtual role, wherein the first rendering identifier is used for indicating a material library type corresponding to the first role image of the first virtual role;
s12, acquiring a second rendering identifier of a second virtual character in the virtual scene, wherein the second rendering identifier is used for indicating a material library type corresponding to a first character image of the second virtual character;
and S13, if the second rendering identifier is not consistent with the first rendering identifier, requesting a second character image of the second virtual character from the server, wherein the second character image is generated based on the first material library and corresponds to the first character image of the second virtual character, and displaying the received second character image of the second virtual character in the virtual scene.
The second character image of the second virtual character corresponds to its first character image: the two images are generated with the materials of two different material libraries for the same character type. For example, the first character image is generated with the feature library of the "monster" type in the second material library, and the second character image with the feature library of the "monster" type in the first material library. Displaying the received second character image of the second virtual character in the virtual scene unifies the rendering style of the virtual characters in the scene; a sketch of this check follows.
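A hedged sketch of steps S11 to S13: compare rendering identifiers and, on mismatch, fetch a replacement image generated from the locally loaded material library. The dictionary fields and the request_image_from_server callback are stand-ins for the real client/server interface, which the patent does not specify.

```python
def unify_rendering_style(local_role: dict, other_role: dict, request_image_from_server) -> dict:
    """Steps S11-S13: display the other role's image only if its library type matches ours."""
    if other_role["render_id"] == local_role["render_id"]:
        return other_role["image"]  # same material library type: show as-is
    # Mismatch: request the corresponding image generated from our material library.
    return request_image_from_server(role_id=other_role["id"],
                                     library_type=local_role["render_id"])
```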
Fig. 4 is a schematic diagram of a first character image configured with each of two feature libraries, which correspond to two material library types and the same organ; the images configured with the two material libraries have a clear visual difference, the first style being more realistic and the second more anime-like.
In some examples, in order to reduce the storage amount, two sets of character images are not pre-stored locally, and when a uniform rendering style is required, if the second rendering identifier is inconsistent with the first rendering identifier, the first character image of the second virtual character is reconfigured by using the first material library, and the reconfigured first character image of the second virtual character (i.e., the second character image of the second virtual character) is displayed in the virtual scene.
In one example, reconfiguring the first persona representation of the second virtual persona using the first library of materials comprises: reading image configuration parameters of a first character image of a second virtual character; and generating a second character image of the second virtual character based on the image configuration parameters and the first material library.
Optionally, the image configuration parameters include: the material identifiers of the materials used by the first character image of the second virtual character, and the adjustment parameters corresponding to those materials. Every material in each material library has a unique material identifier, and some or all of the materials in the first material library have counterparts in the second material library. Optionally, a correspondence is established between the materials of each target feature library in the first material library and those of the corresponding target feature library in the second material library, together with a mapping between the target range parameters of the materials in the two libraries. Generating the second character image of the second virtual character based on the image configuration parameters and the first material library then includes: for each material identifier in the image configuration parameters, finding the corresponding material in the first material library through the correspondence; mapping the adjustment parameter of each material through the mapping between target range parameters, to obtain the adjustment parameter that applies to the material in the first material library; and adjusting the character model basic body of the first material library with the found materials and mapped parameters, to obtain the second character image of the second virtual character. When a material identifier in the image configuration parameters has no counterpart in the target feature libraries of the first material library, the default material corresponding to that identifier among the general materials of the character model basic body is called, and the basic body is adjusted with the default parameters of that default material. In this way, the second character image of the second virtual character under the first material library can be derived from its first character image under the second material library; the two images belong to the same role type, which keeps the representation of the virtual character consistent in the virtual scene, and falling back to default general materials when no counterpart exists guarantees that the generated second character image is complete.
For example, character A is configured with the first material library on user A's client, giving image A1, and character B is configured with the second material library on user B's client, giving image B1. In the game scene rendered on user A's client, image B1 must be replaced with an image B2 that matches the style of A1 in order to unify the rendering style. Image B1 uses materials a2 and b2 from the second material library with adjustment parameters a2-2 and b2-2. To generate the new image B2, material a2 and material b2 are replaced with their counterparts a1 and b1 in the first material library (a2 corresponds to a1 and b2 to b1; for example, a2 and a1 are both noses), the parameters a2-2 and b2-2 are mapped to the first-library parameters a1-1 and b1-1, and the character model basic body of the first material library is adjusted with materials a1 and b1 and parameters a1-1 and b1-1 to obtain image B2. Since image A1 is also configured from the materials and adjustment parameters of the first material library, the reconfigured B2 and A1 belong to the same rendering style, and the rendering styles are unified; a sketch of this remapping follows.
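The sketch below reworks the a2/b2 example: material identifiers are mapped through a correspondence table and adjustment values are rescaled linearly between the two libraries' target ranges, with a default general material as fallback. The tables, the linear rescaling and the config format are assumptions, since the patent only states that a mapping exists.

```python
MATERIAL_MAP = {"a2": "a1", "b2": "b1"}           # second-library material id -> first-library id
RANGE_MAP = {
    "a2": (1.0, 10.0), "b2": (3.0, 12.0),         # target ranges in the second material library
    "a1": (2.0, 8.0),  "b1": (4.0, 10.0),         # corresponding ranges in the first material library
}
GENERAL_DEFAULTS = {"c2": ("default_nose", 1.0)}  # fallback when a material has no counterpart


def remap_value(value: float, src: tuple, dst: tuple) -> float:
    (s_lo, s_hi), (d_lo, d_hi) = src, dst
    return d_lo + (value - s_lo) * (d_hi - d_lo) / (s_hi - s_lo)


def rebuild_image(image_config: list) -> list:
    """image_config: [(material_id, adjustment), ...] recorded against the second library."""
    rebuilt = []
    for material_id, adjustment in image_config:
        counterpart = MATERIAL_MAP.get(material_id)
        if counterpart is None:                   # no counterpart: fall back to a default general material
            rebuilt.append(GENERAL_DEFAULTS.get(material_id, (material_id, adjustment)))
            continue
        rebuilt.append((counterpart,
                        remap_value(adjustment, RANGE_MAP[material_id], RANGE_MAP[counterpart])))
    return rebuilt                                # applied to the first library's basic body -> image B2


print(rebuild_image([("a2", 5.0), ("b2", 9.0)]))  # e.g. [('a1', ~4.67), ('b1', 8.0)]
```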
In some embodiments, after generating the first character image of the first virtual character, further comprising: selecting a second material library; determining image configuration parameters of the first character image, wherein the image configuration parameters are used for representing materials adopted by the first character image and adjustment parameters of corresponding materials; generating a second character image of the first virtual character based on the image configuration parameters and the second material library. The process of generating the second character image of the first virtual character based on the image configuration parameters and the second material library is similar to the process of generating the second character image of the second virtual character, and is not repeated herein.
Optionally, the basic body of the character model includes a low-poly basic body and a high-poly basic body. When the user chooses to generate the character image with the high-poly basic body, the user can further select a role type under the high-poly basic body, and the game client searches the first material library corresponding to the high-poly basic body for the target feature library of that role type based on the user's selection. After the user finishes pinching the face and the high-poly character image is generated, the game client can generate the corresponding low-poly character image from it.
In one example, the first character image of the first virtual character is configured from the first material library of the high-poly basic body and the second character image from the second material library of the low-poly basic body. After the face-pinching configuration is completed on the high-poly basic body, a corresponding low-poly second character image is generated from the high-poly image configuration parameters, so that the hairstyle, eyes and so on of the low-poly image match the high-poly one; if the low-poly version does not support certain configuration parameters of the high-poly image, default data are used. Both the high-poly and the low-poly character images are also stored on the server.
In some scenarios, to keep the rendering style uniform on a given client, the game server loads all virtual characters displayed on that client from the same material library; which library is used depends on the material library selected or used for that user's first virtual character. In one example, the game scene of client A contains three virtual characters: character A, the user's own virtual character (the first virtual character); character B, a teammate's virtual character (a second virtual character); and character C. Characters A and B have character images configured with the first material library, while character C has an image configured with the second material library. By analysing the character image of character C, the game server reads from the configuration table of that image the second set of feature materials taken from the second material library and replaces it with the corresponding first set of feature materials in the first material library, or directly retrieves a previously stored character image of character C configured with the first material library, and then sends the corresponding character rendering data to client A, thereby unifying the rendering style on client A.
On a game terminal or server, the system is provided with the material library of a low-poly face-pinching system and the material library of a high-poly face-pinching system, corresponding respectively to the first rendering identifier and the second rendering identifier; the high-poly material library is redrawn on the basis of the low-poly one, the two have different rendering styles, and they suit different user preferences. The two face-pinching systems can coexist and be switched freely: a player can pinch the desired character image in either the low-poly or the high-poly system and display it in the game, and a player can also choose in advance, through configuration, to display only one of the two systems in the game scene. If the player chooses to display the low-poly system, then regardless of whether a character image in the scene was pinched with the low-poly or the high-poly system, only its low-poly feature image is loaded and displayed, which keeps the rendering style uniform on each game terminal and reduces the memory pressure of loading both material libraries at once. The client loads the feature libraries of the low-poly and high-poly face-pinching systems, and because the features of the two systems' feature libraries are mapped one to one, a high-poly feature can be switched directly to the corresponding low-poly feature; features that cannot be matched one to one are adapted with default feature materials or material parameters. This removes the limitation that different races require different basic bodies and makes the character models uniform, systematic and standardized.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software together with a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
This embodiment further provides a character image generation apparatus for implementing the above embodiments and preferred implementations; what has already been described is not repeated here. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a character image generation apparatus according to an embodiment of the present invention. As shown in Fig. 5, the apparatus includes a determining module 50, a searching module 52 and a configuration module 54, wherein:
a determining module 50 for determining a role type of the first virtual role;
the searching module 52 is configured to search a target feature library from a first material library according to the role type, where the first material library includes multiple sets of feature libraries, and each set of feature library corresponds to one role type;
a configuration module 54, configured to invoke a basic body of the role model, and adjust the basic body by using the target feature library to generate a first role image of the first virtual role, wherein the basic body is a basic body common to all role types.
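For illustration only, the determine-search-configure flow of these three modules can be sketched as follows; the library contents, role types and class names are hypothetical and do not appear in the patent.

```python
# Hypothetical sketch: each role type has its own feature library inside the
# first material library, and one shared basic body is adjusted with it.
FIRST_MATERIAL_LIBRARY = {
    "human":  {"ear": "ear_human_01", "face_width_range": (0.8, 1.2)},
    "dragon": {"horn": "horn_dragon_02", "face_width_range": (0.6, 1.6)},
}


class BasicBody:
    """A basic body shared by all role types."""

    def __init__(self) -> None:
        self.features: dict = {}

    def adjust(self, feature_library: dict) -> "BasicBody":
        self.features.update(feature_library)          # apply the role-specific features
        return self


def generate_first_role_image(role_type: str) -> BasicBody:
    feature_library = FIRST_MATERIAL_LIBRARY[role_type]   # searching module 52
    return BasicBody().adjust(feature_library)            # configuration module 54


print(generate_first_role_image("dragon").features)
```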
Optionally, the configuration module includes: the receiving unit is used for receiving a first configuration instruction for a first type of material, wherein the target feature library comprises the first type of material, and the first type of material is a material dedicated to the role type; the calling unit is used for calling a first sub-model matched with the first type of material based on the first configuration instruction, wherein each material of the first type corresponds to one sub-model; and the mounting unit is used for mounting the first sub-model to the basic body according to a preset mounting point.
Optionally, the mounting unit includes: a mounting subunit, used for connecting a first bone of the first sub-model to a second bone of the basic body by taking the bone hanging point as a connecting point.
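A minimal sketch of the mounting step, assuming a hypothetical bone hierarchy; the bone names and the mount_sub_model helper are invented for illustration.

```python
# Hypothetical sketch: mount a sub-model by connecting its first bone to a
# bone hanging point, i.e. a bone of the basic body's skeleton.
class Bone:
    def __init__(self, name: str) -> None:
        self.name = name
        self.children: list = []


def mount_sub_model(basic_body_skeleton: dict, sub_model_root: Bone, mount_point: str) -> None:
    """Attach the sub-model's first bone under the named bone of the basic body."""
    basic_body_skeleton[mount_point].children.append(sub_model_root)


skeleton = {"head": Bone("head")}
mount_sub_model(skeleton, Bone("horn_root"), "head")
print([child.name for child in skeleton["head"].children])  # ['horn_root']
```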
Optionally, the configuration module includes: the searching unit is used for searching a target range parameter matched with the role type in the target feature library, wherein the target range parameter is used for indicating the adjustment range of a second type of material, the target range parameter is a special range parameter of the role type, the target feature library comprises the target range parameter of the second type of material, and the second type of material is a general material of all the role types; the receiving subunit is used for receiving a second configuration instruction aiming at the second type of materials; and the adjusting subunit is used for adjusting the second type of materials in the adjusting range based on the second configuration instruction.
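The range-limited adjustment of a general (second-type) material can be sketched as below; the range table and values are hypothetical.

```python
# Hypothetical sketch: clamp a requested adjustment of a general material to
# the target range parameter that matches the role type.
TARGET_RANGE = {"human": {"face_width": (0.8, 1.2)}, "dragon": {"face_width": (0.6, 1.6)}}


def adjust_second_type_material(role_type: str, material: str, requested: float) -> float:
    low, high = TARGET_RANGE[role_type][material]
    return min(max(requested, low), high)   # keep the value inside the allowed adjustment range


print(adjust_second_type_material("human", "face_width", 1.5))  # 1.2
```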
Optionally, the apparatus further comprises: the receiving module is used for receiving a third configuration instruction before the searching module searches a target feature library from the first material library according to the role type, wherein the third configuration instruction is used for indicating the type of the selected material library; a selection module, configured to select the first material library from multiple types of material libraries based on the third configuration instruction, where each type of material library corresponds to one rendering style; and the loading module is used for locally loading the first material library.
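For illustration, selecting and locally loading one material library per rendering style might look like the sketch below; the index, file paths and JSON format are assumptions, not details from the patent.

```python
# Hypothetical sketch: pick one material library by rendering style and load
# only that library locally, so a single library sits in memory.
import json
from pathlib import Path

LIBRARY_INDEX = {"low_poly": "libs/low_poly.json", "high_poly": "libs/high_poly.json"}


def load_material_library(selected_style: str) -> dict:
    path = Path(LIBRARY_INDEX[selected_style])
    if not path.exists():                  # tolerate a missing file in this sketch
        return {}
    return json.loads(path.read_text(encoding="utf-8"))


print(load_material_library("low_poly"))
```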
Optionally, the apparatus further comprises: the display module is used for rendering and displaying a first role image of the first virtual role in a virtual scene after the configuration module generates the first role image of the first virtual role, and adding a first rendering identifier in the attribute information of the first virtual role, wherein the first rendering identifier is used for indicating a material library type corresponding to the first role image of the first virtual role; the obtaining module is used for obtaining a second rendering identifier of a second virtual role in the virtual scene, wherein the second rendering identifier is used for indicating a material library type corresponding to a first role image of the second virtual role; and the reconfiguration module is used for requesting the server for a second role image of the second virtual role if the second rendering identifier is inconsistent with the first rendering identifier, wherein the second role image is generated based on the first material library and corresponds to the first role image of the second virtual role, and the received second role image of the second virtual role is displayed in the virtual scene.
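The rendering-identifier check can be sketched as follows; the request_from_server callback stands in for the client-server interface and is purely illustrative.

```python
# Hypothetical sketch: if the second role's rendering identifier differs from
# the first role's, ask the server for an image built from the first library.
def display_second_role(first_render_id: str, second_role: dict, request_from_server) -> dict:
    """Return the character image to display for the second virtual role on this client."""
    if second_role["render_id"] == first_render_id:
        return second_role["image"]                    # rendering styles already match
    # Styles differ: request an equivalent image generated from the first material library.
    return request_from_server(second_role["role_id"], first_render_id)


def fake_server(role_id: str, render_id: str) -> dict:
    return {"role": role_id, "library": render_id}


print(display_second_role("lib_1", {"render_id": "lib_2", "role_id": "b", "image": {}}, fake_server))
```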
Optionally, the configuration module includes: an obtaining unit, used for obtaining gender attribute information of the first virtual role; and a calling unit, used for calling a first basic body matched with a first gender if the gender attribute information indicates that the first virtual role is of the first gender, and for calling a second basic body matched with a second gender if the gender attribute information indicates that the first virtual role is of the second gender, wherein the role model comprises the first basic body and the second basic body.
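A small sketch of the gender-based selection, with invented identifiers for the two basic bodies.

```python
# Hypothetical sketch: the role model holds one basic body per gender.
ROLE_MODEL = {"first_gender": "basic_body_a", "second_gender": "basic_body_b"}


def call_basic_body(gender_attribute: str) -> str:
    return ROLE_MODEL[gender_attribute]


print(call_basic_body("second_gender"))  # basic_body_b
```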
Optionally, the apparatus further comprises: the selection module is used for selecting the second material library; the determining module is used for determining image configuration parameters of the first character image, wherein the image configuration parameters are used for representing materials adopted by the first character image and adjustment parameters of the corresponding materials; and the generating module is used for generating a second role image of the first virtual role based on the image configuration parameters and the second material library.
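Regenerating the same image from a second material library by replaying the stored image configuration parameters can be sketched as below; the library contents and parameter format are assumptions.

```python
# Hypothetical sketch: rebuild the character image from a second material
# library using the stored (material id, adjustment parameter) pairs.
SECOND_MATERIAL_LIBRARY = {"hair": {"hair_03": "hair_03_stylized"}}


def generate_second_role_image(image_config: dict) -> dict:
    """image_config maps each feature to (material id, adjustment parameter)."""
    second_image = {}
    for feature, (material_id, adjustment) in image_config.items():
        counterparts = SECOND_MATERIAL_LIBRARY.get(feature, {})
        # Reuse the counterpart material if the second library has one, else keep the original id.
        second_image[feature] = (counterparts.get(material_id, material_id), adjustment)
    return second_image


print(generate_second_role_image({"hair": ("hair_03", 0.4), "face_width": ("slider_value", 1.1)}))
```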
It should be noted that the above modules may be implemented by software or by hardware; in the latter case, they may be implemented in, but are not limited to, the following forms: the modules are all located in the same processor, or the modules are located in different processors in any combination.
Example 3
Fig. 6 is a structural diagram of an electronic device according to an embodiment of the present invention. As shown in Fig. 6, the electronic device includes a processor 61, a communication interface 62, a memory 63 and a communication bus 64, where the processor 61, the communication interface 62 and the memory 63 communicate with one another through the communication bus 64, and the memory 63 is used for storing a computer program;
the processor 61 is configured to implement the following steps when executing the program stored in the memory 63: determining a role type of a first virtual role; searching a target feature library from a first material library according to the role types, wherein the first material library comprises a plurality of sets of feature libraries, and each set of feature library corresponds to one role type; and calling a basic body of the role model, and adjusting the basic body by adopting the target feature library to generate a first role image of the first virtual role, wherein the basic body is a basic body which is universal for all role types.
Optionally, a first configuration instruction for a first type of material is received, where the target feature library includes the first type of material, and the first type of material is a material dedicated to the role type; calling first sub-models matched with the first type of materials based on the first configuration instruction, wherein each first type of material corresponds to one sub-model; and mounting the first sub-model to the basic body according to a preset mounting point.
Optionally, the method further includes: determining a bone hanging point corresponding to a first bone of the first sub-model; and connecting the first bone of the first sub-model to the second bone of the basic body by taking the bone hanging point as a connecting point.
Optionally, adjusting the basic body by using the target feature library includes: searching a target range parameter matched with the role type in the target feature library, wherein the target range parameter is used for indicating the adjustment range of a second type of material, the target range parameter is a special range parameter of the role type, the target feature library comprises the target range parameter of the second type of material, and the second type of material is a general material of all the role types; receiving a second configuration instruction aiming at the second type of materials; and adjusting the second type of materials within the adjusting range based on the second configuration instruction.
Optionally, before searching the target feature library from the first material library according to the role type, the method further includes: receiving a third configuration instruction, wherein the third configuration instruction is used for indicating the selected material library type; selecting the first material library from a plurality of types of material libraries based on the third configuration instructions, wherein each type of material library corresponds to one rendering style; and locally loading the first material library.
Optionally, after generating the first character image of the first virtual character, the method further includes: rendering and displaying a first role image of the first virtual role in a virtual scene, and adding a first rendering identifier in attribute information of the first virtual role, wherein the first rendering identifier is used for indicating a material library type corresponding to the first role image of the first virtual role; acquiring a second rendering identifier of a second virtual role in the virtual scene, wherein the second rendering identifier is used for indicating a material library type corresponding to a first role image of the second virtual role; and if the second rendering identifier is inconsistent with the first rendering identifier, requesting a second role image of the second virtual role from the server, wherein the second role image is generated based on the first material library and corresponds to the first role image of the second virtual role, and displaying the received second role image of the second virtual role in the virtual scene.
Optionally, after generating the first character image of the first virtual character, the method further includes: selecting a second material library; determining image configuration parameters of the first character image, wherein the image configuration parameters are used for representing materials adopted by the first character image and adjustment parameters of corresponding materials; generating a second character image of the first virtual character based on the image configuration parameters and the second material library.
Optionally, calling the basic body of the role model includes: acquiring gender attribute information of the first virtual role; if the gender attribute information indicates that the first virtual role is of a first gender, calling a first basic body matched with the first gender; and if the gender attribute information indicates that the first virtual role is of a second gender, calling a second basic body matched with the second gender, wherein the role model comprises the first basic body and the second basic body.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.

The communication interface is used for communication between the above electronic device and other devices.
The memory may include a Random Access Memory (RAM) or a non-volatile memory, for example at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present application, there is further provided a computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to execute the method for generating a character image as described in any of the above embodiments.

In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to execute the method for generating a character image as described in any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications shall also fall within the protection scope of the present application.

Claims (11)

1. A method for generating a character image, comprising:
determining a role type of a first virtual role;
searching a target feature library from a first material library according to the role types, wherein the first material library comprises a plurality of sets of feature libraries, and each set of feature library corresponds to one role type;
and calling a basic body of the role model, and adjusting the basic body by adopting the target feature library to generate a first role image of the first virtual role, wherein the basic body is a basic body which is universal for all role types.
2. The method of claim 1, wherein adjusting the basic body by adopting the target feature library comprises:
receiving a first configuration instruction aiming at a first type of materials, wherein the target feature library comprises the first type of materials, and the first type of materials are special materials of the role type;
calling first sub-models matched with the first type of materials based on the first configuration instruction, wherein each first type of material corresponds to one sub-model;
and mounting the first sub-model to the basic body according to a preset mounting point.
3. The method of claim 2, further comprising:
determining a bone hanging point corresponding to a first bone of the first sub-model, wherein the bone hanging point is a second bone of the basic body;
mounting the first bone of the first sub-model onto the second bone of the basic body.
4. The method of claim 1, wherein adjusting the basic body by adopting the target feature library comprises:
searching a target range parameter matched with the role type in the target feature library, wherein the target range parameter is used for indicating the adjustment range of a second type of material, the target range parameter is a special range parameter of the role type, the target feature library comprises the target range parameter of the second type of material, and the second type of material is a general material of all the role types;
receiving a second configuration instruction aiming at the second type of materials;
and adjusting the second type of materials within the adjusting range based on the second configuration instruction.
5. The method of claim 1, wherein prior to searching the target feature library from the first library of materials based on the role type, the method further comprises:
receiving a third configuration instruction, wherein the third configuration instruction is used for indicating the selected material library type;
selecting the first material library from a plurality of types of material libraries based on the third configuration instructions, wherein each type of material library corresponds to one rendering style;
and locally loading the first material library.
6. The method of claim 5, wherein after generating the first character image of the first virtual character, the method further comprises:
rendering and displaying a first role image of the first virtual role in a virtual scene, and adding a first rendering identifier in attribute information of the first virtual role, wherein the first rendering identifier is used for indicating a material library type corresponding to the first role image of the first virtual role;
acquiring a second rendering identifier of a second virtual role in the virtual scene, wherein the second rendering identifier is used for indicating a material library type corresponding to a first role image of the second virtual role;
and if the second rendering identifier is inconsistent with the first rendering identifier, requesting a second role image of the second virtual role from the server, wherein the second role image is generated based on the first material library and corresponds to the first role image of the second virtual role, and displaying the received second role image of the second virtual role in the virtual scene.
7. The method of claim 5, wherein after generating the first character image of the first virtual character, the method further comprises:
selecting a second material library;
determining image configuration parameters of the first character image, wherein the image configuration parameters are used for representing materials adopted by the first character image and adjustment parameters of corresponding materials;
generating a second character image of the first virtual character based on the image configuration parameters and the second material library.
8. The method of claim 1, wherein calling the basic body of the role model comprises:
acquiring gender attribute information of the first virtual role;
if the gender attribute information indicates that the first virtual role is of a first gender, calling a first basic body matched with the first gender; if the gender attribute information indicates that the first virtual role is of a second gender, a second basic body matched with the second gender is called, wherein the role model comprises the first basic body and the second basic body.
9. An apparatus for generating a character image, comprising:
the determining module is used for determining the role type of the first virtual role;
the searching module is used for searching a target feature library from a first material library according to the role types, wherein the first material library comprises a plurality of sets of feature libraries, and each set of feature library corresponds to one role type;
and the configuration module is used for calling a basic body of the role model and adjusting the basic body by adopting the target feature library to generate a first role image of the first virtual role, wherein the basic body is a basic body which is universal for all role types.
10. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 8 when executed.
11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 8.