CN115936970A - Virtual face image generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN115936970A
Application number: CN202210743816.8A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 曾伟宏, 王旭
Current Assignee: Beijing Zitiao Network Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to: CN202210743816.8A
Publication of: CN115936970A
Prior art keywords: face, avatar, image, target, face unit
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Abstract

The embodiments of the present disclosure provide a virtual face image generation method and device, an electronic device and a storage medium. A target head portrait picture is obtained, the target head portrait picture comprising a plurality of face units that form a real face image; the target head portrait picture is processed through a pre-trained head portrait classification model to obtain virtual image parameters corresponding to each face unit, the virtual image parameters being used to represent the category of the corresponding face unit; a face unit map corresponding to each face unit is obtained according to the virtual image parameters, and a virtual face image corresponding to the target head portrait picture is generated based on the face unit maps. The target head portrait picture is parsed into virtual image parameters that represent the features of the face units, and the corresponding face unit maps are then determined based on these parameters, so that the real face image is restored from a limited number of face unit maps, improving the consistency between the generated virtual face image and the real face image and thereby the accuracy of the virtual face image.

Description

Virtual face image generation method and device, electronic equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of image processing, and in particular relates to a virtual face image generation method and device, an electronic device and a storage medium.
Background
At present, the identities used by users in internet-based virtual social environments are gradually being upgraded from simple cartoon characters and avatars to stylized virtual images capable of expressing the users' real appearance, achieving more personalized identity expression and improving the user experience.
In the prior art, generation of a stylized virtual face image mainly targets the facial image, that is, a stylized virtual face image similar to the real face image is generated by selecting map materials for the face shape, the facial features and the hair style.
However, prior art approaches to generating an avatar are typically based on manual settings by the user, while schemes for automatically generating a virtual face image suffer from problems such as a large difference from the real face image and low accuracy.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for generating an avatar, an electronic device and a storage medium, which are used for solving the problems of poor consistency between an automatically generated avatar and a real avatar, low accuracy and the like in the prior art.
In a first aspect, an embodiment of the present disclosure provides a method for generating an avatar, including:
acquiring a target head portrait picture, wherein the target head portrait picture comprises a plurality of face units for forming a real face image; processing the target head portrait picture through a pre-trained head portrait classification model to obtain virtual face image parameters corresponding to each face unit, wherein the virtual image parameters are used for representing the category of the corresponding face unit; and obtaining a face unit map corresponding to the face unit according to the virtual image parameters, and generating a virtual face image corresponding to the target head portrait picture based on the face unit map.
In a second aspect, an embodiment of the present disclosure provides an avatar generating apparatus, including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a target head portrait picture which comprises a plurality of face units used for forming a real face image;
the classification module is used for processing the target head portrait picture through a pre-trained head portrait classification model to obtain virtual face image parameters corresponding to each face unit, and the virtual image parameters are used for representing the category of the corresponding face unit;
and the generating module is used for obtaining a face unit map corresponding to the face unit according to the virtual image parameters and generating a virtual face image corresponding to the target head image picture based on the face unit map.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of generating an avatar as described above in the first aspect and in various possible designs of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the method for generating an avatar is implemented as described in the first aspect and various possible designs of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements the method for generating an avatar, as described above in the first aspect and in various possible designs of the first aspect.
In the virtual face image generation method, the virtual face image generation device, the electronic device, and the storage medium provided by this embodiment, a target avatar picture is obtained, where the target avatar picture includes a plurality of face units for forming a real face image; processing the target head portrait picture through a pre-trained head portrait classification model to obtain virtual face image parameters corresponding to each face unit, wherein the virtual image parameters are used for representing the category of the corresponding face unit; and obtaining a face unit map corresponding to the face unit according to the virtual image parameters, and generating a virtual face image corresponding to the target head portrait picture based on the face unit map. The target head portrait picture is analyzed into the virtual image parameters representing the characteristics of the face units, and then the corresponding face unit maps are determined based on the virtual image parameters, so that the real face image is restored based on a limited number of face unit maps, the consistency of the generated virtual face image and the real face image is improved, and the accuracy of the virtual face image is improved.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present disclosure, and for those skilled in the art, other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic view of an application scenario of a virtual face image generation method according to an embodiment of the present disclosure;
fig. 2 is a first flowchart illustrating a method for generating an avatar according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of generating an avatar parameter according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a specific implementation step of step S103 in the embodiment shown in FIG. 2;
fig. 5 is a schematic diagram illustrating a determination of a preset map type based on first mapping information according to an embodiment of the present disclosure;
fig. 6 is a schematic flow chart diagram of a virtual face image generation method according to an embodiment of the present disclosure;
fig. 7 is a schematic view of a visualization interface of an avatar engine according to an embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating a specific implementation step of step S201 in the embodiment shown in FIG. 6;
fig. 9 is a schematic diagram illustrating a determination of a preset map type corresponding to a target avatar parameter based on a distance value according to an embodiment of the present disclosure;
FIG. 10 is a flowchart illustrating a specific implementation step of step S205 in the embodiment shown in FIG. 6;
FIG. 11 is a flowchart illustrating a specific implementation step of step S206 in the embodiment shown in FIG. 6;
fig. 12 is a block diagram of an apparatus for generating an avatar according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 14 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The following explains an application scenario of the embodiment of the present disclosure:
the virtual face image generation method provided by the embodiment of the disclosure can be applied to an application scene for generating the virtual face image. More specifically, the method can be applied to an application scene for generating a stylized virtual image when a user registers on various application programs (APP), clients and network platforms. For example, in an interface for creating a character in a game application, preset map materials corresponding to five sense organs, a hairstyle, a face shape, props and the like provided by the application are selected and combined to generate a stylized virtual face image.
In the above application scenario, the virtual face image created by the user is a dynamic model, such as a character model in a game application, and needs to represent dynamic changes such as speaking and emotional expression in the game. The virtual face image is therefore realized by modeling a plurality of independent face units (such as an eye shape, a nose shape, a mouth shape, a hair style and a face shape). Limited by computing and storage resources, the application or platform can only provide a limited number of models (map materials) for each preset face unit, for example 5 types of eye shapes, 8 types of mouth shapes and 4 types of hair styles. An avatar capable of dynamic changes in the application is generated by combining the models of these face units.
Under the limitations of the above scenario, one prior art implementation of the stylized virtual face image generation scheme is to generate the virtual face image, so-called "face pinching", by having the user manually select and combine preset map materials. However, an avatar generated by manual selection and combination tends to differ greatly from the user's real facial image.
Another implementation trains a neural-network-based image processing model to recognize and classify the real head portrait picture input by the user as a whole, obtains a group of face unit maps corresponding to the classification result, and generates the virtual face image after combining the face unit maps. However, this scheme places high requirements on model performance and needs a large amount of labeled sample data for training. Meanwhile, in this application scenario, the facial image has complex textures, is easily affected by factors such as the shooting angle, and the classification of face units has no strict, controllable boundaries, so the sample labeling cost is high and the labeling accuracy is low. It is therefore difficult to train an image processing model with sufficient performance to achieve an accurate mapping from the real facial image to the virtual face image, and the generated virtual face image finally suffers from problems such as poor consistency with the real face image and low accuracy.
In the embodiments of the disclosure, the head portrait picture is processed through a pre-trained head portrait classification model to obtain the avatar parameters corresponding to each face unit; the corresponding face unit map is then determined based on the avatar parameters, which clearly represent the characteristics of the face unit, and the avatar is generated from the face unit maps. In this way the classification of each face unit is decoupled, the classification difficulty is reduced, the generated avatar has better consistency with the real face image, and the above problems are solved.
The method for generating an avatar according to the present disclosure may be implemented in a terminal device (client) or a server. Taking the server as the implementation subject, fig. 1 is a schematic view of an application scenario of the method for generating an avatar according to an embodiment of the present disclosure. As shown in fig. 1, the server communicates with the terminal device used by the user in a client-server network structure, where the terminal device is, for example, a smartphone. Specifically, the server receives a head portrait picture sent by the terminal device, for example a selfie taken by the user with the terminal device, and processes the received head portrait picture with the method for generating an avatar provided by this embodiment, so as to generate an avatar similar to the real facial image of the user represented by the head portrait picture. The virtual face image is then returned to the terminal device in the form of a picture, so that the user can view the three-dimensional virtual face image corresponding to the previously uploaded head portrait picture through the terminal device by zooming, rotating and the like; the generated avatar is also saved, as the avatar registered by the user, in a storage medium local to the server or communicatively connected to it.
Referring to fig. 2, fig. 2 is a first flowchart illustrating a method for generating an avatar according to an embodiment of the present disclosure. The method of the embodiment can be applied to a server or a terminal device, and the virtual face image generation method includes:
step S101: and acquiring a target head portrait picture, wherein the target head portrait picture comprises a plurality of face units for forming a real face image.
In this embodiment, a terminal device, for example a smartphone, is taken as the execution subject. The target head portrait picture may be a photograph taken by a camera that contains a real facial image, such as a selfie of the user. The target head portrait picture may be the original picture taken by the camera, or a picture with special effects such as a filter or color grading added to the original picture, which is not limited herein. A face unit is an object constituting the real face image and, in the present embodiment, refers to facial elements such as the eye shape, nose shape, mouth shape, hair style and face shape. Illustratively, the target head portrait picture includes a plurality of face units, for example face units 01 to 06, which respectively correspond to the eye shape, nose shape, mouth shape, hair style, eyebrow shape and face shape. The real face image is formed by combining the plurality of face units.
Step S102: and processing the target head portrait picture through the pre-trained head portrait classification model to obtain virtual face image parameters corresponding to each face unit, wherein the virtual image parameters are used for representing the category of the corresponding face unit.
Illustratively, the avatar classification model may be a model obtained by training based on a neural network; its input is a picture and its output is parameter data expressed as a numerical value, an array or a matrix. Specifically, after the target head portrait picture is obtained, it is processed through the head portrait classification model to obtain a plurality of avatar parameters, each avatar parameter corresponding to one face unit. Fig. 3 is a schematic diagram of generating avatar parameters according to an embodiment of the present disclosure. As shown in fig. 3, after the target head portrait picture is input into the avatar classification model, the model outputs 5 avatar parameters, each corresponding to a face unit. Referring to fig. 3, avatar parameter A corresponds to the eye shape, avatar parameter B to the nose shape, avatar parameter C to the mouth shape, avatar parameter D to the hair style, and avatar parameter E to the face shape. That is, avatar parameter A characterizes the eye shape, avatar parameter B the nose shape, avatar parameter C the mouth shape, avatar parameter D the hair style, and avatar parameter E the face shape.
Further, an avatar parameter may be expressed by one parameter identifier or by a set of multiple parameter identifiers. Correspondingly, the data format of the avatar parameter may be a numerical value, an array, a matrix or another special data format, which is not specifically limited herein and may be set according to specific needs. Specifically, for example, avatar parameter E characterizes the face shape: when the corresponding parameter value is 01, i.e. avatar parameter E=01, a round face is characterized; when the corresponding parameter value is 02, i.e. avatar parameter E=02, a square face is characterized, thereby realizing the feature expression of the "face shape" face unit.
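As an illustration of this parameterized representation, the following minimal Python sketch shows how such avatar parameters might be held and interpreted; the dictionary keys, category codes and helper function are illustrative assumptions and are not part of the disclosure.

# A minimal sketch of the parameterized representation described above.
# The keys, category codes and labels are illustrative assumptions only.
avatar_params = {
    "eye": 3,    # avatar parameter A: eye-shape category
    "nose": 1,   # avatar parameter B: nose-shape category
    "mouth": 4,  # avatar parameter C: mouth-shape category
    "hair": 2,   # avatar parameter D: hair-style category
    "face": 1,   # avatar parameter E: face-shape category (1 = round face, 2 = square face)
}

face_shape_labels = {1: "round face", 2: "square face"}

def describe_face_shape(params):
    # Translate the face-shape parameter value into its category label.
    return face_shape_labels.get(params["face"], "unknown")

print(describe_face_shape(avatar_params))  # -> "round face"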
Processing the target head portrait picture through the head portrait classification model decouples and parameterizes each face unit in the target head portrait picture, achieving a parameterized expression of each face unit (based on the avatar parameters). The categories of the face units represented by their avatar parameters are then mapped to the corresponding material maps, so that the virtual face image can accurately restore the real face image.
Step S103: and obtaining a face unit map corresponding to the face unit according to the virtual image parameters, and generating the virtual face image corresponding to the target head portrait picture based on the face unit map.
Illustratively, after determining the avatar parameters, according to the preset mapping relationship, a face unit map corresponding to the avatar parameters may be determined, i.e. a map material in the avatar, such as an eye-shaped map material, a face-shaped map material, a mouth-shaped map material, etc. The face unit map may be a two-dimensional or three-dimensional image model, depending on the particular needs. And then, combining the face unit maps according to a preset position relation to generate a virtual face image.
In one possible implementation manner, as shown in fig. 4, the specific implementation steps of step S103 include:
step S1031: target avatar parameters corresponding to the target face unit are obtained.
Step S1032: and obtaining a corresponding preset map type according to the target virtual image parameters, wherein the preset map type represents a preset category of the face unit map.
Step S1033: and obtaining a face unit map corresponding to the target face unit according to the preset map type.
For example, the target head portrait picture normally includes a plurality of face units, so after the target head portrait picture is processed by the classification model, a corresponding number of avatar parameters are generated; some or all of the face units then need to be processed in sequence to obtain the face unit maps corresponding to them. The process of obtaining a face unit map based on the avatar parameters is similar for each face unit, and this embodiment describes one face unit as an example.
Specifically, the avatar parameters corresponding to a target face unit, i.e. the target avatar parameters, are obtained, for example for the "face shape" in the facial image. In one possible implementation, the target avatar parameter includes only one parameter, and the corresponding parameter value is an identifier characterizing the "face shape" category. For example, the target avatar parameter face_real=1 characterizes the "face shape" category "round face"; as another example, the target avatar parameter face_real=2 characterizes the "face shape" category "square face". According to preset first mapping information, the target avatar parameter can be mapped directly to a face unit map of a preset type; for example, based on the first mapping information, the preset map type corresponding to face_real=1 is face_mal=1, and the preset map type corresponding to face_real=2 is face_mal=2. The preset map types are predefined categories; the corresponding face unit map is retrieved from a preset material library based on the type identifier, and this process is not repeated here.
In another possible implementation, the target avatar parameter includes a plurality of avatar sub-parameters, the avatar sub-parameters being used to characterize the categories of sub-features of the target face unit. Specifically, for example, the target avatar parameter is a parameter set face_real=[face_real_up, face_real_mid, face_real_down], where face_real_up, face_real_mid and face_real_down are avatar sub-parameters used to represent categories of sub-features, different categories representing different degrees of the feature. For example, face_real_up represents the width of the forehead (a sub-feature of the face shape) and takes 3 discrete parameter values, face_real_up=1, face_real_up=2 and face_real_up=3, representing three different forehead widths (i.e. three categories of the sub-feature). Similarly, face_real_mid represents the height of the cheekbones (another sub-feature of the face shape) and face_real_down represents the width of the chin (another sub-feature of the face shape); face_real_mid and face_real_down likewise take different discrete parameter values to represent the categories of these sub-features, which is not repeated here. Through the combination of different avatar sub-parameters, the corresponding face unit presents different facial characteristics, so the preset map type corresponding to the target avatar parameter is determined based on the mapping relation between the target avatar parameter, composed of a plurality of avatar sub-parameters, and the preset map type. Illustratively, the specific implementation method includes: acquiring preset first mapping information, where the first mapping information is used to represent the mapping relation between a target avatar parameter composed of a plurality of avatar sub-parameters and a preset map type; and determining the preset map type corresponding to the target avatar parameter according to the first mapping information.
Fig. 5 is a schematic diagram of determining a preset map type based on the first mapping information. As shown in fig. 5, after processing by the avatar classification model, when the target avatar parameter corresponding to the target face unit is face_real_1=[1,2,6], the first mapping information determines that the corresponding preset map type is class I, i.e. a round face, and the corresponding face unit map is then obtained through the class I identifier; when the target avatar parameter corresponding to the target face unit is face_real_2=[2,3,1], the first mapping information determines that the corresponding preset map type is class II, i.e. a square face, and the corresponding face unit map is obtained through the class II identifier.
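The lookup of the first mapping information described above can be sketched as follows in Python; the table entries follow the fig. 5 example, while the function name and error handling are assumptions added for illustration.

# Sketch: first mapping information from a target avatar parameter (a tuple of
# avatar sub-parameters) to a preset map type, following the fig. 5 example.
first_mapping = {
    (1, 2, 6): "class I",   # round face
    (2, 3, 1): "class II",  # square face
}

def preset_map_type(face_real):
    # face_real is the list or tuple of avatar sub-parameters, e.g. (1, 2, 6).
    try:
        return first_mapping[tuple(face_real)]
    except KeyError:
        raise ValueError("no preset map type registered for %s" % (face_real,))

print(preset_map_type([1, 2, 6]))  # -> "class I"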
The face unit maps of the virtual face image are generated based on the avatar parameters corresponding to the face units of the real face image, so that the virtual face image is similar to the real face image in the target head portrait picture and accurately restores it. The specific process of rendering based on the map materials and generating the virtual face image is prior art and is not described here again.
In the embodiment, a target head portrait picture is obtained, wherein the target head portrait picture comprises a plurality of face units for forming a real face image; processing the target head portrait picture through a pre-trained head portrait classification model to obtain virtual face image parameters corresponding to each face unit, wherein the virtual image parameters are used for representing the category of the corresponding face unit; and obtaining a face unit map corresponding to the face unit according to the virtual image parameters, and generating a virtual face image corresponding to the target head portrait picture based on the face unit map. The target head portrait picture is analyzed into the virtual image parameters representing the characteristics of the face units, and then the corresponding face unit maps are determined based on the virtual image parameters, so that the real face image is restored based on a limited number of face unit maps, the consistency of the generated virtual face image and the real face image is improved, and the accuracy of the virtual face image is improved.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a second method for generating an avatar according to an embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in fig. 2, the step S103 is further refined, and a process of training the avatar classification model is added, and the method for generating an avatar includes:
step S201: generating a training sample based on a preset virtual image engine, wherein the training sample comprises a simulated head portrait picture and corresponding simulated image parameters, the simulated head portrait picture comprises a plurality of face units for forming a simulated face image, and the simulated image parameters are used for representing the characteristics of the face units in the simulated head portrait picture.
Step S202: and training a preset neural network model to be convergent based on the training sample to obtain a head portrait classification model.
Exemplarily, in the present embodiment, steps S201 to S202 obtain the avatar classification model through model training. The avatar engine is a prior art model that generates a simulated head portrait picture based on parameters: by inputting different simulated image parameters, the face units are set and a corresponding simulated head portrait picture is rendered. The simulated head portrait picture resembles a real head portrait picture; the simulated face image it contains has rich simulated textures similar to the texture features of a real head portrait picture, so that simulation of the real head portrait picture can be achieved.
Fig. 7 is a schematic view of a visualization interface of an avatar engine according to an embodiment of the present disclosure, as shown in fig. 7, the visualization interface of the avatar engine includes a parameter area and a display area, where the parameter area is provided with a plurality of avatar parameters, such as a face parameter and an eye parameter shown in the figure. The characteristics of the face unit are changed by setting the simulation image parameters in the parameter area, so that a simulation head portrait picture is formed in the display area.
Referring to the process of generating a simulated head portrait picture with the avatar engine shown in fig. 7, different simulated image parameters can be generated by traversing the parameters, and the corresponding simulated head portrait pictures are then generated. For example, for the simulated image parameters [face=1; eye=2; mouth=4; nose=2], the avatar engine is called to generate the corresponding simulated head portrait picture p1; for the simulated image parameters [face=2; eye=2; mouth=1; nose=7], the avatar engine is called to generate the corresponding simulated head portrait picture p2. By analogy, different simulated image parameters are generated by combining different parameter values of the simulated image parameters of each face unit, and the avatar engine is called in turn to render them, so that different simulated head portrait pictures can be generated. The avatar engine is prior art, and its specific construction and rendering process are not repeated here.
Then, the simulated image parameters are used as labels of the corresponding simulated head portrait pictures, thereby generating the training samples. A preset neural network model is trained on multiple groups of training samples until the model converges, yielding a head portrait classification model capable of mapping a picture to parameters. The specific training process is not described here.
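One possible realization of such an avatar classification model, assumed here purely for illustration, is a shared image backbone with one classification head per face unit, trained on the (simulated head portrait picture, simulated image parameter) pairs produced by the avatar engine. The following PyTorch sketch makes these assumptions explicit; the layer sizes, face units and class counts are not taken from the disclosure.

import torch
import torch.nn as nn

class AvatarClassifier(nn.Module):
    # Sketch: shared CNN backbone with one classification head per face unit.
    # The face units and class counts passed in below are assumptions.
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict(
            {unit: nn.Linear(64, n) for unit, n in num_classes.items()}
        )

    def forward(self, x):
        feat = self.backbone(x)
        return {unit: head(feat) for unit, head in self.heads.items()}

def training_step(model, optimizer, images, labels):
    # labels: dict mapping each face unit to a tensor of class indices taken
    # from the simulated image parameters (the labels of the training samples).
    logits = model(images)
    loss = sum(nn.functional.cross_entropy(logits[u], labels[u]) for u in logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example setup (class counts are assumed): 4 face shapes, 5 eye shapes, 8 mouth shapes, 4 hair styles.
model = AvatarClassifier({"face": 4, "eye": 5, "mouth": 8, "hair": 4})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)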
In one possible implementation, the avatar parameter generated by the avatar engine includes a plurality of sub-parameters corresponding to each face unit, and the specific step of generating the training sample based on the preset avatar engine includes, for example, as shown in fig. 8:
step S2011: acquiring a plurality of preset image subparameters corresponding to each face unit, wherein the preset image subparameters are used for representing the categories of the subparameters of the face units;
step S2012: combining different parameter values of preset image sub-parameters corresponding to each face unit to generate a plurality of image parameter sets corresponding to each face unit, wherein the image parameter sets comprise one parameter value of each preset image sub-parameter corresponding to each face unit;
step S2013: traversing the combination formed by any image parameter group corresponding to each face unit to generate a plurality of simulated image parameters, and calling the virtual image engine based on the simulated image parameters to generate a corresponding number of simulated head portrait pictures.
Illustratively, the identifiers corresponding to the face units include face_sim, eye_sim and mouth_sim, where face_sim represents the face shape (a face unit), eye_sim the eye shape and mouth_sim the mouth shape. Further, the preset image sub-parameters face_sim_1, face_sim_2 and face_sim_3 corresponding to face_sim respectively represent the forehead width, the cheekbone height and the chin width. The preset image sub-parameters eye_sim_1, eye_sim_2 and eye_sim_3 corresponding to eye_sim respectively represent the eye length, the eye width and the eye color. The preset image sub-parameters mouth_sim_1 and mouth_sim_2 corresponding to mouth_sim respectively represent the length and thickness of the lips. Each preset image sub-parameter corresponds to at least two preset parameter values; for example, mouth_sim_1 may take the value 1 or 2, representing different lip lengths. The other parameters are similar and are not described in detail. The preset image sub-parameters corresponding to the face units and their parameter values are preset.
Furthermore, a plurality of image parameter groups corresponding to each face unit are generated by combining different parameter values of each preset image sub-parameter corresponding to each face unit.
For example, for the eye shape (eye_sim), N image parameter sets are generated, including:
eye_arr_1=[eye_sim_1=1,eye_sim_2=3,eye_sim_3=6];
eye_arr_2=[eye_sim_1=2,eye_sim_2=1,eye_sim_3=1];
eye_arr_3=[eye_sim_1=4,eye_sim_2=2,eye_sim_3=2];
……
eye_arr_N=[eye_sim_1=x1, eye_sim_2=y1, eye_sim_3=z1], where N is an integer greater than 1, and x1, y1 and z1 are integers representing type identifiers.
A plurality of image parameter sets are generated for the face shape (face_sim), including:
face_arr_1=[face_sim_1=1,face_sim_2=1,face_sim_3=6];
face_arr_2=[face_sim_1=6,face_sim_2=1,face_sim_3=1];
face_arr_3=[face_sim_1=4,face_sim_2=2,face_sim_3=3];
……
face_arr_M=[face_sim_1=x2, face_sim_2=y2, face_sim_3=z2], where M is an integer greater than 1, and x2, y2 and z2 are integers representing type identifiers.
A plurality of image parameter sets are generated for the mouth shape (mouth_sim), including:
mouth_arr_1=[mouth_sim_1=1,mouth_sim_2=1];
mouth_arr_2=[mouth_sim_1=1,mouth_sim_2=2];
mouth_arr_3=[mouth_sim_1=2,mouth_sim_2=1];
……
mouth_arr_O=[mouth_sim_1=x3, mouth_sim_2=y3], where O is an integer greater than 1, and x3 and y3 are integers representing type identifiers.
Further, a plurality of simulated image parameters are generated by traversing the combinations formed by the image parameter sets corresponding to the face units. Specifically, any one of the N image parameter sets (eye_arr_1 to eye_arr_N) corresponding to the eye shape, any one of the M image parameter sets (face_arr_1 to face_arr_M) corresponding to the face shape, and any one of the O image parameter sets (mouth_arr_1 to mouth_arr_O) corresponding to the mouth shape are selected and combined to obtain a simulated image parameter, such as {eye_arr_1, face_arr_2, mouth_arr_4}. The simulated image parameters are then used as input, and the avatar engine is called to obtain the corresponding simulated head portrait pictures. A simple calculation shows that, based on the above example, N × M × O combinations of image parameter sets can be generated, and N × M × O simulated head portrait pictures can accordingly be generated with the avatar engine.
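A minimal Python sketch of this traversal is shown below; render_avatar is a hypothetical stand-in for the avatar engine call, and the concrete parameter groups are illustrative assumptions.

import itertools

# Illustrative image parameter groups per face unit (values are assumptions).
eye_groups = [{"eye_sim_1": 1, "eye_sim_2": 3, "eye_sim_3": 6},
              {"eye_sim_1": 2, "eye_sim_2": 1, "eye_sim_3": 1}]
face_groups = [{"face_sim_1": 1, "face_sim_2": 1, "face_sim_3": 6},
               {"face_sim_1": 6, "face_sim_2": 1, "face_sim_3": 1}]
mouth_groups = [{"mouth_sim_1": 1, "mouth_sim_2": 1},
                {"mouth_sim_1": 1, "mouth_sim_2": 2}]

def render_avatar(sim_params):
    # Hypothetical stand-in for the avatar engine; in practice the engine renders
    # a simulated head portrait picture from the simulated image parameters.
    return "simulated_picture"

training_samples = []
# Traverse every combination of one image parameter group per face unit (N * M * O combinations).
for eye, face, mouth in itertools.product(eye_groups, face_groups, mouth_groups):
    sim_params = {"eye": eye, "face": face, "mouth": mouth}
    picture = render_avatar(sim_params)              # simulated head portrait picture
    training_samples.append((picture, sim_params))   # picture labelled with its parameters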
In this embodiment, by exploiting the characteristic that the avatar engine can output simulated head portrait pictures together with the corresponding simulated image parameters, a large number of high-quality training samples can be generated at low cost by parameter traversal, and the head portrait classification model is then obtained based on these training samples. This improves the training effect of the head portrait classification model, enables it to accurately map real head portrait pictures to avatar parameters, and finally improves the consistency between the generated virtual face image and the real face image.
Step S203: and acquiring a target head portrait picture, wherein the target head portrait picture comprises a plurality of face units for forming a real face image.
Step S204: and acquiring target avatar parameters corresponding to the target face unit, wherein the target avatar parameters comprise a plurality of first avatar subparameters, and the first avatar subparameters are used for representing the category of the subparameters of the target face unit.
Step S205: and acquiring a plurality of pre-classified image data, wherein each pre-classified image data represents the characteristics of a target face unit of one category, the pre-classified image data comprises a plurality of second virtual image sub-parameters, and the second virtual image sub-parameters represent the categories of the sub-characteristics of the target face unit of the corresponding category.
Illustratively, the pre-classified image data are standard data characterizing the features of the target face unit. In some possible implementations, after the target avatar parameters are obtained, the preset map type corresponding to the target avatar parameters cannot be determined directly through a specific mapping relation (first mapping information) as in the embodiment shown in fig. 2. Specifically, the categories of the target avatar parameter are dense while the preset map types are sparse; for example, the face unit is the face shape, the preset map type has only two values, a "round face map" and a "square face map", while the target avatar parameter has 10 possible values, so parameter values lying between the round face map and the square face map cannot be matched directly.
In order to solve the above problems, the present disclosure achieves accurate classification of target avatar parameters by acquiring a plurality of pre-classified avatar data, and comparing the second avatar sub-parameter in the pre-classified avatar data as a "standard value" of each preset map type with the target avatar parameters, and taking the one closest to the target avatar parameters (with the lowest parameter distance) as an optimal class.
Step S206: and obtaining the distance value between the target virtual image parameter and each pre-classified image data based on the first virtual image sub-parameter in the target virtual image parameter and the second virtual image sub-parameter in each pre-classified image data.
Step S207: and obtaining target pre-classification image data according to each distance value, wherein the target pre-classification image data is the pre-classification image data corresponding to the minimum distance value.
Step S208: and obtaining a corresponding preset map type according to the category of the face unit corresponding to the target pre-classified image data.
Exemplarily, step S206 is a step of calculating a distance value of the target avatar parameter and the pre-classified avatar data. Illustratively, the target avatar parameter includes a plurality of first avatar sub-parameters, the pre-classified avatar data includes a corresponding number of second avatar sub-parameters, and the sum of absolute values of differences between the first avatar sub-parameters and the corresponding second avatar sub-parameters is a distance value between the target avatar parameter and each of the pre-classified avatar data.
Fig. 9 is a schematic diagram illustrating the determination of a preset map type corresponding to a target avatar parameter based on a distance value according to an embodiment of the present disclosure. As shown in fig. 9, the target avatar parameter face_real includes two first avatar sub-parameters, represented as:
face_real=[face_real_1,face_real_2];
where face_real_1 represents the forehead width and face_real_2 the chin width; as shown in the figure, face_real_1=2 and face_real_2=4 (the parameter values may be in normalized units corresponding to the distance value). The pre-classified image data face_data_A corresponds to a "round face shape" and the pre-classified image data face_data_B to a "square face shape"; face_data_A and face_data_B each comprise two second avatar sub-parameters, expressed as follows:
face_data_A=[face_data_A_1,face_data_A_2];
face_data_B=[face_data_B_1,face_data_B_2];
where face_data_A_1=2, face_data_A_2=3, face_data_B_1=4 and face_data_B_2=4. Based on these parameters, the distance between the target avatar parameter face_real and the pre-classified image data face_data_A is calculated as:
D1=|face_real_1-face_data_A_1|+|face_real_2-face_data_A_2|=1。
The distance between the target avatar parameter face_real and the pre-classified image data face_data_B is:
D2=|face_real_1-face_data_B_1|+|face_real_2-face_data_B_2|=2。
Then face_data_A, which corresponds to the smaller distance D1, is determined as the target pre-classified image data, and according to the category of the face unit corresponding to face_data_A, the preset map type corresponding to the target avatar parameter is obtained, namely the "round face shape" in the embodiment shown in fig. 9.
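The distance computation of fig. 9 can be sketched as follows in Python; the numbers reproduce the example above, while the function and variable names are assumptions.

def l1_distance(target, reference):
    # Sum of absolute differences between corresponding sub-parameters.
    return sum(abs(t - r) for t, r in zip(target, reference))

face_real = [2, 4]        # target avatar parameter: [forehead width, chin width]
face_data_A = [2, 3]      # pre-classified image data for the "round face shape"
face_data_B = [4, 4]      # pre-classified image data for the "square face shape"

distances = {"round face shape": l1_distance(face_real, face_data_A),
             "square face shape": l1_distance(face_real, face_data_B)}
# distances == {"round face shape": 1, "square face shape": 2}
best = min(distances, key=distances.get)   # -> "round face shape" (smallest distance value)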
Further, as shown in fig. 10, the specific implementation step of step S205 includes:
step S2051: acquiring a plurality of real head portrait pictures corresponding to the target category identification;
step S2052: processing each real head portrait picture through a head portrait classification model to obtain a plurality of initial virtual image sub-parameters corresponding to each real head portrait picture;
step S2053: and calculating the average value of the initial virtual image sub-parameters corresponding to each real head portrait picture to obtain second virtual image sub-parameters corresponding to each initial virtual image sub-parameter.
Step S2054: if the target class identifier is the tail class identifier, ending the cycle; otherwise, return to step S2051.
Illustratively, a real head portrait picture is a real-person image captured by a camera, and a plurality of corresponding real head portrait pictures are obtained for each preset category of the target face unit. For example, taking the face shape as the target face unit, 10 representative real-person photos are collected for each of the different face shapes, such as a square face, an oval (melon-seed) face, a round face and a pear-shaped face. Each group of 10 square-face, oval-face, round-face and pear-shaped-face photos is then processed in turn through the head portrait classification model to obtain the 10 initial avatar parameters corresponding to that group, where each initial avatar parameter includes a plurality of initial avatar sub-parameters, for example 3 initial avatar sub-parameters respectively representing the forehead width, the cheekbone height and the chin width, so as to express the face-shape features. Then the 10 initial avatar parameters corresponding to each group of photos are averaged term by term, i.e. the 10 forehead widths, the 10 cheekbone heights and the 10 chin widths are each averaged, to obtain the average value corresponding to each initial avatar sub-parameter, i.e. the second avatar sub-parameter. The second avatar sub-parameters of the same group then form one pre-classified image data. Finally, it is judged whether the target category identifier indicating the target face unit is the last one; if so, processing continues with the next target face unit, until a plurality of pre-classified image data corresponding to each face unit are obtained.
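A compact sketch of this averaging procedure is given below in Python; classify_avatar is a hypothetical stand-in for the trained head portrait classification model.

def classify_avatar(picture):
    # Hypothetical call to the trained head portrait classification model; returns the
    # initial avatar sub-parameters, e.g. [forehead width, cheekbone height, chin width].
    raise NotImplementedError

def pre_classified_data(real_pictures):
    # Average the initial avatar sub-parameters over the real photos of one category
    # (e.g. 10 round-face photos) to obtain the second avatar sub-parameters.
    sub_params = [classify_avatar(p) for p in real_pictures]
    n = len(sub_params)
    return [sum(values) / n for values in zip(*sub_params)]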
It should be noted that, in order to solve the above technical problem, in this embodiment, when multiple corresponding real avatar pictures are obtained for each preset category of the target face unit, the preset category is consistent with the preset map type corresponding to the face unit map in the material library, so as to achieve the purpose of serving as a "standard value" of each preset map type.
In this embodiment, for each preset category of the target face unit, a plurality of corresponding real head portrait pictures are acquired and averaged to obtain a classification representation corresponding to the real face image, so that the target avatar parameters are accurately mapped to a preset map type even without exact mapping information, improving the accuracy of the determined preset map type.
Illustratively, as shown in fig. 11, the specific implementation step of step S206 includes:
step S2061: and acquiring a weighting coefficient corresponding to each first virtual character subparameter.
Step S2062: and aiming at each pre-classified image data, calculating the weighted distance between each first virtual image sub-parameter and the corresponding second virtual image sub-parameter respectively based on the weighted coefficient corresponding to each first virtual image sub-parameter.
Step S2063: and obtaining a distance value corresponding to the pre-classified image data based on the sum of the weighted distances.
In this embodiment, different weighting coefficients are used for weighting calculation of different first avatar sub-parameters, a larger weighting coefficient is set for more critical face units, such as eye shapes and face shapes, and a smaller weighting coefficient is set for non-critical face units, such as ear shapes, so as to further improve the accuracy of the determined preset map type and improve the consistency of the avatar and the real avatar. The weighting coefficient corresponding to each first avatar subparameter may be preset, and is not described herein again.
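A minimal sketch of the weighted distance is shown below in Python; the weighting coefficients in the example are assumed values for illustration only.

def weighted_distance(target, reference, weights):
    # Each first avatar sub-parameter contributes |target - reference|
    # scaled by its preset weighting coefficient.
    return sum(w * abs(t - r) for t, r, w in zip(target, reference, weights))

# Assumed weights: the forehead-width sub-parameter is weighted higher than the chin width.
print(weighted_distance([2, 4], [2, 3], weights=[1.5, 0.5]))  # -> 0.5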
In the embodiment of the disclosure, a plurality of pre-classified image data are obtained, the second avatar sub-parameter in the pre-classified image data is used as a "standard value" of each preset map type, and is compared with the target avatar parameter, and the one closest to the target avatar parameter (with the smallest distance value) is used as the optimal category, so as to realize the accurate classification of the target avatar parameter.
Step S209: size information of a face unit map is obtained, and the size information represents the outline size of the face unit map, wherein the face unit map comprises a basic face unit map and a non-basic face unit map.
Step S210: and determining the size coefficient of each non-basic face unit map according to the proportional relation of the size information of the basic face unit map and the size information of the non-basic face unit map.
Step S211: and adjusting the contour of each non-basic face unit map based on the size coefficient of each non-basic face unit map to generate a corrected face unit map, and generating a virtual face image corresponding to the target head portrait picture based on the basic face unit maps and the corrected face unit maps.
Illustratively, after the face unit maps corresponding to the face units are obtained, the face unit maps are rendered to generate the corresponding virtual face image. In this process, the precision of each face unit map and the recognition accuracy may cause the size ratios between the face unit maps to be inaccurate, which distorts the generated virtual face image. In this embodiment, after the face unit maps corresponding to the face units are obtained, they are first divided into a basic face unit map and non-basic face unit maps. Illustratively, the face unit map corresponding to the face shape is the basic face unit map, and the other face unit maps, corresponding to the eye shape, the nose shape and the like, are non-basic face unit maps; this is only exemplary, and another face unit map, or several face unit maps, may also serve as the basic face unit map, which is not described here again.
Then, the size information representing the outline sizes of the basic face unit map and the non-basic face unit maps is obtained; the size information can be read directly from the attribute information of the face unit map, or obtained by measurement. The size coefficient of each non-basic face unit map is determined according to the proportional relation between the size information of the basic face unit map and that of the non-basic face unit map, and the non-basic face unit map is then scaled based on a preset standard size coefficient. For example, if the size coefficient of the face unit map corresponding to the nose shape is 0.2, which is greater than the standard size coefficient of 0.08 corresponding to the nose shape, the non-basic face unit map needs to be scaled down proportionally, so that the proportion between the face unit map corresponding to the nose shape and the face unit map corresponding to the face shape is more realistic, improving the realism of the virtual face image.
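The size correction described above can be sketched as follows in Python; applying a single uniform scale factor to the map contour, and the sample sizes used, are assumptions based on the nose example.

def size_coefficient(non_basic_size, basic_size):
    # Size of a non-basic face unit map relative to the basic (face-shape) map.
    return non_basic_size / basic_size

def corrected_scale(non_basic_size, basic_size, standard_coefficient):
    # Factor by which the non-basic map contour is scaled toward the preset standard.
    return standard_coefficient / size_coefficient(non_basic_size, basic_size)

# Nose example from the text: measured coefficient 0.2 versus standard coefficient 0.08.
print(corrected_scale(non_basic_size=20, basic_size=100, standard_coefficient=0.08))
# -> ~0.4, i.e. the nose map contour is scaled down to about 40% of its current size.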
In this embodiment, the implementation manners of step S203 to step S204 are already described in step 103 in the embodiment shown in fig. 2 of the present disclosure, and are not described again.
Fig. 12 is a block diagram of an avatar generating apparatus according to an embodiment of the present disclosure, corresponding to the avatar generating method according to the above embodiment. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. Referring to fig. 12, the virtual face image generating apparatus 3 includes:
an obtaining module 31, configured to obtain a target head portrait picture, where the target head portrait picture includes a plurality of face units for forming a real face image;
the processing module 32 is configured to process the target avatar image through the pre-trained avatar classification model to obtain an avatar parameter corresponding to each face unit, where the avatar parameter is used to represent a category of the corresponding face unit;
and the generating module 33 is configured to obtain a face unit map corresponding to the face unit according to the avatar parameter, and generate an avatar corresponding to the target avatar picture based on the face unit map.
In a possible implementation manner, the generating module 33, when obtaining the face unit map corresponding to the face unit according to the avatar parameter, is specifically configured to: acquiring target virtual image parameters corresponding to the target face unit; obtaining a corresponding preset map type according to the target virtual image parameters, wherein the preset map type represents a preset category of the face unit map; and obtaining a face unit map corresponding to the target face unit according to the preset map type.
In one possible implementation, the target avatar parameter includes a plurality of avatar sub-parameters, the avatar sub-parameters being used to characterize a category of sub-features of the target face unit; the generating module 33 is specifically configured to, when obtaining the corresponding preset map type according to the target avatar parameter: acquiring preset first mapping information, wherein the first mapping information is used for representing a mapping relation between a target virtual image parameter consisting of a plurality of virtual image sub-parameters and a preset map type; and determining the preset map type corresponding to the target avatar parameter according to the first mapping information.
In one possible implementation, the target avatar parameter includes a plurality of first avatar sub-parameters, the first avatar sub-parameters characterizing a category of sub-features of the target face unit; the generating module 33 is specifically configured to, when obtaining the corresponding preset map type according to the target avatar parameter: acquiring a plurality of pre-classified image data, wherein each pre-classified image data represents the characteristics of a target face unit of one category, the pre-classified image data comprises a plurality of second virtual image sub-parameters, and the second virtual image sub-parameters represent the categories of the sub-characteristics of the target face unit of the corresponding category; obtaining a distance value between the target virtual image parameter and each pre-classified image data based on a first virtual image sub-parameter in the target virtual image parameter and a second virtual image sub-parameter in each pre-classified image data; obtaining target pre-classification image data according to each distance value, wherein the target pre-classification image data is the pre-classification image data corresponding to the minimum distance value; and obtaining a corresponding preset map type according to the category of the face unit corresponding to the target pre-classified image data.
In a possible implementation manner, the generating module 33, when acquiring a plurality of pre-classified image data, is specifically configured to: for each preset category of target face unit, sequentially perform the following steps: acquiring a plurality of real head portrait pictures corresponding to a target face unit of a preset category; processing each real head portrait picture through the head portrait classification model to obtain a plurality of initial virtual image sub-parameters corresponding to each real head portrait picture; and calculating the average value of the initial virtual image sub-parameters corresponding to each real head portrait picture to obtain second virtual image sub-parameters corresponding to each initial virtual image sub-parameter.
In a possible implementation manner, the generating module 33 is specifically configured to, when obtaining the distance value between the target avatar parameter and each pre-classified image data based on the first virtual image sub-parameter in the target avatar parameter and the second virtual image sub-parameter in each pre-classified image data: acquire a weighting coefficient corresponding to each first virtual image sub-parameter; and for each pre-classified image data, calculate the weighted distance between each first virtual image sub-parameter and the corresponding second virtual image sub-parameter based on the weighting coefficient corresponding to each first virtual image sub-parameter, and obtain the distance value corresponding to the pre-classified image data based on the weighted distances.
In a possible implementation manner, the generating module 33, when generating the virtual face image corresponding to the target avatar picture based on the face unit map, is specifically configured to: acquiring size information of a face unit map, wherein the size information represents the outline size of the face unit map, and the face unit map comprises a basic face unit map and a non-basic face unit map; determining a size coefficient of each non-basic face unit map according to the proportional relation of the size information of the basic face unit map and the size information of the non-basic face unit map; and adjusting the contour of each non-basic face unit map based on the size coefficient of each non-basic face unit map to generate a corrected face unit map, and generating a virtual face image corresponding to the target head portrait picture based on the basic face unit maps and the corrected face unit maps.
In a possible implementation manner, before acquiring the target avatar picture, the processing module 32 is further configured to: generating a training sample based on a preset virtual image engine, wherein the training sample comprises a simulated head portrait picture and corresponding simulated image parameters, the simulated head portrait picture comprises a plurality of face units for forming a simulated face image, and the simulated image parameters are used for representing the characteristics of the face units in the simulated head portrait picture; and training a preset neural network model to be convergent based on the training sample to obtain a head portrait classification model.
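Purely as an illustration of the training step, the sketch below trains a generic classifier on (simulated picture, simulated parameter) pairs until the epoch loss stops changing; the network architecture, data loader and convergence test are assumptions, and the actual model would predict avatar parameters per face unit.

```python
import torch
from torch import nn

def train_avatar_classifier(model, loader, epochs=50, tol=1e-4):
    """Train a generic classifier on simulated samples until the epoch loss
    stops improving; architecture and stopping rule are assumptions."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    previous = float("inf")
    for _ in range(epochs):
        total = 0.0
        for pictures, labels in loader:           # labels: simulated avatar parameters
            optimizer.zero_grad()
            loss = criterion(model(pictures), labels)
            loss.backward()
            optimizer.step()
            total += loss.item()
        if abs(previous - total) < tol:           # crude convergence check
            break
        previous = total
    return model
```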
In a possible implementation manner, the processing module 32, when generating the training sample based on the preset avatar engine, is specifically configured to: acquiring a plurality of preset image sub-parameters corresponding to each face unit, wherein the preset image sub-parameters are used for representing the category of the sub-features of the face unit; combining different parameter values of the preset image sub-parameters corresponding to each face unit to generate a plurality of image parameter groups corresponding to each face unit, wherein each image parameter group comprises one parameter value of each preset image sub-parameter corresponding to the face unit; and traversing the combinations formed by selecting any one image parameter group for each face unit to generate a plurality of simulated image parameters, and calling the virtual image engine based on the simulated image parameters to generate a corresponding number of simulated head portrait pictures.
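The traversal of parameter combinations amounts to a Cartesian product, which the following hypothetical sketch makes explicit; render_avatar stands in for the preset avatar engine, and in practice the product may be sampled rather than enumerated exhaustively.

```python
from itertools import product

def generate_training_samples(preset_values, render_avatar):
    """Enumerate simulated avatar parameters as the Cartesian product of one
    image parameter group per face unit and render each with the avatar-engine
    stand-in render_avatar (all names are hypothetical).

    preset_values: dict face_unit -> dict sub_parameter -> list of candidate values.
    """
    per_unit_groups = {}
    for unit, subs in preset_values.items():
        names, value_lists = zip(*subs.items())
        # All image parameter groups for this face unit (one value per sub-parameter).
        per_unit_groups[unit] = [dict(zip(names, values)) for values in product(*value_lists)]
    units = list(per_unit_groups)
    samples = []
    for combo in product(*(per_unit_groups[u] for u in units)):  # one group per face unit
        simulated_params = dict(zip(units, combo))
        samples.append((render_avatar(simulated_params), simulated_params))
    return samples
```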
The obtaining module 31, the processing module 32, and the generating module 33 are connected in sequence. The virtual face image generating device 3 provided in this embodiment may implement the technical solutions of the above method embodiments, and the implementation principle and technical effects are similar, which are not described herein again.
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, and as shown in fig. 13, the electronic device 4 includes:
a processor 41, and a memory 42 communicatively coupled to the processor 41;
the memory 42 stores computer-executable instructions;
processor 41 executes computer-executable instructions stored by memory 42 to implement the avatar generation method in the embodiment shown in fig. 2-11.
Wherein optionally the processor 41 and the memory 42 are connected by a bus 43.
For the relevant descriptions and effects of the steps, reference may be made to the corresponding descriptions in the embodiments of fig. 2 to fig. 11, and details are not repeated herein.
Referring to fig. 14, a schematic structural diagram of an electronic device 900 suitable for implementing the embodiment of the present disclosure is shown, where the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), or an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and a fixed terminal such as a digital TV or a desktop computer. The electronic device shown in fig. 14 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 14, the electronic device 900 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage means 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are also stored. The processing apparatus 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Generally, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 907 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. The communication device 909 may allow the electronic apparatus 900 to perform wireless or wired communication with other apparatuses to exchange data. While fig. 14 illustrates an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 909, or installed from the storage device 908, or installed from the ROM 902. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing apparatus 901.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the method shown in the above embodiments.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided an avatar generation method, including:
acquiring a target head portrait picture, wherein the target head portrait picture comprises a plurality of face units for forming a real face image; processing the target head portrait picture through a pre-trained head portrait classification model to obtain virtual face image parameters corresponding to each face unit, wherein the virtual image parameters are used for representing the category of the corresponding face unit; and obtaining a face unit map corresponding to the face unit according to the virtual image parameters, and generating a virtual face image corresponding to the target head portrait picture based on the face unit map.
According to one or more embodiments of the present disclosure, obtaining a face unit map corresponding to the face unit according to the avatar parameter includes: acquiring target virtual image parameters corresponding to the target face unit; obtaining a corresponding preset map type according to the target virtual image parameters, wherein the preset map type represents a preset category of the face unit map; and obtaining a face unit map corresponding to the target face unit according to the preset map type.
According to one or more embodiments of the present disclosure, the target avatar parameters include a plurality of avatar sub-parameters for characterizing a category of sub-features of the target face unit; the obtaining a corresponding preset map type according to the target avatar parameter includes: acquiring preset first mapping information, wherein the first mapping information is used for representing a mapping relationship between a target avatar parameter formed by the plurality of avatar sub-parameters and the preset map type; and determining the preset map type corresponding to the target avatar parameter according to the first mapping information.
According to one or more embodiments of the present disclosure, the target avatar parameters include a plurality of first avatar sub-parameters for characterizing a category of sub-features of the target face unit; the step of obtaining the corresponding preset map type according to the target avatar parameters comprises the following steps: acquiring a plurality of pre-classified image data, wherein each pre-classified image data represents the characteristics of a target face unit of a category, the pre-classified image data comprises a plurality of second avatar sub-parameters, and the second avatar sub-parameters represent the categories of the sub-features of the target face units of the corresponding categories; obtaining a distance value between the target avatar parameter and each pre-classified image data based on a first avatar sub-parameter in the target avatar parameter and a second avatar sub-parameter in each pre-classified image data; obtaining target pre-classified image data according to the distance values, wherein the target pre-classified image data is the pre-classified image data corresponding to the minimum distance value; and obtaining a corresponding preset map type according to the category of the face unit corresponding to the target pre-classified image data.
According to one or more embodiments of the present disclosure, the obtaining a plurality of pre-classified image data includes: for each preset category of the target face unit, sequentially performing the following steps: acquiring a plurality of real head portrait pictures corresponding to the target face unit of the preset category; processing each real head portrait picture through the head portrait classification model to obtain a plurality of initial virtual image sub-parameters corresponding to each real head portrait picture; and calculating the average value of the initial virtual image sub-parameters corresponding to each real head portrait picture to obtain second virtual image sub-parameters corresponding to each initial virtual image sub-parameter.
According to one or more embodiments of the present disclosure, obtaining the distance value between the target avatar parameter and each pre-classified image data based on a first avatar sub-parameter in the target avatar parameter and a second avatar sub-parameter in each pre-classified image data comprises: acquiring a weighting coefficient corresponding to each first avatar sub-parameter; and, for each pre-classified image data, calculating the weighted distance between each first avatar sub-parameter and the corresponding second avatar sub-parameter based on the weighting coefficient corresponding to that first avatar sub-parameter, and obtaining the distance value corresponding to the pre-classified image data based on each weighted distance.
According to one or more embodiments of the present disclosure, the generating an avatar corresponding to the target avatar picture based on the face unit map includes: acquiring size information of the face unit map, wherein the size information represents the outline size of the face unit map, and the face unit map comprises a basic face unit map and a non-basic face unit map; determining a size coefficient of each non-base face unit map according to a proportional relationship between size information of the base face unit map and size information of the non-base face unit map; and adjusting the contour of each non-basic face unit map based on the size coefficient of each non-basic face unit map to generate a corrected face unit map, and generating a virtual face image corresponding to the target head portrait picture based on the basic face unit maps and the corrected face unit maps.
According to one or more embodiments of the present disclosure, before the acquiring the target avatar picture, the method further includes: generating a training sample based on a preset virtual image engine, wherein the training sample comprises a simulated head portrait picture and corresponding simulated image parameters, the simulated head portrait picture comprises a plurality of face units for forming a simulated face image, and the simulated image parameters are used for representing the characteristics of the face units in the simulated head portrait picture; and training a preset neural network model to be converged based on the training sample to obtain the head portrait classification model.
According to one or more embodiments of the present disclosure, the generating of the training sample based on the preset avatar engine includes: acquiring a plurality of preset image sub-parameters corresponding to each face unit, wherein the preset image sub-parameters are used for representing the category of the sub-features of the face unit; combining different parameter values of the preset image sub-parameters corresponding to each face unit to generate a plurality of image parameter groups corresponding to each face unit, wherein each image parameter group comprises one parameter value of each preset image sub-parameter corresponding to each face unit; and traversing the combinations formed by selecting any one image parameter group for each face unit to generate a plurality of simulated image parameters, and calling the virtual image engine based on the simulated image parameters to generate a corresponding number of simulated head portrait pictures.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided an avatar generation apparatus including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a target head portrait picture which comprises a plurality of face units used for forming a real face image;
the processing module is used for processing the target head portrait picture through a pre-trained head portrait classification model to obtain virtual face image parameters corresponding to each face unit, and the virtual image parameters are used for representing the category of the corresponding face unit;
and the generating module is used for obtaining a face unit map corresponding to the face unit according to the virtual image parameters and generating the virtual face image corresponding to the target head image picture based on the face unit map.
According to one or more embodiments of the present disclosure, when obtaining the face unit map corresponding to the face unit according to the avatar parameter, the generating module is specifically configured to: acquiring target virtual image parameters corresponding to the target face unit; obtaining a corresponding preset map type according to the target virtual image parameters, wherein the preset map type represents a preset category of the face unit map; and obtaining a face unit map corresponding to the target face unit according to the preset map type.
According to one or more embodiments of the present disclosure, the target avatar parameters include a plurality of avatar sub-parameters for characterizing a category of sub-features of the target face unit; the generating module is specifically configured to, when obtaining the corresponding preset map type according to the target avatar parameter: acquiring preset first mapping information, wherein the first mapping information is used for representing a mapping relation between a target avatar parameter consisting of the plurality of avatar sub-parameters and the preset map type; and determining a preset map type corresponding to the target avatar parameter according to the first mapping information.
According to one or more embodiments of the present disclosure, the target avatar parameters include a plurality of first avatar sub-parameters for characterizing a category of sub-features of the target face unit; the generation module is specifically configured to, when obtaining the corresponding preset map type according to the target avatar parameter: acquiring a plurality of pre-classified image data, wherein each pre-classified image data respectively represents the characteristics of a target face unit of one category, the pre-classified image data comprises a plurality of second virtual image sub-parameters, and the second virtual image sub-parameters represent the categories of the sub-features of the target face unit of the corresponding category; obtaining a distance value between the target avatar parameter and each pre-classified image data based on a first avatar sub-parameter in the target avatar parameter and a second avatar sub-parameter in each pre-classified image data; obtaining target pre-classified image data according to the distance values, wherein the target pre-classified image data is the pre-classified image data corresponding to the minimum distance value; and obtaining a corresponding preset map type according to the category of the face unit corresponding to the target pre-classified image data.
According to one or more embodiments of the present disclosure, when obtaining a plurality of pre-classified image data, the generating module is specifically configured to: sequentially performing, for each preset category of the target face unit, the following steps: acquiring a plurality of real head portrait pictures corresponding to the target face unit of the preset category; processing each real head portrait picture through the head portrait classification model to obtain a plurality of initial virtual image sub-parameters corresponding to each real head portrait picture; and calculating the average value of the initial virtual image sub-parameters corresponding to each real head portrait picture to obtain second virtual image sub-parameters corresponding to each initial virtual image sub-parameter.
According to one or more embodiments of the present disclosure, the generating module, when obtaining the distance value between the target avatar parameter and each pre-classified image data based on the first avatar sub-parameter in the target avatar parameter and the second avatar sub-parameter in each pre-classified image data, is specifically configured to: acquiring a weighting coefficient corresponding to each first avatar sub-parameter; and, for each pre-classified image data, calculating the weighted distance between each first avatar sub-parameter and the corresponding second avatar sub-parameter based on the weighting coefficient corresponding to that first avatar sub-parameter, and obtaining the distance value corresponding to the pre-classified image data based on each weighted distance.
According to one or more embodiments of the present disclosure, when generating the virtual face image corresponding to the target avatar picture based on the face unit map, the generating module is specifically configured to: acquiring size information of the face unit map, wherein the size information represents the outline size of the face unit map, and the face unit map comprises a basic face unit map and a non-basic face unit map; determining a size coefficient of each non-base face unit map according to a proportional relationship between size information of the base face unit map and size information of the non-base face unit map; and adjusting the contour of each non-basic face unit map based on the size coefficient of each non-basic face unit map to generate a corrected face unit map, and generating a virtual face image corresponding to the target head portrait picture based on the basic face unit maps and the corrected face unit maps.
According to one or more embodiments of the present disclosure, before the obtaining of the target avatar picture, the processing module is further configured to: generating a training sample based on a preset virtual image engine, wherein the training sample comprises a simulated head portrait picture and corresponding simulated image parameters, the simulated head portrait picture comprises a plurality of face units for forming a simulated face image, and the simulated image parameters are used for representing the characteristics of the face units in the simulated head portrait picture; and training a preset neural network model to be converged based on the training sample to obtain the head portrait classification model.
According to one or more embodiments of the present disclosure, when generating the training sample based on the preset avatar engine, the processing module is specifically configured to: acquiring a plurality of preset image sub-parameters corresponding to each face unit, wherein the preset image sub-parameters are used for representing the category of the sub-features of the face unit; combining different parameter values of the preset image sub-parameters corresponding to each face unit to generate a plurality of image parameter groups corresponding to each face unit, wherein each image parameter group comprises one parameter value of each preset image sub-parameter corresponding to each face unit; and traversing a combination formed by any image parameter group corresponding to each face unit to generate a plurality of simulation image parameters, and calling the virtual image engine based on the simulation image parameters to generate a corresponding number of simulation head portrait pictures.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device, including: a processor, and a memory communicatively coupled to the processor;
the memory stores computer execution instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of generating an avatar as described above in the first aspect and in various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the avatar generation method as described in the first aspect above and in various possible designs of the first aspect.
In a fifth aspect, the disclosed embodiments provide a computer program product comprising a computer program that, when executed by a processor, implements an avatar generation method as described above in the first aspect and in various possible designs of the first aspect.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other combinations of the features described above or their equivalents without departing from the spirit of the disclosure. For example, technical solutions formed by mutually replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure are also encompassed.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (13)

1. An avatar generating method, comprising:
acquiring a target head portrait picture, wherein the target head portrait picture comprises a plurality of face units for forming a real face image;
processing the target head portrait picture through a pre-trained head portrait classification model to obtain virtual face image parameters corresponding to each face unit, wherein the virtual image parameters are used for representing the category of the corresponding face unit;
and obtaining a face unit map corresponding to the face unit according to the virtual image parameters, and generating a virtual face image corresponding to the target head portrait picture based on the face unit map.
2. The method according to claim 1, wherein said deriving a face unit map corresponding to said face unit based on said avatar parameters comprises:
acquiring target virtual image parameters corresponding to the target face unit;
obtaining a corresponding preset map type according to the target virtual image parameter, wherein the preset map type represents a preset category of the face unit map;
and obtaining a face unit map corresponding to the target face unit according to the preset map type.
3. The method according to claim 2, wherein the target avatar parameter comprises a plurality of avatar sub-parameters characterizing a category of sub-features of the target face unit; and the obtaining a corresponding preset map type according to the target avatar parameter comprises:
acquiring preset first mapping information, wherein the first mapping information is used for representing a mapping relation between a target avatar parameter consisting of the plurality of avatar sub-parameters and the preset map type;
and determining a preset map type corresponding to the target avatar parameter according to the first mapping information.
4. The method according to claim 2, wherein the target avatar parameters include a plurality of first avatar sub-parameters characterizing a category of sub-features of the target face unit; and the obtaining a corresponding preset map type according to the target avatar parameter comprises:
acquiring a plurality of pre-classified image data, wherein each pre-classified image data represents the characteristics of a target face unit of a category, the pre-classified image data comprises a plurality of second virtual image sub-parameters, and the second virtual image sub-parameters represent the categories of the sub-characteristics of the target face units of the corresponding categories;
obtaining a distance value between the target avatar parameter and each pre-classified avatar data based on a first avatar sub-parameter in the target avatar parameter and a second avatar sub-parameter in each pre-classified avatar data;
obtaining target pre-classified image data according to the distance values, wherein the target pre-classified image data is the pre-classified image data corresponding to the minimum distance value;
and obtaining a corresponding preset map type according to the category of the face unit corresponding to the target pre-classified image data.
5. The method of claim 4, wherein said obtaining a plurality of pre-classified image data comprises:
sequentially performing, for each preset category of the target face unit, the following steps:
acquiring a plurality of real head portrait pictures corresponding to the target face unit of the preset category;
processing each real head portrait picture through the head portrait classification model to obtain a plurality of initial virtual image sub-parameters corresponding to each real head portrait picture;
and calculating the average value of the initial virtual image sub-parameters corresponding to each real head portrait picture to obtain second virtual image sub-parameters corresponding to each initial virtual image sub-parameter.
6. The method of claim 4, wherein obtaining a distance value between the target avatar parameter and each of the pre-classified image data based on a first avatar sub-parameter in the target avatar parameter and a second avatar sub-parameter in each of the pre-classified image data comprises:
acquiring a weighting coefficient corresponding to each first avatar sub-parameter;
and, for each pre-classified image data, calculating the weighted distance between each first avatar sub-parameter and the corresponding second avatar sub-parameter based on the weighting coefficient corresponding to that first avatar sub-parameter, and obtaining the distance value corresponding to the pre-classified image data based on each weighted distance.
7. The method according to any of claims 1-6, wherein said generating an avatar corresponding to said target avatar picture based on said face unit map comprises:
acquiring size information of the face unit map, wherein the size information represents the outline size of the face unit map, and the face unit map comprises a basic face unit map and a non-basic face unit map;
determining a size coefficient of each non-basic face unit map according to a proportional relation between size information of the basic face unit map and size information of the non-basic face unit map;
and adjusting the contour of each non-basic face unit map based on the size coefficient of each non-basic face unit map to generate a corrected face unit map, and generating a virtual face image corresponding to the target head portrait picture based on the basic face unit maps and the corrected face unit maps.
8. The method according to any one of claims 1-6, further comprising, prior to said obtaining a target avatar picture:
generating a training sample based on a preset virtual image engine, wherein the training sample comprises a simulated head portrait picture and corresponding simulated image parameters, the simulated head portrait picture comprises a plurality of face units for forming a simulated face image, and the simulated image parameters are used for representing the characteristics of the face units in the simulated head portrait picture;
and training a preset neural network model to be converged based on the training sample to obtain the head portrait classification model.
9. The method of claim 8, wherein generating training samples based on a preset avatar engine comprises:
acquiring a plurality of preset image sub-parameters corresponding to each face unit, wherein the preset image sub-parameters are used for representing the category of the sub-features of the face unit;
combining different parameter values of the preset image sub-parameters corresponding to each face unit to generate a plurality of image parameter groups corresponding to each face unit, wherein the image parameter groups comprise one parameter value of each preset image sub-parameter corresponding to each face unit;
and traversing a combination formed by any image parameter group corresponding to each face unit to generate a plurality of simulation image parameters, and calling the virtual image engine based on the simulation image parameters to generate a corresponding number of simulation head portrait pictures.
10. An avatar generation apparatus, comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a target head portrait picture which comprises a plurality of face units used for forming a real face image;
the processing module is used for processing the target head portrait picture through a pre-trained head portrait classification model to obtain virtual face image parameters corresponding to each face unit, and the virtual image parameters are used for representing the category of the corresponding face unit;
and the generating module is used for obtaining a face unit map corresponding to the face unit according to the virtual image parameters and generating a virtual face image corresponding to the target head image picture based on the face unit map.
11. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the avatar generation method of any of claims 1-9.
12. A computer-readable storage medium, characterized in that computer-executable instructions are stored therein, which, when executed by a processor, implement the avatar generation method of any of claims 1-9.
13. A computer program product comprising a computer program that when executed by a processor implements the avatar generation method of any of claims 1-9.