CN114998554A - Three-dimensional cartoon face modeling method and device - Google Patents
- Publication number
- CN114998554A CN114998554A CN202210483519.4A CN202210483519A CN114998554A CN 114998554 A CN114998554 A CN 114998554A CN 202210483519 A CN202210483519 A CN 202210483519A CN 114998554 A CN114998554 A CN 114998554A
- Authority
- CN
- China
- Prior art keywords
- dimensional
- face
- cartoon
- degree
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06V20/64—Three-dimensional objects
- G06V40/168—Feature extraction; Face representation
- G06T2219/2024—Style variation (indexing scheme for editing of 3D models)
Abstract
The application discloses a three-dimensional cartoon face modeling method and device. The method comprises the following steps: extracting two-dimensional face features of a target person's face from a two-dimensional image; extracting three-dimensional face features of the face from a three-dimensional depth image of the target person; and fusing the two-dimensional face features, the three-dimensional face features, and preset cartoon style features to generate a three-dimensional cartoon face model of the target face. This solves the technical problems in the related art that neural-network-based cartoon modeling of a real face yields a single cartoon style and low recognizability.
Description
Technical Field
The application relates to the technical field of computer graphics and deep learning, in particular to a three-dimensional cartoon face modeling method and device.
Background
Three-dimensional reconstruction is widely applied in fields such as 3D games, animation, and film. Because the face is one of the most prominent and recognizable parts of a character, demand for three-dimensional face reconstruction is increasing.
Benefiting from improvements in the computing power of computers, mobile terminals, and other devices, techniques that use deep learning to extract features from real human faces have matured, and the related art can generate a cartoon likeness of a real face based on a neural network.
However, in the related art, cartoon modeling can only be performed manually by professional technicians, making it difficult to popularize, and the generated three-dimensional cartoon face models are similar in style, lack diversity, and do not preserve the recognizability of the real face; improvement is therefore needed.
Disclosure of Invention
The application provides a three-dimensional cartoon face modeling method and device, aiming to solve the technical problems in the related art that neural-network-based cartoon modeling of a real face yields a single cartoon style and low recognizability.
An embodiment of the first aspect of the present application provides a three-dimensional cartoon face modeling method, including the following steps: extracting two-dimensional face features of a target person's face from a two-dimensional image; extracting three-dimensional face features of the face from a three-dimensional depth map of the target person; and fusing the two-dimensional face features, the three-dimensional face features, and preset cartoon style features to generate a three-dimensional cartoon face model of the target face.
Optionally, in an embodiment of the present application, generating the three-dimensional cartoon face model of the target face includes: generating an initial three-dimensional cartoon face model of the target face from fusion features obtained by fusing the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features; calculating the presentation degree of the initial three-dimensional cartoon face model; when the presentation degree is smaller than a preset threshold, linearly weighting at least one of the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features based on a preset standard to generate a new three-dimensional cartoon face model; and iterating this optimization until the presentation degree of the new three-dimensional cartoon face model is greater than or equal to the preset threshold, so as to obtain the final three-dimensional cartoon face model.
Optionally, in an embodiment of the present application, after generating the new three-dimensional cartoon face model, the method further includes: obtaining a current feature weight value of at least one feature in the current iteration round; and obtaining the presentation degree of the new three-dimensional cartoon face model in the current iteration round from the current feature weight value and the presentation degree of the new three-dimensional cartoon face model in the previous iteration round.
Optionally, in an embodiment of the present application, the presentation degree is calculated as:

t_V = T_V * [ a * Σ_{k=1}^{K} D_V(k) + (1 - a) * Σ_{m=1}^{M} S_V(m) ]

where K denotes the total number of features strongly correlated with person recognizability, M denotes the total number of features strongly correlated with the person-satisfaction index, T_V denotes the degree to which the original features are presented in the model before the V-th iteration, a denotes the proportion of person recognizability in the final feature presentation, 1 - a denotes the proportion of person satisfaction in the final feature presentation, D_V(k) is the proportion of each feature in recognizability in the V-th round, and S_V(m) is the proportion of each feature in satisfaction in the V-th round.
Optionally, in an embodiment of the application, the preset cartoon style features comprise at least one of: at least one Disney cartoon style feature, at least one Japanese cartoon style feature, and at least one Meta cartoon style feature.
An embodiment of the second aspect of the present application provides a three-dimensional cartoon face modeling apparatus, including: a first extraction module for extracting two-dimensional face features of a target person's face from a two-dimensional image; a second extraction module for extracting three-dimensional face features of the face from the three-dimensional depth map of the target person; and a modeling module for fusing the two-dimensional face features, the three-dimensional face features, and preset cartoon style features to generate a three-dimensional cartoon face model of the target face.
Optionally, in an embodiment of the present application, the modeling module includes: a fusion unit for generating an initial three-dimensional cartoon face model of the target face from fusion features obtained by fusing the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features; and a calculation unit for calculating the presentation degree of the initial three-dimensional cartoon face model, linearly weighting at least one of the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features based on a preset standard when the presentation degree is smaller than a preset threshold to generate a new three-dimensional cartoon face model, and iterating this optimization until the presentation degree of the new three-dimensional cartoon face model is greater than or equal to the preset threshold, so as to obtain the final three-dimensional cartoon face model.
Optionally, in an embodiment of the present application, the modeling module is further configured to obtain a current feature weight value of at least one feature in the current iteration round, and to obtain the presentation degree of the new three-dimensional cartoon face model in the current iteration round from the current feature weight value and the presentation degree of the new three-dimensional cartoon face model in the previous iteration round.
Optionally, in an embodiment of the present application, the presentation degree is calculated as:

t_V = T_V * [ a * Σ_{k=1}^{K} D_V(k) + (1 - a) * Σ_{m=1}^{M} S_V(m) ]

where K denotes the total number of features strongly correlated with person recognizability, M denotes the total number of features strongly correlated with the person-satisfaction index, T_V denotes the degree to which the original features are presented in the model before the V-th iteration, a denotes the proportion of person recognizability in the final feature presentation, 1 - a denotes the proportion of person satisfaction in the final feature presentation, D_V(k) is the proportion of each feature in recognizability in the V-th round, and S_V(m) is the proportion of each feature in satisfaction in the V-th round.
Optionally, in an embodiment of the application, the preset cartoon style features comprise at least one of: at least one Disney cartoon style feature, at least one Japanese cartoon style feature, and at least one Meta cartoon style feature.
An embodiment of the third aspect of the present application provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the three-dimensional cartoon face modeling method according to the foregoing embodiments.
An embodiment of the fourth aspect of the present application provides a computer-readable storage medium storing computer instructions for causing a computer to execute the three-dimensional cartoon face modeling method according to the foregoing embodiments.
According to the embodiments of the present application, three-dimensional face features of the target person can be extracted from the three-dimensional depth map and fused with the two-dimensional face features from the target person's two-dimensional image and with cartoon style features, so as to generate a three-dimensional cartoon face model of the target person. The generated model has higher recognizability, its style can change with the chosen cartoon style, and it offers greater flexibility, effectively meeting modeling requirements and improving the user experience. This solves the technical problems in the related art that neural-network-based cartoon modeling of a real face yields a single cartoon style and low recognizability.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a three-dimensional cartoon face modeling method according to an embodiment of the present application;
FIG. 2 is a diagram of a Meta cartoon style according to one embodiment of the present application;
FIG. 3 is a flow chart of a method for modeling a three-dimensional cartoon face according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a three-dimensional cartoon face modeling apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes a three-dimensional cartoon face modeling method and apparatus according to embodiments of the present application with reference to the drawings. To solve the technical problems mentioned in the Background, namely that neural-network-based cartoon modeling of a real face in the related art yields a single cartoon style and low recognizability, the present application provides a three-dimensional cartoon face modeling method.
Specifically, fig. 1 is a schematic flow chart of a three-dimensional cartoon face modeling method provided in an embodiment of the present application.
As shown in fig. 1, the three-dimensional cartoon face modeling method comprises the following steps:
in step S101, two-dimensional face features of the face of the target person in the two-dimensional image are extracted.
It can be understood that face features may take the form of vectors extracted by a neural network and of features representing the styles of the various facial organs, i.e., characteristics of a given organ for a given type of person. They may be obtained, for example, by training a deep learning model or by modeling the detailed features of the facial organs. Taking eyebrows as an example, eyebrows can be classified by shape as long, short, thick, thin, splayed, upturned, straight, crescent, angled, knitted, and the like.
In actual implementation, the embodiment of the present application may extract the two-dimensional face features of the target person's face in the two-dimensional image using a neural network or traditional computer graphics (i.e., two-dimensional feature operators from traditional graphics). Extracting face features from a planar view facilitates subsequent fusion with the three-dimensional face features, so as to generate a three-dimensional cartoon face model with higher recognizability.
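For illustration only, two-dimensional face features of the kind described here could be a small vector of geometric ratios computed from detected facial landmarks. The sketch below is a minimal stand-in, not the patent's method: the landmark indices, the chosen ratios, and the fixed sample coordinates are all hypothetical assumptions (a real system would use a neural network or classical 2D feature operators).

```python
import numpy as np

def extract_2d_features(landmarks: np.ndarray) -> np.ndarray:
    """Toy 2D face-feature vector from (x, y) landmarks.

    `landmarks` is an (N, 2) array; the indices below are hypothetical
    (0/1: eye centers, 2: nose tip, 3/4: mouth corners, 5: brow peak).
    """
    eye_dist = np.linalg.norm(landmarks[0] - landmarks[1])
    mouth_w = np.linalg.norm(landmarks[3] - landmarks[4])
    nose_to_mouth = np.linalg.norm(landmarks[2] - 0.5 * (landmarks[3] + landmarks[4]))
    brow_height = landmarks[0][1] - landmarks[5][1]
    # Normalize by inter-eye distance so the vector is scale-invariant.
    return np.array([mouth_w, nose_to_mouth, brow_height]) / eye_dist

# Hypothetical sample landmarks for a frontal face.
lm = np.array([[30., 40.], [70., 40.], [50., 60.], [40., 80.], [60., 80.], [30., 25.]])
feat = extract_2d_features(lm)
```

Such a vector is one possible "planar" representation that could later be fused with three-dimensional features.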
In step S102, three-dimensional face features of the face are extracted from the three-dimensional depth map of the target person.
It can be understood that the face is three-dimensional, and conventional two-dimensional face feature extraction alone can hardly capture all face features of the target person; in particular, a two-dimensional image may be limited by lighting and shadow, giving poor recognizability of the facial contour. A three-dimensional feature is a feature obtained by processing the three-dimensional depth map. Its extraction is similar to that of two-dimensional features, e.g., deep-learning feature extraction based on a neural network algorithm, or feature extraction based on traditional graphical face-modeling representations, and its form differs only slightly from the two-dimensional case.
Therefore, the embodiment of the present application uses traditional computer graphics to extract the three-dimensional face features of the target person from the three-dimensional depth map (i.e., three-dimensional feature operators from traditional graphics), which effectively removes the limitations of extracting face features from a two-dimensional image and facilitates the subsequent generation of a three-dimensional cartoon face model with higher recognizability.
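As an illustrative sketch of "features obtained after processing a three-dimensional depth map": simple depth statistics and gradients can stand in for surface-shape features such as nose prominence. The statistics chosen below are assumptions for illustration; the patent's actual operators are not specified.

```python
import numpy as np

def extract_3d_features(depth: np.ndarray) -> np.ndarray:
    """Toy 3D feature vector from an H x W depth map.

    Uses overall relief, average surface slope, and center depth as
    crude stand-ins for 3D facial-shape features; a real system would
    run a neural network or classical 3D operators on the depth map.
    """
    gy, gx = np.gradient(depth.astype(float))
    prominence = depth.max() - depth.min()   # overall relief of the face
    mean_slope = np.mean(np.hypot(gx, gy))   # average surface slope
    center = depth[depth.shape[0] // 2, depth.shape[1] // 2]
    return np.array([prominence, mean_slope, center])
```

Unlike the two-dimensional case, these quantities are not affected by lighting in the color image, which matches the motivation given above.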
In step S103, the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features are fused to generate a three-dimensional cartoon face model of the target face.
As a possible implementation, the embodiment of the present application may extract corresponding key features from the two-dimensional face features and the three-dimensional face features, and fuse the key features extracted from the two-dimensional face features, the corresponding key features extracted from the three-dimensional face features, and the preset cartoon style features, so as to obtain a three-dimensional cartoon face model that better conforms to the specific style of the target person's face.
The key features are those that, for the finally presented three-dimensional cartoon face model, yield high person recognizability and person satisfaction and are consistent with the cartoon style. The three feature sets (the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features) each comprise different specific feature families, with each feature representing a certain characteristic of a facial organ. The fusion controls, by weighting, the degree to which different features are displayed in the final face modeling result, so that a person's representative features are highlighted while secondary features are relatively weakened, thereby meeting the recognizability index.
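The weighting-based fusion described here can be sketched as follows. The concatenation of the three feature sets, the boolean key-feature mask, and the boost factor are illustrative assumptions, not the patent's exact scheme:

```python
import numpy as np

def fuse_features(f2d: np.ndarray, f3d: np.ndarray, style: np.ndarray,
                  key_mask: np.ndarray, boost: float = 1.5) -> np.ndarray:
    """Fuse the three feature families by weighting: key features
    (high person recognizability / satisfaction) are boosted, while
    secondary features are relatively weakened.

    `key_mask` marks key features in the concatenated vector; `boost`
    is a hypothetical emphasis factor.
    """
    fused = np.concatenate([f2d, f3d, style])
    weights = np.where(key_mask, boost, 1.0 / boost)  # highlight vs. weaken
    return fused * weights
```

In this sketch, the weighted vector would then drive generation of the initial cartoon face model.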
By fusing the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features, the embodiment of the present application makes the generated three-dimensional face model more recognizable, allows its style to change with the chosen cartoon style, and provides higher flexibility and higher person satisfaction.
Optionally, in an embodiment of the present application, generating the three-dimensional cartoon face model of the target face includes: generating an initial three-dimensional cartoon face model of the target face from fusion features obtained by fusing the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features; calculating the presentation degree of the initial three-dimensional cartoon face model; when the presentation degree is smaller than a preset threshold, linearly weighting at least one of the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features based on a preset standard to generate a new three-dimensional cartoon face model; and iterating this optimization until the presentation degree of the new three-dimensional cartoon face model is greater than or equal to the preset threshold, so as to obtain the final three-dimensional cartoon face model.
In actual execution, the embodiment of the present application may fuse the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features to obtain fusion features, so as to generate the initial three-dimensional cartoon face model of the target person.
Further, the initial three-dimensional cartoon face model can be optimized by calculation. Specifically, the presentation degree of the initial model is calculated; when it is smaller than a preset threshold, the embodiment of the present application may linearly weight at least one of the two-dimensional face features, the three-dimensional face features, and the preset cartoon style features based on a preset standard to generate a new three-dimensional cartoon face model, iterating until the presentation degree of the new model is greater than or equal to the preset threshold, so as to obtain the final three-dimensional cartoon face model.
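The iterate-until-threshold optimization can be sketched as a simple loop. The scoring and re-weighting callbacks below stand in for the patent's presentation-degree calculation and linear weighting, and the default threshold and iteration cap are assumptions:

```python
def optimize_model(features, presentation_degree, reweight,
                   threshold: float = 0.9, max_iters: int = 50):
    """Iteratively re-weight features until the model's presentation
    degree reaches the threshold.

    `presentation_degree(features)` scores the current model;
    `reweight(features)` applies one round of linear weighting.
    Both are caller-supplied stand-ins; `max_iters` guards against
    non-convergence.
    """
    degree = presentation_degree(features)
    iters = 0
    while degree < threshold and iters < max_iters:
        features = reweight(features)        # generate a new model
        degree = presentation_degree(features)
        iters += 1
    return features, degree
```

A toy run with a scalar "feature" and a multiplicative reweighting converges in a few rounds.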
The preset standard may be a standard of person recognizability and person satisfaction. Specifically, both serve the target group: the former is measured by the loss between the input image and the output model on key features, while the latter is based on surveys of the target group, such as questionnaire scores on the results. When the two conflict, person satisfaction takes priority.
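The recognizability measurement described here (a loss between the input image and the output model on key features) might be realized, for instance, as a mean squared error; the MSE choice is an assumption, as the patent does not name a specific loss:

```python
import numpy as np

def recognizability_loss(input_feats, model_feats) -> float:
    """Person recognizability proxy: mean squared error between the key
    features of the input image and the corresponding features of the
    output cartoon model (lower loss = higher recognizability)."""
    a = np.asarray(input_feats, dtype=float)
    b = np.asarray(model_feats, dtype=float)
    return float(np.mean((a - b) ** 2))
```

Person satisfaction, by contrast, comes from surveys and would enter as externally supplied scores rather than a computed loss.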
It should be noted that the preset threshold may be set by those skilled in the art according to the actual situation, and the preset standard may be set according to the application scenario of the three-dimensional cartoon face model and research on the target group; neither is specifically limited herein.
Optionally, in an embodiment of the present application, the presentation degree is calculated as:

t_V = T_V * [ a * Σ_{k=1}^{K} D_V(k) + (1 - a) * Σ_{m=1}^{M} S_V(m) ]

where K denotes the total number of features strongly correlated with person recognizability, M denotes the total number of features strongly correlated with the person-satisfaction index, T_V denotes the degree to which the original features are presented in the model before the V-th iteration, a denotes the proportion of person recognizability in the final feature presentation, 1 - a denotes the proportion of person satisfaction in the final feature presentation, D_V(k) is the proportion of each feature in recognizability in the V-th round, and S_V(m) is the proportion of each feature in satisfaction in the V-th round.
Further, the term in brackets, a * Σ_{k=1}^{K} D_V(k) + (1 - a) * Σ_{m=1}^{M} S_V(m), is the updated weight of each facial feature in the V-th round. Applying this weight update to the previous round's feature presentation degree T_V yields t_V, the degree to which the original features are presented in the model after the V-th iteration.
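A single update round, as described here, can be sketched directly from the variable definitions. Note that the multiplicative combination of the bracketed weight update with the previous round's degree is an assumption reconstructed from this description:

```python
def presentation_update(T_prev: float, a: float, D, S) -> float:
    """One round of the presentation-degree update.

    T_prev: degree of original features before this round (T_V);
    a / 1 - a: weights of person recognizability vs. satisfaction;
    D: per-feature recognizability proportions D_V(k), k = 1..K;
    S: per-feature satisfaction proportions S_V(m), m = 1..M.
    """
    update = a * sum(D) + (1.0 - a) * sum(S)  # bracketed weight update
    return T_prev * update                    # t_V = T_V * update
```

When the update term equals 1, the presentation degree is unchanged; values below 1 dampen it, triggering further iterations.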
Optionally, in an embodiment of the present application, after generating the new three-dimensional cartoon face model, the method further includes: obtaining a current feature weight value of at least one feature in the current iteration round; and obtaining the presentation degree of the new three-dimensional cartoon face model in the current iteration round from the current feature weight value and the presentation degree of the new three-dimensional cartoon face model in the previous iteration round.
Specifically, with reference to the above formula, the embodiment of the present application may obtain each feature weight value in the V-th round by the linear weighting of the V-th iteration over each key feature under the person-recognizability and person-satisfaction standards, and then multiply this update value by the previous round's presentation degree to obtain the next round's presentation degree.
Optionally, in one embodiment of the application, the preset cartoon style features comprise at least one of: at least one Disney cartoon style feature, at least one Japanese cartoon style feature, and at least one Meta cartoon style feature.
In some embodiments, the preset cartoon style may be, but is not limited to, a Disney cartoon style, a Japanese cartoon style, or a Meta cartoon style. For example, FIG. 2 shows a Meta cartoon style, with Mark Zuckerberg on the right and, on the left, a cartoon character generated from his facial features.
In the embodiment of the present application, corresponding style features can be extracted from these styles to serve as the preset cartoon style features, which are fused with the two-dimensional and three-dimensional face features to generate a three-dimensional cartoon face model with the specific style.
It should be noted that the preset cartoon style features may be set by those skilled in the art according to actual requirements, such as the application scenario of the three-dimensional cartoon face model or group research results, and are not limited herein.
The working principle of the three-dimensional cartoon face modeling method according to the embodiment of the present application is described in detail with reference to fig. 3.
As shown in fig. 3, the embodiment of the present application includes the following steps:
step S301: and extracting the two-dimensional image characteristics of the target person. It can be understood that the face features can be in a vector form extracted by a neural network and features representing the styles of various facial organs, representing the characteristics of a certain organ of a certain kind of people, and the extraction method can be, for example, obtained by training and learning based on a deep learning model or obtained by modeling detail features of the facial organs, specifically, taking eyebrows as an example, the eyebrows can be long eyebrows, short eyebrows, thick eyebrows, thin eyebrows, splayed eyebrows, upper eyebrows, straight eyebrows, new eyebrows, angle eyebrows, knit the brows eyebrows and the like according to the eyebrow shape.
In the actual implementation process, the embodiment of the application can extract the two-dimensional face features of the face of the target person in the two-dimensional image and the two-dimensional feature operators in the traditional graphics by using a neural network or the traditional computer graphics, can extract the face features from a plane angle, and is favorable for subsequent fusion with the three-dimensional face features, so that a three-dimensional face cartoon model with higher identification degree is generated.
Step S302: extract the three-dimensional image features of the target person from the three-dimensional depth map, in the manner described above for step S102, thereby removing the lighting and contour limitations of two-dimensional extraction and facilitating the generation of a more recognizable three-dimensional cartoon face model.
Step S303: extracting the cartoon style features. In some embodiments, the cartoon style may be, for example, a Disney cartoon style, a Japanese cartoon style, or a Meta cartoon style. For the Meta cartoon style, as shown in FIG. 2, the right side shows Zuckerberg and the left side shows a cartoon character generated from Zuckerberg's facial features. According to the embodiment of the application, the corresponding style features can be extracted from these styles to serve as preset cartoon style features, which are fused with the two-dimensional face features and the three-dimensional face features to generate a three-dimensional cartoon face model with the specific style.
It should be noted that the cartoon style may be set by a person skilled in the art according to actual requirements, such as an application scene of the three-dimensional human face cartoon model or a group investigation result, and the like, which is not limited herein.
Step S304: fusing the features to generate an initial three-dimensional cartoon face model. As a possible implementation, the embodiment of the present application may extract corresponding key features from the two-dimensional face features and the three-dimensional face features, and fuse the key features extracted from the two-dimensional face features, the corresponding key features extracted from the three-dimensional face features, and the preset cartoon style features, so as to obtain a three-dimensional cartoon face model that better conforms to both the face of the target person and the specific style.
The key features are those that give the finally presented three-dimensional cartoon face model a higher figure identification degree and figure satisfaction degree while remaining consistent with the cartoon style. The three kinds of features, namely the two-dimensional face features, the three-dimensional face features and the preset cartoon style features, each comprise a different family of specific features, each feature representing a certain characteristic of a facial organ. The specific fusion mode controls, by a weighting method, the display degree of the different features in the final face-modeling result, so that the representative features of a given person are highlighted while other secondary features are relatively weakened, thereby meeting the identification-degree index.
According to the embodiment of the application, the two-dimensional face features, the three-dimensional face features and the preset cartoon style features are fused, so that the generated three-dimensional face model has higher identification degree, can be subjected to style conversion along with the change of the cartoon style, and is higher in flexibility and higher in character satisfaction degree.
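The weighting method described above can be sketched as a convex combination of the three feature families; the weight names and values are hypothetical, chosen only to illustrate how the display degree of each family is controlled:

```python
import numpy as np

def fuse_features(f2d, f3d, fstyle, w2d=0.4, w3d=0.4, wstyle=0.2):
    """Weighted fusion of the 2D face, 3D face and cartoon-style feature
    vectors; each weight controls how strongly that family is displayed
    in the final modeling result (weights assumed to sum to 1)."""
    assert abs(w2d + w3d + wstyle - 1.0) < 1e-9
    return (w2d * np.asarray(f2d, dtype=float)
            + w3d * np.asarray(f3d, dtype=float)
            + wstyle * np.asarray(fstyle, dtype=float))

fused = fuse_features([1.0, 0.0], [0.0, 1.0], [0.5, 0.5])
print(fused)  # [0.5 0.5]
```

Raising one family's weight (e.g. `wstyle`) would shift the result toward that style, which is how a style change propagates into the model under this scheme.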
Step S305: performing iterative optimization on the initial three-dimensional cartoon face model to obtain the final three-dimensional cartoon face model. Further, the initial three-dimensional cartoon face model can be optimized through calculation. Specifically, the presentation degree of the initial three-dimensional cartoon face model is calculated; when the presentation degree is smaller than a preset threshold, the embodiment of the application linearly weights at least one of the two-dimensional face features, the three-dimensional face features and the preset cartoon style features based on a preset standard, so as to generate a new three-dimensional cartoon face model. This iterative optimization continues until the presentation degree of the new three-dimensional cartoon face model is greater than or equal to the preset threshold, yielding the final three-dimensional cartoon face model.
The preset standard can be a standard of figure identification degree and figure satisfaction degree. Specifically, both serve a target group: the former is measured by the loss between the input image and the output model on the key features, while the latter is based on an investigation of the target group, such as questionnaire scoring of the results. When the two contradict each other, figure satisfaction degree takes priority.
Wherein, the calculation formula of the presentation degree is as follows:
wherein K represents the total number of features strongly related to the figure identification degree, M represents the total number of features strongly related to the figure satisfaction degree index, T_V represents the degree to which the original features are presented in the model before the V-th iteration, a represents the proportion of the figure identification degree in the final feature presentation, 1-a represents the proportion of the figure satisfaction degree in the final feature presentation, D_V(k) is the proportion of each feature in the V-th round in the identification degree, and S_V(m) is the proportion of each feature in the V-th round in the satisfaction degree.
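The formula itself appears only as an image in the original publication and is not reproduced in this text. Based purely on the symbol definitions above, one plausible reconstruction (an assumption, not the verified original) is:

```latex
t_V \;=\; T_V \cdot \Big( a \sum_{k=1}^{K} D_V(k) \;+\; (1-a) \sum_{m=1}^{M} S_V(m) \Big)
```

Here the bracketed term is the per-round updated weight value, applied to the previous round's presentation degree T_V to give the post-iteration degree t_V.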
Further, the bracketed term is the updated value of the weight occupied by each facial feature in the V-th round. Applying each updated feature-weight value to the feature presentation degree T_V of the previous round yields the degree t_V to which the original features are presented in the model after the V-th iteration.
Further, the embodiment of the present application may obtain each feature weight value in the V-th round by the linear weighting of the V-th iteration, according to each key feature under the criteria of figure identification degree and figure satisfaction degree.
It should be noted that the preset threshold may be set by those skilled in the art according to actual situations; the preset standard may be set by a person skilled in the art according to an application scenario of the three-dimensional cartoon face model and a research result of a target crowd, and is not limited specifically herein.
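The threshold-driven loop of step S305 can be sketched as follows. Since the patent's exact presentation-degree formula is given only as an image, the cosine-similarity stand-in below, and all parameter names, are assumptions for illustration:

```python
import numpy as np

def presentation_degree(w: np.ndarray, t: np.ndarray) -> float:
    """Stand-in metric: cosine similarity between the current feature
    weights and a target weighting derived from the identification /
    satisfaction criteria (an assumption, not the patented formula)."""
    return float(w @ t / (np.linalg.norm(w) * np.linalg.norm(t)))

def iterate_presentation(weights, target, threshold=0.9, lr=0.3, max_rounds=100):
    """Linearly re-weight the features each round until the presentation
    degree reaches the preset threshold, as in step S305."""
    w = np.asarray(weights, dtype=float)
    t = np.asarray(target, dtype=float)
    for round_v in range(max_rounds):
        if presentation_degree(w, t) >= threshold:
            return w, presentation_degree(w, t), round_v
        w = (1 - lr) * w + lr * t   # linear weighting toward the criteria
    return w, presentation_degree(w, t), max_rounds

w_final, degree, rounds = iterate_presentation([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
print(rounds, degree >= 0.9)
```

The preset threshold and the learning-rate-like mixing factor play the roles of the "preset threshold" and "linear weighting" in the text; tightening the threshold simply costs more rounds.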
According to the three-dimensional cartoon face modeling method provided by the embodiment of the application, the three-dimensional face features of the target person can be extracted using the three-dimensional depth map and fused with the two-dimensional face features from the two-dimensional image of the target person and with the cartoon style features, so as to generate the three-dimensional cartoon face model of the target person. The generated three-dimensional face model has a higher identification degree, its style can change along with the cartoon style, and it offers greater flexibility, effectively meeting modeling requirements and improving the user experience. This solves the technical problems in the related art that cartoon image modeling of a real face based only on a neural network yields a single style and a low recognition degree.
Next, a three-dimensional cartoon face modeling apparatus proposed according to an embodiment of the present application is described with reference to the drawings.
Fig. 4 is a schematic block diagram of a three-dimensional cartoon face modeling apparatus according to an embodiment of the present application.
As shown in fig. 4, the three-dimensional cartoon face modeling apparatus 10 includes: a first extraction module 100, a second extraction module 200, and a modeling module 300.
Specifically, the first extraction module 100 is configured to extract two-dimensional face features of a face of a target person in a two-dimensional image.
And a second extraction module 200, configured to extract a three-dimensional face feature of the face according to the three-dimensional depth map of the target person.
And the modeling module 300 is used for fusing the two-dimensional face features, the three-dimensional face features and the preset cartoon style features to generate a three-dimensional cartoon face model of the target face.
Optionally, in an embodiment of the present application, the modeling module 300 includes: a fusion unit and a calculation unit.
The fusion unit is used for generating an initial three-dimensional cartoon face model of the target face according to fusion characteristics obtained by fusing the two-dimensional face characteristics, the three-dimensional face characteristics and the preset cartoon style characteristics.
And the computing unit is used for computing the presenting degree of the initial three-dimensional cartoon face model, linearly weighting at least one of the two-dimensional face feature, the three-dimensional face feature and the preset cartoon style feature based on a preset standard when the presenting degree is smaller than a preset threshold value, generating a new three-dimensional cartoon face model, and performing iterative optimization until the presenting degree of the new three-dimensional cartoon face model is larger than or equal to the preset threshold value to obtain the final three-dimensional cartoon face model.
Optionally, in an embodiment of the present application, the modeling module 300 is further configured to obtain a current feature weight value of at least one feature in a current iteration round; and obtaining the presentation degree of the new three-dimensional cartoon face model of the current iteration round according to the current characteristic weight value and the presentation degree of the new three-dimensional cartoon face model of the previous iteration round.
Optionally, in an embodiment of the present application, the calculation formula of the presentation degree is:
wherein K represents the total number of features strongly related to the figure identification degree, M represents the total number of features strongly related to the figure satisfaction degree index, T_V represents the degree to which the original features are presented in the model before the V-th iteration, a represents the proportion of the figure identification degree in the final feature presentation, 1-a represents the proportion of the figure satisfaction degree in the final feature presentation, D_V(k) is the proportion of each feature in the V-th round in the identification degree, and S_V(m) is the proportion of each feature in the V-th round in the satisfaction degree.
Optionally, in one embodiment of the application, the preset cartoon style features comprise at least one of at least one Disney cartoon style feature, at least one Japanese cartoon style feature, and at least one Meta cartoon style feature.
It should be noted that the explanation of the embodiment of the three-dimensional cartoon face modeling method is also applicable to the three-dimensional cartoon face modeling apparatus of the embodiment, and details are not repeated here.
According to the three-dimensional cartoon face modeling device provided by the embodiment of the application, the three-dimensional face features of the target person can be extracted using the three-dimensional depth map and fused with the two-dimensional face features from the two-dimensional image of the target person and with the cartoon style features, so as to generate the three-dimensional cartoon face model of the target person. The generated three-dimensional face model has a higher identification degree, its style can change along with the cartoon style, and it offers greater flexibility, effectively meeting modeling requirements and improving the user experience. This solves the technical problems in the related art that cartoon image modeling of a real face based only on a neural network yields a single style and a low recognition degree.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
a memory 501, a processor 502, and a computer program stored on the memory 501 and executable on the processor 502.
The processor 502 executes the program to implement the three-dimensional cartoon face modeling method provided in the above-described embodiments.
Further, the electronic device further includes:
a communication interface 503 for communication between the memory 501 and the processor 502.
A memory 501 for storing computer programs that can be run on the processor 502.
The memory 501 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory.
If the memory 501, the processor 502 and the communication interface 503 are implemented independently, the communication interface 503, the memory 501 and the processor 502 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
Alternatively, in practical implementation, if the memory 501, the processor 502 and the communication interface 503 are integrated on a chip, the memory 501, the processor 502 and the communication interface 503 may complete communication with each other through an internal interface.
The processor 502 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
The present embodiment also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the above three-dimensional cartoon face modeling method.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Moreover, various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without being mutually inconsistent.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, e.g., two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or N executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or N wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Claims (10)
1. A three-dimensional cartoon face modeling method is characterized by comprising the following steps:
extracting two-dimensional face features of the face of a target person in the two-dimensional image;
extracting three-dimensional face features of the face according to the three-dimensional depth map of the target person; and
and fusing the two-dimensional face features, the three-dimensional face features and preset cartoon style features to generate a three-dimensional cartoon face model of the target face.
2. The method of claim 1, wherein the generating the three-dimensional cartoon face model of the target face comprises:
generating an initial three-dimensional cartoon face model of the target face according to fusion characteristics obtained by fusing the two-dimensional face characteristics, the three-dimensional face characteristics and preset cartoon style characteristics;
and calculating the presenting degree of the initial three-dimensional cartoon face model, linearly weighting at least one of the two-dimensional face features, the three-dimensional face features and the preset cartoon style features based on a preset standard when the presenting degree is smaller than a preset threshold value, generating a new three-dimensional cartoon face model, and performing iterative optimization until the presenting degree of the new three-dimensional cartoon face model is larger than or equal to the preset threshold value, so as to obtain a final three-dimensional cartoon face model.
3. The method of claim 2, further comprising, after generating the new three-dimensional cartoon face model:
obtaining a current feature weight value of at least one feature under a current iteration round;
and obtaining the presentation degree of the new three-dimensional cartoon face model of the current iteration round according to the current characteristic weight value and the presentation degree of the new three-dimensional cartoon face model of the previous iteration round.
4. A method according to claim 2 or 3, characterized in that the calculation formula of the degree of presentation is:
wherein K represents the total number of features strongly related to the figure identification degree, M represents the total number of features strongly related to the figure satisfaction degree index, T_V represents the degree to which the original features are presented in the model before the V-th iteration, a represents the proportion of the figure identification degree in the final feature presentation, 1-a represents the proportion of the figure satisfaction degree in the final feature presentation, D_V(k) is the proportion of each feature in the V-th round in the identification degree, and S_V(m) is the proportion of each feature in the V-th round in the satisfaction degree.
5. The method of any of claims 1-4, wherein the preset cartoon style features comprise at least one of at least one Disney cartoon style feature, at least one Japanese cartoon style feature, and at least one Meta cartoon style feature.
6. A three-dimensional cartoon face modeling device is characterized by comprising:
the first extraction module is used for extracting two-dimensional face features of the face of a target person in the two-dimensional image;
the second extraction module is used for extracting the three-dimensional face features of the face according to the three-dimensional depth map of the target person; and
and the modeling module is used for fusing the two-dimensional face features, the three-dimensional face features and preset cartoon style features to generate a three-dimensional cartoon face model of the target face.
7. The apparatus of claim 6, wherein the modeling module comprises:
the fusion unit is used for generating an initial three-dimensional cartoon face model of the target face according to fusion characteristics obtained by fusing the two-dimensional face characteristics, the three-dimensional face characteristics and preset cartoon style characteristics;
and the calculating unit is used for calculating the presenting degree of the initial three-dimensional cartoon face model, linearly weighting at least one of the two-dimensional face feature, the three-dimensional face feature and the preset cartoon style feature based on a preset standard when the presenting degree is smaller than a preset threshold value, generating a new three-dimensional cartoon face model, and performing iterative optimization until the presenting degree of the new three-dimensional cartoon face model is larger than or equal to the preset threshold value to obtain a final three-dimensional cartoon face model.
8. The apparatus of claim 7, wherein the modeling module is further configured to obtain a current feature weight value of the at least one feature at a current iteration; and obtaining the presentation degree of the new three-dimensional cartoon face model of the current iteration round according to the current characteristic weight value and the presentation degree of the new three-dimensional cartoon face model of the previous iteration round.
9. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the three-dimensional cartoon face modeling method of any one of claims 1-5.
10. A computer-readable storage medium, on which a computer program is stored, the program being executable by a processor for implementing a method for modeling a three-dimensional cartoon face according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210483519.4A CN114998554B (en) | 2022-05-05 | 2022-05-05 | Three-dimensional cartoon face modeling method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114998554A true CN114998554A (en) | 2022-09-02 |
CN114998554B CN114998554B (en) | 2024-08-20 |
Family
ID=83024474
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210483519.4A Active CN114998554B (en) | 2022-05-05 | 2022-05-05 | Three-dimensional cartoon face modeling method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114998554B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765273A (en) * | 2018-05-31 | 2018-11-06 | Oppo广东移动通信有限公司 | The virtual lift face method and apparatus that face is taken pictures |
CN111354079A (en) * | 2020-03-11 | 2020-06-30 | 腾讯科技(深圳)有限公司 | Three-dimensional face reconstruction network training and virtual face image generation method and device |
CN111402394A (en) * | 2020-02-13 | 2020-07-10 | 清华大学 | Three-dimensional exaggerated cartoon face generation method and device |
CN114299206A (en) * | 2021-12-31 | 2022-04-08 | 清华大学 | Three-dimensional cartoon face generation method and device, electronic equipment and storage medium |
CN114419253A (en) * | 2022-01-13 | 2022-04-29 | 广州虎牙科技有限公司 | Construction and live broadcast method of cartoon face and related device |
-
2022
- 2022-05-05 CN CN202210483519.4A patent/CN114998554B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765273A (en) * | 2018-05-31 | 2018-11-06 | Oppo广东移动通信有限公司 | The virtual lift face method and apparatus that face is taken pictures |
CN111402394A (en) * | 2020-02-13 | 2020-07-10 | 清华大学 | Three-dimensional exaggerated cartoon face generation method and device |
CN111354079A (en) * | 2020-03-11 | 2020-06-30 | 腾讯科技(深圳)有限公司 | Three-dimensional face reconstruction network training and virtual face image generation method and device |
CN114299206A (en) * | 2021-12-31 | 2022-04-08 | 清华大学 | Three-dimensional cartoon face generation method and device, electronic equipment and storage medium |
CN114419253A (en) * | 2022-01-13 | 2022-04-29 | 广州虎牙科技有限公司 | Construction and live broadcast method of cartoon face and related device |
Non-Patent Citations (2)
Title |
---|
YI ZHENG et al.: "Cartoon Face Recognition: A Benchmark Dataset", https://arxiv.org/abs/1907.13394, 27 June 2020 (2020-06-27) *
SHU Guang et al.: "Three-dimensional cartoon face generation based on a sparse deformable model" (in Chinese), Acta Electronica Sinica (电子学报), 15 August 2010 (2010-08-15) *
Also Published As
Publication number | Publication date |
---|---|
CN114998554B (en) | 2024-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108961369B (en) | Method and device for generating 3D animation | |
CN109859296A (en) | Training method, server and the storage medium of SMPL parametric prediction model | |
CN109583509B (en) | Data generation method and device and electronic equipment | |
JP7129529B2 (en) | UV mapping to 3D objects using artificial intelligence | |
CN112419170A (en) | Method for training occlusion detection model and method for beautifying face image | |
CN113628327B (en) | Head three-dimensional reconstruction method and device | |
JP2024522287A (en) | 3D human body reconstruction method, apparatus, device and storage medium | |
CN112862807B (en) | Hair image-based data processing method and device | |
CN113362263B (en) | Method, apparatus, medium and program product for transforming an image of a virtual idol | |
CN111652123B (en) | Image processing and image synthesizing method, device and storage medium | |
US20210158593A1 (en) | Pose selection and animation of characters using video data and training techniques | |
CN108463823A (en) | A kind of method for reconstructing, device and the terminal of user's Hair model | |
CN111369428A (en) | Virtual head portrait generation method and device | |
CN107578467B (en) | Three-dimensional modeling method and device for medical instrument | |
WO2024174422A1 (en) | Model generation method and apparatus, electronic device, and storage medium | |
CN113822965A (en) | Image rendering processing method, device and equipment and computer storage medium | |
CN108573192B (en) | Glasses try-on method and device matched with human face | |
CN114904268A (en) | Virtual image adjusting method and device, electronic equipment and storage medium | |
CN107766803A (en) | Video personage based on scene cut dresss up method, apparatus and computing device | |
CN115994944A (en) | Three-dimensional key point prediction method, training method and related equipment | |
CN113808249A (en) | Image processing method, device, equipment and computer storage medium | |
CN116977539A (en) | Image processing method, apparatus, computer device, storage medium, and program product | |
CN114998554B (en) | Three-dimensional cartoon face modeling method and device | |
CN113223128B (en) | Method and apparatus for generating image | |
CN114648601A (en) | Virtual image generation method, electronic device, program product and user terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||