CN113240810A - Face model fusion method, system and equipment - Google Patents

Face model fusion method, system and equipment

Info

Publication number
CN113240810A
CN113240810A (application CN202110466692.9A)
Authority
CN
China
Prior art keywords
model
face
dimensional
preset
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110466692.9A
Other languages
Chinese (zh)
Inventor
郭睿
明利
李骥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Feather Trace Technology Co ltd
Original Assignee
Shenzhen Feather Trace Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Feather Trace Technology Co ltd filed Critical Shenzhen Feather Trace Technology Co ltd
Priority to CN202110466692.9A
Publication of CN113240810A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a face model fusion method, system and device. The face model fusion method comprises: constructing a user face model and a target face model according to a preset face model construction mode; extracting first model facial features and a first model face shape from the user face model; extracting second model facial features and a second model face shape from the target face model; and fusing the first model facial features, the first model face shape, the second model facial features and the second model face shape according to preset weights to obtain a three-dimensional face model. The method, system and device can accurately express the three-dimensional face model the user takes as a shaping reference, so that the three-dimensional face model accurately reflects the actual face after the user's cosmetic surgery, thereby improving the user experience.

Description

Face model fusion method, system and equipment
Technical Field
The invention relates to the technical field of face recognition, in particular to a face model fusion method, a face model fusion system and face model fusion equipment.
Background
With the rapid development of the medical aesthetics industry, cosmetic surgery has gradually become a hot topic. Whether the face is to be finely adjusted or adjusted as a whole, the consumer cannot preview the post-surgery appearance and can hardly picture it clearly from a doctor's verbal description alone; communication between the consumer and the doctor is therefore repetitive and lengthy, and its quality directly affects both the success rate of the operation and the consumption experience.
Three-dimensional face synthesis is a computer graphics technology applied in many fields, in which a corresponding face model is constructed from a single photograph. Although cosmetic-surgery simulation software exists in the related art, it can only recombine the facial features of different target faces selected by the user, directly replacing the user's original facial features and face shape with those of the target face. Simply displaying such a recombination of the facial features and face shapes of different target faces therefore cannot accurately represent the user's appearance after surgery.
Disclosure of Invention
The present invention is directed to solving at least one of the problems existing in the prior art. To this end, the invention provides a face model fusion method that can accurately express the real face after cosmetic surgery and improve the user experience.
The invention further provides a face model fusion system.
The invention also provides the electronic control equipment.
In a first aspect, an embodiment of the present invention provides a face model fusion method, including:
constructing a user face model and a target face model according to a preset face model construction mode;
extracting first model facial features and a first model face shape from the user face model;
extracting second model facial features and a second model face shape from the target face model;
and fusing the first model facial features, the first model face shape, the second model facial features and the second model face shape according to preset weights to obtain a three-dimensional face model.
The face model fusion method of the embodiment of the invention has at least the following beneficial effects: a user face model and a target face model are constructed according to a preset face model construction mode; first model facial features and a first model face shape are extracted from the user face model, and second model facial features and a second model face shape are extracted from the target face model; the first model facial features are fused with the second model facial features according to preset weights, and/or the first model face shape is fused with the second model face shape according to preset weights, to obtain a three-dimensional face model that accurately expresses the user's shaping reference. The three-dimensional face model can therefore accurately express the actual face after the user's cosmetic surgery, improving the user experience.
According to the face model fusion method of another embodiment of the present invention, the preset face model construction method includes any one or more of the following: a single-photo face model construction mode, a multi-photo face model construction mode and a scanning face construction mode.
According to the face model fusion method of other embodiments of the present invention, the preset weights include a preset facial feature weight and a preset face shape weight, and fusing the first model facial features, the first model face shape, the second model facial features and the second model face shape according to the preset weights to obtain a three-dimensional face model comprises:
fusing the first model face shape and the second model face shape according to the preset face shape weight to obtain a three-dimensional model face shape;
fusing the first model facial features and the second model facial features according to the preset facial feature weight to obtain three-dimensional model facial features;
and constructing the three-dimensional face model from the three-dimensional model face shape and the three-dimensional model facial features.
According to the face model fusion method of other embodiments of the present invention, fusing the first model face shape and the second model face shape according to the preset face shape weight to obtain a three-dimensional model face shape comprises:
extracting a first three-dimensional mesh and a first face texture of the first model face shape;
extracting a second three-dimensional mesh and a second face texture of the second model face shape;
performing mesh fusion on the first three-dimensional mesh and the second three-dimensional mesh according to the preset face shape weight to generate a face shape three-dimensional mesh;
performing texture fusion on the first face texture and the second face texture according to the preset face shape weight to obtain face texture data;
and mapping the face texture data onto the face shape three-dimensional mesh to obtain the three-dimensional model face shape.
According to the face model fusion method of other embodiments of the invention, the preset facial feature weight includes any one or more of the following: a first preset facial feature weight, a second preset facial feature weight, a third preset facial feature weight and a fourth preset facial feature weight; the first model facial features include any one or more of the following: a first model eyebrow, a first model eye, a first model nose, and a first model mouth; the second model facial features include any one or more of the following: a second model eyebrow, a second model eye, a second model nose, and a second model mouth; the three-dimensional model facial features include any one or more of the following: a three-dimensional model eyebrow, a three-dimensional model eye, a three-dimensional model nose and a three-dimensional model mouth.
According to the face model fusion method of other embodiments of the present invention, fusing the first model facial features and the second model facial features according to the preset facial feature weight to obtain three-dimensional model facial features comprises:
fusing the first model eyebrow and the second model eyebrow according to the first preset facial feature weight to obtain the three-dimensional model eyebrow;
and/or fusing the first model eye and the second model eye according to the second preset facial feature weight to obtain the three-dimensional model eye;
and/or fusing the first model nose and the second model nose according to the third preset facial feature weight to obtain the three-dimensional model nose;
and/or fusing the first model mouth and the second model mouth according to the fourth preset facial feature weight to obtain the three-dimensional model mouth.
According to the face model fusion method of other embodiments of the invention, the method further comprises:
acquiring a face photo set, wherein the face photo set comprises a plurality of face photos;
constructing a face model database according to the face photo set, wherein the face model database comprises a plurality of face models corresponding to the face photos;
and acquiring a user operation instruction to determine that the face model corresponding to the user operation instruction in the face model database is the target face model.
According to the face model fusion method of other embodiments of the present invention, performing mesh fusion on the first three-dimensional mesh and the second three-dimensional mesh according to the preset face shape weight to generate the face shape three-dimensional mesh comprises:
aligning the grid points of the first three-dimensional mesh and the second three-dimensional mesh;
and fusing the aligned first three-dimensional mesh and the aligned second three-dimensional mesh according to the preset face shape weight to obtain the face shape three-dimensional mesh.
In a second aspect, an embodiment of the present invention provides a face model fusion system, including:
the construction module is used for constructing a user face model and a target face model according to a preset face model construction mode;
the first extraction module is used for extracting first model facial features and a first model face shape from the user face model;
the second extraction module is used for extracting second model facial features and a second model face shape from the target face model;
and the fusion module is used for fusing the first model facial features, the first model face shape, the second model facial features and the second model face shape according to preset weights to obtain a three-dimensional face model.
The face model fusion system of the embodiment of the invention has at least the following beneficial effects: a user face model and a target face model are constructed according to a preset face model construction mode; first model facial features and a first model face shape are extracted from the user face model, and second model facial features and a second model face shape are extracted from the target face model; the first model facial features are fused with the second model facial features according to preset weights, and/or the first model face shape is fused with the second model face shape according to preset weights, to obtain a three-dimensional face model that accurately expresses the user's shaping reference. The three-dimensional face model can therefore accurately express the actual face after the user's cosmetic surgery, improving the user experience.
In a third aspect, an embodiment of the present invention provides an electronic control apparatus including:
at least one processor, and,
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of face model fusion according to the first aspect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a face model fusion method according to the present invention;
FIG. 2 is a schematic diagram of a user face and a target face in a specific embodiment of a face model fusion method in an embodiment of the present invention;
FIG. 3-A is a schematic view of an A target face in an embodiment of the face model fusion method in the present invention;
FIG. 3-B is a schematic view of a B target face in an embodiment of the face model fusion method in the present invention;
FIG. 3-C is a schematic view of a C target face in an embodiment of the face model fusion method in the present invention;
FIG. 3-Y is a schematic view of a Y user face in an embodiment of the face model fusion method in the present invention;
FIG. 4 is a schematic flow chart diagram illustrating another embodiment of a face model fusion method according to the present invention;
FIG. 5 is a schematic flow chart diagram illustrating another embodiment of a face model fusion method according to the present invention;
FIG. 6 is a schematic flow chart diagram illustrating another embodiment of a face model fusion method according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a face model fusion method according to another embodiment of the present invention;
FIG. 8 is a block diagram of a face model fusion system according to an embodiment of the present invention;
fig. 9 is a block diagram of an embodiment of an electronic control device according to the present invention.
Reference numerals: 100. construction module; 200. first extraction module; 300. second extraction module; 400. fusion module; 500. processor; 600. memory.
Detailed Description
The concept and technical effects of the present invention are described clearly and completely below in conjunction with the embodiments, so that the objects, features and effects of the present invention can be fully understood. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them; other embodiments obtained by those skilled in the art, without inventive effort, on the basis of the described embodiments all fall within the protection scope of the present invention.
In the description of the embodiments of the present invention, "a number of" means one or more and "a plurality of" means two or more; "greater than", "less than" and "exceeding" are understood to exclude the stated number, while "above", "below" and "within" are understood to include it. "First" and "second" are used only to distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated features, or the precedence of the indicated features.
With the rapid development of the medical aesthetics industry, two communication modes are commonly used in the market. In the first, a reference picture is found and compared with the client's face in order to propose a surgical plan. In the second, 2D pictures are captured by a three-dimensional scanning device and converted into a 3D model by an artificial intelligence algorithm. Both schemes can simulate the effect of cosmetic surgery, but the operation is too cumbersome and demands a certain level of operating skill from the user; the process of simulating facial deformation is also inefficient and takes a long time. Moreover, the face shape and facial features can only be directly replaced, so the user's original facial features cannot be blended into the result; in an actual operation, however, the replaced features are still influenced by the original ones, and the presented 3D model therefore cannot accurately express the user's face after cosmetic surgery.
Based on the above, the present application discloses a face model fusion method, system and device which perform weighted fusion of two face models to obtain a face that better matches the real result of cosmetic surgery, thereby improving the accuracy of face model fusion.
In a first aspect, referring to fig. 1, an embodiment of the present invention discloses a face model fusion method, including:
S1000, constructing a user face model and a target face model according to a preset face model construction mode;
S2000, extracting first model facial features and a first model face shape from the user face model;
S3000, extracting second model facial features and a second model face shape from the target face model;
S4000, fusing the first model facial features, the first model face shape, the second model facial features and the second model face shape according to preset weights to obtain the three-dimensional face model.
In order to provide a reference for the cosmetic-surgery effect, a user face model and a target face model are first created according to a preset face model construction mode. After the user face model and the target face model are constructed, they need to be fused to obtain the three-dimensional face model of the planned surgery. To this end, the first model facial features and the first model face shape are extracted from the user face model, the second model facial features and the second model face shape are extracted from the target face model, and the first model facial features, the second model facial features, the first model face shape and the second model face shape are fused according to preset weights to obtain the three-dimensional face model. The preset weights are customized by the user: according to the user's surgical requirements, they determine whether the face shape and facial features of the target face model completely replace the user's own, or whether part of the user's face shape and facial features is retained. The resulting three-dimensional face model better matches the user's requirements and the face actually produced by the surgery, which improves the user experience.
In some embodiments, the preset face model construction mode includes any one or more of the following: a single-photo face model construction mode, a multi-photo face model construction mode and a scanning face construction mode. In the single-photo mode, a single face photo is collected, face key points and face texture are extracted from it, a face mesh is constructed from the key points, and the face texture is mapped onto the mesh to obtain the face model corresponding to the photo. In the multi-photo mode, several face photos are collected at different angles, a face model is constructed from each of them, and the models are then weight-fitted into a single, more accurate face model. In the scanning face mode, the user's or target's face is scanned with a 3D face scanner, and the face model is constructed from the scanned mesh and texture. The user face model and the target face model can each be constructed with any one or more of these modes: if the target person the user wants to resemble is difficult to scan, the single-photo or multi-photo mode can be used, while the user face model can be built with any of the three. Providing several construction modes therefore lets the user choose according to the actual situation and improves the user experience.
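As an informal illustration of the multi-photo mode only, the weight fitting of several per-photo models can be sketched as a weighted average over vertex arrays that share one topology; the function name fit_models, the plain-array representation and the choice of frontal-view weighting are assumptions made here for readability, not details from this embodiment.

```python
import numpy as np

def fit_models(models, weights=None):
    """Sketch of the multi-photo mode: combine per-photo face models
    (each an N x 3 vertex array with identical topology) into one model
    by a weighted average, e.g. weighting frontal views more heavily."""
    models = np.asarray(models, dtype=float)            # shape (K, N, 3)
    if weights is None:
        weights = np.full(len(models), 1.0 / len(models))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                   # normalise to 1
    return np.tensordot(weights, models, axes=1)        # shape (N, 3)
```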
In this embodiment, the user face model and the target face model are constructed in the single-photo mode, so a user face photo and a target face photo are collected. Key points and face texture are first extracted from the user face photo and the target face photo; the bone points of a preset standard face model are aligned to the extracted key points to obtain a base face model; a difference is then computed between the extracted face texture and the base texture map of the preset standard face model to obtain base texture data; and the base texture data is mapped onto the base face model to obtain the corresponding user face model and target face model. Models built this way are easy to fuse accurately.
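A minimal sketch of the single-photo flow just described, under the assumption that facial key points have already been detected and that the preset standard face model exposes its key points, vertices and base texture map as plain arrays; build_single_photo_model and its parameter names are hypothetical, and the alignment shown is a simple least-squares 2D fit rather than the bone-point alignment an actual implementation would use.

```python
import numpy as np

def build_single_photo_model(photo_keypoints, photo_texture,
                             standard_keypoints, standard_vertices,
                             standard_base_map):
    """Hypothetical sketch of the single-photo construction mode.

    photo_keypoints    : (K, 2+) key points detected in the photo
    photo_texture      : (H, W, 3) face texture sampled from the photo
    standard_keypoints : (K, 3) key points of the preset standard model
    standard_vertices  : (N, 3) vertices of the preset standard model
    standard_base_map  : (H, W, 3) base texture map of the standard model
    """
    # 1. Align the standard model to the photo: least-squares 2D affine fit
    #    on the key points, applied to every vertex's x, y coordinates.
    src = np.column_stack([standard_keypoints[:, :2],
                           np.ones(len(standard_keypoints))])
    tx, _, _, _ = np.linalg.lstsq(src, photo_keypoints[:, :2], rcond=None)
    aligned = standard_vertices.astype(float)
    aligned[:, :2] = np.column_stack([standard_vertices[:, :2],
                                      np.ones(len(standard_vertices))]) @ tx

    # 2. Texture difference against the base map -> base texture data.
    texture_delta = photo_texture.astype(float) - standard_base_map.astype(float)

    # 3. Bundle mesh and texture data; a real renderer would rasterise
    #    base map + delta over the aligned mesh via its UV coordinates.
    return {"vertices": aligned, "texture_delta": texture_delta}
```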
Referring to fig. 2 and 3, in some embodiments, the face model fusion method further includes:
S5000, acquiring a face photo set, wherein the face photo set comprises a plurality of face photos;
S6000, constructing a face model database according to the face photo set, wherein the face model database comprises a plurality of face models corresponding to the face photos;
S7000, acquiring a user operation instruction to determine that the face model corresponding to the user operation instruction in the face model database is the target face model.
There is only one target face model at a time, but the user may upload a set of face photos. A face model is then built from the single photo, or the several photos, of each face in the set, giving a face model database from which the face model selected by the user is taken as the target face model. The user can thus quickly switch between different face models to be fused with the user face model into a three-dimensional face model, instead of uploading a new target face photo each time, which makes switching targets much faster and improves the user experience.
Specifically, suppose the user's face photo is Y and face photos of target persons A, B and C are obtained; a face model database is then built from the A, B and C face photos. If the user wants to blend towards target person A, the A face model is set as the target face model according to the user operation instruction selecting target A, and the A face model and the Y user face model are fused according to the preset weights to obtain the corresponding three-dimensional face model. When the user then wants to view the fusion of user Y with target C, the C face model is set as the target face model according to the user operation instruction selecting target C, and the C face model and the Y user face model are fused according to the preset weights. With a face model database in place, the corresponding three-dimensional face model can be generated quickly whenever the target face model needs to be switched, improving the user experience.
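One informal way to picture the face model database and the fast switching between target faces is a mapping from a person identifier to a prebuilt face model; FaceModelDatabase and its methods below are illustrative names, not components defined by this embodiment.

```python
class FaceModelDatabase:
    """Sketch: cache one face model per target person so that switching
    targets does not require re-uploading photos or rebuilding a model."""

    def __init__(self, build_model):
        self._build_model = build_model   # e.g. a single/multi-photo builder
        self._models = {}

    def add_photos(self, person_id, photos):
        # Build (or rebuild) the face model for this person from their photos.
        self._models[person_id] = self._build_model(photos)

    def select_target(self, person_id):
        # A user operation instruction simply names the person to fuse with.
        return self._models[person_id]

# Usage sketch (names hypothetical): targets A, B, C are registered once; the
# user then freely switches the target face model before fusing it with Y's.
# db = FaceModelDatabase(build_model=build_from_photos)
# db.add_photos("A", photos_a); db.add_photos("C", photos_c)
# target_model = db.select_target("A")
```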
Referring to fig. 4, in some embodiments, the preset weights include a preset facial feature weight and a preset face shape weight, wherein step S4000 comprises:
S4100, fusing the first model face shape and the second model face shape according to the preset face shape weight to obtain a three-dimensional model face shape;
S4200, fusing the first model facial features and the second model facial features according to the preset facial feature weight to obtain three-dimensional model facial features;
S4300, constructing the three-dimensional face model from the three-dimensional model face shape and the three-dimensional model facial features.
According to the cosmetic-surgery requirements of most users, the face is not reshaped entirely into a given target face; instead, the face shape, or part of the facial features, of one or more target faces is chosen for adjustment. The first model face shape and the second model face shape are therefore fused according to the preset face shape weight to obtain the three-dimensional model face shape, so the user can examine the fusion of the face shape alone, with the degree of fusion set by customizing the preset face shape weight; a face shape obtained this way better matches the user's requirements and improves the user experience. Likewise, the first model facial features and the second model facial features are fused according to the preset facial feature weight, which the user can also customize, to obtain the three-dimensional model facial features. Because the three-dimensional model face shape and the three-dimensional model facial features are fitted separately with their own preset weights and then assembled into the three-dimensional face model, the face shape or the facial features of a target face can be selected and fused by weight according to the user's requirements, so that the resulting three-dimensional face model better matches those requirements and improves the user experience.
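As a rough sketch of this two-weight fusion, assuming the face shape and each facial feature are represented as arrays of the same size in both models, the fusion can be written as a convex combination; fuse_by_preset_weights, blend and the dictionary layout are assumptions for illustration only.

```python
def fuse_by_preset_weights(user, target, shape_weight, feature_weight):
    """Sketch of step S4000: fuse the face shape and the facial features
    with their own preset weights, then assemble the 3D face model.

    user / target : dicts with a 'shape' array and a 'features' dict
                    (eyebrow, eye, nose, mouth), arrays of matching size.
    Weights lie in [0, 1]: 0 keeps the user's geometry, 1 takes the target's.
    """
    def blend(a, b, w):                      # convex combination
        return (1.0 - w) * a + w * b

    model_shape = blend(user["shape"], target["shape"], shape_weight)
    model_features = {name: blend(user["features"][name],
                                  target["features"][name], feature_weight)
                      for name in user["features"]}
    # Step S4300: the fused features are assembled with the fused face shape.
    return {"shape": model_shape, "features": model_features}
```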
Referring to fig. 5, in some embodiments, step S4100 includes:
S4110, extracting a first three-dimensional mesh and a first face texture of the first model face shape;
S4120, extracting a second three-dimensional mesh and a second face texture of the second model face shape;
S4130, performing mesh fusion on the first three-dimensional mesh and the second three-dimensional mesh according to the preset face shape weight to generate a face shape three-dimensional mesh;
S4140, performing texture fusion on the first face texture and the second face texture according to the preset face shape weight to obtain face texture data;
S4150, mapping the face texture data onto the face shape three-dimensional mesh to obtain the three-dimensional model face shape.
A first three-dimensional mesh and a first face texture are extracted from the first model face shape, and a second three-dimensional mesh and a second face texture are extracted from the second model face shape; the first face texture is mainly the facial contour and skin tone of the user's face, and the second face texture is mainly the facial contour and skin tone of the target face. The face shape three-dimensional mesh is obtained by mesh fusion of the first and second three-dimensional meshes according to the preset face shape weight, and the face texture data is obtained by texture fusion of the first and second face textures according to the same weight; texture-mapping the face shape three-dimensional mesh with this texture data then yields a three-dimensional model face shape that is accurate and meets the user's requirements.
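A minimal sketch of steps S4110 to S4150, assuming both face-shape meshes share a topology and both textures share a resolution; fuse_face_shape and its parameters are hypothetical names, and the final mapping step is only indicated, since actual texture mapping depends on the renderer and UV layout.

```python
import numpy as np

def fuse_face_shape(user_mesh, user_texture, target_mesh, target_texture,
                    shape_weight):
    """Sketch of steps S4110-S4150: blend aligned face-shape meshes and
    face textures with the same preset face shape weight.

    user_mesh / target_mesh       : (N, 3) vertex arrays, identical topology,
                                    already point-aligned
    user_texture / target_texture : (H, W, 3) textures (contour + skin tone)
    shape_weight                  : 0 keeps the user's face shape, 1 the target's
    """
    w = float(shape_weight)
    # Mesh fusion: per-vertex linear blend keeps the shared triangulation.
    fused_mesh = (1.0 - w) * user_mesh + w * target_mesh
    # Texture fusion: blend contour and skin tone in image space.
    fused_texture = ((1.0 - w) * user_texture.astype(float)
                     + w * target_texture.astype(float)).astype(np.uint8)
    # "Mapping" here just pairs texture with mesh; a renderer would apply
    # fused_texture to fused_mesh via the shared UV coordinates.
    return fused_mesh, fused_texture
```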
Referring to fig. 6, in some embodiments, step S4130 includes:
S4131, aligning the grid points of the first three-dimensional mesh and the second three-dimensional mesh;
S4132, fusing the aligned first three-dimensional mesh and the aligned second three-dimensional mesh according to the preset face shape weight to obtain the face shape three-dimensional mesh.
The grid points of the first three-dimensional mesh and the second three-dimensional mesh are aligned, and the aligned meshes are then fused according to the preset face shape weight to obtain a face shape three-dimensional mesh that meets the user's requirements, improving the user experience.
If the first three-dimensional mesh and the second three-dimensional mesh are not of the same type, user image data of the user face model is obtained, and the second three-dimensional mesh is supplemented according to the symmetry of the user image data to generate a candidate three-dimensional mesh of the same type as the first three-dimensional mesh; mesh fusion is then performed on the candidate three-dimensional mesh and the first three-dimensional mesh according to the preset face shape weight to obtain the face shape three-dimensional mesh. The first face texture and the second face texture are fused mainly by contour fusion and skin tone fusion according to the preset face shape weight, and the face shape three-dimensional mesh is then filled with skin tone and contour texture according to the fused face texture data to obtain the three-dimensional model face shape, so that the generated face shape is accurate and meets the user's requirements.
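The grid-point alignment and the symmetry-based supplementation can be illustrated roughly as follows; align_grid_points uses the standard Kabsch/Procrustes rigid alignment on corresponding vertices, and mirror_fill assumes an idealized left-right vertex ordering, so both are sketches of the idea rather than the embodiment's actual procedure.

```python
import numpy as np

def align_grid_points(source_vertices, reference_vertices):
    """Sketch of step S4131: rigidly align one face-shape mesh to another
    before fusion, using the Kabsch/Procrustes method on corresponding
    vertices (both arrays are (N, 3) with matching vertex order)."""
    src_c = source_vertices - source_vertices.mean(axis=0)
    ref_c = reference_vertices - reference_vertices.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ ref_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))           # avoid reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return (rot @ src_c.T).T + reference_vertices.mean(axis=0)

def mirror_fill(vertices, missing_mask):
    """Sketch of the symmetry-based supplementation mentioned above: fill
    missing vertices by mirroring their counterparts across the x = 0 plane.
    Assumes the face is roughly centred and vertex i mirrors vertex N-1-i,
    which is an illustrative simplification only."""
    filled = vertices.copy()
    mirrored = vertices[::-1] * np.array([-1.0, 1.0, 1.0])
    filled[missing_mask] = mirrored[missing_mask]
    return filled
```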
In some embodiments, the preset facial feature weight includes any one or more of the following: a first preset facial feature weight, a second preset facial feature weight, a third preset facial feature weight and a fourth preset facial feature weight. The first model facial features include any one or more of the following: a first model eyebrow, a first model eye, a first model nose, and a first model mouth; the second model facial features include any one or more of the following: a second model eyebrow, a second model eye, a second model nose, and a second model mouth; the three-dimensional model facial features include any one or more of the following: a three-dimensional model eyebrow, a three-dimensional model eye, a three-dimensional model nose and a three-dimensional model mouth.
Referring to fig. 7, step S4200 includes:
S4210, fusing the first model eyebrow and the second model eyebrow according to the first preset facial feature weight to obtain a three-dimensional model eyebrow;
S4220, and/or, fusing the first model eye and the second model eye according to the second preset facial feature weight to obtain a three-dimensional model eye;
S4230, and/or, fusing the first model nose and the second model nose according to the third preset facial feature weight to obtain a three-dimensional model nose;
S4240, and/or, fusing the first model mouth and the second model mouth according to the fourth preset facial feature weight to obtain a three-dimensional model mouth.
The facial features comprise the eyebrows, eyes, nose and mouth. In cosmetic surgery a user is generally not dissatisfied with all of the facial features but only with one of them, so the user can set the first, second, third and fourth preset facial feature weights individually. The first model eyebrow and the second model eyebrow are fused according to the first preset facial feature weight set by the user, so the degree to which the user's eyebrow is blended with the target face's eyebrow is under the user's control, yielding a three-dimensional model eyebrow that meets the user's eyebrow-shaping requirement and provides an accurate shaping reference. Likewise, the first model eye and the second model eye are fused according to the second preset facial feature weight to obtain a three-dimensional model eye meeting the user's eye-shaping requirement; the first model nose and the second model nose are fused according to the third preset facial feature weight to obtain the three-dimensional model nose required for the user's nose shaping; and the first model mouth and the second model mouth are fused according to the fourth preset facial feature weight to obtain the three-dimensional model mouth required for the user's mouth shaping. The eyebrows, eyes, nose and mouth are thus each adjusted according to whichever preset facial feature weights the user sets, giving three-dimensional model facial features the user can rely on as an accurate cosmetic-surgery reference and improving the user experience.
The first, second, third and fourth preset facial feature weights can all be set by the user and adjusted as needed, so the degree to which the user face model and the target face model are fused matches the user's wishes and the resulting three-dimensional face model can serve as the user's shaping reference.
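For illustration, the four per-feature preset weights can be applied independently as below; fuse_facial_features and the organ keys are hypothetical names, and a weight of 0 keeps the user's own feature while 1 takes the target's.

```python
def fuse_facial_features(user_features, target_features, preset_weights):
    """Sketch of steps S4210-S4240: each facial feature (eyebrow, eye,
    nose, mouth) gets its own preset weight, so the user can e.g. take a
    target nose while keeping their own eyes and mouth.

    user_features / target_features : dicts of (N, 3) vertex arrays per organ
    preset_weights                  : dict of weights in [0, 1] per organ
    """
    fused = {}
    for organ, w in preset_weights.items():
        fused[organ] = (1.0 - w) * user_features[organ] + w * target_features[organ]
    return fused

# Usage sketch: only the nose is taken mostly from the target face.
# fused = fuse_facial_features(user_f, target_f,
#                              {"eyebrow": 0.0, "eye": 0.2, "nose": 0.9, "mouth": 0.0})
```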
The following describes a face model fusion method according to an embodiment of the present invention in detail with a specific embodiment with reference to fig. 1 to 7. It is to be understood that the following description is only exemplary, and not a specific limitation of the invention.
First, key points and face texture are extracted from the user face photo and the target face photo; the bone points of a preset standard face model are aligned to these key points to obtain a base face model; a difference is computed between the base texture map of the preset standard face model and the extracted face texture to obtain base texture data; and the base texture data is mapped onto the base face model to obtain the corresponding user face model and target face model. After the user face model and the target face model are obtained, the first model face shape and first model facial features are extracted from the user face model, and the second model face shape and second model facial features are extracted from the target face model. A first three-dimensional mesh and a first face texture are then extracted from the first model face shape, and a second three-dimensional mesh and a second face texture from the second model face shape; the grid points of the two meshes are aligned, and the aligned meshes are fused according to the preset face shape weight to obtain a face shape three-dimensional mesh meeting the user's requirements; the first and second face textures are fused according to the preset face shape weight to obtain face texture data, and texture-mapping the face shape three-dimensional mesh with this data yields the three-dimensional model face shape. Next, the first and second model eyebrows are fused according to the first preset facial feature weight to obtain a three-dimensional model eyebrow meeting the user's eyebrow-shaping requirement; the first and second model eyes are fused according to the second preset facial feature weight to obtain a three-dimensional model eye meeting the user's eye-shaping requirement; the first and second model noses are fused according to the third preset facial feature weight to obtain the three-dimensional model nose required for the user's nose shaping; and the first and second model mouths are fused according to the fourth preset facial feature weight to obtain the three-dimensional model mouth required for the user's mouth shaping. Finally, the three-dimensional model eyebrow and/or eye and/or nose and/or mouth are filled into the three-dimensional model face shape to obtain a three-dimensional face model that accurately expresses the user's shaping requirements, providing an accurate shaping reference and improving the user experience.
In a second aspect, referring to fig. 8, an embodiment of the present invention further discloses a face model fusion system, including: a construction module 100, a first extraction module 200, a second extraction module 300 and a fusion module 400. The construction module 100 is used for constructing a user face model and a target face model according to a preset face model construction mode; the first extraction module 200 is used for extracting first model facial features and a first model face shape from the user face model; the second extraction module 300 is used for extracting second model facial features and a second model face shape from the target face model; and the fusion module 400 is used for fusing the first model facial features, the first model face shape, the second model facial features and the second model face shape according to preset weights to obtain a three-dimensional face model.
A user face model and a target face model are constructed according to a preset face model construction mode; first model facial features and a first model face shape are extracted from the user face model, and second model facial features and a second model face shape are extracted from the target face model; the first model facial features are fused with the second model facial features according to preset weights, and/or the first model face shape is fused with the second model face shape according to preset weights, to obtain a three-dimensional face model that accurately expresses the user's shaping reference. The three-dimensional face model can therefore accurately express the actual face after the user's cosmetic surgery, improving the user experience.
The specific operation process of the face model fusion system refers to the face model fusion method of the first aspect, and is not described herein again.
In a third aspect, referring to fig. 9, an embodiment of the present invention further discloses an electronic control apparatus, including:
at least one processor 500, and,
a memory 600 communicatively coupled to the at least one processor 500; wherein
the memory 600 stores instructions executable by the at least one processor 500 to enable the at least one processor 500 to perform the face model fusion method according to any of the first aspects.
The electronic control equipment can be mobile terminal equipment or non-mobile terminal equipment. The mobile terminal equipment can be a mobile phone, a tablet computer, a notebook computer, a palm computer, vehicle-mounted terminal equipment, wearable equipment, a super mobile personal computer, a netbook, a personal digital assistant, CPE, UFI (wireless hotspot equipment) and the like; the non-mobile terminal equipment can be a personal computer, a television, a teller machine or a self-service machine and the like; the embodiments of the present invention are not particularly limited.
The memory 600 may be an external memory or an internal memory. The external memory is an external memory card, such as a Micro SD card, which communicates with the processor through an external memory interface to provide data storage; files such as music and video, for example, are saved on the external memory card. The internal memory may be used to store computer-executable program code, which includes instructions.
The processor 500 may include one or more processing units. For example, the processor 500 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be separate devices or may be integrated in one or more processors.
The above-described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may also be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media known to those skilled in the art.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention. Furthermore, the embodiments of the present invention and the features of the embodiments may be combined with each other without conflict.

Claims (10)

1. A face model fusion method is characterized by comprising the following steps:
constructing a user face model and a target face model according to a preset face model construction mode;
extracting first model facial features and a first model face shape from the user face model;
extracting second model facial features and a second model face shape from the target face model;
and fusing the first model facial features, the first model face shape, the second model facial features and the second model face shape according to preset weights to obtain a three-dimensional face model.
2. The face model fusion method of claim 1, wherein the preset face model construction mode comprises any one or more of the following: a single-photo face model construction mode, a multi-photo face model construction mode and a scanning face construction mode.
3. The face model fusion method of claim 1, wherein the preset weights include a preset facial feature weight and a preset face shape weight, and fusing the first model facial features, the first model face shape, the second model facial features and the second model face shape according to the preset weights to obtain a three-dimensional face model comprises:
fusing the first model face shape and the second model face shape according to the preset face shape weight to obtain a three-dimensional model face shape;
fusing the first model facial features and the second model facial features according to the preset facial feature weight to obtain three-dimensional model facial features;
and constructing the three-dimensional face model from the three-dimensional model face shape and the three-dimensional model facial features.
4. The face model fusion method of claim 3, wherein fusing the first model face shape and the second model face shape according to the preset face shape weight to obtain a three-dimensional model face shape comprises:
extracting a first three-dimensional mesh and a first face texture of the first model face shape;
extracting a second three-dimensional mesh and a second face texture of the second model face shape;
performing mesh fusion on the first three-dimensional mesh and the second three-dimensional mesh according to the preset face shape weight to generate a face shape three-dimensional mesh;
performing texture fusion on the first face texture and the second face texture according to the preset face shape weight to obtain face texture data;
and mapping the face texture data onto the face shape three-dimensional mesh to obtain the three-dimensional model face shape.
5. The face model fusion method of claim 3, wherein the preset facial feature weight includes any one or more of the following: a first preset facial feature weight, a second preset facial feature weight, a third preset facial feature weight and a fourth preset facial feature weight; the first model facial features include any one or more of the following: a first model eyebrow, a first model eye, a first model nose, and a first model mouth; the second model facial features include any one or more of the following: a second model eyebrow, a second model eye, a second model nose, and a second model mouth; and the three-dimensional model facial features include any one or more of the following: a three-dimensional model eyebrow, a three-dimensional model eye, a three-dimensional model nose and a three-dimensional model mouth.
6. The face model fusion method of claim 5, wherein fusing the first model facial features and the second model facial features according to the preset facial feature weight to obtain three-dimensional model facial features comprises:
fusing the first model eyebrow and the second model eyebrow according to the first preset facial feature weight to obtain the three-dimensional model eyebrow;
and/or fusing the first model eye and the second model eye according to the second preset facial feature weight to obtain the three-dimensional model eye;
and/or fusing the first model nose and the second model nose according to the third preset facial feature weight to obtain the three-dimensional model nose;
and/or fusing the first model mouth and the second model mouth according to the fourth preset facial feature weight to obtain the three-dimensional model mouth.
7. The face model fusion method according to any one of claims 1 to 6, further comprising:
acquiring a face photo set, wherein the face photo set comprises a plurality of face photos;
constructing a face model database according to the face photo set, wherein the face model database comprises a plurality of face models corresponding to the face photos;
and acquiring a user operation instruction to determine that the face model corresponding to the user operation instruction in the face model database is the target face model.
8. The face model fusion method of claim 4, wherein performing mesh fusion on the first three-dimensional mesh and the second three-dimensional mesh according to the preset face shape weight to generate the face shape three-dimensional mesh comprises:
aligning the grid points of the first three-dimensional mesh and the second three-dimensional mesh;
and fusing the aligned first three-dimensional mesh and the aligned second three-dimensional mesh according to the preset face shape weight to obtain the face shape three-dimensional mesh.
9. A face model fusion system, comprising:
the construction module is used for constructing a user face model and a target face model according to a preset face model construction mode;
the first extraction module is used for extracting first model facial features and a first model face shape from the user face model;
the second extraction module is used for extracting second model facial features and a second model face shape from the target face model;
and the fusion module is used for fusing the first model facial features, the first model face shape, the second model facial features and the second model face shape according to preset weights to obtain a three-dimensional face model.
10. An electronic control apparatus, characterized by comprising:
at least one processor, and,
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a face model fusion method according to any one of claims 1 to 7.
CN202110466692.9A 2021-04-28 2021-04-28 Face model fusion method, system and equipment Pending CN113240810A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110466692.9A CN113240810A (en) 2021-04-28 2021-04-28 Face model fusion method, system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110466692.9A CN113240810A (en) 2021-04-28 2021-04-28 Face model fusion method, system and equipment

Publications (1)

Publication Number Publication Date
CN113240810A 2021-08-10

Family

ID=77129761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110466692.9A Pending CN113240810A (en) 2021-04-28 2021-04-28 Face model fusion method, system and equipment

Country Status (1)

Country Link
CN (1) CN113240810A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719326A (en) * 2016-01-19 2016-06-29 华中师范大学 Realistic face generating method based on single photo


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination