CN109377556B - Face image feature processing method and device - Google Patents
- Publication number: CN109377556B (application CN201811400818.7A)
- Authority
- CN
- China
- Legal status: Active (the status listed by Google Patents is an assumption, not a legal conclusion)
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T3/00—Geometric image transformations in the plane of the image › G06T3/04—Context-preserving transformations, e.g. by using an importance map
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/30—Subject of image; Context of image processing › G06T2207/30196—Human being; Person › G06T2207/30201—Face
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
An embodiment of the application provides a face image feature processing method and device. Face detection is performed on an image to be processed of a target user to obtain facial feature points, and three-dimensional face reconstruction combining pre-stored three-dimensional face shape basis coefficients yields a three-dimensional face model and a pose matrix corresponding to the image. A pre-stored reference face template is then mapped onto a to-be-processed template of the same size as the image, so that the template marks the positions of the marker features the user wants to keep. Marker feature detection is performed on the image, and all detected marker features are removed except those corresponding to the marked regions of the template, producing a processed image that preserves only the marker features the user wants to keep. Through this process, user-specified features are automatically retained while other spots and blemishes are removed, providing a fast, automatic, and customized facial image processing scheme.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a face image feature processing method and device.
Background
When a user takes a portrait photo and applies blemish-removal processing in image-editing software, personalized facial features the user likes and wants to keep, such as moles, may be removed as well. With the continuing development of face image beautification technology, users increasingly expect personalized, customized, and intelligent beautification results.

Although existing image-editing software lets a user manually remove spots and acne and selectively remove or retain facial features, this manual workflow is cumbersome and not intelligent enough.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a method and an apparatus for processing facial image features to solve the above problems.
The embodiment of the application provides a method for processing facial image features, which comprises the following steps:
obtaining an image to be processed of a target user, performing face detection on the image to be processed to obtain facial feature points of the image to be processed, and performing three-dimensional face reconstruction according to pre-stored three-dimensional face shape basis coefficients corresponding to the target user and the facial feature points, to obtain a three-dimensional face model and a pose matrix corresponding to the image to be processed;

mapping a pre-stored reference face template corresponding to the target user onto a to-be-processed template of the same size as the image to be processed, according to the texture coordinates of the vertices of the average face model in the three-dimensional face model and the pose matrix, wherein the regions with pixel value 255 on the mapped template correspond to the positions, on the image to be processed, of the marker features the user wants to keep; and

performing marker feature detection on the image to be processed, and removing, from the detected marker features, all marker features except those corresponding to the regions with pixel value 255 on the to-be-processed template.
Optionally, the pre-stored three-dimensional face shape basis coefficients corresponding to the target user are obtained by:

acquiring multiple reference images of the target user in different postures; and

performing face detection and three-dimensional face reconstruction on the multiple reference images to obtain the three-dimensional face shape basis coefficients of the target user's three-dimensional face model.
Optionally, the pre-stored reference face template corresponding to the target user is obtained through the following steps:
acquiring a front face image of the target user contained in the multiple reference images, and performing face detection on the front face image to obtain facial feature points of the front face image;

performing three-dimensional face reconstruction according to the obtained three-dimensional face shape basis coefficients and the facial feature points of the front face image, to obtain a front three-dimensional face model and a front pose matrix corresponding to the front face image;

performing marker feature detection on the front face image and obtaining the marker features selected by the user; and

processing the marker features selected by the user to obtain a front face template, and obtaining a reference face template corresponding to the target user according to the front face template, the front three-dimensional face model, and the front pose matrix.
Optionally, the step of performing marker feature detection on the front face image to obtain a marker feature on the front face image includes:
carrying out mark feature detection on the front face image to obtain mark features on the front face image;
and obtaining the mark features selected by the user in the mark features on the front face image based on the selection of the user.
Optionally, the step of processing the landmark features selected by the user to obtain a front face template, and obtaining a reference face template corresponding to the target user according to the front face template, the front three-dimensional face model, and the front pose matrix includes:
setting the pixel value of the area of the mark feature selected by the user on the front face image to be 255 and setting the pixel values of the areas of other mark features to be 0 so as to generate a front face template with the size consistent with that of the front face image;
obtaining a texture expansion map of the front three-dimensional face model according to the texture coordinates of the vertices of the average face model in the front three-dimensional face model and the front pose matrix; and

mapping the front face template onto the texture expansion map to obtain the reference face template corresponding to the target user.
An embodiment of the present application further provides a face image feature processing device, which includes:
the first reconstruction module is used for obtaining an image to be processed of a target user, performing face detection on the image to be processed to obtain facial feature points of the image to be processed, and performing three-dimensional face reconstruction according to pre-stored three-dimensional face shape basis coefficients corresponding to the target user and the facial feature points, to obtain a three-dimensional face model and a pose matrix corresponding to the image to be processed;

the mapping module is used for mapping a pre-stored reference face template corresponding to the target user onto a to-be-processed template of the same size as the image to be processed, according to the texture coordinates of the vertices of the average face model in the three-dimensional face model and the pose matrix, wherein the regions with pixel value 255 on the mapped template correspond to the positions, on the image to be processed, of the marker features the user wants to keep; and
the removal processing module is used for performing marker feature detection on the image to be processed and removing, from the detected marker features, all marker features except those corresponding to the regions with pixel value 255 on the to-be-processed template.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring a plurality of reference images of a target user under different postures;
and the second reconstruction module is used for carrying out face detection and face three-dimensional reconstruction on the multiple reference images to obtain a three-dimensional face basic coefficient of the three-dimensional face model of the target user.
Optionally, the apparatus further comprises:
the detection module is used for obtaining the front face images of the target users contained in the multiple reference images and carrying out face detection on the front face images to obtain face characteristic points of the front face images;
the third reconstruction module is used for performing three-dimensional face reconstruction according to the obtained three-dimensional face shape basis coefficients and the facial feature points of the front face image, to obtain a front three-dimensional face model and a front pose matrix corresponding to the front face image;
the mark feature acquisition module is used for carrying out mark feature detection on the front face image and acquiring a mark feature selected by a user;
and the reference face template obtaining module is used for processing the mark features selected by the user to obtain a front face template, and obtaining a reference face template corresponding to the target user according to the front face template, the front three-dimensional face model and the front attitude matrix.
Optionally, the flag feature obtaining module includes:
the detection unit is used for carrying out mark feature detection on the front face image to obtain mark features on the front face image;
and the mark feature obtaining unit is used for obtaining the mark features selected by the user in the mark features on the front face image based on the selection of the user.
Optionally, the reference face template obtaining module includes:
the front face template generating unit is used for setting the pixel value of the area of the mark feature selected by the user on the front face image to be 255 and setting the pixel values of the areas of other mark features to be 0 so as to generate a front face template with the size consistent with that of the front face image;
the texture expansion map obtaining unit is used for obtaining a texture expansion map of the front three-dimensional face model according to the texture coordinates of the vertices of the average face model in the front three-dimensional face model and the front pose matrix; and

the mapping unit is used for mapping the front face template onto the texture expansion map to obtain the reference face template corresponding to the target user.
In the face image feature processing method and device above, facial feature points of the target user are obtained by detecting the image to be processed; combined with the pre-stored three-dimensional face shape basis coefficients, reconstruction yields the three-dimensional face model and pose matrix corresponding to the image. The pre-stored reference face template corresponding to the target user is mapped onto a to-be-processed template of the same size as the image, marking the positions of the marker features the user wants to keep. Marker feature detection is then performed on the image, and all detected marker features are removed except those corresponding to the marked regions of the template, producing a processed image that preserves the marker features the user wants to keep. Through this process, user-specified features are automatically retained while other spots and blemishes are removed, providing a fast, automatic, and customized facial image processing scheme.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a face image feature processing method according to an embodiment of the present application.
Fig. 3 is another flowchart of a face image feature processing method according to an embodiment of the present application.
Fig. 4 is another flowchart of a face image feature processing method according to an embodiment of the present application.
Fig. 5 is a flowchart of sub-steps of step S150 in fig. 4.
Fig. 6 is a flowchart of the sub-steps of step S160 in fig. 4.
Fig. 7 is a functional block diagram of a facial image feature processing apparatus according to an embodiment of the present application.
Fig. 8 is a block diagram of another functional module of the facial image feature processing apparatus according to the embodiment of the present application.
Fig. 9 is a functional block diagram of a flag feature obtaining module according to an embodiment of the present application.
Fig. 10 is a functional block diagram of a reference face template obtaining module according to an embodiment of the present application.
Reference numerals: 100-an electronic device; 110-a face image feature processing apparatus; 111-a first reconstruction module; 112-a mapping module; 113-a removal processing module; 114-an acquisition module; 115-a second reconstruction module; 116-a detection module; 117-a third reconstruction module; 118-a marker feature acquisition module; 1181-a detection unit; 1182-a marker feature obtaining unit; 119-a reference face template obtaining module; 1191-a front face template generation unit; 1192-a texture expansion map obtaining unit; 1193-a mapping unit; 120-a processor; 130-a memory.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
As shown in fig. 1, an embodiment of the present application provides an electronic device 100, where the electronic device 100 includes a memory 130, a processor 120, and a facial image feature processing apparatus 110.
The memory 130 is electrically connected to the processor 120 directly or indirectly to enable data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The facial image feature processing device 110 includes at least one software functional module which can be stored in the memory 130 in the form of software or firmware (firmware). The processor 120 is configured to execute executable computer programs stored in the memory 130, for example, software functional modules and computer programs included in the facial image feature processing apparatus 110, so as to implement a facial image feature processing method.
The memory 130 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 130 is used for storing a program, and the processor 120 executes the program after receiving an execution instruction.
The processor 120 may be an integrated circuit chip having signal processing capabilities. The Processor 120 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor 120 may be any conventional processor or the like.
It is to be understood that the configuration shown in fig. 1 is merely exemplary, and that the electronic device 100 may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
In this embodiment, the electronic device 100 may be, but is not limited to, a terminal device with an image processing function, such as a smart phone, a notebook computer, a tablet computer, a mobile internet device, and a server.
With reference to fig. 2, an embodiment of the present invention further provides a face image feature processing method applicable to the electronic device 100, whose steps may be implemented by the electronic device 100. The specific flow shown in fig. 2 is described in detail below.
Step S210, obtaining an image to be processed of a target user, performing face detection on the image to be processed to obtain face characteristic points of the image to be processed, and performing face three-dimensional reconstruction according to prestored three-dimensional face base coefficients corresponding to the target user and the face characteristic points to obtain a three-dimensional face model and a posture matrix corresponding to the image to be processed.
In this embodiment, before a face image of the target user is actually processed, multiple face images of the target user need to be processed in advance to obtain the three-dimensional face shape basis coefficients that represent the target user's face shape, and a reference face template marked with the marker features the target user wants to keep.
Referring to fig. 3, in this embodiment, before step S210 is executed, the method for processing facial image features further includes the following steps:
and step S110, acquiring a plurality of reference images of the target user under different postures.
And step S120, carrying out face detection and face three-dimensional reconstruction on the multiple reference images to obtain a three-dimensional face basic coefficient of the three-dimensional face model of the target user.
Multiple reference images of the target user in different postures can be collected in advance, for example reference images captured from a high (overhead) angle, a low angle, a left side angle, and a right side angle. Each reference image contains a face image of the target user.
Based on these reference images, face detection and three-dimensional face reconstruction are performed to obtain the three-dimensional face shape basis coefficients of the target user's three-dimensional face model. It should be noted that, for the same user, the face shape is the same in all face images; that is, the user's real face shape is considered not to change with posture or expression, although different face images do differ in expression and pose. In other words, models reconstructed from different face images of the same user share the same face shape but have different expressions and poses.
In this embodiment, three-dimensional face reconstruction uses a three-dimensional morphable model (3DMM) solving method. The face space is taken to be linear, and a linear combination of pre-collected three-dimensional face data is projected to approximate the face in the two-dimensional picture, establishing a face-space basis and finally yielding a fitted three-dimensional face model. The resulting model comprises an average face model, a three-dimensional face shape component, and a three-dimensional expression component: the average face model can be a fixed model, the shape component is determined by the three-dimensional face shape basis coefficients, and the expression component by the expression basis coefficients.
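The linear-combination structure described above can be sketched as follows. This is a minimal NumPy illustration of a 3DMM-style model, not the patent's actual implementation; the function name, variable names, and dimensions are all hypothetical:

```python
import numpy as np

def reconstruct_face(mean_shape, shape_basis, expr_basis, shape_coef, expr_coef):
    """Linear 3DMM combination: average face plus shape and expression offsets.

    mean_shape  : (3N,) average face model vertices, flattened (fixed)
    shape_basis : (3N, Ks) face-shape basis; shape_coef is fixed per identity
    expr_basis  : (3N, Ke) expression basis; expr_coef varies per image
    """
    return mean_shape + shape_basis @ shape_coef + expr_basis @ expr_coef

# toy example: 2 vertices (6 coordinates), 1 shape and 1 expression component
mean = np.zeros(6)
S = np.ones((6, 1))
E = np.arange(6, dtype=float).reshape(6, 1)
model = reconstruct_face(mean, S, E, np.array([2.0]), np.array([1.0]))
```

Because the shape coefficients are fixed per user, only `expr_coef` (and the pose) needs to be re-solved for each new image of that user, which is exactly the saving the embodiment describes.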
As described above, since the three-dimensional face shape basis coefficients are the same for the same user, the three-dimensional expression models corresponding to the user's different face images can be obtained on the basis of the user's face shape basis coefficients.
In addition, in this embodiment, the pose matrix may be obtained by minimizing the discrepancy between the projected three-dimensional face model and the facial feature points. Optionally, the pose matrix can be solved by minimizing:

Error = MVP * M - P2d

where MVP denotes the pose matrix of the face, M the three-dimensional face model, and P2d the two-dimensional facial feature points. The formula indicates that when Error attains its minimum value, the pose matrix corresponding to the three-dimensional face model is obtained.

Referring to fig. 4, the face image feature processing method provided in this embodiment further includes the following steps:
step S130, obtaining the front face images of the target user contained in the multiple reference images, and performing face detection on the front face images to obtain face feature points of the front face images.
Step S140, performing three-dimensional face reconstruction according to the obtained three-dimensional face shape basis coefficients and the facial feature points of the front face image, to obtain a front three-dimensional face model and a front pose matrix corresponding to the front face image.
Among face images captured in different postures, images containing the frontal face can be processed and annotated with the highest accuracy. Therefore, in this embodiment, the front face image of the target user contained in the multiple reference images can be obtained, and face detection performed on it to obtain the facial feature points of the front face image. The facial feature points include the facial organs and the cheek and peripheral contour feature points, including but not limited to key points representing the eyebrows, eyes, nose, mouth, and outer facial contour.
Three-dimensional face reconstruction is then performed according to the obtained three-dimensional face shape basis coefficients of the target user and the facial feature points of the front face image, to obtain the front three-dimensional face model and front pose matrix corresponding to the front face image. These are obtained in the same way as the three-dimensional face model and pose matrix when processing the target user's reference images; refer to the description above, which is not repeated here.
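The pose solution described earlier, minimizing Error = MVP*M - P2d, can be sketched as an affine least-squares fit. The names below (`fit_pose_matrix`, the point arrays) are illustrative assumptions, and the affine-camera simplification is one possible reading of the patent's minimization, not its stated implementation:

```python
import numpy as np

def fit_pose_matrix(model_pts, image_pts):
    """Least-squares fit of an affine pose/projection matrix MVP so that
    Error = MVP*M - P2d is minimized, where M are homogeneous 3-D model
    feature vertices and P2d the detected 2-D facial feature points.

    model_pts : (N, 3) 3-D face-model feature vertices
    image_pts : (N, 2) corresponding 2-D feature points
    returns   : (2, 4) affine projection matrix
    """
    n = model_pts.shape[0]
    M_h = np.hstack([model_pts, np.ones((n, 1))])   # homogeneous coords, (N, 4)
    # Solve M_h @ MVP^T ~= image_pts in the least-squares sense
    mvp_t, *_ = np.linalg.lstsq(M_h, image_pts, rcond=None)
    return mvp_t.T

# sanity check: points generated by a known pose are recovered
true_mvp = np.array([[2.0, 0.0, 0.0, 1.0],
                     [0.0, 2.0, 0.0, -1.0]])
pts3d = np.random.default_rng(0).random((10, 3))
pts2d = (true_mvp @ np.hstack([pts3d, np.ones((10, 1))]).T).T
est = fit_pose_matrix(pts3d, pts2d)
```

A full perspective projection would require nonlinear minimization, but the least-squares form above captures the Error-minimization idea of the formula.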
And step S150, carrying out mark feature detection on the front face image and obtaining the mark features selected by the user.
Referring to fig. 5, in the present embodiment, the step S150 may include the following sub-steps:
step S151, performing mark feature detection on the front face image to obtain a mark feature on the front face image.
And step S152, obtaining the mark features selected by the user in the mark features on the front face image based on the selection of the user.
In this embodiment, marker feature detection is performed on the front face image of the target user, where a marker feature is a spot, mole, scar, or other distinctive feature on the face image. After the marker features on the front face image are obtained, they can be displayed to the user, who selects which to retain. Optionally, all detected marker features may be shown for selection; alternatively, each marker feature may be framed by its bounding rectangle, and only a preset number of the largest features by bounding-rectangle size, for example four or five, shown to the user for selection. This embodiment imposes no limit here; it can be set as required.
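The optional ranking-by-bounding-rectangle display can be sketched as below. The feature representation (a list of dicts with a `bbox` tuple) is a hypothetical simplification introduced here for illustration:

```python
def top_marker_features(features, n=4):
    """Rank detected marker features by the area of their bounding rectangle
    and return the n largest, to be shown to the user for selection.

    features : list of dicts like {"label": ..., "bbox": (x, y, w, h)}
    """
    by_area = sorted(features, key=lambda f: f["bbox"][2] * f["bbox"][3],
                     reverse=True)
    return by_area[:n]

detected = [
    {"label": "mole",  "bbox": (120, 80, 6, 6)},    # area 36
    {"label": "scar",  "bbox": (40, 200, 20, 4)},   # area 80
    {"label": "spot",  "bbox": (90, 150, 3, 3)},    # area 9
    {"label": "acne",  "bbox": (60, 60, 5, 5)},     # area 25
    {"label": "spot2", "bbox": (10, 10, 2, 2)},     # area 4
]
shown = top_marker_features(detected, n=4)
```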
After the marker features are presented, the user can manually select those to be kept, such as a beauty mark or other marker features the user finds aesthetically pleasing. The marker features selected by the user among those on the front face image are thereby obtained.
Step S160, the mark features selected by the user are processed to obtain a front face template, and a reference face template corresponding to the target user is obtained according to the front face template, the front three-dimensional face model and the front attitude matrix.
Referring to fig. 6, in the present embodiment, the step S160 may include the following sub-steps:
step S161 is to set the pixel value of the region of the mark feature selected by the user on the front face image to 255, and set the pixel values of the regions of the other mark features to 0, so as to generate a front face template having the same size as the front face image.
Step S162, obtaining a texture expansion map of the front three-dimensional face model according to the texture coordinates of the vertices of the average face model in the front three-dimensional face model and the front pose matrix.

Step S163, mapping the front face template onto the texture expansion map to obtain the reference face template corresponding to the target user.
In this embodiment, after the marker features the user selected to keep are obtained, the pixels in the regions of those features on the front face image are filled with white, i.e., the pixel value of each selected feature's region is set to 255. The regions of all other marker features on the front face image are filled with black, i.e., their pixel values are set to 0. This generates a front face template of the same size as the front face image.
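A minimal sketch of this template-generation step, assuming the marker-feature regions are given as non-overlapping bounding boxes; the function name and box format are illustrative, not from the patent:

```python
import numpy as np

def make_front_face_template(image_shape, keep_boxes, other_boxes):
    """Build a single-channel template the size of the frontal face image:
    regions of user-selected marker features are set to 255 (keep),
    all other marker-feature regions are set to 0 (remove).

    image_shape : (height, width) of the frontal face image
    keep_boxes  : [(x, y, w, h)] features the user selected to keep
    other_boxes : [(x, y, w, h)] remaining detected features (assumed
                  not to overlap keep_boxes)
    """
    template = np.zeros(image_shape, dtype=np.uint8)   # 0 everywhere
    for x, y, w, h in keep_boxes:
        template[y:y + h, x:x + w] = 255               # 255 = keep
    for x, y, w, h in other_boxes:
        template[y:y + h, x:x + w] = 0                 # 0 = remove
    return template

tmpl = make_front_face_template((240, 320),
                                keep_boxes=[(100, 50, 8, 8)],
                                other_boxes=[(30, 30, 5, 5)])
```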
In this embodiment, a texture expansion map of the front three-dimensional face model is obtained according to the texture coordinates of the vertices of the average face model in the front three-dimensional face model and the front pose matrix. Texture expansion (UV unwrapping) is a common method for parameterizing a three-dimensional mesh model into a two-dimensional space, so the process is not detailed here.
The front face template is then mapped onto the texture expansion map to obtain the reference face template of the target user, on which the mark features that the user wants to keep are marked. When any face image of the target user is subsequently obtained, that image can be processed by mapping the reference face template onto it, yielding an image that meets the user's requirements, with unwanted spots, moles and the like removed.
Step S220, mapping a pre-stored reference face template corresponding to the target user to a template to be processed whose size is consistent with that of the image to be processed, according to the texture coordinates of the vertices of the average face model in the three-dimensional face model and the pose matrix, wherein the region with the pixel value of 255 on the mapped template to be processed corresponds to the positions of the mark features that the user wants to keep on the image to be processed.
Step S230, performing mark feature detection on the image to be processed, and performing removal processing on other mark features except the mark feature corresponding to the region with the pixel value of 255 on the template to be processed.
In this embodiment, after the to-be-processed image of the target user is obtained, face detection may be performed on it to obtain the facial feature points of the image. These facial feature points include feature points of the facial features and the peripheral face contour in the image to be processed, including but not limited to key points representing the eyebrows, eyes, nose, mouth and outer facial contour.
Three-dimensional face reconstruction is then performed according to the pre-stored three-dimensional face shape base coefficients corresponding to the target user and the facial feature points, so as to obtain a three-dimensional face model and a pose matrix corresponding to the image to be processed. For the method of obtaining the three-dimensional face model and the pose matrix, reference may be made to the above description, and details are not repeated here.
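A hedged sketch of both halves of this step. The exact reconstruction is described elsewhere in the specification; here we assume the common linear morphable-model form (user shape = average shape + weighted shape basis) and read the error formula Error = MVP*M − P2d from the claims as an affine least-squares fit. Both readings are our assumptions, not the patent's definitive method:

```python
import numpy as np

def reconstruct_shape(mean_shape, shape_basis, base_coeffs):
    """Linear morphable-model assumption: the user-specific 3-D face is
    the average face plus the stored shape base coefficients applied to
    the shape basis columns."""
    return mean_shape + shape_basis @ base_coeffs

def fit_pose_matrix(model_points, image_points):
    """Least-squares pose fit minimising Error = MVP*M - P2d, assuming
    an affine 2x4 MVP (a perspective MVP would need a nonlinear solver).
    model_points: (N, 3) model vertices; image_points: (N, 2) features."""
    homo = np.hstack([model_points, np.ones((len(model_points), 1))])  # (N, 4)
    mvp_t, *_ = np.linalg.lstsq(homo, image_points, rcond=None)
    return mvp_t.T                                                     # (2, 4)
```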
The reference face template corresponding to the target user, obtained in the step above, is mapped to a to-be-processed template whose size is consistent with that of the to-be-processed image, according to the texture coordinates of the vertices of the average face model in the three-dimensional face model of the to-be-processed image and the pose matrix. The region with the pixel value of 255 on the mapped template to be processed thus corresponds to the positions of the mark features that the user wants to keep on the image to be processed.
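The mapping in the step above can be sketched per-vertex: each model vertex reads the reference template at its fixed UV coordinate and writes that value at its projected position in the image plane. As before, the 3x4 projective pose matrix and per-vertex UV coordinates are our simplifying assumptions; real systems rasterise triangles and interpolate:

```python
import numpy as np

def map_template_to_image(ref_template, uv_coords, vertices, pose, image_shape):
    """Warp the UV-space reference face template into a template the
    size of the image to be processed. Pixels written as 255 mark the
    positions of mark features the user wants to keep."""
    h, w = image_shape[:2]
    out = np.zeros((h, w), dtype=np.uint8)
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (N, 4)
    proj = homo @ pose.T                                       # (N, 3)
    pts = (proj[:, :2] / proj[:, 2:3]).astype(int)
    th, tw = ref_template.shape[:2]
    for (u, v), (px, py) in zip(uv_coords, pts):
        if 0 <= px < w and 0 <= py < h:
            out[py, px] = ref_template[int(v * (th - 1)), int(u * (tw - 1))]
    return out
```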
Optionally, mark feature detection is performed on the to-be-processed image to obtain the mark features in it. Among the detected mark features, those corresponding to the regions with the pixel value of 255 in the mapped template to be processed are the mark features the user wants to retain; the others are the mark features to be removed by processing.
Optionally, the mark features in the image to be processed other than those corresponding to the region with the pixel value of 255 on the template to be processed may be removed, so that a processed image retaining only the mark features desired by the user is obtained.
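As a toy illustration of this removal rule, here is a flat-fill version. The `(x, y, w, h)` boxes stand in for whatever the mark-feature detector returns, and the median fill is our crude stand-in; a real system would use seamless inpainting instead:

```python
import numpy as np

def remove_unselected_marks(image, detected_boxes, template):
    """Fill every detected mark-feature box whose template region
    contains no 255 pixel with the image's median colour; boxes the
    template marks as 255 (user-kept) are left untouched."""
    out = image.copy()
    fill = np.median(image, axis=(0, 1)).astype(image.dtype)  # skin tone proxy
    for x, y, w, h in detected_boxes:
        if not np.any(template[y:y + h, x:x + w] == 255):
            out[y:y + h, x:x + w] = fill   # crude stand-in for inpainting
    return out
```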
Referring to fig. 7, an embodiment of the present invention further provides a facial image feature processing apparatus 110 applied to the electronic device 100. The facial image feature processing apparatus 110 includes a first reconstruction module 111, a mapping module 112, and a removal processing module 113.
The first reconstruction module 111 is configured to obtain an image to be processed of a target user, perform face detection on the image to be processed to obtain a facial feature point of the image to be processed, and perform face three-dimensional reconstruction according to a pre-stored three-dimensional face shape base coefficient corresponding to the target user and the facial feature point to obtain a three-dimensional face model and a pose matrix corresponding to the image to be processed.
The mapping module 112 is configured to map a pre-stored reference face template corresponding to the target user to a template to be processed whose size is consistent with that of the image to be processed, according to the texture coordinates of the vertices of the average face model in the three-dimensional face model and the pose matrix, where the region with the pixel value of 255 on the mapped template to be processed corresponds to the positions of the mark features that the user wants to keep on the image to be processed.
The removal processing module 113 is configured to perform mark feature detection on the image to be processed, and to remove, among the detected mark features, those other than the mark features corresponding to the region with the pixel value of 255 on the template to be processed.
Referring to fig. 8, in the present embodiment, the facial image feature processing apparatus 110 further includes an acquisition module 114, a second reconstruction module 115, a detection module 116, a third reconstruction module 117, a mark feature obtaining module 118, and a reference face template obtaining module 119.
The acquisition module 114 is configured to acquire multiple reference images of the target user in different postures.
The second reconstruction module 115 is configured to perform face detection and three-dimensional face reconstruction on the multiple reference images to obtain the three-dimensional face shape base coefficients of the three-dimensional face model of the target user.
The detection module 116 is configured to obtain front face images of the target user included in the multiple reference images, and perform face detection on the front face images to obtain face feature points of the front face images.
The third reconstruction module 117 is configured to perform three-dimensional face reconstruction according to the obtained three-dimensional face shape base coefficients and the facial feature points of the front face image, so as to obtain a front three-dimensional face model and a front pose matrix corresponding to the front face image.
The mark feature obtaining module 118 is configured to perform mark feature detection on the front face image, and obtain a mark feature selected by a user.
The reference face template obtaining module 119 is configured to process the mark feature selected by the user to obtain a front face template, and obtain a reference face template corresponding to the target user according to the front face template, the front three-dimensional face model, and the front pose matrix.
Referring to fig. 9, in the present embodiment, the mark feature obtaining module 118 includes a detecting unit 1181 and a mark feature obtaining unit 1182.
The detecting unit 1181 is configured to perform mark feature detection on the front face image to obtain a mark feature on the front face image.
The mark feature obtaining unit 1182 is configured to obtain, based on a selection of a user, a mark feature selected by the user in the mark features on the front face image.
Referring to fig. 10, in the present embodiment, the reference face template obtaining module 119 includes a front face template generating unit 1191, a texture expansion map obtaining unit 1192, and a mapping unit 1193.
The front face template generating unit 1191 is configured to set the pixel value of the region of each mark feature selected by the user on the front face image to 255, and set the pixel values of the regions of the other mark features to 0, so as to generate a front face template whose size is consistent with that of the front face image.
The texture expansion map obtaining unit 1192 is configured to obtain a texture expansion map of the front three-dimensional face model according to the texture coordinates of the vertex of the average face model in the front three-dimensional face model and the front pose matrix.
The mapping unit 1193 is configured to map the front face template to the texture expansion map to obtain a reference face template corresponding to the target user.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method, and redundant description is not repeated here.
To sum up, the method and device for processing face image features provided in the embodiments of the present application detect the to-be-processed image to obtain the facial feature points of the target user, combine the pre-stored three-dimensional face shape base coefficients to obtain a three-dimensional face model and a pose matrix corresponding to the to-be-processed image, and map a pre-stored reference face template corresponding to the target user onto a to-be-processed template consistent in size with the to-be-processed image, so that the positions of the mark features the user wants to retain are marked on the to-be-processed template. Mark feature detection is then performed on the image to be processed, and all mark features other than those corresponding to the marked regions on the to-be-processed template are removed, yielding a processed image that retains only the mark features the user wants to keep. Through this process, user-specified features are retained automatically while other spots and flaws are removed, providing the user with a fast, automatic and customized facial image processing scheme.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
1. A face image feature processing method is characterized by comprising the following steps:
obtaining an image to be processed of a target user, performing face detection on the image to be processed to obtain facial feature points of the image to be processed, and performing three-dimensional face reconstruction according to pre-stored three-dimensional face shape base coefficients corresponding to the target user and the facial feature points to obtain a three-dimensional face model and a pose matrix corresponding to the image to be processed;
mapping a pre-stored reference face template corresponding to the target user to a template to be processed whose size is consistent with that of the image to be processed, according to texture coordinates of the vertices of the average face model in the three-dimensional face model and the pose matrix, wherein the region with the pixel value of 255 on the mapped template to be processed corresponds to the positions of the mark features that the user wants to keep on the image to be processed;
performing mark feature detection on the image to be processed, and performing removal processing on the detected mark features other than the mark features corresponding to the region with the pixel value of 255 on the template to be processed;
wherein the pre-stored three-dimensional face shape base coefficients corresponding to the target user are obtained by the following steps:
acquiring multiple reference images of the target user under different postures; and performing face detection and three-dimensional face reconstruction on the multiple reference images to obtain the three-dimensional face shape base coefficients of the three-dimensional face model of the target user;
and the pose matrix is obtained according to the three-dimensional face model and the facial feature points by minimizing the following error:

Error = MVP * M - P2d

wherein MVP represents the pose matrix of the face, M represents the three-dimensional face model of the face, and P2d represents the facial feature points.
2. The facial image feature processing method according to claim 1, wherein the pre-stored reference facial template corresponding to the target user is obtained by:
acquiring front face images of the target user contained in the multiple reference images, and performing face detection on the front face images to obtain face feature points of the front face images;
performing three-dimensional face reconstruction according to the obtained three-dimensional face shape base coefficients and the facial feature points of the front face image to obtain a front three-dimensional face model and a front pose matrix corresponding to the front face image;
carrying out mark feature detection on the front face image and obtaining a mark feature selected by a user;
and processing the mark features selected by the user to obtain a front face template, and obtaining a reference face template corresponding to the target user according to the front face template, the front three-dimensional face model and the front pose matrix.
3. The face image feature processing method of claim 2, wherein the step of performing mark feature detection on the front face image and obtaining the mark feature selected by the user comprises:
carrying out mark feature detection on the front face image to obtain mark features on the front face image;
and obtaining the mark features selected by the user in the mark features on the front face image based on the selection of the user.
4. The method of claim 2, wherein the step of processing the mark features selected by the user to obtain a front face template, and obtaining a reference face template corresponding to the target user according to the front face template, the front three-dimensional face model and the front pose matrix comprises:
setting the pixel value of the area of the mark feature selected by the user on the front face image to be 255, and setting the pixel values of the areas of other mark features to be 0 so as to generate a front face template with the size consistent with that of the front face image;
obtaining a texture expansion map of the front three-dimensional face model according to texture coordinates of the vertices of the average face model in the front three-dimensional face model and the front pose matrix;
and mapping the front face template to the texture expansion map to obtain a reference face template corresponding to the target user.
5. An apparatus for processing a face image feature, the apparatus comprising:
the first reconstruction module is used for obtaining an image to be processed of a target user, performing face detection on the image to be processed to obtain facial feature points of the image to be processed, and performing three-dimensional face reconstruction according to pre-stored three-dimensional face shape base coefficients corresponding to the target user and the facial feature points to obtain a three-dimensional face model and a pose matrix corresponding to the image to be processed;
the mapping module is used for mapping a pre-stored reference face template corresponding to the target user to a template to be processed whose size is consistent with that of the image to be processed, according to texture coordinates of the vertices of the average face model in the three-dimensional face model and the pose matrix, wherein the region with the pixel value of 255 on the mapped template to be processed corresponds to the positions of the mark features that the user wants to keep on the image to be processed;
the removal processing module is used for performing mark feature detection on the image to be processed, and performing removal processing on the detected mark features other than the mark features corresponding to the region with the pixel value of 255 on the template to be processed;
the device further comprises:
the acquisition module is used for acquiring multiple reference images of the target user under different postures; and the second reconstruction module is used for performing face detection and three-dimensional face reconstruction on the multiple reference images to obtain the three-dimensional face shape base coefficients of the three-dimensional face model of the target user;
and the pose matrix is obtained according to the three-dimensional face model and the facial feature points by minimizing the following error:

Error = MVP * M - P2d

wherein MVP represents the pose matrix of the face, M represents the three-dimensional face model of the face, and P2d represents the facial feature points.
6. The facial image feature processing apparatus according to claim 5, said apparatus further comprising:
the detection module is used for obtaining the front face images of the target users contained in the multiple reference images and carrying out face detection on the front face images to obtain face characteristic points of the front face images;
the third reconstruction module is used for performing three-dimensional face reconstruction according to the obtained three-dimensional face shape base coefficients and the facial feature points of the front face image to obtain a front three-dimensional face model and a front pose matrix corresponding to the front face image;
the mark feature obtaining module is used for performing mark feature detection on the front face image and obtaining the mark features selected by the user;
and the reference face template obtaining module is used for processing the mark features selected by the user to obtain a front face template, and obtaining the reference face template corresponding to the target user according to the front face template, the front three-dimensional face model and the front pose matrix.
7. The facial image feature processing device according to claim 6, wherein the mark feature obtaining module comprises:
the detection unit is used for carrying out mark feature detection on the front face image to obtain mark features on the front face image;
and the mark feature obtaining unit is used for obtaining the mark feature selected by the user in the mark features on the front face image based on the selection of the user.
8. The apparatus for processing facial image features according to claim 6, wherein the reference facial template obtaining module comprises:
the front face template generating unit is used for setting the pixel value of the area of the mark feature selected by the user on the front face image to be 255 and setting the pixel values of the areas of other mark features to be 0 so as to generate a front face template with the size consistent with that of the front face image;
the texture expansion map obtaining unit is used for obtaining a texture expansion map of the front three-dimensional face model according to texture coordinates of the vertices of the average face model in the front three-dimensional face model and the front pose matrix;
and the mapping unit is used for mapping the front face template to the texture expansion map to obtain the reference face template corresponding to the target user.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201811400818.7A CN109377556B (en) | 2018-11-22 | 2018-11-22 | Face image feature processing method and device |
Publications (2)
| Publication Number | Publication Date |
| --- | --- |
| CN109377556A (en) | 2019-02-22 |
| CN109377556B (en) | 2022-11-01 |
Family
ID=65377203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811400818.7A Active CN109377556B (en) | 2018-11-22 | 2018-11-22 | Face image feature processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109377556B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111428579A (en) * | 2020-03-03 | 2020-07-17 | 平安科技(深圳)有限公司 | Face image acquisition method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102254154A (en) * | 2011-07-05 | 2011-11-23 | 南京大学 | Method for authenticating human-face identity based on three-dimensional model reconstruction |
CN103593870A (en) * | 2013-11-12 | 2014-02-19 | 杭州摩图科技有限公司 | Picture processing device and method based on human faces |
CN104299250A (en) * | 2014-10-15 | 2015-01-21 | 南京航空航天大学 | Front face image synthesis method and system based on prior model |
CN104318262A (en) * | 2014-09-12 | 2015-01-28 | 上海明穆电子科技有限公司 | Method and system for replacing skin through human face photos |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184249B (en) * | 2015-08-28 | 2017-07-18 | 百度在线网络技术(北京)有限公司 | Method and apparatus for face image processing |
Legal Events
| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |