CN109242760B - Face image processing method and device and electronic equipment - Google Patents


Info

Publication number: CN109242760B (granted from application CN201810935096.9A)
Authority: CN (China)
Prior art keywords: face, dimensional, original, image, face image
Legal status: Active (assumed; Google has not performed a legal analysis)
Other versions: CN109242760A (Chinese)
Inventor: 张弓
Current and original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by: Guangdong Oppo Mobile Telecommunications Corp Ltd

Classifications

    • G06T3/18

Abstract

The invention provides a face image processing method and apparatus, and an electronic device. The method comprises: acquiring an original face three-dimensional model corresponding to an original face image; acquiring the three-dimensional geometric features of the original face three-dimensional model; comparing those features with the three-dimensional geometric features of all face templates in a face template group to obtain a target face template similar to the original face three-dimensional model; and adjusting the original face image according to the three-dimensional geometric features of the target face template to obtain a target face image. The face in the adjusted image is therefore more natural and realistic, a micro-shaping effect is provided to the user accurately, the user can obtain that effect conveniently, and the user experience is improved.

Description

Face image processing method and device and electronic equipment
Technical Field
The present invention relates to the field of image technologies, and in particular, to a face image processing method and apparatus, and an electronic device.
Background
With the development of terminal device technology, terminal devices such as mobile phones and tablet computers can offer users a growing range of functions. For example, through face-changing software on a terminal device, a user can replace the face in an original face image with a face template, such as a celebrity's face or another person's face, and view the face-changing effect.
However, because the face to be replaced and the template face differ, for example in skin color, directly substituting another person's face into the image easily makes the presented face-changing effect look unreal, and no accurate face-shaping effect can be provided to the user, so the user experience is not ideal.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art. Accordingly, a first object of the present invention is to provide a face image processing method in which the face in an original face image is adjusted using the three-dimensional geometric features of a similar face template. The facial features and face shape of the adjusted face thereby resemble the template, while the skin of the face remains unchanged before and after adjustment. As a result, the face in the adjusted image is more natural and realistic, a micro-shaping effect is provided to the user accurately, the user can obtain that effect conveniently, and the user experience is improved.
The second purpose of the invention is to provide a processing device of a human face image.
A third object of the invention is to propose an electronic device.
A fourth object of the invention is to propose a non-transitory computer-readable storage medium.
A fifth object of the invention is to propose a computer program product.
The embodiment of the invention provides a method for processing a face image, which comprises the following steps: acquiring an original face three-dimensional model corresponding to an original face image; acquiring three-dimensional geometric characteristics of the original human face three-dimensional model; comparing the three-dimensional geometric features of the original face three-dimensional model with the three-dimensional geometric features of all face templates in the face template group to obtain a target face template similar to the original face three-dimensional model; and adjusting the original face image according to the three-dimensional geometric characteristics of the target face template to obtain a target face image.
Another embodiment of the present invention provides a device for processing a face image, including: the first acquisition module is used for acquiring an original face three-dimensional model corresponding to an original face image; the second acquisition module is used for acquiring the three-dimensional geometric characteristics of the original human face three-dimensional model; the comparison module is used for comparing the three-dimensional geometric characteristics of the original human face three-dimensional model with the three-dimensional geometric characteristics of all human face templates in the human face template group to obtain a target human face template similar to the original human face three-dimensional model; and the processing module is used for adjusting the original face image according to the three-dimensional geometric characteristics of the target face template so as to obtain a target face image.
Another embodiment of the present invention provides an electronic device, including: the face image processing method comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein when the processor executes the program, the face image processing method of the embodiment is realized.
Yet another embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for processing a face image according to the above embodiment.
Yet another embodiment of the present invention provides a computer program product, wherein when the instructions in the computer program product are executed by a processor, the method for processing a face image according to the above embodiment is performed.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
the method comprises the steps of obtaining an original face three-dimensional model corresponding to an original face image, obtaining the three-dimensional geometric features of that model, comparing them with the three-dimensional geometric features of all face templates in a face template group to obtain a target face template similar to the original face three-dimensional model, and adjusting the face in the original face image according to the three-dimensional geometric features of the target face template. Because the face is adjusted using the three-dimensional geometric features of a similar template, the facial features and face shape of the adjusted face resemble the template while the skin remains unchanged before and after adjustment, so the adjusted face looks more natural and realistic, a micro-shaping effect is provided to the user accurately and conveniently, and the user experience is improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a method of processing a face image according to a first embodiment of the present invention;
FIG. 2 is a first detailed flowchart of step 104;
FIG. 3 is a second detailed flowchart of step 104;
FIG. 4 is a schematic structural diagram of a face image processing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a face image processing apparatus according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of a face image processing apparatus according to another embodiment of the present invention;
FIG. 7 is a schematic diagram of the internal structure of an electronic device 200 according to an embodiment of the invention;
FIG. 8 is a schematic diagram of an image processing circuit as one possible implementation;
fig. 9 is a schematic diagram of an image processing circuit as another possible implementation.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are illustrative, intended to explain the invention, and are not to be construed as limiting it.
The following describes a method, an apparatus and an electronic device for processing a face image according to an embodiment of the present invention with reference to the drawings.
Fig. 1 is a flowchart of a method of processing a face image according to a first embodiment of the present invention.
As shown in fig. 1, the method for processing a face image may include:
step 101, obtaining an original face three-dimensional model corresponding to an original face image.
It should be noted that, in different application scenarios, the original face three-dimensional model corresponding to the original face image may be obtained in multiple ways, and this embodiment provides several possible implementation ways:
as a possible implementation manner, after the original face image is obtained, three-dimensional reconstruction may be performed on the original face image based on a preset three-dimensional reconstruction algorithm, so as to obtain the original face three-dimensional model corresponding to the original face image.
As another possible implementation manner, after the original face image is obtained, the face feature point recognition may be performed on the original face image, and a corresponding face three-dimensional model is obtained according to the face feature point recognition result, where the obtained face three-dimensional model is the original face three-dimensional model corresponding to the original face image.
Specifically, the face three-dimensional model corresponding to the face feature point recognition result may be obtained according to a correspondence between pre-stored face feature points and the face three-dimensional model, and the obtained face three-dimensional model is an original face three-dimensional model corresponding to an original face image.
It should be noted that the three-dimensional model of the face in the corresponding relationship is established in advance. For example, a three-dimensional face model may be established by performing three-dimensional reconstruction based on depth information of a corresponding face image and a two-dimensional face image, or a three-dimensional face model may be established in advance according to the face image and a preset three-dimensional reconstruction algorithm, which is not limited in this embodiment.
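The patent does not specify a reconstruction algorithm. As a purely illustrative sketch of the depth-based variant mentioned above, one common starting point is back-projecting a depth map into camera-space 3-D points under a pinhole camera model; the function name and intrinsics parameters (fx, fy, cx, cy) below are assumptions, not part of the patent:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W) into camera-space 3-D points.

    A pinhole camera model is assumed: fx/fy are focal lengths in pixels
    and (cx, cy) is the principal point. Returns an (H, W, 3) array that
    could feed a surface-reconstruction step to build the face model.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

These points would then be meshed or fitted to a deformable face model; that later stage is outside the scope of this sketch.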
And 102, acquiring three-dimensional geometric characteristics of the original human face three-dimensional model.
The three-dimensional geometric features may include, but are not limited to, nose bridge height, the ratio of eye spacing to face width, lip thickness, lip width, the ratio of lip thickness to lip width, nose length, nose width, cheek length, cheek width, and similar measurements.
It should be understood that, in different application scenarios, the three-dimensional geometric features of the original face three-dimensional model may be obtained in multiple ways, and this embodiment provides several possible implementation manners:
as a possible implementation manner, the three-dimensional geometric features corresponding to the original face three-dimensional model may be obtained according to a pre-stored corresponding relationship between the face three-dimensional model and the three-dimensional geometric features.
As another possible implementation mode, after the original face three-dimensional model is obtained, the model feature points in the original face three-dimensional model can be determined, and the three-dimensional geometric features of the original face three-dimensional model are obtained by analyzing the coordinate information of the model feature points.
The model feature points of the three-dimensional face model refer to key points of a face in the surface of the three-dimensional face model.
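As an illustration of how such features could be derived from the coordinates of model feature points, the hypothetical sketch below computes two of the features listed above from a dictionary of named 3-D landmarks; the landmark names are invented for this example and are not the patent's key-point set:

```python
import numpy as np

def geometric_features(lm):
    """Derive sample 3-D geometric features from named landmark coordinates.

    `lm` maps illustrative landmark names to (x, y, z) tuples.
    """
    dist = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    eye_spacing = dist(lm["left_eye"], lm["right_eye"])
    face_width = dist(lm["left_cheek"], lm["right_cheek"])
    nose_length = dist(lm["nose_bridge"], lm["nose_tip"])
    return {
        "eye_spacing_to_face_width": eye_spacing / face_width,
        "nose_length": nose_length,
    }
```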
And 103, comparing the three-dimensional geometric characteristics of the original human face three-dimensional model with the three-dimensional geometric characteristics of all human face templates in the human face template group to obtain a target human face template similar to the original human face three-dimensional model.
It should be noted that the face templates in the face template group may include, but are not limited to, celebrity face templates, and may also be face templates of other users, for example, a face template with well-proportioned overall facial features, or a face template showing a good cosmetic result.
It should be understood that there may be one or more target face templates similar to the faces in the original face image.
When there are multiple candidate face templates similar to the face in the original face image, the candidate templates may be presented to the user, the template selected by the user from among the candidates may be obtained, and the selected template is used as the target face template.
As an exemplary embodiment, in order to reduce user operations and improve intelligence, the three-dimensional geometric features of the original face three-dimensional model may be compared with the three-dimensional geometric features of all face templates in the face template group, so as to obtain the target face template most similar to the original face three-dimensional model.
The specific process of comparing the three-dimensional geometric features of the original face three-dimensional model with those of all face templates in the face template group to obtain the most similar target face template is as follows: first, the similarity between the original face three-dimensional model and each face template is computed from their three-dimensional geometric features; then the maximum of these similarities is determined, and the face template corresponding to the maximum similarity is taken as the target face template.
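A minimal sketch of this comparison, assuming each face is summarized by a numeric feature vector and using cosine similarity as one plausible similarity measure (the patent does not fix the measure, and all names here are illustrative):

```python
import numpy as np

def most_similar_template(orig_feat, templates):
    """Return the name of the template whose feature vector is most
    similar to the original model's, plus all similarity scores.

    `templates` maps template names to feature vectors.
    """
    def cosine(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = {name: cosine(orig_feat, feat) for name, feat in templates.items()}
    return max(sims, key=sims.get), sims
```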
And step 104, adjusting the original face image according to the three-dimensional geometric characteristics of the target face template to obtain the target face image.
It should be noted that, in different application scenarios, the original face image may be adjusted according to the three-dimensional geometric features of the target face template in different ways to obtain the target face image. Examples follow:
as an example, as shown in fig. 2, step 104 may include:
step 201, obtaining a three-dimensional transformation vector of each model feature point in the original human face three-dimensional model according to the three-dimensional geometric features of the target human face template and the three-dimensional geometric features of the original human face three-dimensional model.
Step 202, obtaining the pose information of the face in the original face image.
The posture information may include, but is not limited to, information such as a rotation direction and a rotation angle.
It should be noted that, in different application scenarios, the pose information of the face in the original face image can be obtained in different manners. Examples are as follows:
as an example, face feature points may be extracted from an original face image, and pose information of a face in the original face image may be determined according to the extracted face feature points.
The face feature points may be feature points of key positions on the face, such as eye feature points, eyebrow feature points, ear feature points, mouth feature points, and the like.
As another example, when the original face image includes depth information, the pose information of the face in the original face image may be determined according to the depth information of the face in the original face image and the reference face three-dimensional model.
Specifically, three-dimensional information of the face in the original face image is determined from the depth information of the face, that three-dimensional information is compared with the three-dimensional information of the reference face three-dimensional model, and the pose information of the face in the original face image is determined from the comparison result.
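One standard way to carry out such a comparison of two corresponding sets of 3-D points is the Kabsch algorithm, which recovers the rotation (i.e., the rotation direction and angle that make up the pose) aligning them. This is an illustrative sketch, not the patent's stated method:

```python
import numpy as np

def estimate_rotation(observed, reference):
    """Kabsch algorithm: rotation R such that R @ reference_i ~ observed_i.

    Both inputs are (N, 3) arrays of already-centred, corresponding points.
    """
    H = reference.T @ observed
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against a reflection solution
    return Vt.T @ D @ U.T
```

The rotation direction and angle can then be read off the returned matrix, e.g. via its axis-angle decomposition.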
And step 203, mapping the three-dimensional transformation vector of each model feature point in the original human face three-dimensional model to a two-dimensional plane according to the pose information of the human face to obtain the two-dimensional transformation vector of each image feature point in the two-dimensional human face image.
It needs to be understood that the model feature points in the original face three-dimensional model correspond to the image feature points in the two-dimensional face image.
The model feature points of the three-dimensional face model are key points of the face in the surface of the three-dimensional face model.
And 204, transforming corresponding image feature points in the original face image according to the two-dimensional transformation vector of each image feature point in the two-dimensional face image to obtain a target face image.
It should be noted that the model feature points of the original face three-dimensional model correspond to those of the face template.
In this example, the three-dimensional transformation vector of each model feature point between the original face and the similar face template is determined from the three-dimensional geometric features of the template and of the original face three-dimensional model. These vectors are then mapped to a two-dimensional plane according to the pose information of the face in the original face image, yielding the two-dimensional transformation vector of each corresponding image feature point. Transforming the corresponding image feature points in the original face image by these two-dimensional vectors produces a transformed face image (the target face image) whose face carries the characteristics of the template face and is close to it, so that a micro-shaping effect is obtained for the original face image while the user's own skin color and facial features are preserved.
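The mapping of steps 201-204 can be sketched as follows, using an orthographic projection as a simple stand-in for the camera model, which the patent leaves unspecified (all names are illustrative):

```python
import numpy as np

def project_displacements(deltas3d, R):
    """Map per-feature-point 3-D transformation vectors to 2-D.

    `deltas3d` is (N, 3); `R` is the pose rotation of the face in the
    image. Rotating into the image pose and dropping the depth axis is
    an orthographic stand-in for the projection step.
    """
    rotated = deltas3d @ R.T   # displacements expressed in image pose
    return rotated[:, :2]      # (N, 2) two-dimensional transform vectors

def warp_feature_points(points2d, deltas2d):
    """Shift each image feature point by its 2-D transformation vector."""
    return points2d + deltas2d
```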
As another exemplary embodiment, as shown in fig. 3, step 104 may include:
step 301, adjusting the original face three-dimensional template according to the three-dimensional geometric characteristics of the target face template to obtain a target face three-dimensional model.
Step 302, adjusting the original face image according to the target face three-dimensional model to obtain a target face image.
As an exemplary embodiment, after the target face three-dimensional model is obtained, the pose information of the face in the original face image may be obtained; then, according to that pose information, the target face three-dimensional model is mapped to a two-dimensional plane to obtain an adjusted face image, and the corresponding pixels of the original face image are replaced with those of the adjusted face image to obtain the target face image.
In this example, the three-dimensional model of the original face is adjusted according to the three-dimensional geometric features of the target face template to obtain a three-dimensional model of the target face, then the three-dimensional model of the target face is mapped to a two-dimensional plane according to the pose information of the face to obtain an adjusted face image, and then the corresponding pixels of the original face image are correspondingly replaced by the adjusted face image to obtain the target face image. Therefore, in the process of face adjustment, the face in the original face image is adjusted by combining the posture information of the face and the three-dimensional geometric characteristics of the similar face template, so that the influence of the face posture difference on the face adjustment effect is improved, the skin color in the face image is not changed before and after adjustment, the face in the target face image is natural and real, the provided face micro-shaping effect is good, the user can conveniently obtain the micro-shaping effect of the user, and the user experience is improved.
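The pixel-replacement step described above can be sketched as a masked copy, where the mask marks the pixels covered by the projected target model; this is an assumption made for illustration, as the patent does not detail the replacement:

```python
import numpy as np

def replace_face_pixels(original, adjusted, mask):
    """Replace the face region of `original` with the adjusted rendering.

    `mask` is a boolean (H, W) array marking pixels covered by the
    projected target model; background and hair are left untouched.
    """
    out = original.copy()
    out[mask] = adjusted[mask]
    return out
```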
As another exemplary implementation manner, after the target face three-dimensional model is obtained, model feature points in the target face three-dimensional model may also be obtained, image feature points in the original face image may also be obtained, and image feature points corresponding to the original face image may be directly replaced according to a mapping relationship between the model feature points of the target face three-dimensional model and the image feature points of the original face image, so as to obtain the target face image.
The model feature points of the three-dimensional face model refer to key points of a face in the surface of the three-dimensional face model.
The image feature points of the original face image refer to key points of a face in the original face image.
The face image processing method of the embodiment of the invention obtains an original face three-dimensional model corresponding to an original face image, obtains the three-dimensional geometric features of that model, compares them with the three-dimensional geometric features of all face templates in a face template group to obtain a target face template similar to the original face three-dimensional model, and adjusts the face in the original face image according to the three-dimensional geometric features of the target face template. Because the face is adjusted using the three-dimensional geometric features of a similar template, the facial features and face shape of the adjusted face resemble the template while the skin remains unchanged before and after adjustment, so the adjusted face looks more natural and realistic, a micro-shaping effect is provided to the user accurately and conveniently, and the user experience is improved.
Based on the above embodiment, to meet the user's demand for beautifying the target face image, as an exemplary implementation, after the target face image is obtained, face skin color recognition may be performed on it, makeup information corresponding to the recognition result may be acquired, and the target face image may be beautified according to that makeup information. Because the makeup information is matched to the skin color of the face in the target face image, the applied makeup suits the face and the beautifying effect is more natural and realistic.
The makeup information may include, but is not limited to, eye shadow, blush, beautiful pupil, and lip color, which is not limited by this embodiment.
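A hypothetical sketch of mapping a recognised skin tone to makeup parameters; the tone labels and RGB values below are invented purely for illustration:

```python
def makeup_for_skin_tone(tone):
    """Look up makeup parameters matched to a recognised skin tone.

    Labels and colour values are illustrative placeholders, not values
    from the patent.
    """
    table = {
        "fair":   {"blush": (255, 192, 203), "lip": (200, 80, 100)},
        "medium": {"blush": (230, 140, 140), "lip": (170, 60, 80)},
        "deep":   {"blush": (180, 90, 90),   "lip": (120, 40, 60)},
    }
    # Fall back to a neutral set for unrecognised tones.
    return table.get(tone, table["medium"])
```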
In order to implement the above embodiments, the present invention further provides a processing apparatus for a face image.
Fig. 4 is a schematic structural diagram of a face image processing apparatus according to an embodiment of the present invention.
As shown in fig. 4, the apparatus for processing a face image includes a first obtaining module 110, a second obtaining module 120, a comparing module 130, and a processing module 140, wherein:
the first obtaining module 110 is configured to obtain an original face three-dimensional model corresponding to an original face image.
And a second obtaining module 120, configured to obtain a three-dimensional geometric feature of the original face three-dimensional model.
And the comparison module 130 is configured to compare the three-dimensional geometric features of the original face three-dimensional model with the three-dimensional geometric features of all face templates in the face template group, so as to obtain a target face template similar to the original face three-dimensional model.
And the processing module 140 is configured to adjust the original face image according to the three-dimensional geometric feature of the target face template to obtain a target face image.
The three-dimensional geometric features may include, but are not limited to, information on bridge height, eye spacing to face width ratio, lip thickness, lip width, lip thickness to lip width ratio, nose length, nose width, cheek length, cheek width, etc.
In an embodiment of the present invention, on the basis of the embodiment shown in fig. 4, as shown in fig. 5, the processing module 140 may include:
the first processing unit 141 is configured to obtain a three-dimensional transformation vector of each model feature point in the original face three-dimensional model according to the three-dimensional geometric feature of the target face template and the three-dimensional geometric feature of the original face three-dimensional model.
The obtaining unit 142 is configured to obtain pose information of a face in an original face image.
And the mapping unit 143 is configured to map the three-dimensional transformation vector of each model feature point in the original human face three-dimensional model to a two-dimensional plane according to the pose information of the human face, so as to obtain a two-dimensional transformation vector of each image feature point in the two-dimensional human face image.
The second processing unit 144 is configured to transform the corresponding image feature points in the original face image according to the two-dimensional transformation vector of each image feature point in the two-dimensional face image, so as to obtain a target face image.
In an embodiment of the present invention, based on the embodiment shown in fig. 4, as shown in fig. 6, the processing module 140 may include:
the first adjusting unit 145 is configured to adjust the original face three-dimensional template according to the three-dimensional geometric feature of the target face template, so as to obtain a target face three-dimensional model.
And a second adjusting unit 146, configured to adjust the original face image according to the target face three-dimensional model to obtain a target face image.
In an embodiment of the present invention, the second adjusting unit 146 is specifically configured to: acquiring the pose information of the face in the original face image, mapping the target face three-dimensional model to a two-dimensional plane according to the pose information of the face to obtain an adjusted face image, and correspondingly replacing corresponding pixels of the original face image with the adjusted face image to obtain the target face image.
In an embodiment of the present invention, the comparison module 130 is specifically configured to: compute the similarity between the original face three-dimensional model and each face template, obtain the maximum of these similarities, and take the face template corresponding to the maximum similarity as the target face template.
It should be noted that the explanation of the embodiment of the method for processing a face image is also applicable to the apparatus for processing a face image of the embodiment, and is not repeated herein.
In order to meet the requirement of the user for beautifying the target face image, as an exemplary implementation manner, the apparatus may further include a beautifying module (not shown in the figure), where the beautifying module is configured to perform face skin color recognition on the target face image, acquire makeup information corresponding to the face skin color recognition result, and beautify the target face image according to the makeup information. Therefore, the beautifying information matched with the face skin color in the target face image is adopted to beautify the target face image, so that the beautifying in the target face image is matched with the face skin color, and the beautifying effect is more natural and real.
The makeup information may include, but is not limited to, eye shadow, blush, beautiful pupil, and lip color, which is not limited by this embodiment.
The face image processing device of the embodiment of the invention obtains an original face three-dimensional model corresponding to an original face image, obtains the three-dimensional geometric features of that model, compares them with the three-dimensional geometric features of all face templates in a face template group to obtain a target face template similar to the original face three-dimensional model, and adjusts the face in the original face image according to the three-dimensional geometric features of the target face template. Because the face is adjusted using the three-dimensional geometric features of a similar template, the facial features and face shape of the adjusted face resemble the template while the skin remains unchanged before and after adjustment, so the adjusted face looks more natural and realistic, a micro-shaping effect is provided to the user accurately and conveniently, and the user experience is improved.
In order to implement the above embodiments, the present invention further provides an electronic device.
Fig. 7 is a schematic diagram of the internal structure of the electronic device 200 according to an embodiment of the present invention. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected by a system bus 210. The memory 230 of the electronic device 200 stores an operating system and computer-readable instructions, which can be executed by the processor 220 to implement the face image processing method of the embodiments of the present application. The processor 220 provides the computing and control capabilities that support the operation of the entire electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 250 may be a touch layer covering the display 240, a button, a trackball, or a touch pad arranged on the housing of the electronic device 200, or an external keyboard, touch pad, or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, or smart glasses), etc.
It will be understood by those skilled in the art that the structure shown in fig. 7 is only a schematic diagram of the part of the structure relevant to the present application and does not constitute a limitation on the electronic device 200 to which the present application is applied; a specific electronic device 200 may include more or fewer components than shown in the figure, combine some components, or arrange the components differently.
As one possible implementation, fig. 8 shows an image processing circuit according to an embodiment of the present application; the image processing circuit can be implemented using hardware and/or software components.
As shown in fig. 8, the image processing circuit specifically includes an image unit 310, a depth information unit 320, and a processing unit 330.
The image unit 310 is configured to output a two-dimensional image.
The depth information unit 320 is configured to output depth information.
In the embodiment of the present application, a two-dimensional image may be obtained by the image unit 310, and depth information corresponding to the image may be obtained by the depth information unit 320.
The processing unit 330 is electrically connected to the image unit 310 and the depth information unit 320, respectively, and configured to perform three-dimensional reconstruction according to the two-dimensional image obtained by the image unit and the corresponding depth information obtained by the depth information unit, so as to obtain a reconstructed three-dimensional model.
In this embodiment, the two-dimensional image obtained by the image unit 310 may be sent to the processing unit 330, the depth information corresponding to the image obtained by the depth information unit 320 may be sent to the processing unit 330, and the processing unit 330 may perform three-dimensional reconstruction according to the image and the depth information and output a reconstructed three-dimensional model.
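The reconstruction step can be illustrated by back-projecting each depth pixel into space through a pinhole camera model. This is only a sketch of the geometric core: the intrinsic parameters (fx, fy, cx, cy) are placeholders, and a full pipeline would additionally fuse the image texture and fit a face mesh:

```python
import numpy as np

def back_project(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) into a 3-D point cloud
    using a pinhole camera model; one point per pixel, shape H x W x 3."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

The resulting point cloud is what a meshing step (e.g. connecting key points into a model frame, as the embodiments describe) would consume.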
Further, as a possible implementation manner of the present application, referring to fig. 9, on the basis of the embodiment shown in fig. 8, the image processing circuit may further include:
as a possible implementation manner, the image unit 310 may specifically include: an Image sensor 311 and an Image Signal Processing (ISP) processor 312 electrically connected to each other. Wherein, the first and the second end of the pipe are connected with each other,
The image sensor 311 is configured to output raw image data.
The ISP processor 312 is configured to output an image according to the raw image data.
In this embodiment, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes it to collect image statistics that can be used to determine one or more control parameters of the image sensor 311, and produces an image in YUV or RGB format. The image sensor 311 may include a color filter array (e.g., a Bayer filter) and corresponding photosites; it captures the light intensity and wavelength information at each photosite and provides a set of raw image data that the ISP processor 312 can process. The ISP processor 312 processes the raw image data into a YUV- or RGB-format image and sends it to the processing unit 330.
When processing the raw image data, the ISP processor 312 may operate on it pixel by pixel in any of several formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw data while gathering statistical information about it. These operations may be performed at the same or different bit-depth precisions.
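A minimal sketch of that bit-depth handling, assuming the goal is simply to scale raw samples of any supported depth to 8-bit output while collecting statistics (a real ISP does far more: demosaicing, white balance, noise reduction):

```python
import numpy as np

def normalize_raw(raw, bit_depth):
    """Scale raw sensor samples of the given bit depth (8/10/12/14)
    to 8-bit, gathering simple statistics along the way."""
    max_val = (1 << bit_depth) - 1          # e.g. 1023 for 10-bit data
    stats = {"mean": float(raw.mean()), "max": int(raw.max())}
    out = np.round(raw.astype(np.float64) / max_val * 255.0).astype(np.uint8)
    return out, stats
```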
As a possible implementation, the depth information unit 320 includes a structured light sensor 321 and a depth map generating chip 322, which are electrically connected.
The structured light sensor 321 is configured to generate an infrared speckle pattern.
The depth map generating chip 322 is configured to output depth information according to the infrared speckle pattern; the depth information comprises a depth map.
In this embodiment, the structured light sensor 321 projects speckle structured light onto the subject, captures the structured light reflected by the subject, and images the reflection to obtain an infrared speckle pattern. The structured light sensor 321 sends this pattern to the depth map generating chip 322, which determines how the structured light has been deformed, derives the depth of the subject from that deformation, and thereby obtains a Depth Map indicating the depth of each pixel in the infrared speckle pattern. The depth map generating chip 322 then sends the depth map to the processing unit 330.
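The deformation-to-depth step is, at its core, triangulation: each speckle dot shifts by a disparity proportional to how close the surface is. A sketch under that standard model (the patent does not disclose the chip's actual formula, and the focal length and projector-sensor baseline below are placeholders):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth (metres) from the pixel shift of a speckle
    dot between its reference position and its observed position."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: doubling the disparity halves the depth, which is why structured-light depth precision degrades with distance.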
As a possible implementation, the processing unit 330 includes a CPU331 and a GPU (Graphics Processing Unit) 332, which are electrically connected.
The CPU331 is configured to align the image and the depth map according to calibration data and to output a three-dimensional model based on the aligned image and depth map.
The GPU332 is configured to determine a matched target three-dimensional template according to the three-dimensional model and to output information associated with the target three-dimensional template.
In this embodiment, the CPU331 acquires the image from the ISP processor 312 and the depth map from the depth map generating chip 322 and, using previously obtained calibration data, aligns the two-dimensional image with the depth map to determine the depth information corresponding to each pixel in the image. The CPU331 then performs three-dimensional reconstruction from the depth information and the image to obtain a three-dimensional model.
The CPU331 sends the three-dimensional model to the GPU332, so that the GPU332 can perform the three-dimensional model processing method described in the foregoing embodiments, adjusting the positions of some key points in the model and/or rendering its skin texture.
Specifically, the GPU332 may determine the matched target three-dimensional template according to the three-dimensional model, annotate the image with the information associated with that template, and output the annotated image.
Further, the image processing circuit may further include: a display unit 340.
The display unit 340 is electrically connected to the GPU332 and is configured to display the annotated image.
Specifically, the enhanced image produced by the GPU332 may be displayed by the display 340.
Optionally, the image processing circuit may further include: an encoder 350 and a memory 360.
In this embodiment, the beautified image processed by the GPU332 may additionally be encoded by the encoder 350 and stored in the memory 360; the encoder 350 may be implemented by a coprocessor.
In one embodiment, the memory 360 may comprise multiple memories or be divided into multiple memory spaces; the image data processed by the GPU332 may be stored in a dedicated memory, or a dedicated memory space, which may support DMA (Direct Memory Access). The memory 360 may be configured to implement one or more frame buffers.
The above process is described in detail below with reference to fig. 9.
As shown in fig. 9, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes it to collect image statistics that can be used to determine one or more control parameters of the image sensor 311, produces an image in YUV or RGB format, and sends that image to the CPU331.
As shown in fig. 9, the structured light sensor 321 projects speckle structured light onto the subject, captures the structured light reflected by the subject, and images the reflection to obtain an infrared speckle pattern. The structured light sensor 321 sends the pattern to the depth map generating chip 322, which determines how the structured light has been deformed, derives the depth of the subject from that deformation, and obtains a Depth Map. The depth map generating chip 322 sends the depth map to the CPU331.
The CPU331 acquires the two-dimensional image from the ISP processor 312 and the depth map from the depth map generating chip 322 and, using previously obtained calibration data, aligns the face image with the depth map to determine the depth information corresponding to each pixel in the image. The CPU331 then performs three-dimensional reconstruction from the depth information and the two-dimensional image to obtain a reconstructed three-dimensional model.
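The alignment step relies on the calibrated extrinsics between the depth camera and the color camera. A minimal sketch of its geometric core, assuming the calibration is given as a rotation matrix R and translation vector t (the patent does not specify the calibration representation):

```python
import numpy as np

def to_color_frame(points_xyz, R, t):
    """Transform 3-D points (N x 3) from the depth camera's coordinate
    frame into the color camera's frame using calibrated extrinsics."""
    return np.asarray(points_xyz) @ np.asarray(R).T + np.asarray(t)
```

Projecting the transformed points through the color camera's intrinsics then tells the CPU which image pixel each depth sample belongs to, i.e. the per-pixel depth lookup the paragraph above describes.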
The CPU331 sends the three-dimensional model to the GPU332, so that the GPU332 can execute the three-dimensional model processing method described in the foregoing embodiment, adjusting the positions of some key points in the model and/or rendering its skin texture. The three-dimensional model processed by the GPU332 may be displayed by the display 340 and/or encoded by the encoder 350 and stored in the memory 360.
For example, the control method may be implemented by the processor 220 of fig. 7, or by the image processing circuit of fig. 9 (specifically, the CPU331 and the GPU332), performing the following steps:
the CPU331 acquires a two-dimensional original face image and the depth information corresponding to it; the CPU331 performs three-dimensional reconstruction from the depth information and the original face image to obtain an original face three-dimensional model, the model comprising a plurality of key points, a model frame formed by connecting the key points, and a skin texture covering the model frame; the CPU331 further obtains the three-dimensional geometric features of the original face three-dimensional model; the GPU332 compares these features with the three-dimensional geometric features of every face template in the face template group to obtain a target face template similar to the original face three-dimensional model, and adjusts the original face image according to the three-dimensional geometric features of the target face template to obtain the target face image.
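The template-matching step above (selecting the face template with the maximum similarity, as claim 1 recites) could be sketched as follows. Cosine similarity and flat feature vectors are illustrative assumptions; the patent does not fix a particular similarity measure or feature encoding:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_target_template(original_features, template_group):
    """Compare the original model's geometric feature vector against
    every template in the group and return the most similar one,
    i.e. the template with the maximum similarity."""
    similarities = {name: cosine_similarity(original_features, feats)
                    for name, feats in template_group.items()}
    best = max(similarities, key=similarities.get)
    return best, similarities[best]
```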
To implement the above embodiments, the present invention further proposes a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor, the face image processing method of the above embodiments is performed.
To implement the above embodiments, the present invention further provides a computer program product; when the instructions in the computer program product are executed by a processor, the face image processing method of the above embodiments is performed.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing the steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (11)

1. A method for processing a face image is characterized by comprising the following steps:
acquiring an original face three-dimensional model corresponding to an original face image;
acquiring three-dimensional geometric characteristics of the original human face three-dimensional model;
comparing the three-dimensional geometric features of the original face three-dimensional model with the three-dimensional geometric features of all face templates in the face template group to obtain a target face template similar to the original face three-dimensional model;
adjusting the original face image according to the three-dimensional geometrical characteristics of the target face template to obtain a target face image;
the comparing the three-dimensional geometric features of the original face three-dimensional model with the three-dimensional geometric features of all face templates in the face template group to obtain a target face template similar to the original face three-dimensional model comprises:
obtaining the similarity between the original human face three-dimensional model and each human face template according to the three-dimensional geometric characteristics of the original human face three-dimensional model and the three-dimensional geometric characteristics of each human face template in the human face template group;
and according to the similarity between the original face three-dimensional model and each face template, obtaining the maximum similarity in the similarities, and taking the face template corresponding to the maximum similarity as the target face template.
2. The method according to claim 1, wherein the adjusting the original face image according to the three-dimensional geometric feature of the target face template to obtain the target face image comprises:
obtaining a three-dimensional transformation vector of each model feature point in the original human face three-dimensional model according to the three-dimensional geometric features of the target human face template and the three-dimensional geometric features of the original human face three-dimensional model;
acquiring the pose information of the face in the original face image;
mapping the three-dimensional transformation vector of each model feature point in the original human face three-dimensional model to a two-dimensional plane according to the posture information of the human face to obtain the two-dimensional transformation vector of each image feature point in the two-dimensional human face image;
and transforming the corresponding image feature points in the original face image according to the two-dimensional transformation vector of each image feature point in the two-dimensional face image to obtain a target face image.
3. The method according to claim 1, wherein the adjusting the original face image according to the three-dimensional geometric feature of the target face template to obtain the target face image comprises:
adjusting the original face three-dimensional model according to the three-dimensional geometrical characteristics of the target face template to obtain a target face three-dimensional model;
and adjusting the original face image according to the target face three-dimensional model to obtain a target face image.
4. The method according to claim 3, wherein the adjusting the original face image according to the three-dimensional model of the target face to obtain the target face image comprises:
acquiring the pose information of the face in the original face image;
mapping the target human face three-dimensional model to a two-dimensional plane according to the posture information of the human face to obtain an adjusted human face image;
and correspondingly replacing the corresponding pixels of the original face image with the adjusted face image to obtain the target face image.
5. An apparatus for processing a face image, comprising:
the first acquisition module is used for acquiring an original face three-dimensional model corresponding to an original face image;
the second acquisition module is used for acquiring the three-dimensional geometric characteristics of the original human face three-dimensional model;
the comparison module is used for comparing the three-dimensional geometric characteristics of the original human face three-dimensional model with the three-dimensional geometric characteristics of all human face templates in the human face template group to obtain a target human face template similar to the original human face three-dimensional model;
the processing module is used for adjusting the original face image according to the three-dimensional geometric characteristics of the target face template to obtain a target face image;
the comparison module is specifically configured to:
obtaining the similarity between the original human face three-dimensional model and each human face template according to the three-dimensional geometric characteristics of the original human face three-dimensional model and the three-dimensional geometric characteristics of each human face template in the human face template group;
and according to the similarity between the original face three-dimensional model and each face template, obtaining the maximum similarity in the similarities, and taking the face template corresponding to the maximum similarity as the target face template.
6. The apparatus of claim 5, wherein the processing module comprises:
the first processing unit is used for obtaining a three-dimensional transformation vector of each model feature point in the original human face three-dimensional model according to the three-dimensional geometric features of the target human face template and the three-dimensional geometric features of the original human face three-dimensional model;
the acquiring unit is used for acquiring the pose information of the face in the original face image;
the mapping unit is used for mapping the three-dimensional transformation vector of each model feature point in the original human face three-dimensional model to a two-dimensional plane according to the posture information of the human face so as to obtain the two-dimensional transformation vector of each image feature point in the two-dimensional human face image;
and the second processing unit is used for transforming the corresponding image feature points in the original face image according to the two-dimensional transformation vector of each image feature point in the two-dimensional face image so as to obtain a target face image.
7. The apparatus of claim 5, wherein the processing module comprises:
the first adjusting unit is used for adjusting the original face three-dimensional model according to the three-dimensional geometric characteristics of the target face template to obtain a target face three-dimensional model;
and the second adjusting unit is used for adjusting the original face image according to the target face three-dimensional model so as to obtain a target face image.
8. The apparatus according to claim 7, wherein the second adjusting unit is specifically configured to:
and acquiring pose information of the face in the original face image, mapping the target face three-dimensional model to a two-dimensional plane according to the pose information of the face to obtain an adjusted face image, and correspondingly replacing corresponding pixels of the original face image with the adjusted face image to obtain the target face image.
9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for processing a face image according to any one of claims 1 to 4.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for processing a face image according to any one of claims 1 to 4.
11. A computer program product, characterized in that when the instructions in the computer program product are executed by a processor, the method for processing a face image according to any one of claims 1-4 is performed.
CN201810935096.9A 2018-08-16 2018-08-16 Face image processing method and device and electronic equipment Active CN109242760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810935096.9A CN109242760B (en) 2018-08-16 2018-08-16 Face image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN109242760A CN109242760A (en) 2019-01-18
CN109242760B true CN109242760B (en) 2023-02-28

Family

ID=65070846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810935096.9A Active CN109242760B (en) 2018-08-16 2018-08-16 Face image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109242760B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084675A (en) * 2019-04-24 2019-08-02 文允 Commodity selling method, the network terminal and the device with store function on a kind of line
CN111008935B (en) * 2019-11-01 2023-12-12 北京迈格威科技有限公司 Face image enhancement method, device, system and storage medium
CN112991191A (en) * 2019-12-13 2021-06-18 北京金山云网络技术有限公司 Face image enhancement method and device and electronic equipment
CN113076779A (en) * 2020-01-03 2021-07-06 甄选医美邦(杭州)网络科技有限公司 Shaping simulation matching method, shaping simulation matching system, readable storage medium and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853523A (en) * 2010-05-18 2010-10-06 南京大学 Method for adopting rough drawings to establish three-dimensional human face molds
CN104036546A (en) * 2014-06-30 2014-09-10 清华大学 Method for carrying out face three-dimensional reconstruction at any viewing angle on basis of self-adaptive deformable model
CN106022267A (en) * 2016-05-20 2016-10-12 北京师范大学 Automatic positioning method of weak feature point of three-dimensional face model
WO2018076437A1 (en) * 2016-10-25 2018-05-03 宇龙计算机通信科技(深圳)有限公司 Method and apparatus for human facial mapping


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An automatic and robust three-dimensional face reconstruction method; Yang Zhen et al.; Microcomputer Information; 2007-11-05 (No. 31); full text *
Mesh-deformation-based three-dimensional face reconstruction from images; Dong Hongwei; Journal of Computer-Aided Design & Computer Graphics; 2012-07-15 (No. 07); full text *

Also Published As

Publication number Publication date
CN109242760A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN108765273B (en) Virtual face-lifting method and device for face photographing
CN108447017B (en) Face virtual face-lifting method and device
CN109242760B (en) Face image processing method and device and electronic equipment
CN109829930B (en) Face image processing method and device, computer equipment and readable storage medium
KR102362544B1 (en) Method and apparatus for image processing, and computer readable storage medium
CN109118569B (en) Rendering method and device based on three-dimensional model
JP5463866B2 (en) Image processing apparatus, image processing method, and program
CN109102559B (en) Three-dimensional model processing method and device
US11069151B2 (en) Methods and devices for replacing expression, and computer readable storage media
CN108171789B (en) Virtual image generation method and system
JP7129502B2 (en) Face image processing method and device, image equipment and storage medium
CN108682050B (en) Three-dimensional model-based beautifying method and device
KR101141643B1 (en) Apparatus and Method for caricature function in mobile terminal using basis of detection feature-point
KR102386642B1 (en) Image processing method and apparatus, electronic device and storage medium
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN109191584B (en) Three-dimensional model processing method and device, electronic equipment and readable storage medium
CN109191393B (en) Three-dimensional model-based beauty method
WO2020034698A1 (en) Three-dimensional model-based special effect processing method and device, and electronic apparatus
CN111062891A (en) Image processing method, device, terminal and computer readable storage medium
CN113628327A (en) Head three-dimensional reconstruction method and equipment
CN111311733A (en) Three-dimensional model processing method and device, processor, electronic device and storage medium
WO2022258013A1 (en) Image processing method and apparatus, electronic device and readable storage medium
CN110852934A (en) Image processing method and apparatus, image device, and storage medium
CN113421197B (en) Processing method and processing system of beautifying image
CN112150387B (en) Method and device for enhancing stereoscopic impression of five sense organs on human images in photo

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant