CN115631516A - Face image processing method, device and equipment and computer readable storage medium

Face image processing method, device and equipment and computer readable storage medium

Publication number: CN115631516A
Application number: CN202110800842.5A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 郑青青, 赵远远
Applicant and current assignee: Tencent Technology Shenzhen Co Ltd
Prior art keywords: face, feature point, forehead, point, preset

Abstract

The present application provides a face image processing method, apparatus, and device, and a computer-readable storage medium. The method comprises the following steps: performing face key point detection on a face image to be processed to obtain a face key point set; determining a first forehead feature point based on a first feature point representing the nasion (nasal root) and a second feature point representing the mandible in the face key point set; determining a second forehead feature point and a third forehead feature point based on the first forehead feature point and the first feature point, combined respectively with a third feature point representing the outer side of the left face contour and a fourth feature point representing the outer side of the right face contour; performing interpolation fitting on the first, second, and third forehead feature points to obtain a forehead feature point set; and performing image processing on the face image to be processed based on the forehead feature point set to obtain an image processing result. The method and apparatus can improve the efficiency and accuracy of face image processing.

Description

Face image processing method, device and equipment and computer readable storage medium
Technical Field
The present application relates to artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for processing a face image.
Background
At present, a face feature point model based on deep machine learning can acquire feature points for most key parts of a face image to be processed, such as the contours of the nose, cheeks, and eyes. For the forehead region, however, factors such as hair style and hairline shape make the semantic information of the forehead feature points output by such a model ambiguous, so the face key point set output by existing neural networks often does not include feature points of the forehead region. To obtain forehead feature points, current approaches mainly rely on geometric-model-based modeling, fitting the face shape by splicing several polynomials or by an elliptical shape. Such methods are not only computationally complex but also adapt poorly to the variety of face shapes, which reduces the efficiency and accuracy of face image processing.
Disclosure of Invention
The embodiments of the present application provide a face image processing method, apparatus, and device, and a computer-readable storage medium, which can improve the efficiency and accuracy of face image processing.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a face image processing method, which comprises the following steps:
carrying out face key point detection on a face image to be processed to obtain a face key point set;
determining a first forehead feature point based on a first feature point representing a nasal root and a second feature point representing a lower jaw in the face key point set; the first forehead feature point represents the forehead highest point on the external contour of the human face;
respectively combining a third characteristic point representing the outer side of the left face contour and a fourth characteristic point representing the outer side of the right face contour based on the first forehead characteristic point and the first characteristic point, and determining a second forehead characteristic point and a third forehead characteristic point; the third feature point and the fourth feature point belong to the face key point set;
performing interpolation fitting on the first forehead feature point, the second forehead feature point and the third forehead feature point to obtain a forehead feature point set; the forehead feature point set represents a forehead contour corresponding to the face to be processed;
and carrying out image processing on the face image to be processed based on the forehead feature point set to obtain an image processing result.
An embodiment of the present application provides a face image processing apparatus, including:
The face key point detection model is used for detecting face key points of a face image to be processed to obtain a face key point set;
the determining module is used for determining a first forehead feature point based on a first feature point representing a nasal root and a second feature point representing a lower jaw in the face key point set; the first forehead feature point represents the forehead highest point on the face external contour; respectively combining a third characteristic point representing the outer side of the left face contour and a fourth characteristic point representing the outer side of the right face contour based on the first forehead characteristic point and the first characteristic point, and determining a second forehead characteristic point and a third forehead characteristic point;
the interpolation fitting module is used for carrying out interpolation fitting on the first forehead feature point, the second forehead feature point and the third forehead feature point to obtain a forehead feature point set; the forehead feature point set represents a forehead contour corresponding to the face to be processed;
and the processing module is used for carrying out image processing on the face image to be processed based on the forehead feature point set to obtain an image processing result.
In the above apparatus, the determining module is further configured to determine the first forehead feature point on the vector of the second feature point pointing to the first feature point based on a distance between the first feature point and the second feature point.
In the above apparatus, the determining module is further configured to multiply a preset distance adjustment factor by the abscissa and the ordinate of the first feature point, respectively, to obtain a first transverse product and a first longitudinal product, the preset adjustment factor being a value greater than 1; calculate the difference obtained by subtracting the preset adjustment factor from a preset threshold, and multiply the difference by the abscissa and the ordinate of the second feature point, respectively, to obtain a second transverse product and a second longitudinal product; take the sum of the first transverse product and the second transverse product as the abscissa of the first forehead feature point; and take the sum of the first longitudinal product and the second longitudinal product as the ordinate of the first forehead feature point, thereby determining the first forehead feature point.
In the above apparatus, the determining module is further configured to calculate a first distance between the second feature point and the first feature point, and a second distance between a preset fixed-distance point and the first feature point, where the change in distance between the preset fixed-distance point and the first feature point as the face angle changes is smaller than a preset change threshold, and the preset fixed-distance point belongs to the face key point set; and obtain the preset adjustment factor based on the ratio of the first distance to the second distance.
In the above apparatus, the determining module is further configured to calculate a first feature vector and a first length of the first feature point pointing to the first forehead feature point, a second feature vector and a second length of the first feature point pointing to the third feature point, and a third feature vector and a third length of the first feature point pointing to the fourth feature point; calculating a middle included angle between the first feature vector and the second feature vector so as to determine a first middle feature vector; calculating a first average of the first length and the second length; determining the second forehead feature point according to the first intermediate feature vector and the first average value; calculating a middle included angle between the first characteristic vector and the third characteristic vector so as to determine a second middle characteristic vector; calculating a second average of the first length and the third length; and determining the third forehead feature point according to the second intermediate feature vector and the second average value.
In the above apparatus, the interpolation fitting module is further configured to obtain a forehead curve constraint corresponding to the face to be processed according to the third feature point, the second forehead feature point, the first forehead feature point, the third forehead feature point, and the fourth feature point; and performing interpolation fitting among the third characteristic point, the second forehead characteristic point, the first forehead characteristic point, the third forehead characteristic point and the fourth characteristic point based on the forehead curve constraint to obtain the forehead characteristic point set comprising the first forehead characteristic point, the second forehead characteristic point and the third forehead characteristic point.
In the above apparatus, the processing module is further configured to divide the to-be-processed face image into a plurality of real face regions according to the face key point set and the forehead feature point set; acquiring a preset special effect face corresponding to the face image to be processed; the preset special effect face is a face template containing a preset special effect image; the preset special-effect face comprises a plurality of preset face areas obtained by performing the same feature point calculation and division processing on the face template in advance; obtaining a special effect pixel corresponding to each face pixel in the to-be-processed face image in the preset special effect face according to the corresponding relation between the real face areas and preset face areas; and carrying out pixel fusion on each face pixel and the corresponding special effect pixel to obtain an image processing result of superposition of the face image to be processed and the preset special effect image.
In the above apparatus, the processing module is further configured to perform triangular mesh division by using each feature point in the face key point set and the forehead feature point set as a vertex and using a mesh division algorithm to obtain the plurality of real face regions formed by triangular meshes.
In the above apparatus, the processing module is further configured to perform, for each face pixel, weighting calculation according to a vertex position of a target real face region where each face pixel is located, so as to obtain a relative position of each face pixel in the target real face region; determining a target preset face area corresponding to the target real face area according to the corresponding relation between the real face areas and preset face areas; and determining pixels corresponding to the relative positions in the target preset face area according to the vertex positions of the target preset face area, and taking the pixels as special-effect pixels corresponding to each face pixel.
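To make the per-pixel mapping concrete, the following Python sketch uses barycentric coordinates as the "relative position" obtained by weighting over the vertices of the target triangle; this reading of the vertex-based weighting, and the function names, are illustrative assumptions (the triangles themselves could come, for example, from a Delaunay triangulation of the feature points, and the real-to-preset triangle correspondence is taken as given):

```python
import numpy as np

def barycentric_weights(p, tri):
    """Weights of pixel p relative to the vertices of triangle tri (3 x 2)."""
    a, b, c = np.asarray(tri, dtype=float)
    m = np.column_stack((b - a, c - a))              # 2 x 2 edge matrix
    u, v = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
    return np.array([1.0 - u - v, u, v])             # weights on (a, b, c)

def map_to_effect_face(p, real_tri, preset_tri):
    """Locate the special-effect pixel for face pixel p: compute its relative
    position in the real face triangle, then re-apply those weights to the
    vertices of the corresponding preset face triangle."""
    w = barycentric_weights(p, real_tri)
    return w @ np.asarray(preset_tri, dtype=float)   # (x, y) in the effect face
```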
In the above apparatus, the processing module is further configured to perform fusion processing on each face pixel and the corresponding special-effect pixel in parallel through a graphics processing module, so as to obtain the image processing result.
In the above apparatus, the processing module is further configured to perform chromaticity fusion on each face pixel and the corresponding special-effect pixel to obtain an intermediate chromaticity; adjust the fusion intensity of the chromaticity of each face pixel and of the intermediate chromaticity through a preset fusion intensity factor, respectively, to obtain a first adjustment result and a second adjustment result; and combine the first adjustment result and the second adjustment result into an image fusion result for each face pixel, which serves as the image processing result.
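A minimal Python sketch of this fusion step, under the assumption that the intensity adjustment amounts to a linear blend between the original chromaticity and the intermediate chromaticity; the concrete chromaticity-fusion operator is left pluggable, with a plain average as a stand-in:

```python
import numpy as np

def fuse_pixel(face_rgb, effect_rgb, strength=0.6, chroma_fuse=None):
    """Blend one face pixel with its corresponding special-effect pixel.

    strength: preset fusion intensity factor in [0, 1] (0.6 is an assumed
    placeholder). chroma_fuse: chromaticity-fusion operator producing the
    intermediate chromaticity; a plain average stands in here.
    """
    face = np.asarray(face_rgb, dtype=float)
    effect = np.asarray(effect_rgb, dtype=float)
    intermediate = chroma_fuse(face, effect) if chroma_fuse else 0.5 * (face + effect)
    # First adjustment result: (1 - strength) * face; second adjustment result:
    # strength * intermediate. Their combination is the per-pixel fusion result.
    return (1.0 - strength) * face + strength * intermediate
```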
In the above apparatus, the processing module is further configured to perform at least one of face segmentation, face alignment, face recognition, and face synthesis on the to-be-processed face image based on the forehead feature point set, so as to obtain the image processing result.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the method provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the present application provides a computer-readable storage medium, which stores executable instructions for causing a processor to implement the method provided by the embodiment of the present application when the processor executes the executable instructions.
The embodiment of the application has the following beneficial effects:
according to the method and the device, a first forehead feature point, a second forehead feature point and a third forehead feature point on a forehead contour can be preliminarily positioned according to key points in a non-forehead region in a face key point set, such as feature points representing the nasal root, the outer side of a left face contour and the outer side of a right face contour; on the basis, more forehead feature points are calculated by adopting a self-adaptive interpolation method, so that a forehead feature point set corresponding to a smoother forehead contour is obtained. The calculation process of the embodiment of the application is faster, and robust forehead feature points can be determined by a self-adaptive interpolation method for various different face types, so that the forehead feature point identification efficiency and accuracy are improved, and the face image processing efficiency and accuracy based on the forehead feature point set are improved.
Drawings
Fig. 1 is an alternative structural schematic diagram of a human face image processing system architecture provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an alternative face image processing apparatus according to an embodiment of the present application;
fig. 3 is an alternative flow chart of a face image processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating an optional effect of a face key point set according to an embodiment of the present application;
fig. 5 is a schematic diagram of an alternative effect of the first forehead feature point provided by the embodiment of the present application;
fig. 6 is a schematic diagram illustrating an alternative effect of the second forehead feature point provided by the embodiment of the present application;
fig. 7 is a schematic diagram illustrating an alternative effect of the third forehead feature point according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating an alternative effect of a forehead feature point set provided in an embodiment of the present application;
fig. 9 is an alternative flow chart of a face image processing method according to an embodiment of the present application;
fig. 10 is a schematic diagram of an optional module structure of the face image processing apparatus applied to the on-line makeup function according to the embodiment of the present application;
fig. 11 is an optional flowchart of the face image processing method according to the embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, references to the terms "first/second/third" merely distinguish similar objects and do not denote a particular order or importance; where permitted, "first/second/third" may be interchanged in a particular order or sequence so that the embodiments of the present application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms and expressions used in the embodiments are explained as follows.
1) Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision making.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. The basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
2) Computer Vision (CV) technology: computer vision is the science of how to make machines "see"; more specifically, it replaces human eyes with cameras and computers to identify, track, and measure targets, and further processes the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of acquiring information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
Face key point detection is an important basic link in face recognition tasks, and accurate detection of face key points plays a key role in many practical applications and research topics, such as face pose recognition and correction, expression recognition, and mouth shape recognition. How to obtain high-precision face key points is therefore a popular research problem in computer vision, image processing, and related fields. Face key point detection also remains challenging under the influence of factors such as face pose and occlusion.
3) Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers can simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied across all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The scheme provided by the embodiment of the application relates to the computer vision technology of artificial intelligence and the like, and is specifically explained by the following embodiment:
With the advent of the video age, more and more users create and share content through short videos and live streams. When recording video or streaming live, good makeup can effectively shape the face, add dimension to it, improve the user's screen presence and confidence in their own image, and attract more attention and enthusiasm from viewers. However, good-looking makeup often takes a long time and various cosmetics, and also requires a certain amount of makeup experience, so that the user can make targeted modifications (including concealer, foundation, setting, eyebrow makeup, eye makeup, lip makeup, etc.) according to their own skin and facial features. This is a high threshold for a large portion of users. Automatic makeup technology in the related art mainly performs face recognition and localization on the image frame to be processed, then maps and deforms the makeup texture of a virtual makeup look to obtain a deformed texture, and finally fuses the deformed texture with the face to be processed, so that the virtual makeup is automatically superimposed on the face in the image frame. The face recognition and localization in this related-art process is mainly based on the following two methods:
1. The face is segmented by a face parsing model to obtain each part of the face, so that affine transformation can be applied to the makeup texture, yielding the mapping relation between the face to be processed and the makeup texture;
2. The face key points of the face to be processed are obtained through a face key point model while feature points are marked on the makeup texture; the mapping relation between the face to be processed and the makeup texture can then be obtained through the conversion relation between their feature points.
For the first method, recognizing and locating the face by face segmentation is time-consuming and difficult to apply directly in scenes with high real-time requirements, such as video calls and live video; moreover, the segmentation result is prone to large errors at occluded positions, making the virtual makeup fit unnatural. Face image processing performed in this way therefore also has low accuracy.
For the second method, real faces are often affected by factors such as hair style and hairline shape, so semantic recognition of forehead points by related-art face key point models is ambiguous, and the output face key points often do not contain feature points of the forehead region; virtual makeup that must be applied to the forehead, such as classical forehead-ornament makeup, highlight makeup, and sticker special effects, therefore cannot be applied. The related art also includes geometric-model-based modeling methods, which assume that the face shape can be fitted by splicing several polynomials or by an ellipse; but such methods are computationally complex, increase the computation and processing time of the network model, and reduce the efficiency of face image processing. They also adapt poorly to the variety of face shapes and to changes in face angle (such as raising the head, lowering the head, or turning to the side), so the forehead key point positions are inaccurate, the virtual makeup overlay effect is poor, and the accuracy of face image processing is reduced.
In addition, both approaches derive the affine transformation between the face to be processed and the makeup texture from only a few feature points, obtaining an affine mapping matrix that relates the two through a translation by a certain distance, a rotation by a certain angle, or scaling by a certain proportion. The accuracy of a mapping relation obtained from so few feature points is low, so the fit in the forehead region easily looks unnatural, which also reduces the accuracy of face image processing.
The following describes an exemplary application of the electronic device provided in the embodiments of the present application, and the electronic device provided in the embodiments of the present application may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, and a portable game device), and may also be implemented as a server. In the following, an exemplary application when the electronic device is implemented as a server will be explained.
Referring to fig. 1, fig. 1 is an optional architecture diagram of a face image processing system 100 according to an embodiment of the present application, in order to support a face image processing application, such as an online makeup application, a live video makeup application, and the like, a terminal 400 (an exemplary terminal 400-1 and a terminal 400-2 are shown) is connected to a server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two.
The terminal 400-1 is configured to receive an operation of the anchor user through the graphical interface 410-1, and collect a face image of the anchor user as a to-be-processed face image based on the operation of the anchor user. And receiving an image processing mode specified by the anchor user through the graphical interface 410-1, for example, selecting a virtual makeup that is desired to be applied to the current face image to be processed from a virtual makeup list displayed on the graphical interface 410-1, and sending the face image to be processed and an identifier of the virtual makeup specified by the operation to the server 200.
The server 200 is configured to perform face key point detection on a face image to be processed to obtain a face key point set; determining a first forehead characteristic point according to a first characteristic point representing a nasal root and a second characteristic point representing a lower jaw in the face key point set; the first forehead feature point represents the forehead highest point on the external contour of the face; according to the first forehead feature point and the first feature point, respectively combining a third feature point representing the outer side of the left face contour and a fourth feature point representing the outer side of the right face contour, and determining a second forehead feature point and a third forehead feature point; the third characteristic point and the fourth characteristic point belong to a face key point set; performing interpolation fitting on the first forehead feature point, the second forehead feature point and the third forehead feature point to obtain a forehead feature point set; representing a forehead contour corresponding to the face image to be processed by the forehead feature point set; based on the forehead feature point set, combining the face key point set to obtain face feature points capable of marking each part in the face image to be processed; the server 200 may acquire an image of a preset virtual makeup from the database 500 by using an identifier of the virtual makeup designated by the first user, and further fuse the preset virtual makeup with the face image to be processed according to the feature points of the face at each part in the face image to be processed to obtain the beauty effect of the face image to be processed, and the beauty effect is used as an image processing result to realize an image processing process of the face image to be processed. The server 200 further sends the image processing result, i.e. the beauty and make-up effect of the face image to be processed, to the terminal 400-1 and the terminal 400-2 through the network 300, and synchronously displays the image processing result to the anchor user and the audience user of the terminal 400-2 on the graphical interface 410-1 and the graphical interface 410-2 of the terminal 400-2.
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as cloud services, a cloud database, cloud computing, cloud functions, cloud storage, a network service, cloud communication, middleware services, domain name services, security services, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited.
It should be noted that, when the electronic device is implemented as a terminal, the terminal may collect a face image to be processed, and locally execute the face image processing method provided in the embodiment of the present application to obtain an image processing result.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a server 200 according to an embodiment of the present disclosure, where the server 200 shown in fig. 2 includes: at least one processor 210, memory 250, at least one network interface 220, and a user interface 230. The various components in server 200 are coupled together by a bus system 240. It is understood that the bus system 240 is used to enable connected communication between these components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are designated as bus system 240 in FIG. 2.
The processor 210 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 230 includes one or more output devices 231, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 250 optionally includes one or more storage devices physically located remotely from processor 210.
The memory 250 includes volatile memory or nonvolatile memory, and can also include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 250 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 250 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 251 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 252 for reaching other computing devices via one or more (wired or wireless) network interfaces 220, exemplary network interfaces 220 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 253 to enable presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 231 (e.g., a display screen, speakers, etc.) associated with the user interface 230;
an input processing module 254 for detecting one or more user inputs or interactions from one of the one or more input devices 232 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided in the embodiments of the present application may be implemented in software, and fig. 2 shows a face image processing apparatus 255 stored in the memory 250, which may be software in the form of programs and plug-ins, and includes the following software modules: a face keypoint detection model 2551, a determination module 2552, an interpolation fitting module 2553 and a processing module 2554, which are logical and therefore can be arbitrarily combined or further split depending on the functions implemented.
The functions of the respective modules will be explained below.
In other embodiments, the apparatus provided in the embodiments of the present application may be implemented in hardware. For example, it may be a processor in the form of a hardware decoding processor programmed to execute the face image processing method provided in the embodiments of the present application; such a processor may be implemented by one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
The face image processing method provided by the embodiment of the present application will be described with reference to exemplary applications and implementations of the server provided by the embodiment of the present application.
Referring to fig. 3, fig. 3 is an optional schematic flow chart of the face image processing method according to the embodiment of the present application, and will be described with reference to the steps shown in fig. 3.
S101, carrying out face key point detection on a face image to be processed to obtain a face key point set.
The face image processing method provided by the embodiment of the application can be applied to scenes in which images containing faces are processed by using an artificial intelligence technology, and exemplarily beautifies the faces, such as scenes in which makeup and beauty are performed on line, face video synthesis, face recognition and the like.
In S101, the face image processing apparatus obtains a face image to be processed, and performs face key point detection on the face image to be processed to obtain a face key point set.
In S101, the detection of the key points of the face is to detect and locate key portions of the face, including eyebrows, eyes, nose, mouth, face contour, etc., of the face from a given face image, i.e., a face image to be processed, and mark the key points at the positions of the detected key portions. Illustratively, a plurality of feature points are marked on contour lines of detected eyes, noses and the like, and each feature point contains coordinate position information and semantic information in the face image to be processed. Here, the semantic information represents a key part where the feature point is located.
In some embodiments, the face image processing apparatus may detect a face key point set from the face image to be processed by using a face key point detection model or a face feature point model. Here, the face key point detection model or the face feature point model is a neural network model obtained by deep machine learning training based on labeled face image sample data, and may be, for example, a 68-point face feature point model based on a face detection Dlib library, or a 96-point or 106-point face feature point model, which is specifically selected according to actual circumstances, and the embodiment of the present application is not limited. Fig. 4 shows a schematic diagram of 106 key points, that is, feature points, of a human face to be processed, which are output by a 106-point human face feature point model in the embodiment of the present application.
Here, as can be seen from fig. 4, the feature points of the forehead area are not included in the face key point set obtained by detecting the face key points, and the feature points of the forehead area need to be calculated by the method in the embodiment of the present application, so as to obtain the key points of each part of the whole face, and improve the accuracy of face image processing.
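As a brief illustration of this detection step, the following Python sketch uses dlib's pretrained 68-point predictor, which the description names as one option; a 96- or 106-point model would be used the same way, with different index semantics:

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# dlib's standard pretrained 68-point landmark model file.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_keypoints(image_rgb: np.ndarray) -> np.ndarray:
    """Detect the first face in the image to be processed and return its key
    points as an (N, 2) array; each index carries fixed semantics (contour,
    eyebrows, nose, eyes, mouth)."""
    rects = detector(image_rgb, 1)             # upsample once for small faces
    shape = predictor(image_rgb, rects[0])     # assumes at least one face found
    return np.array([(pt.x, pt.y) for pt in shape.parts()], dtype=float)
```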
S102, determining a first forehead feature point based on a first feature point representing a nasal root and a second feature point representing a mandible in a face key point set; the first forehead feature point represents the forehead highest point on the external contour of the human face.
In S102, the face image processing apparatus may determine a first feature point representing a nasion and a second feature point representing a mandible according to semantic information of each face key point in the face key point set, and determine a highest point of a forehead region, that is, a first forehead feature point, on an external contour of a face in the face image to be processed according to positions of the first feature point and the second feature point and a distance between the first feature point and the second feature point and by combining a structure of a normal face and a face ratio.
In some embodiments, the face key point set comprises 106 face key points as shown in fig. 4, where points 0-32 mark the face outer contour, points 33-37 mark the left eyebrow upper contour, points 38-42 mark the right eyebrow upper contour, points 43-51 mark the nose midline and lower nose contour, points 52-57 mark the left eye contour, points 58-63 mark the right eye contour, and so on. Each face key point contains its own semantic information: for example, the semantic information of point 43 is the nasion, that of point 0 is the outermost left face, that of point 32 is the outermost right face, and that of point 16 is the mandible (chin). According to the semantic information of each face key point, the face image processing apparatus may determine point 43 as the first feature point and point 16 as the second feature point.
Here, semantic information of face key points obtained by different face key point detection methods or network model detection may be slightly different, and in practical application, the first feature point or the second feature point is not limited to be determined according to the literal meaning of the face key point semantic information, and the first feature point or the second feature point may also be determined according to the representation meaning of semantic information in the actual face key points, or other information that is included in the face key points and can represent the point as a key point on a nasion or a mandible position, and is specifically selected according to the practical situation, which is not limited in the embodiment of the present application.
In some embodiments, based on prior knowledge of normal face structure, the forehead highest point may be located on the straight line through the nasion point and the mandible point. With the first feature point representing the nasion and the second feature point representing the mandible determined, the face image processing apparatus may determine the first forehead feature point, i.e., the position of the forehead highest point, on the vector pointing from the second feature point to the first feature point, based on the distance between the two points and normal face proportions, such as the ratio of the distance between the forehead highest point and the nasion to the distance between the nasion and the mandible.
In some embodiments, the face image processing apparatus may calculate the abscissa and the ordinate of the first forehead feature point in the face image to be processed through formula (1) and formula (2), thereby determining the first forehead feature point:

x = (1 - r) × x2 + r × x1   (1)

y = (1 - r) × y2 + r × y1   (2)

In formulas (1) and (2), r is the preset distance adjustment factor, the coordinates of the first feature point are (x1, y1), and the coordinates of the second feature point are (x2, y2). The face image processing apparatus multiplies the preset distance adjustment factor by the abscissa and the ordinate of the first feature point, respectively, to obtain the first transverse product r × x1 and the first longitudinal product r × y1; it calculates the difference obtained by subtracting the preset adjustment factor from a preset threshold, and multiplies the difference by the abscissa and the ordinate of the second feature point, respectively, to obtain the second transverse product (1 - r) × x2 and the second longitudinal product (1 - r) × y2; formulas (1) and (2) show the case in which the preset threshold is 1. The apparatus takes the sum of the first transverse product and the second transverse product as the abscissa of the first forehead feature point, and the sum of the first longitudinal product and the second longitudinal product as its ordinate. The preset adjustment factor is a value greater than 1 and constrains the first forehead feature point to lie on the vector pointing from the second feature point to the first feature point, extrapolated beyond the first feature point.
In some embodiments, the face image processing apparatus may calculate the first forehead feature point, labeled point 109, from point 43 representing the nasion and point 16 representing the mandible in fig. 4, as shown in fig. 5.
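A minimal Python sketch of formulas (1) and (2), assuming landmarks indexed as in fig. 4 (point 43 the nasion, point 16 the mandible); the constant value of r is a placeholder, where the adaptive derivation of r from a ratio of landmark distances described for the determining module would normally be substituted:

```python
def first_forehead_point(landmarks, r=1.4):
    """Formulas (1)/(2): p = (1 - r) * p2 + r * p1. With r > 1 the result lies
    on the ray from the mandible through the nasion, extrapolated past the
    nasion to the forehead apex (point 109 in fig. 5).

    landmarks: (106, 2) array of (x, y) key points indexed as in fig. 4.
    r: preset distance adjustment factor (1.4 is an assumed placeholder).
    """
    p1 = landmarks[43]   # first feature point: nasion
    p2 = landmarks[16]   # second feature point: mandible
    return (1.0 - r) * p2 + r * p1
```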
In some embodiments, the face image processing apparatus may also determine, based on the first feature point and the second feature point, a position of a forehead vertex as the first forehead feature point according to a similar geometric calculation manner and data such as a distance, a proportion, or a direction between several selected face key points, by combining feature points representing other key portions in the face key point set, such as face key points representing eyebrows, eyes, and the like, and specifically select the position according to an actual situation, which is not limited in this embodiment of the present application.
In some embodiments, the face image processing apparatus may also determine the first forehead feature point within a preset range interval near the vector pointing from the second feature point to the first feature point, selected according to the actual situation; the embodiments of the present application are not limited in this regard.
S103, respectively combining a third feature point representing the outer side of the left face contour and a fourth feature point representing the outer side of the right face contour based on the first forehead feature point and the first feature point, and determining a second forehead feature point and a third forehead feature point; the third feature point and the fourth feature point belong to a face key point set.
In S103, the facial image processing apparatus may determine, from the set of facial key points, a third feature point, such as the 0 th point in fig. 4, representing the outer side of the left face contour and a fourth feature point, such as the 32 th point in fig. 4, representing the outer side of the right face contour according to semantic information of each facial key point. The human face image processing device further can determine a second forehead feature point between the outer side of the left face contour and the highest point of the forehead based on the first forehead feature point and the first feature point and in combination with the third feature point; and based on the first forehead feature point and the first feature point, a third forehead feature point between the outer side of the right face contour and the highest point of the forehead is determined by combining a fourth feature point representing the outer side of the right face contour.
In some embodiments, the face image processing apparatus may calculate, from the coordinates of the first feature point and the first forehead feature point, a first feature vector and a first length pointing from the first feature point to the first forehead feature point, and a second feature vector and a second length pointing from the first feature point to the third feature point. The apparatus may calculate the intermediate included angle between the first feature vector and the second feature vector and determine a first intermediate feature vector from it, and may calculate a first average value of the first length and the second length. The apparatus can then calculate the position coordinates of the second forehead feature point from the first intermediate feature vector (u3, v3) and the first average value l3, that is, determine the second forehead feature point.
Illustratively, as shown in fig. 6, when the first feature point is point 43, the first forehead feature point is point 109, and the third feature point is point 0, the face image processing apparatus may calculate the first feature vector (u1, v1) pointing from point 43 to point 109 and the second feature vector (u2, v2) pointing from point 43 to point 0, and determine the first intermediate feature vector (u3, v3) from the intermediate included angle, where (u3, v3) = (0.5 × (u1 + u2), 0.5 × (v1 + v2)). The apparatus calculates the distance between point 43 and point 109 as the first length l1 and the distance between point 43 and point 0 as the second length l2, and computes their first average value l3 = 0.5 × (l1 + l2). The apparatus may then determine the second forehead feature point, labeled point 111, from (u3, v3) and l3.
It should be noted that the intermediate-included-angle and average-length method of the embodiments of the present application is one example of calculating the first intermediate feature vector and the first average value from the first feature point, the third feature point, and the first forehead feature point to obtain the second forehead feature point. In actual use, the calculation can be adjusted to the circumstances, for example by adjusting the angle according to the facial proportions of the portrait or by calculating the length differently; the specific choice depends on the actual situation, and the embodiments of the present application are not limited.
Similarly, the face image processing apparatus may calculate a third feature vector and a third length pointing from the first feature point to the fourth feature point; calculate the intermediate included angle between the first feature vector and the third feature vector to determine a second intermediate feature vector; calculate a second average value of the first length and the third length; and determine the third forehead feature point from the second intermediate feature vector and the second average value. Illustratively, as shown in fig. 7, with the fourth feature point being point 32, the apparatus may calculate the intermediate included angle between the first feature vector pointing from point 43 to point 109 and the third feature vector (u4, v4) pointing from point 43 to point 32, thereby determining the second intermediate feature vector (u5, v5); it calculates the first length l1 between points 43 and 109 and the third length l4 between points 43 and 32, and computes their second average value l5 = 0.5 × (l1 + l4). From (u5, v5) and l5, the third forehead feature point, labeled point 107, is determined.
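The same construction as a Python sketch; computing the intermediate vector by averaging unit direction vectors is one plausible reading of the intermediate-included-angle computation in this example:

```python
import numpy as np

def side_forehead_point(landmarks, apex, outer_idx):
    """Forehead point between the apex (point 109) and one outer contour
    point, anchored at the nasion (point 43): outer_idx 0 on the left yields
    point 111, outer_idx 32 on the right yields point 107."""
    nasion = landmarks[43]
    v1 = apex - nasion                       # toward the forehead apex
    v2 = landmarks[outer_idx] - nasion       # toward the face contour
    l1, l2 = np.linalg.norm(v1), np.linalg.norm(v2)
    mid_dir = v1 / l1 + v2 / l2              # direction bisecting the angle
    mid_dir /= np.linalg.norm(mid_dir)
    return nasion + 0.5 * (l1 + l2) * mid_dir

# apex = first_forehead_point(landmarks)             # point 109
# left = side_forehead_point(landmarks, apex, 0)     # point 111 (fig. 6)
# right = side_forehead_point(landmarks, apex, 32)   # point 107 (fig. 7)
```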
It should be noted that, in the embodiment of the present application, the execution order of calculating the second forehead feature point and the third forehead feature point by the face image processing device is not limited, and may be any sequential order or parallel execution, and specifically, the selection is performed according to an actual situation.
In some embodiments, the face image processing apparatus may also determine the second forehead feature point and the third forehead feature point from unit feature vectors and inter-point lengths computed from the first feature point, the first forehead feature point, and other face key points, such as point 36 on the upper edge of the left eyebrow or point 39 on the upper edge of the right eyebrow in fig. 4, combined respectively with the third feature point and the fourth feature point. The corresponding calculation can be performed according to the face key points actually selected, and the embodiments of the present application are not limited.
S104, performing interpolation fitting based on the first forehead feature point, the second forehead feature point and the third forehead feature point to obtain a forehead feature point set; and representing the forehead contour corresponding to the face to be processed by the forehead feature point set.
In S104, under the condition that the first forehead feature point, the second forehead feature point, and the third forehead feature point are determined, the face image processing apparatus may use the first forehead feature point, the second forehead feature point, and the third forehead feature point as anchor points for preliminary positioning of a forehead region in the face image to be processed, and perform interpolation calculation by combining adjacent face key points, such as the third feature point and the fourth feature point, to obtain more forehead feature points, thereby obtaining a forehead feature point set.
In the embodiment of the present application, the interpolation calculation refers to fitting new feature points between the specified feature points, so that the curve of the forehead region is smoother, and the connection with other regions, such as the face contour, is smoother. The face image processing device can obtain forehead curve constraints corresponding to the face to be processed according to the third feature point, the second forehead feature point, the first forehead feature point, the third forehead feature point and the fourth feature point; here, the forehead curve constraint may be a curve function representing a curvature of the forehead, and the face image processing apparatus may obtain a fitting relation for fitting a new feature point according to the forehead curve constraint, so as to perform interpolation fitting between the third feature point, the second forehead feature point, the first forehead feature point, the third forehead feature point, and the fourth feature point based on the forehead curve constraint, and obtain a forehead feature point set including the first forehead feature point, the second forehead feature point, and the third forehead feature point.
In some embodiments, the interpolation calculation may use Catmull-Rom polynomial fitting, or other interpolation methods that pass through the specified points, such as bilinear interpolation or cubic interpolation, selected according to the actual situation; the embodiments of the present application are not limited. In addition, the number of points interpolated between specified feature points is not limited in the embodiments of the present application and may be any integer greater than 1.
In some embodiments, based on fig. 7, the facial image processing apparatus may perform interpolation calculation between point 0, point 111, point 109, point 107 and point 32 to obtain the forehead feature points at points 112, 110, 108 and 106 shown in fig. 8, and use points 106-112 as the forehead feature point set. It can be seen that points 106-112 mark the outline of the forehead region of the face to be processed.
And S105, carrying out image processing on the face image to be processed based on the forehead feature point set to obtain an image processing result.
In S105, based on the forehead feature point set obtained in the foregoing step, the face image processing apparatus may obtain feature points of all key portions including the forehead region on the face image to be processed, so as to further execute an image processing process with higher accuracy, and obtain an image processing result.
In some embodiments, the face image processing apparatus may perform image processing on the forehead region of the face to be processed, such as applying a sticker, beautification, or virtual makeup generation; the face image processing device may also perform image processing on the entire face region by combining the forehead feature point set and the face key point set, for example, performing at least one of face segmentation, face alignment, face recognition and face synthesis on the face image to be processed, to obtain an image processing result.
Exemplarily, the face image processing device may perform face segmentation on the whole image, including the background, according to the forehead feature point set and the face key point set to obtain the whole face region, enabling functional applications such as face matting. Further image processing operations may then be performed on the segmented face region, such as superimposing virtual makeup effects to implement a makeup and beauty function, replacing the background image outside the face region, or replacing the whole face region with another image to realize face replacement or face occlusion; the selection is made according to the actual situation, and the embodiment of the application is not limited.
For example, the facial image processing apparatus may perform face alignment processing according to the key parts marked by the forehead feature point set and the face key point set to generate an avatar corresponding to the face image to be processed, such as a virtual or cartoon character resembling the real face, so that the avatar presents facial features similar to the real face. This enables functional applications such as game or video characters customized from a real person's face ("face pinching") or cartoon character expressions driven by a real person's expressions; the specific choice is made according to the actual situation, and the embodiment of the present application is not limited.
Illustratively, the facial image processing apparatus may further extract at least one key part from the human face according to the forehead feature point set and the face key point set and perform image processing on the extracted part, or perform face synthesis between one or more facial features of person 1 and one or more facial features of person 2, so as to present to a user the effect of cosmetic treatment on different facial parts, as in the beauty industry, or to generate a face special effect in a video clipping application, and so on. Alternatively, the face image processing apparatus may perform image processing processes such as face recognition and expression recognition according to the forehead feature point set and the face key point set; the specific image processing process is selected according to actual conditions, and the embodiment of the present application is not limited.
It can be understood that, in the embodiment of the present application, according to key points in a non-forehead region in a face key point set, such as feature points representing the nasal root, the outer side of a left face contour and the outer side of a right face contour, a first forehead feature point, a second forehead feature point and a third forehead feature point on the forehead contour can be preliminarily positioned; on the basis, more forehead feature points are calculated by adopting a self-adaptive interpolation method, so that a forehead feature point set corresponding to a smoother forehead contour is obtained. The calculation process of the embodiment of the application is faster, and robust forehead feature points can be determined by a self-adaptive interpolation method for various different face types, so that the forehead feature point identification efficiency and accuracy are improved, and the face image processing efficiency and accuracy based on the forehead feature point set are improved.
In some embodiments, the applicant finds that, compared with a face image to be processed acquired at a normal angle, in a face image acquired with the head raised, the distance between the first feature point representing the nasion (such as point 43) and the second feature point representing the mandible (such as point 16) becomes larger, while the distance from the first feature point to the first forehead feature point (such as point 109) becomes smaller; in a face image acquired with the head lowered, the distance between the first feature point and the second feature point becomes smaller and the distance between the first feature point and the first forehead feature point becomes larger. To let the forehead feature point set adapt to faces at various angles, the embodiment of the application can dynamically set the preset adjusting factor used to calculate the first forehead feature point, and thereby obtain the forehead feature point set. Dynamically setting the preset adjusting factor may be implemented by executing S001-S002, as follows:
S001, calculating a first distance between the second feature point and the first feature point, and calculating a second distance between a preset fixed distance point and the first feature point; when the angle of the face changes, the change in distance between the preset fixed distance point and the first feature point is smaller than a preset change threshold; the preset fixed distance point belongs to the face key point set.
And S002, obtaining a preset adjusting factor based on the ratio of the first distance to the second distance.
In the embodiment of the application, when the angle of the face changes, the change in distance between the preset fixed distance point in the face key point set and the first feature point is smaller than a preset change threshold. Illustratively, extensive experiments show that in the face key point set of fig. 4, the distance between point 49, representing the tip of the nose, and the first feature point (point 43, representing the nasion) is essentially unchanged as the face rotates. The face image processing apparatus may therefore take point 49 as the preset fixed distance point, calculate the first distance between the second feature point and the first feature point and the second distance between the preset fixed distance point and the first feature point, and then obtain the preset adjusting factor based on the ratio of the first distance to the second distance.
In some embodiments, the above calculation process may be implemented by formula (3), in which the first distance between the second feature point (x2, y2) and the first feature point (x1, y1) is

d1 = √((x2 - x1)² + (y2 - y1)²),

the second distance between the preset fixed distance point (x3, y3) and the first feature point is

d2 = √((x3 - x1)² + (y3 - y1)²),

and the preset adjusting factor r is obtained from the ratio d1/d2 scaled by the preset adjusting parameters β and γ, which keep the first forehead feature point calculated from r on the vector pointing from the second feature point to the first feature point. β may be a value greater than 1; γ may be a value greater than 0 and less than 1; illustratively, β may be 1.35 and γ may be 0.8.
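The sketch below illustrates how such a dynamically set factor could be computed and applied. Since the exact combination of β, γ and the distance ratio in formula (3) is not fully recoverable here, the form r = 1 + β·γ·d2/d1 is an assumption, chosen only because it stays above 1 and shrinks as the nasion-to-mandible distance grows with a raised head; the variable names are likewise illustrative.

    import numpy as np

    def first_forehead_point(p1, p2, p3, beta=1.35, gamma=0.8):
        # p1: nasion (first feature point); p2: mandible (second feature point);
        # p3: preset fixed distance point (e.g. the nose tip, point 49).
        d1 = np.linalg.norm(p2 - p1)        # first distance: second -> first feature point
        d2 = np.linalg.norm(p3 - p1)        # second distance: fixed point -> first feature point
        r = 1.0 + beta * gamma * d2 / d1    # assumed form of the preset adjusting factor
        # r * p1 + (1 - r) * p2 lies beyond p1 on the vector from p2 to p1 whenever r > 1,
        # matching the transverse/longitudinal product recipe described for the determining module.
        return r * p1 + (1.0 - r) * p2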
It should be noted that the embodiment of the present application does not limit the specific feature points used to calculate the preset adjusting factor; other two-point distances that do not change with the rotation of the face may also be introduced, with the values of β and γ fine-tuned accordingly to calculate the preset adjusting factor.
It can be understood that, by introducing a preset fixed distance point that is independent of changes in the face angle, the preset adjusting factor is set dynamically and the first forehead feature point is calculated from it, so that a more accurate first forehead feature point can be obtained when the face is raised, lowered and so on. The method thus adapts to various changes in face angle and improves the accuracy of face image processing.
In some embodiments, referring to fig. 9, fig. 9 is an optional flowchart of the method provided in the embodiment of the present application, and S105 in fig. 3 may be implemented by executing S1051 to S1054, which will be described with reference to the steps.
S1051, dividing the face image to be processed into a plurality of real face regions according to the face key point set and the forehead feature point set.
In S1051, the face image processing apparatus may perform mesh division on the face to be processed according to the feature points contained in the face key point set and the forehead feature point set, dividing the face image to be processed into a plurality of real face regions. Through mesh division, the face image processing apparatus partitions the face image to be processed into fine-grained texture regions; compared with the current approach of obtaining each key face part by semantic segmentation and then applying an affine transformation, the plurality of real face regions obtained at the division stage in the embodiment of the present application have finer granularity, which helps improve the accuracy of face image processing based on these regions.
In some embodiments, the face image processing apparatus may perform triangular mesh division by using each feature point in the face key point set and the forehead feature point set as a vertex, and using a mesh division algorithm to obtain a plurality of real face regions formed by triangular meshes.
In some embodiments, the face image processing apparatus may also divide the face image to be processed using other forms of mesh division, such as quadrilateral mesh division, specifically selected according to the actual situation; the embodiment of the present application is not limited.
In some embodiments, the mesh division algorithm may be the Delaunay triangulation algorithm, or the Loop, Doo-Sabin or Catmull-Clark algorithm, among others, specifically selected according to the actual situation; the embodiments of the present application are not limited.
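A minimal sketch of the division step using the Delaunay triangulation named above, via scipy; the two point arrays are random stand-ins for the detected face key points and the fitted forehead feature points.

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(0)
    face_keypoints = rng.uniform(0, 256, size=(106, 2))   # stand-in for the face key point set
    forehead_points = rng.uniform(0, 256, size=(7, 2))    # stand-in for points 106-112

    vertices = np.vstack([face_keypoints, forehead_points])
    mesh = Delaunay(vertices)          # Delaunay triangulation over all feature points
    triangles = mesh.simplices         # (M, 3) vertex indices; each row is one real face region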
S1052, acquiring a preset special effect face corresponding to the face image to be processed; the preset special effect face is a face template containing a preset special effect image; the preset special-effect face comprises a plurality of preset face areas obtained by carrying out same feature point calculation and division processing on the face template in advance.
In S1052, the face image processing apparatus may obtain a preset special effect face for performing special effect processing on the face image to be processed. Here, the preset special effect face is a face template containing a preset special effect image; illustratively, the face template may be a template generated from a standard face, the preset special effect image may be a virtual makeup, and the preset special effect face may be a virtual makeup applied to the template of the standard face.
In this embodiment, the face image processing apparatus may also perform face key point detection on the preset special effect face through the same process as in S101 to S104 to obtain a special effect face key point set corresponding to the preset special effect face, and calculate a special effect forehead feature point set for it based on that key point set. When performing the face key point detection and the interpolation calculation, the face image processing apparatus may process the preset special effect face with the same number of feature points as the face image to be processed, so that the total number of feature points in the special effect face key point set and the special effect forehead feature point set is consistent with the total number of feature points for the face image to be processed. The face image processing apparatus may then perform mesh division on the preset special effect face according to the special effect face key point set and the special effect forehead feature point set, using the same mesh division method as in S1051, so that the divided preset special effect face contains a plurality of preset face regions in one-to-one correspondence with the plurality of real face regions of the face image to be processed. Illustratively, the triangular region formed by points 33, 34 and 64 among the real face regions corresponds to the preset face region formed by the same points 33, 34 and 64 on the template.
In some embodiments, the preset special effect image may also be other special effect images, such as a sticker, a filter, and the like, which are selected according to actual situations, and the embodiment of the present application is not limited.
In some embodiments, the face image processing apparatus may perform the feature point calculation and division processing on a preset special effect face once, store the resulting feature point label distribution and divided mesh regions, and then directly retrieve the stored preset special effect face data and fuse it with different face images to be processed.
S1053, obtaining a special effect pixel corresponding to each face pixel in the face image to be processed in a preset special effect face according to the corresponding relation between the real face regions and the preset face regions.
In S1053, since the plurality of real face regions correspond to the plurality of preset face regions one to one, for each face pixel in the face image to be processed, the face image processing apparatus may sample each face pixel in the corresponding preset face region according to a correspondence relationship between the plurality of real face regions and the plurality of preset face regions, so as to obtain a special effect pixel corresponding to each face pixel in the preset special effect face.
In some embodiments, the relative position of each face pixel in the real face region where it is located can be determined from the vertex coordinates of that region. The face image processing device can take the real face region where each face pixel is located as a target real face region and perform a weighted calculation from its vertex positions to obtain the relative position of the face pixel within it; according to the correspondence between the plurality of real face regions and the plurality of preset face regions, the preset face region corresponding to the target real face region is determined as the target preset face region. The face image processing device may then, according to the vertex positions of the target preset face region, take the pixel at the corresponding relative position within the target preset face region as the special effect pixel corresponding to that face pixel. In this way, the face image processing device obtains, for each face pixel, the corresponding special effect pixel in the preset special effect face.
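The weighted calculation described above amounts to a barycentric mapping between corresponding triangles. The sketch below assumes triangular regions, (x, y) pixel coordinates and nearest-pixel sampling; the helper names are illustrative.

    import numpy as np

    def barycentric_weights(p, a, b, c):
        # Solve p = a + u*(b - a) + v*(c - a); the weights are (1 - u - v, u, v).
        m = np.column_stack([b - a, c - a])
        u, v = np.linalg.solve(m, p - a)
        return 1.0 - u - v, u, v

    def sample_effect_pixel(p, real_tri, preset_tri, effect_img):
        # Relative position of pixel p within the target real face region...
        w = barycentric_weights(p, *real_tri)
        # ...re-applied to the vertices of the matching target preset face region.
        q = w[0] * preset_tri[0] + w[1] * preset_tri[1] + w[2] * preset_tri[2]
        x, y = np.rint(q).astype(int)
        return effect_img[y, x]            # nearest special-effect pixel in the template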
Here, it should be noted that, since the pixel resolution of the face image to be processed may differ from that of the preset special effect face, the correspondence between face pixels and special effect pixels is not necessarily one-to-one. For example, when the pixel resolution of the face image to be processed is higher than that of the preset special effect face, multiple face pixels may correspond to the same special effect pixel in the preset special effect face. The embodiment of the present application is not limited in this respect.
S1054, carrying out pixel fusion on each face pixel and the corresponding special effect pixel to obtain an image processing result of superposition of the face image to be processed and the preset special effect image.
In S1054, the face image processing apparatus may perform pixel fusion between each face pixel and its corresponding special effect pixel when determining the special effect pixel corresponding to each face pixel, and superimpose the corresponding special effect pixel on each face pixel, thereby obtaining an image processing result in which the face image to be processed and the preset special effect image are superimposed.
In some embodiments, the face image processing apparatus may perform pixel fusion using blending modes such as multiply blending, normal blending and highlight blending; the embodiment of the present application does not limit the specific fusion manner. The face image processing device may perform chroma fusion on the color information, such as the RGB values, of each face pixel and its corresponding special effect pixel, and may also perform pixel fusion on their grayscale information, luminance information and the like, specifically selected according to the actual situation; this embodiment of the present application is not limited.
Compared with current methods such as affine mapping, the mapping relation obtained by subdividing the mesh is more accurate, so the makeup texture conforms more closely to the face in various poses and the makeup appears more natural, improving the accuracy of face image processing.
In some embodiments, for S1053 and S1054 above, the face image processing apparatus may determine the special effect pixel corresponding to each face pixel in parallel through a graphics processing module, such as a Graphics Processing Unit (GPU), and perform parallel fusion processing on each face pixel and its corresponding special effect pixel to obtain the image processing result.
In some embodiments, the facial image processing apparatus may perform chroma fusion on each facial pixel and its corresponding special effect pixel to obtain an intermediate chroma, as shown in formula (4), as follows:
Color3 = Color1 × Color2    (4)

In formula (4), Color1 is the RGB value of each face pixel, Color2 is the RGB value of the special effect pixel corresponding to that face pixel, and Color3 is the intermediate chromaticity.
The face image processing device adjusts the fusion intensity of the chromaticity of each face pixel and of the intermediate chromaticity through a preset fusion intensity factor to obtain a first adjustment result and a second adjustment result, and combines the two to obtain the image fusion result corresponding to each face pixel as the image processing result, as shown in formula (5):
Color4 = (1 - α) × Color1 + α × Color3    (5)

In formula (5), α is the preset fusion strength factor, (1 - α) × Color1 is the first adjustment result, α × Color3 is the second adjustment result, and Color4 is the image fusion result corresponding to the face pixel. By processing all face pixels in parallel through the same procedure, the face image processing device simultaneously obtains the image fusion result corresponding to each face pixel as the image processing result.
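A per-pixel sketch of formulas (4) and (5); normalizing the 8-bit RGB values to [0, 1] before the multiply blend is an implementation assumption (otherwise the product would overflow the channel range), and because the operations are plain array arithmetic, the same function applies to whole images at once, mirroring the parallel processing described above.

    import numpy as np

    def fuse_pixels(face_rgb, effect_rgb, alpha=0.5):
        face = face_rgb.astype(np.float32) / 255.0
        effect = effect_rgb.astype(np.float32) / 255.0
        mid = face * effect                          # formula (4): multiply blend -> intermediate chromaticity
        out = (1.0 - alpha) * face + alpha * mid     # formula (5): fusion strength adjustment and combination
        return np.rint(out * 255.0).astype(np.uint8)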
It can be understood that, in the embodiment of the application, using the graphics processing module to process the sampling and fusion between the makeup texture and the face to be processed in parallel can greatly reduce the computation load and processing time of the central processing unit on the electronic device, improving the efficiency of face image processing and meeting users' real-time requirements in video and live streaming scenarios.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
The embodiment of the application can be applied to an online makeup function during live video streaming: acting as a video anchor, a user can record live content in real time through a live application on a live terminal such as a mobile phone and upload it over the network to the background server of the live application, which sends the live content to the audience terminals used by viewers for presentation. When applied to an online makeup function, the facial image processing device provided by the embodiment of the application can be deployed in the background server of the live application and comprises the identification module, adaptive interpolation module, mapping module and fusion module shown in fig. 10. Using these modules, the facial image processing apparatus can implement the online makeup application by executing S201 to S204 as shown in fig. 11, as follows:
S201, performing key point detection on the face to be processed in the current live video frame through the identification module to obtain a face key point set.
In S201, when it is detected that the video anchor has enabled the online makeup function in the live application, the face image processing device may perform face key point detection on the face image to be processed in the current live video frame of the video stream uploaded by the live terminal, through the 106-point face key point detection model included in the identification module, to obtain 106 face key points labeled with different semantics, i.e., the face key point set. Here, the 106 face key points do not include feature points of the forehead region.
And S202, calculating a forehead feature point set according to the face key point set through a self-adaptive interpolation module.
Here, the process of S202 is consistent with the process description in S102-S104 described above, and is not described here again.
And S203, obtaining the mapping relation between each face pixel and the virtual makeup pixel in the face image to be processed through a mapping module.
Here, the virtual makeup pixels correspond to special effect pixels, and the process of S203 is the same as that described in S1051 to S1053, and will not be described again here.
And S204, fusing each face pixel and the virtual makeup pixel through a fusion module to obtain an image processing result of applying virtual makeup to the face image to be processed.
Here, the execution process of S204 is consistent with the description of S1054, and is not described here again.
It can be understood that when the face image processing method provided by the embodiment of the application is applied to a scene in which faces are beautified through virtual makeup, feature points of the forehead region can be fitted by adaptive interpolation over the face key point model, yielding a more robust forehead feature point set. Moreover, the mapping relation between the face to be processed and the makeup texture can be acquired more accurately through mesh construction, and graphics-based rendering lets the fusion module produce a better-fitting, more natural fusion effect. A real-time and accurate virtual makeup effect can thus be achieved, lowering the user's threshold for applying makeup and saving the time and material costs of physical makeup; the virtual makeup looks more natural and conforms better to the face, satisfying the user's makeup needs and improving user satisfaction and retention on the platform.
Continuing with the exemplary structure of the facial image processing apparatus 255 provided in the embodiments of the present application as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the facial image processing apparatus 255 of the memory 250 may include:
the face key point detection model 2551 is used for detecting face key points of a face image to be processed to obtain a face key point set;
a determining module 2552, configured to determine a first forehead feature point based on a first feature point representing a nasal root and a second feature point representing a mandible in the face key point set; the first forehead feature point represents the forehead highest point on the face external contour; respectively combining a third characteristic point representing the outer side of the left face contour and a fourth characteristic point representing the outer side of the right face contour based on the first forehead characteristic point and the first characteristic point, and determining a second forehead characteristic point and a third forehead characteristic point; the third feature point and the fourth feature point belong to the face key point set;
an interpolation fitting module 2553, configured to perform interpolation fitting on the first forehead feature point, the second forehead feature point and the third forehead feature point to obtain a forehead feature point set; the forehead feature point set represents a forehead contour corresponding to the face to be processed;
and the processing module 2554 is configured to perform image processing on the face image to be processed based on the forehead feature point set, so as to obtain an image processing result.
In some embodiments, the determining module 2552 is further configured to determine the first forehead feature point on the vector of the second feature point pointing to the first feature point based on a distance between the first feature point and the second feature point.
In some embodiments, the determining module 2552 is further configured to multiply a preset adjusting factor by the abscissa and the ordinate of the first feature point, respectively, to obtain a first transverse product and a first longitudinal product; the preset adjusting factor is a numerical value larger than 1; calculate a difference value obtained by subtracting the preset adjusting factor from a preset threshold value, and multiply the difference value by the abscissa and the ordinate of the second feature point, respectively, to obtain a second transverse product and a second longitudinal product; take the sum of the first transverse product and the second transverse product as the abscissa of the first forehead feature point; and take the sum of the first longitudinal product and the second longitudinal product as the ordinate of the first forehead feature point, thereby determining the first forehead feature point.
In some embodiments, the determining module 2552 is further configured to calculate a first distance between the second feature point and the first feature point, and calculate a second distance between a preset fixed distance point and the first feature point; under the condition that the angle of the face changes, the change of the distance between the preset fixed distance point and the first characteristic point is smaller than a preset change threshold value; the preset fixed distance points belong to the face key point set; and obtaining the preset adjusting factor based on the ratio of the first distance to the second distance.
In some embodiments, the determining module 2552 is further configured to calculate a first feature vector and a first length of the first feature point pointing to the first forehead feature point, a second feature vector and a second length of the first feature point pointing to the third feature point, and a third feature vector and a third length of the first feature point pointing to the fourth feature point; calculating a middle included angle between the first feature vector and the second feature vector so as to determine a first middle feature vector; calculating a first average of the first length and the second length; determining the second forehead feature point according to the first intermediate feature vector and the first average value; calculating a middle included angle between the first feature vector and the third feature vector so as to determine a second middle feature vector; calculating a second average of the first length and the third length; and determining the third forehead feature point according to the second intermediate feature vector and the second average value.
In some embodiments, the interpolation fitting module 2553 is further configured to obtain a forehead curve constraint corresponding to the face to be processed according to the third feature point, the second forehead feature point, the first forehead feature point, the third forehead feature point and the fourth feature point; and performing interpolation fitting among the third feature point, the second forehead feature point, the first forehead feature point, the third forehead feature point and the fourth feature point based on the forehead curve constraint to obtain the forehead feature point set comprising the first forehead feature point, the second forehead feature point and the third forehead feature point.
In some embodiments, the processing module 2554 is further configured to divide the to-be-processed face image into a plurality of real face regions according to the face key point set and the forehead feature point set; acquiring a preset special effect face corresponding to the face image to be processed; the preset special effect face is a face template containing a preset special effect image; the preset special-effect face comprises a plurality of preset face areas obtained by performing same feature point calculation and division processing on the face template in advance; obtaining a special effect pixel corresponding to each face pixel in the to-be-processed face image in the preset special effect face according to the corresponding relation between the real face areas and preset face areas; and performing pixel fusion on each face pixel and the corresponding special-effect pixel to obtain an image processing result obtained by superposing the face image to be processed and the preset special-effect image.
In some embodiments, the processing module 2554 is further configured to perform triangular mesh division by using a mesh division algorithm with each feature point in the face key point set and the forehead feature point set as a vertex, so as to obtain the plurality of real face regions formed by triangular meshes.
In some embodiments, the processing module 2554 is further configured to, for each face pixel, perform weighting calculation according to a vertex position of a target real face region where the face pixel is located, to obtain a relative position of the face pixel in the target real face region; determining a target preset face area corresponding to the target real face area according to the corresponding relation between the real face areas and preset face areas; and determining pixels corresponding to the relative positions in the target preset face area according to the vertex positions of the target preset face area, and taking the pixels as special-effect pixels corresponding to each face pixel.
In some embodiments, the processing module 2554 is further configured to perform parallel fusion processing on each face pixel and its corresponding special effect pixel through a graphics processing module, so as to obtain the image processing result.
In some embodiments, the processing module 2554 is further configured to perform chroma fusion on each face pixel and its corresponding special effect pixel to obtain an intermediate chromaticity; perform fusion intensity adjustment on the chromaticity of each face pixel and on the intermediate chromaticity through a preset fusion intensity factor, respectively, to obtain a first adjustment result and a second adjustment result; and combine the first adjustment result and the second adjustment result to obtain an image fusion result corresponding to each face pixel as the image processing result.
In some embodiments, the processing module 2554 is further configured to perform at least one of face segmentation, face alignment, face recognition and face synthesis on the face image to be processed based on the forehead feature point set, so as to obtain the image processing result.
It should be noted that the above description of the embodiment of the apparatus, similar to the above description of the embodiment of the method, has similar beneficial effects as the embodiment of the method. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
Embodiments of the present application provide a computer-readable storage medium having stored therein executable instructions, which when executed by a processor, will cause the processor to perform a method provided by embodiments of the present application, for example, the method as illustrated in fig. 3, 9, and 11.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of a program, software module, script, or code written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may, but need not, correspond to files in a file system, may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
In summary, according to key points in the non-forehead region of the face key point set, such as the feature points representing the nasion and the outer sides of the left and right face contours, the first forehead feature point, the second forehead feature point and the third forehead feature point on the forehead contour can be preliminarily positioned; on this basis, more forehead feature points are calculated by an adaptive interpolation method, yielding a forehead feature point set corresponding to a smoother forehead contour. The calculation process of the embodiment of the application is fast, and robust forehead feature points can be determined by adaptive interpolation for a variety of face shapes, improving the efficiency and accuracy of forehead feature point identification and, in turn, of face image processing based on the forehead feature point set. Moreover, the preset adjusting factor is set dynamically by introducing a preset fixed distance point independent of changes in the face angle, and the first forehead feature point is calculated from it, so that a more accurate first forehead feature point can be obtained when the head is raised or lowered, adapting to various face angles and improving the accuracy of face image processing. In addition, mesh construction yields a more detailed topological structure of the face to be processed and of the special effect image texture, such as the makeup texture, from which the texture mapping relation between the two can be obtained. Finally, using the graphics processing module to process the sampling and fusion between the makeup texture and the face to be processed in parallel greatly reduces the computation load and processing time of the central processing unit on the electronic device, improving face image processing efficiency and meeting users' real-time requirements in video and live streaming scenarios.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A face image processing method is characterized by comprising the following steps:
carrying out face key point detection on a face image to be processed to obtain a face key point set;
determining a first forehead feature point based on a first feature point representing a nasal root and a second feature point representing a lower jaw in the face key point set; the first forehead feature point represents the forehead highest point on the face external contour;
respectively combining a third characteristic point representing the outer side of the left face contour and a fourth characteristic point representing the outer side of the right face contour based on the first forehead characteristic point and the first characteristic point, and determining a second forehead characteristic point and a third forehead characteristic point; the third feature point and the fourth feature point belong to the face key point set;
performing interpolation fitting on the first forehead feature point, the second forehead feature point and the third forehead feature point to obtain a forehead feature point set; the forehead feature point set represents a forehead contour corresponding to the face to be processed;
and carrying out image processing on the face image to be processed based on the forehead feature point set to obtain an image processing result.
2. The method of claim 1, wherein determining a first forehead feature point based on a first feature point characterizing a nose root and a second feature point characterizing a mandible in the face key point set comprises:
and determining the first forehead feature point on the vector of the second feature point pointing to the first feature point based on the distance between the first feature point and the second feature point.
3. The method according to claim 2, wherein the determining the first forehead feature point on the vector of the second feature point pointing to the first feature point based on the distance between the first feature point and the second feature point comprises:
multiplying a preset adjusting factor by the abscissa and the ordinate of the first characteristic point respectively to obtain a first transverse product and a first longitudinal product; the preset adjusting factor is a numerical value larger than 1;
calculating a difference value obtained by subtracting the preset adjusting factor from a preset threshold value, and multiplying the difference value by the abscissa and the ordinate of the second characteristic point respectively to obtain a second transverse product and a second longitudinal product;
taking the sum of the first transverse product and the second transverse product as the abscissa of the first forehead feature point;
and taking the sum of the first longitudinal product and the second longitudinal product as the ordinate of the first forehead feature point, thereby determining the first forehead feature point.
4. The method of claim 3, further comprising:
calculating a first distance between the second characteristic point and the first characteristic point, and calculating a second distance between a preset fixed distance point and the first characteristic point; under the condition that the angle of the human face changes, the distance change between the preset fixed distance point and the first characteristic point is smaller than a preset change threshold value; the preset fixed distance point belongs to the face key point set;
and obtaining the preset adjusting factor based on the ratio of the first distance to the second distance.
5. The method according to claim 1, wherein determining a second forehead feature point and a third forehead feature point based on the first forehead feature point and the first feature point in combination with a third feature point and a fourth feature point respectively characterizing the outer sides of the left and right face contours comprises:
calculating a first feature vector and a first length of the first feature point pointing to the first forehead feature point, a second feature vector and a second length of the first feature point pointing to the third feature point, and a third feature vector and a third length of the first feature point pointing to the fourth feature point;
calculating a middle included angle between the first characteristic vector and the second characteristic vector so as to determine a first middle characteristic vector;
calculating a first average of the first length and the second length;
determining the second forehead feature point according to the first intermediate feature vector and the first average value;
calculating a middle included angle between the first characteristic vector and the third characteristic vector so as to determine a second middle characteristic vector;
calculating a second average of the first length and the third length;
and determining the third forehead feature point according to the second intermediate feature vector and the second average value.
6. The method of claim 1, wherein the interpolating a fit based on the first forehead feature point, the second forehead feature point, and the third forehead feature point to obtain a forehead feature point set, comprises:
obtaining forehead curve constraints corresponding to the face to be processed according to the third feature points, the second forehead feature points, the first forehead feature points, the third forehead feature points and the fourth feature points;
and performing interpolation fitting among the third feature point, the second forehead feature point, the first forehead feature point, the third forehead feature point and the fourth feature point based on the forehead curve constraint to obtain the forehead feature point set comprising the first forehead feature point, the second forehead feature point and the third forehead feature point.
7. The method according to any one of claims 1 to 6, wherein the image processing the face image to be processed based on the forehead feature point set to obtain an image processing result, includes:
dividing the face image to be processed into a plurality of real face regions according to the face key point set and the forehead feature point set;
acquiring a preset special effect face corresponding to the face image to be processed; the preset special effect face is a face template containing a preset special effect image; the preset special-effect face comprises a plurality of preset face areas obtained by performing the same feature point calculation and division processing on the face template in advance;
obtaining a special effect pixel corresponding to each face pixel in the to-be-processed face image in the preset special effect face according to the corresponding relation between the real face areas and preset face areas;
and performing pixel fusion on each face pixel and the corresponding special-effect pixel to obtain an image processing result obtained by superposing the face image to be processed and the preset special-effect image.
8. The method according to claim 7, wherein the dividing the face image to be processed into a plurality of real facial regions according to the face key point set and the forehead feature point set comprises:
and taking each feature point in the face key point set and the forehead feature point set as a vertex, and performing triangular mesh division by using a mesh division algorithm to obtain the plurality of real face regions formed by triangular meshes.
9. The method according to claim 8, wherein obtaining a special effect pixel corresponding to each face pixel in the to-be-processed face image in the preset special effect face according to a corresponding relationship between the plurality of real face regions and a plurality of preset face regions comprises:
for each face pixel, performing weighted calculation according to the vertex position of the target real face area where each face pixel is located to obtain the relative position of each face pixel in the target real face area;
determining a target preset face area corresponding to the target real face area according to the corresponding relation between the real face areas and preset face areas;
and determining the corresponding pixels of the relative positions in the target preset face area according to the vertex positions of the target preset face area, and taking the corresponding pixels as the special effect pixels corresponding to each face pixel.
10. The method according to claim 7, wherein the performing pixel fusion on each face pixel and its corresponding special effect pixel to obtain an image processing result obtained by superimposing the face image to be processed and the preset special effect image includes:
and performing parallel fusion processing on each face pixel point and the corresponding special effect pixel through a graphics processing module to obtain the image processing result.
11. The method according to claim 10, wherein the performing, by a graphics processing module, parallel fusion processing on each face pixel point and its corresponding special effect pixel to obtain the image processing result comprises:
carrying out chroma fusion on each face pixel point and the corresponding special effect pixel to obtain an intermediate chromaticity;
respectively performing fusion intensity adjustment on the chromaticity of each face pixel point and the intermediate chromaticity by presetting a fusion intensity factor to obtain a first adjustment result and a second adjustment result;
and combining the first adjustment result and the second adjustment result to obtain an image fusion result corresponding to each face pixel point, and taking the image fusion result as the image processing result.
12. The method according to any one of claims 1 to 6, wherein the image processing the face image to be processed based on the forehead feature point set comprises:
and based on the forehead feature point set, performing at least one of face segmentation, face alignment, face identification and face synthesis on the face image to be processed to obtain an image processing result.
13. A face image processing apparatus characterized by comprising:
the face key point detection model is used for detecting face key points of a face image to be processed to obtain a face key point set;
the determining module is used for determining a first forehead feature point based on a first feature point representing a nasal root and a second feature point representing a lower jaw in the face key point set; the first forehead feature point represents the forehead highest point on the external contour of the human face; respectively combining a third characteristic point representing the outer side of the left face contour and a fourth characteristic point representing the outer side of the right face contour based on the first forehead characteristic point and the first characteristic point, and determining a second forehead characteristic point and a third forehead characteristic point; the third feature point and the fourth feature point belong to the face key point set;
the interpolation fitting module is used for carrying out interpolation fitting on the first forehead feature point, the second forehead feature point and the third forehead feature point to obtain a forehead feature point set; the forehead feature point set represents a forehead contour corresponding to the face to be processed;
and the processing module is used for carrying out image processing on the face image to be processed based on the forehead feature point set to obtain an image processing result.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of any one of claims 1 to 12 when executing executable instructions stored in the memory.
15. A computer-readable storage medium having stored thereon executable instructions for, when executed by a processor, implementing the method of any one of claims 1 to 12.
CN202110800842.5A 2021-07-15 2021-07-15 Face image processing method, device and equipment and computer readable storage medium Pending CN115631516A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110800842.5A CN115631516A (en) 2021-07-15 2021-07-15 Face image processing method, device and equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110800842.5A CN115631516A (en) 2021-07-15 2021-07-15 Face image processing method, device and equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115631516A true CN115631516A (en) 2023-01-20

Family

ID=84903562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110800842.5A Pending CN115631516A (en) 2021-07-15 2021-07-15 Face image processing method, device and equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115631516A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116471429A (en) * 2023-06-20 2023-07-21 上海云梯信息科技有限公司 Image information pushing method based on behavior feedback and real-time video transmission system
CN116471429B (en) * 2023-06-20 2023-08-25 上海云梯信息科技有限公司 Image information pushing method based on behavior feedback and real-time video transmission system

Similar Documents

Publication Publication Date Title
US9552668B2 (en) Generation of a three-dimensional representation of a user
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
WO2022143645A1 (en) Three-dimensional face reconstruction method and apparatus, device, and storage medium
CN110136243A (en) A kind of three-dimensional facial reconstruction method and its system, device, storage medium
US11562536B2 (en) Methods and systems for personalized 3D head model deformation
CN108876886B (en) Image processing method and device and computer equipment
US11587288B2 (en) Methods and systems for constructing facial position map
CN110796593A (en) Image processing method, device, medium and electronic equipment based on artificial intelligence
US11461970B1 (en) Methods and systems for extracting color from facial image
CN114821675B (en) Object processing method and system and processor
CN112221145A (en) Game face model generation method and device, storage medium and electronic equipment
CN112699857A (en) Living body verification method and device based on human face posture and electronic equipment
KR20230110787A (en) Methods and systems for forming personalized 3D head and face models
US20200126314A1 (en) Method and system of automated facial morphing for eyebrow hair and face color detection
CN113822965A (en) Image rendering processing method, device and equipment and computer storage medium
CN115631516A (en) Face image processing method, device and equipment and computer readable storage medium
CN115546361A (en) Three-dimensional cartoon image processing method and device, computer equipment and storage medium
CN115936796A (en) Virtual makeup changing method, system, equipment and storage medium
CN114820907A (en) Human face image cartoon processing method and device, computer equipment and storage medium
JP7145359B1 (en) Inference model construction method, inference model construction device, program, recording medium, configuration device and configuration method
US20240013500A1 (en) Method and apparatus for generating expression model, device, and medium
Zhang et al. Style Transfer for 360 images
CN117011430A (en) Game resource processing method, apparatus, device, storage medium and program product
CN114742951A (en) Material generation method, image processing method, device, electronic device and storage medium
CN113822964A (en) Method, device and equipment for optimizing rendering of image and storage medium

Legal Events

PB01 Publication
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40078753)
SE01 Entry into force of request for substantive examination