CN110879983B - Face feature key point extraction method and face image synthesis method

Info

Publication number
CN110879983B
Authority
CN
China
Prior art keywords
image
face
face image
key points
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911128636.3A
Other languages
Chinese (zh)
Other versions
CN110879983A (en)
Inventor
闫宏伟
梅齐勇
聂猛猛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fly Vr Co ltd
Original Assignee
Fly Vr Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fly Vr Co ltd filed Critical Fly Vr Co ltd
Priority to CN201911128636.3A
Publication of CN110879983A
Application granted
Publication of CN110879983B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a face feature key point extraction method and a face image synthesis method. The extraction method comprises the following steps: receiving collected image information, performing binarization processing on the image information to obtain a binarized image, and detecting a face image in the image information; extracting feature key points of the face image, acquiring the coordinates of the feature key points, and triangulating the face image according to those coordinates to obtain a plurality of triangular areas. By extracting the feature key point coordinates of both the collected image information and the image information to be synthesized, the invention can significantly improve face recognition efficiency.

Description

Face feature key point extraction method and face image synthesis method
Technical Field
The invention relates to the field of computer image recognition and processing, in particular to a method for extracting key points of facial features and a method for synthesizing facial images.
Background
As science and technology advance, intelligent computing devices play an increasingly important role in daily life. Computer image processing technology has been applied to many aspects of everyday life, especially in the field of image processing. With falling computer hardware costs, faster central processing units, and the specialization of image recognition and processing techniques, these methods are now widely and conveniently applied in practice.
Currently, the image recognition process is divided into information acquisition, preprocessing, feature extraction and selection, classifier design, and classification decision. Most image recognition technologies are built on the OpenCV open source library, loading various other open source libraries and existing algorithms (such as DLIB, image binarization algorithms, and the OTSU algorithm) to implement new algorithms, minimizing the use of computer memory, meeting different recognition requirements, and processing different kinds of images in various ways. However, existing technologies place strong constraints on the pose of the extracted face: when face information is extracted, a relatively frontal, well-posed face image must be provided, so recognition efficiency is low.
Disclosure of Invention
The technical problem to be solved by the invention is therefore to overcome the defects of the related art, which places many constraints on the pose of the extracted face image and requires a relatively frontal face image when extracting face image information, resulting in low recognition efficiency. To this end, a face feature key point extraction method is provided.
According to a first aspect, an embodiment of the present invention discloses a face feature key point extraction method, including: receiving collected image information; performing binarization processing on the image information to obtain a binarized image; detecting a target face image in the image information; extracting feature key points of the target face image and acquiring the coordinates of the feature key points; and triangulating the face image according to the coordinates to obtain a plurality of triangular areas.
According to a second aspect, an embodiment of the present invention discloses a face image synthesis method, including: respectively receiving a collected first image and second image; extracting face feature key points of the first image by the face feature key point extraction method of the first aspect to obtain a plurality of triangular areas of a first target face image; extracting face feature key points of the second image by the same method to obtain a plurality of triangular areas of a second target face image; and, according to the triangular areas of the first and second target face images, affine-mapping the triangular areas of the first target face image onto the corresponding triangular areas of the second target face image and fusing them to obtain a face synthetic image.
With reference to the first aspect, in a first implementation manner of the first aspect, detecting the face image in the image information includes: detecting the binarized image with a face region detection model preset by the system to obtain at least one face region in the binarized image; storing the starting point coordinates X, Y of each detected face region together with the region's width and height; calculating the area of each face region from its starting point coordinates X, Y and its width and height; and selecting the face region with the largest area in the binarized image as the target face image.
With reference to the first aspect, in a second implementation manner of the first aspect, extracting the feature key points of the target face image and acquiring their coordinates includes: detecting the target face image by calling a face feature key point detection model preset by the system, extracting a plurality of feature key points in the target face image, and caching the coordinates of each feature key point.
With reference to the second aspect, in a first implementation manner of the second aspect, affine-mapping the triangular areas of the first target face image onto the corresponding triangular areas of the second target face image and performing image edge fusion to obtain the face synthetic image includes: affine-mapping the first target face image onto the second target face image through a perspective transformation matrix algorithm according to the triangular areas of the two images to obtain a third target face image; and performing graying processing on the third target face image and performing edge fusion by calling a seamless fusion model preset by the system to obtain a fourth target face image, the fourth target face image being the face synthetic image.
According to a third aspect, an embodiment of the present invention discloses a face feature key point extraction device, including: a first receiving module for receiving collected image information; a processing module for performing binarization processing on the image information to obtain a binarized image; a detection module for detecting a target face image in the image information; a feature key point extraction module for extracting feature key points of the target face image and acquiring their coordinates; and a subdivision module for triangulating the target face image according to the coordinates to obtain a plurality of triangular areas.
According to a fourth aspect, an embodiment of the present invention discloses a face image synthesis device, including: a second receiving module for respectively receiving a collected first image and second image; a first face feature key point extraction module for extracting face feature key points of the first image with the extraction device of the third aspect to obtain a plurality of triangular areas of a first target face image; a second face feature key point extraction module for extracting face feature key points of the second image in the same way to obtain a plurality of triangular areas of a second target face image; and an affine module for affine-mapping the triangular areas of the first target face image onto the corresponding triangular areas of the second target face image according to the triangular areas of the two images to obtain the face synthetic image.
According to a fifth aspect, an embodiment of the present invention discloses a system for extracting key points of facial features, including: at least one control device, the control device is configured to perform the method for extracting feature key points of a face according to the first aspect or any implementation manner of the first aspect, and extract feature key point coordinates of a target face image in the received image information.
According to a sixth aspect, an embodiment of the present invention discloses a face image synthesis system, including: at least one control device, the control device is used for executing the face image synthesizing method according to the second aspect or any implementation manner of the second aspect, and synthesizing the target face image in the received image information.
According to a seventh aspect, an embodiment of the present invention discloses a computer readable and writable storage medium, on which a computer program is stored, the computer program implementing the steps of the face feature key point extraction method according to the first aspect or any implementation manner of the first aspect, or the steps of the face image synthesis method according to the second aspect or any implementation manner of the second aspect when being executed by a processor.
The technical scheme of the invention has the following advantages:
The invention provides a face feature key point extraction method, a face image synthesis method, and corresponding devices. The extraction method receives collected image information; performs binarization processing on it to obtain a binarized image; detects a target face image in the image information; extracts feature key points of the target face image and acquires their coordinates; and triangulates the target face image according to those coordinates to obtain a plurality of triangular areas. Because the feature key point coordinates are obtained directly from the extracted key points of the target face image, the requirement on the pose of the target face image during extraction is reduced: a strictly frontal face image is no longer needed, the extraction efficiency of face feature key points is improved, and face recognition efficiency is improved in turn.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; other drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of a specific example of a method for extracting key points of facial features in embodiment 1 of the present invention;
fig. 2 is a flowchart of a specific example of a face image synthesizing method in embodiment 2 of the present invention;
fig. 3 is a specific flowchart of detecting the face image in the image information in the face feature key point extraction method of embodiment 1 of the present invention;
fig. 4 is a specific flowchart of acquiring the feature key point coordinates in the face feature key point extraction method of embodiment 1 of the present invention;
FIG. 5 is a flowchart of a face image synthesis method according to embodiment 2 of the present invention;
fig. 6 is a flowchart of a specific example of an extraction device for key points of facial features in embodiment 3 of the present invention;
fig. 7 is a flowchart of a specific example of a face image synthesizing apparatus in embodiment 4 of the present invention;
fig. 8 is a block diagram showing the structure of a control device in embodiment 5 of the present invention;
fig. 9 is a block diagram showing the configuration of a first controller in a control apparatus in embodiment 5 of the present invention.
Detailed Description
The technical solutions of the present invention are described below clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In the description of the present invention, it should be noted that the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Example 1
The embodiment of the invention provides a face feature key point extraction method, applied to application scenarios in which the feature values of the face region in collected image information are to be extracted. As shown in fig. 1, the face feature key point extraction method of this embodiment comprises the following steps:
step S11: and receiving the acquired image information. The image information collected here is specifically classified into two types, the first type may be image information including a face image collected by an image capturing device, for example, the captured image information may be a main face image in a composite image, or may be image information including face information preset in a system, and the second type may be an image including face image information as a background image, or may be a background image including face information collected by an image capturing device, or may be a background image including face information preset in a system.
Step S12: performing binarization processing on the image information to obtain a binarized image. Image binarization sets the gray value of every pixel on the image to 0 or 255, so the whole image shows a clear black-and-white effect and the data volume in the image is significantly reduced. An image collected by an image capturing device is generally a color image whose pixels have many different brightness levels; extracting the feature key point coordinates of a face image against such a background would slow face image detection and thus reduce the extraction efficiency of the face feature key points.
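For illustration, a minimal Python/OpenCV sketch of this binarization step might look as follows; OTSU thresholding (one of the algorithms named in the background section) is assumed, and the function and variable names are illustrative rather than the patent's specified implementation:

```python
import cv2

def binarize(image_bgr):
    # Reduce the color frame to gray, then to a pure 0/255 image.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # OTSU picks the global threshold automatically; every pixel in
    # the result is either 0 or 255, giving the black-and-white effect.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```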
Step S13: detecting the target face image in the image information. Image information collected directly by an image capturing device generally contains several kinds of information, such as face image information, body image information, and background image information, any of which may be captured; the system detects, among these, the face image carrying the face feature key points.
In an embodiment, the step S13 may specifically include the following steps in the execution process, as shown in fig. 3:
Step S131: detecting the binarized image with the face region detection model preset by the system to obtain at least one face region in the binarized image. Specifically, the preset face region detection model refers to a face detection model in the Open Source Computer Vision Library (OpenCV). Running the model detects candidate regions anywhere on the image; note that at least one region exists in the collected image information, and the model may also detect two or more regions.
Step S132: storing the starting point coordinates X, Y of each detected face region and the region's width and height. Specifically, the face region detection model in the system's preset OpenCV returns the detected face region as a rectangle, circle, ellipse, or triangle. For example, if the model detects two regions in an image, the two regions are converted by the preset model into a first rectangular region and a second rectangular region; the first starting point coordinates, width W1, and height H1 of the first rectangular region and the second starting point coordinates, width W2, and height H2 of the second rectangular region are then stored.
Step S133: calculating the area of each face region from its starting point coordinates X, Y and its width and height. Specifically, the areas of the two regions detected by the model in the collected image information can be calculated from the width and height of the regions using the following formula:
S=f(x)W1*f(x)H1
where S represents the area of the region to be calculated, W1 represents the width of the rectangular region, H1 represents the height of the rectangular region, and f(x) represents a weight value. f(x) can be set according to the specific scene in practical applications, which the invention does not limit.
Step S134: selecting the face region with the largest area in the binarized image as the target face image. In a specific embodiment, the areas S1 and S2 of all regions in the binarized image are calculated through the above steps; S1 and S2 are compared directly, and the region with the largest area is selected as the target face image.
In steps S131-S134, the binarized image is detected by the face region detection model preset by the system to obtain all face regions in the binarized image; the starting point coordinates X, Y of each detected face region and the region's width and height are stored; the face region areas are then calculated from these values; and the face region with the largest area is determined as the target face image.
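A hedged Python/OpenCV sketch of steps S131-S134 follows. The patent does not say which preset face region detection model is used, so OpenCV's bundled Haar cascade stands in for it here, and the weight f(x) is taken as a constant; all names are illustrative:

```python
import cv2

def largest_face_region(binary_image, f=1.0):
    # Stand-in for the system's preset face region detection model
    # (assumption: OpenCV's bundled frontal-face Haar cascade).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    regions = cascade.detectMultiScale(binary_image)  # rows of (X, Y, W, H)
    if len(regions) == 0:
        return None
    # Step S133's area formula S = f(x)*W1 * f(x)*H1, with f(x) constant;
    # step S134 then keeps the region with the largest area.
    return max(regions, key=lambda r: (f * r[2]) * (f * r[3]))
```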
Step S14: extracting the feature key points of the face image and acquiring the coordinates of the feature key points. Specifically, the face image here has 68 feature key points, whose coordinates are extracted and cached.
In an embodiment, the step S14 may specifically include the following steps in the execution process, as shown in fig. 4:
Step S141: detecting the target face image by calling the face feature key point detection model preset by the system, extracting a plurality of feature key points in the face image, and caching the coordinates of each feature key point. Specifically, the system preset is a face feature key point detection model in OpenCV; calling the model detects the face feature key point information in the face image. The face feature key points include, for example: the eyebrows; the eyes, such as the upper and lower eyelids; the nose, such as the nasal bridge and nostrils; the mouth, such as the upper and lower lips; and the contour of the face, such as the two cheek edges and the chin edge. In general, a face has 68 feature key points; if a face feature key point detection model from another open source library is called, the number of feature key points may be 72, 96, or more.
By implementing step S141, the face feature key points are extracted and their coordinates cached using the face feature key point detection model, so the feature key points and their coordinates are extracted accurately, the requirement on the face pose during extraction is reduced, and the efficiency of extracting the face feature key points and their coordinates is improved.
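For illustration, step S141 could be sketched with dlib's 68-point landmark predictor, a common counterpart of the preset face feature key point detection model described above; the model file path is an assumption (the predictor data is distributed separately by dlib):

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Assumed path; the 68-point model file must be downloaded separately.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def feature_key_points(image):
    faces = detector(image)  # detected face rectangles
    if not faces:
        return []
    shape = predictor(image, faces[0])
    # Cache the (x, y) coordinates of all 68 feature key points.
    return [(shape.part(i).x, shape.part(i).y)
            for i in range(shape.num_parts)]
```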
Step S15: triangulating the face image according to the coordinates of the feature key points to obtain a plurality of triangular areas. In this embodiment, the feature key point coordinates may be those extracted in any of the above embodiments, and the face image is the face image region extracted by the method of any of the above embodiments. The principle of triangulation is as follows: assume V is a finite point set in the two-dimensional real domain, an edge e is a closed line segment whose endpoints are points of the set, and E is the set of such edges. A triangulation T = (V, E) of the point set V is then a planar graph G that simultaneously satisfies the following conditions: 1. no edge in the graph contains any point of the set other than its endpoints; 2. there are no intersecting edges; 3. all faces in the graph are triangular, and the union of all triangular faces is the convex hull of the scattered point set V. Specifically, the finite point set V is the set of feature key points in the face image, and the planar graph corresponds to the plurality of triangular areas obtained after triangulation.
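A minimal sketch of this step with OpenCV's Subdiv2D, which performs a Delaunay triangulation of the inserted key points (names are illustrative):

```python
import cv2

def triangulate(image_shape, key_points):
    h, w = image_shape[:2]
    # Bounding rectangle of the plane containing the point set V.
    subdiv = cv2.Subdiv2D((0, 0, w, h))
    for x, y in key_points:
        subdiv.insert((float(x), float(y)))
    # Each row is (x1, y1, x2, y2, x3, y3): one triangular area.
    return subdiv.getTriangleList()
```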
According to the face feature key point extraction method above, the received collected image information is binarized to obtain a binarized image; the face image in the image information is detected; the face image feature key points are extracted and their coordinates stored; and finally the target face image is triangulated according to those coordinates to obtain a plurality of triangular areas. Because the obtained face feature key point coordinates are relative to a coordinate system, a face held at an angle changes the absolute coordinates of the key points but not their relative coordinates, so the triangulation operation still directly yields a face image partitioned into triangular areas. The requirement on the pose of the face image is therefore very low, the extraction efficiency of the face feature key points is improved, and face recognition efficiency is improved in turn.
Example 2
The embodiment of the invention provides a face image synthesis method, applied to application scenarios such as role playing, face swapping, and novelty photography, in which the face information in a first image replaces the face information in a second image. As shown in fig. 2, the face image synthesis method of this embodiment comprises the following steps:
step S21: the acquired first and second images are received, respectively. In this embodiment, the first image may be image information including a face image acquired by the image capturing apparatus, for example, captured image information, which may be a main face image in a composite image, or may be image information including face information preset by the system, that is, the first image specifically provides a face image having significant features for the composite image, for example, face images of friends, colleagues, or the like that actually exist in life; the second image may be an image including face image information as a background image, and at this time, the second image information provides the composite image with a background and a rough outline of the face, and similarly, may be a background image including face information acquired by an imaging device, or may be a background image including face information preset by a system, for example, a face image which can be searched on a network, or may be a star or hero person, or may be a face image which actually exists in life.
Step S22: extracting the face feature key points of the first image using the method described in embodiment 1 to obtain a plurality of triangular areas of the first target face image. Specifically, the face feature key point extraction method described in the above embodiment extracts the feature key point coordinates from the received first image information and triangulates the first target image information according to them, yielding a face image composed of a plurality of triangular areas.
Step S23: extracting the face feature key points of the second image using the method described in embodiment 1 to obtain a plurality of triangular areas of the second target face image. Specifically, the same extraction method extracts the face feature key points from the received second image information and triangulates the second image information according to the feature key point coordinates, yielding a face image composed of a plurality of triangular areas.
Steps S22 and S23 may be performed in either order, or simultaneously.
Step S24: according to the triangular areas of the first target face image and the second target face image, affine-mapping the triangular areas of the first target face image onto the corresponding triangular areas of the second target face image and fusing them to obtain the face synthetic image.
In an embodiment, the step S24 may specifically include the following steps in the execution process, as shown in fig. 5:
step S241: and affine the first target face image to the second target face image through a perspective transformation matrix algorithm according to a plurality of triangular areas of the first target face image and the second target face image, so as to obtain a third target face image. Specifically, the first target face image and the second target face image are images after triangulation obtained through the steps in the above embodiment, the perspective transformation matrix algorithm is one of the three-angle radial transformation, and the perspective transformation can keep the straightness of the images, namely, the straight lines in the original images are still straight lines after the perspective transformation.
Step S242: performing graying processing on the third target face image, and performing edge fusion by calling the seamless fusion model preset by the system to obtain a fourth target face image, the fourth target face image being the face synthetic image. In this embodiment, graying relies on the fact that in the RGB model a color with R = G = B represents a gray; each pixel of a gray image stores a single byte, a gray value in the range 0-255. A color RGB(R, G, B) can be converted to gray, for example, by the floating point method:

Gray = R × 0.3 + G × 0.59 + B × 0.11

There are various other methods of graying an image, for example the component method, which uses the brightness of the three components of the color image as the gray values of three gray images, one of which can be selected according to the application requirements. It is calculated according to the following formulas:

f1(i, j) = R(i, j), f2(i, j) = G(i, j), f3(i, j) = B(i, j)

where fk(i, j) (k = 1, 2, 3) is the gray value of the converted gray image at (i, j).
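A short sketch of the floating point graying method (note that OpenCV stores channels in B, G, R order; names are illustrative):

```python
import cv2
import numpy as np

def to_gray(image_bgr):
    # Floating point method: Gray = R * 0.3 + G * 0.59 + B * 0.11.
    b, g, r = cv2.split(image_bgr.astype(np.float32))
    return (0.3 * r + 0.59 * g + 0.11 * b).astype(np.uint8)
```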
Edge fusion is then performed on the synthesized image through the seamless fusion model in the system's preset OpenCV. After the first face image is affine-mapped onto the second face image, the skin tones of the two face images differ slightly, and a plain affine mapping leaves a hard seam where the edges join; performing edge fusion on the synthesized image with the seamless fusion model yields a face synthetic image in which the edges of the face image transition naturally.
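For illustration, OpenCV's seamlessClone (Poisson blending) is one concrete form such a preset seamless fusion model could take. The sketch below assumes the warped face image and the background are the same size and that hull_points outlines the affined face region:

```python
import cv2
import numpy as np

def fuse(warped_face, background, hull_points):
    # Mask over the convex hull of the affined face region.
    mask = np.zeros(background.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(hull_points), 255)
    x, y, w, h = cv2.boundingRect(np.int32(hull_points))
    center = (x + w // 2, y + h // 2)
    # Poisson blending smooths the seam so edges transition naturally.
    return cv2.seamlessClone(warped_face, background, mask,
                             center, cv2.NORMAL_CLONE)
```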
According to the face image synthesis method above, the face feature key point extraction method of the above embodiment is applied to the collected first and second images to obtain the plurality of triangular areas of the first and second target face images respectively; the triangular areas of the first target face image are then affine-mapped onto the corresponding triangular areas of the second target face image and the image edges are fused, finally yielding the face synthetic image. Because the face feature key point values are extracted from both face images, face images recording the feature key point coordinates are obtained, and one face image can be affine-mapped directly onto the other. The synthetic face image is thus obtained quickly and accurately, the heavy pose requirement on the face images to be synthesized is overcome, and the efficiency and accuracy of face synthesis are improved.
Example 3
The embodiment provides an extraction device for key points of facial features, as shown in fig. 6, including:
the first receiving module 61 is configured to receive the acquired image information, and details of implementation of the first receiving module may be described in connection with step S11 of the above-described method embodiment.
The processing module 62 is configured to perform binarization processing on the image information to obtain a binarized image, and details of implementation can be found in the related description of step S12 of the above method embodiment.
The detection module 63 is configured to detect a target face image of the image information, and details of implementation of the detection module can be found in the description of step S13 of the above-described method embodiment.
The feature key point extracting module 64 is configured to extract feature key points of the target face image, obtain coordinates of the feature key points, and refer to the related description of step S14 in the above method embodiment for details.
The subdivision module 65 is configured to triangulate the target face image according to coordinates of the feature key points to obtain a plurality of triangular regions, and details of implementation can be found in the related description of step S15 of the above method embodiment.
The face feature key point extraction device above overcomes the defect that the related art places high requirements on the pose of face images: the device only needs to acquire the coordinates of the face feature key points in order to perform the triangulation operation, so the efficiency of extracting the face image feature key points is improved, and face recognition efficiency is improved in turn.
Example 4
The present embodiment provides a face image synthesizing apparatus, as shown in fig. 7, including:
the second receiving module 71 is configured to receive the acquired first image and the second image respectively, and details of implementation can be found in the description related to step S21 of the above method embodiment.
The first face feature key point extracting module 72 is configured to extract the face feature key points of the first image by using the face feature key point extracting device of embodiment 3, so as to obtain a plurality of triangular areas of the first target face image, and details of implementation can be seen in the related description of step S22 of the above method embodiment.
The second face feature key point extracting module 73 is configured to extract the face feature key points of the second image by using the face feature key point extraction device of embodiment 3, so as to obtain a plurality of triangular areas of the second target face image; details of implementation can be seen in the related description of step S23 of the foregoing method embodiment.
The affine module 74 is configured to affine-map the triangular areas of the first target face image onto the corresponding triangular areas of the second target face image according to the triangular areas of the first and second target face images, and to perform image edge fusion to obtain the face synthetic image; details of implementation can be seen in the related description of step S24 of the above method embodiment.
The face image synthesis device of this embodiment extracts the face feature key point values of the two face images respectively to obtain face images recording the feature key point coordinates, and can directly affine-map one face image onto the other. This overcomes the heavy pose requirement on the face images to be synthesized in the related face synthesis technology, and the synthesized face image is obtained quickly and accurately, improving the efficiency, accuracy, and flexibility of face synthesis.
Example 5
The present embodiment provides a control apparatus, as shown in fig. 8, including:
first communication module 811: the system is used for transmitting data, receiving collected image information or respectively receiving collected first image information and second image information; the first communication module can be a Bluetooth module and a Wi-Fi module, and then communicates through a set wireless communication protocol.
First controller 812: connected to the first communication module 811 and, as shown in fig. 9, comprising: at least one processor 91; and a memory 92 communicatively coupled to the at least one processor 91. The memory 92 stores instructions executable by the at least one processor 91; when data information is received, the at least one processor 91 executes the face feature key point extraction method shown in fig. 1 or the face image synthesis method shown in fig. 2. In fig. 9, one processor is taken as an example; the processor 91 and the memory 92 are connected through the bus 90. In this embodiment, the first communication module may be a wireless communication module, for example a Bluetooth module or a Wi-Fi module, or a wired communication module; the transmission between the first controller 812 and the first communication module 811 is wireless.
The memory 92, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the face feature key point extraction method or the face image synthesis method in the embodiments of the present application. By running the non-transitory software programs, instructions, and modules stored in the memory 92, the processor 91 executes the various functional applications and data processing of the server, that is, the face feature key point extraction method or the face image synthesis method of the above embodiment.
Memory 92 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of a processing device operated by the server, or the like. In addition, the memory 92 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 92 may optionally include memory remotely located relative to processor 91, which may be connected to the network connection device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 92 that, when executed by the one or more processors 91, perform the method described in any of the above embodiments.
The embodiment of the invention also provides a non-transitory computer readable medium, which stores computer instructions for causing a computer to execute the method for extracting key points of facial features or the method for synthesizing facial images described in any of the above embodiments, wherein the storage medium may be a magnetic Disk, a compact disc, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a Flash Memory (Flash Memory), a Hard Disk (HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.
The above examples are given by way of illustration only and do not limit the embodiments. Other variations or modifications based on the above teachings will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to exhaustively list all embodiments here. Obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (9)

1. A face feature key point extraction method, characterized by comprising the following steps:
receiving collected image information;
performing binarization processing on the image information to obtain a binarized image;
detecting a target face image in the image information;
extracting characteristic key points of the target face image, and acquiring coordinates of the characteristic key points;
triangulating the target face image according to the coordinates to obtain a plurality of triangular areas;
wherein the detecting the target face image in the image information comprises:
detecting the binarized image by using a face area detection model preset by a system to obtain at least one face area in the binarized image;
storing the starting point coordinates X, Y of the detected face area and the width and height of the face area;
calculating the area of the face region according to the starting point coordinates X, Y of the face region and the width and height of the face region;
selecting a face region with the largest area in the binarized image as a target face image;
wherein the storing the detected starting point coordinates X, Y of the face region and the width and height of the face region comprises:
converting the detected face region into a rectangular region through a face region detection model;
storing the detected starting point coordinates X, Y of the rectangular area and the width and height of the face area;
the face area is calculated by the following formula:
S=f(x)W1*f(x)H1
where S represents the area of the region to be calculated, W1 represents the width of the rectangular region, H1 represents the height of the rectangular region, and f (x) represents the weight value.
2. A face image synthesis method, characterized by comprising the following steps:
respectively receiving the acquired first image and second image;
extracting key points of the face features of the first image by adopting the method of claim 1 to obtain a plurality of triangular areas of the first target face image;
extracting key points of the face features of the second image by adopting the method of claim 1 to obtain a plurality of triangular areas of the second target face image;
and affine-fusing the triangular areas of the first target face image to the corresponding triangular areas of the second target face image according to the triangular areas of the first target face image and the second target face image to obtain a face synthetic image.
3. The face feature key point extraction method according to claim 1, wherein the extracting feature key points of the target face image and acquiring coordinates of the feature key points comprises:
and detecting the target face image by calling a face feature key point detection model preset by the system, extracting a plurality of feature key points in the target face image, and caching the coordinates of each feature key point.
4. The face image synthesis method according to claim 2, wherein the affine-mapping the plurality of triangular areas of the first target face image onto the corresponding plurality of triangular areas of the second target face image according to the plurality of triangular areas of the first target face image and the second target face image, and the performing image edge fusion, comprises:
affine-mapping the first target face image onto the second target face image through a perspective transformation matrix algorithm according to the triangular areas of the first target face image and the second target face image to obtain a third target face image;
and performing graying processing on the third target face image, and performing edge fusion by calling a seamless fusion model preset by the system to obtain a fourth target face image, wherein the fourth target face image is the face synthetic image.
5. A face feature key point extraction device, characterized by comprising:
the first receiving module is used for receiving the acquired image information;
the processing module is used for carrying out binarization processing on the image information to obtain a binarized image;
the detection module is used for detecting the target face image in the image information;
the feature key point extracting module is used for extracting feature key points of the target face image and acquiring coordinates of the feature key points;
the subdivision module is used for triangulating the target face image according to the coordinates to obtain a plurality of triangular areas;
wherein the detecting the target face image in the image information comprises:
detecting the binarized image by using a face area detection model preset by a system to obtain at least one face area in the binarized image;
storing the starting point coordinates X, Y of the detected face area and the width and height of the face area;
calculating the area of the face region according to the starting point coordinates X, Y of the face region and the width and height of the face region;
selecting a face region with the largest area in the binarized image as a target face image;
wherein the storing the detected starting point coordinates X, Y of the face region and the width and height of the face region comprises:
converting the detected face region into a rectangular region through a face region detection model;
storing the detected starting point coordinates X, Y of the rectangular area and the width and height of the face area;
the face area is calculated by the following formula:
S=f(x)W1*f(x)H1
where S represents the area of the region to be calculated, W1 represents the width of the rectangular region, H1 represents the height of the rectangular region, and f (x) represents the weight value.
6. A face image synthesizing apparatus, comprising:
the second receiving module is used for respectively receiving the acquired first image and second image;
a first face feature key point extraction module, configured to extract face feature key points of the first image by using the apparatus of claim 5, so as to obtain a plurality of triangular areas of the first target face image;
a second face feature key point extraction module, configured to extract face feature key points of the second image by using the apparatus of claim 5, so as to obtain a plurality of triangular areas of the second target face image;
and the affine module is used for affine-mapping the triangular areas of the first target face image onto the corresponding triangular areas of the second target face image according to the triangular areas of the first target face image and the second target face image, and performing image edge fusion to obtain a face synthetic image.
7. A face feature key point extraction system, characterized by comprising:
at least one control device, configured to perform the face feature key point extraction method according to any one of claims 1 and 3, and to extract the feature key point coordinates of the target face image in the received image information.
8. A face image synthesis system, comprising:
at least one control device, configured to perform the face image synthesis method according to any one of claims 2 and 4, and to synthesize the target face image in the received image information.
9. A computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the face feature key point extraction method according to any one of claims 1 and 3, or the face image synthesis method according to any one of claims 2 and 4.
CN201911128636.3A 2019-11-18 2019-11-18 Face feature key point extraction method and face image synthesis method Active CN110879983B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911128636.3A CN110879983B 2019-11-18 2019-11-18 Face feature key point extraction method and face image synthesis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911128636.3A CN110879983B 2019-11-18 2019-11-18 Face feature key point extraction method and face image synthesis method

Publications (2)

Publication Number Publication Date
CN110879983A CN110879983A (en) 2020-03-13
CN110879983B true CN110879983B (en) 2023-07-25

Family

ID=69728942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911128636.3A CN110879983B 2019-11-18 2019-11-18 Face feature key point extraction method and face image synthesis method

Country Status (1)

Country Link
CN (1) CN110879983B

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915479B (en) * 2020-07-15 2024-04-26 抖音视界有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN113658035B (en) * 2021-08-17 2023-08-08 北京百度网讯科技有限公司 Face transformation method, device, equipment, storage medium and product


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506693B (en) * 2017-07-24 2019-09-20 深圳市智美达科技股份有限公司 Distort face image correcting method, device, computer equipment and storage medium
CN108876718B (en) * 2017-11-23 2022-03-22 北京旷视科技有限公司 Image fusion method and device and computer storage medium
CN108876705B (en) * 2017-11-23 2022-03-22 北京旷视科技有限公司 Image synthesis method and device and computer storage medium
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018001092A1 (en) * 2016-06-29 2018-01-04 中兴通讯股份有限公司 Face recognition method and apparatus
WO2018188453A1 (en) * 2017-04-11 2018-10-18 腾讯科技(深圳)有限公司 Method for determining human face area, storage medium, and computer device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Delaunay Triangulation based 3D Human Face Modeling from Uncalibrated Images; R. Hassanpour et al.; IEEE Xplore; full text *
Key point matching method for 3D face recognition; Song Dingli, Yang Bingru, Yu Fuxing; Application Research of Computers, No. 11; full text *

Also Published As

Publication number Publication date
CN110879983A 2020-03-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant