CN110879983A - Face feature key point extraction method and face image synthesis method - Google Patents

Face feature key point extraction method and face image synthesis method

Info

Publication number
CN110879983A
CN110879983A (application CN201911128636.3A)
Authority
CN
China
Prior art keywords
image
face
face image
target
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911128636.3A
Other languages
Chinese (zh)
Other versions
CN110879983B (en)
Inventor
闫宏伟
梅齐勇
聂猛猛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xunfei Fantasy Beijing Technology Co Ltd
Original Assignee
Xunfei Fantasy Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xunfei Fantasy Beijing Technology Co Ltd filed Critical Xunfei Fantasy Beijing Technology Co Ltd
Priority to CN201911128636.3A priority Critical patent/CN110879983B/en
Publication of CN110879983A publication Critical patent/CN110879983A/en
Application granted granted Critical
Publication of CN110879983B publication Critical patent/CN110879983B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a face feature key point extraction method and a face image synthesis method. The face feature key point extraction method comprises the following steps: receiving collected image information, performing binarization processing on the image information to obtain a binarized image, and detecting a face image in the image information; extracting the feature key points of the face image, acquiring the coordinates of the feature key points, and triangulating the face image according to the coordinates of the feature key points to obtain a plurality of triangular regions. By extracting the feature key point coordinates of both the collected image information and the image information to be synthesized, the invention significantly improves face recognition efficiency.

Description

Face feature key point extraction method and face image synthesis method
Technical Field
The invention relates to the field of computer image recognition and processing, in particular to a method for extracting key points of human face features and a method for synthesizing human face images.
Background
Today, with the development of science and technology, intelligent computing devices play an increasingly important role in our lives. To make life more convenient, computer image processing technology has been applied to many aspects of daily life. With the falling cost of computer hardware, the rising speed of central processing units, and the maturing of image recognition and processing technology, its applications in everyday life are becoming ever more widespread and convenient.
At present, the image recognition pipeline is divided into information acquisition, preprocessing, feature extraction and selection, classifier design, and classification decision. Most image recognition technologies are built on the OpenCV open source library, loading other open source libraries and existing algorithms (such as dlib, image binarization, and the Otsu algorithm) to implement new algorithms while minimizing use of computer resources, so as to meet different recognition requirements and process different kinds of images. However, current technology places strong restrictions on the form of the extracted face: when face information is extracted, a fairly frontal face image must be provided, so the recognition efficiency is low.
Disclosure of Invention
Therefore, the technical problem to be solved by the present invention is to overcome the defects of the existing related technologies, which impose many restrictions on the form of the extracted face image, require a fairly frontal face image when face image information is extracted, and therefore have low recognition efficiency, by providing a method for extracting face feature key points.
According to a first aspect, the embodiment of the invention discloses a method for extracting key points of human face features, which comprises the following steps: the method comprises the steps of receiving collected image information, carrying out binarization processing on the image information to obtain a binarization image, detecting a target face image in the image information, extracting feature key points of the target face image, obtaining coordinates of the feature key points, and carrying out triangulation on the face image according to the coordinates to obtain a plurality of triangular areas.
According to a second aspect, the embodiment of the invention discloses a face image synthesis method, which comprises the following steps: respectively receiving a first image and a second image which are collected, adopting the face feature key point extraction method of the first aspect to extract face feature key points of the first image to obtain a plurality of triangular areas of a first target face image, adopting the face feature key point extraction method of the first aspect to extract face feature key points of the second image to obtain a plurality of triangular areas of a second target face image, affine-matching the plurality of triangular areas of the first target face image to the plurality of corresponding triangular areas of the second target face image according to the plurality of triangular areas in the first target face image and the second target face image, and carrying out image edge fusion to obtain a face synthetic image.
With reference to the first aspect, in a first implementation manner of the first aspect, the detecting a face image of the image information includes: detecting the binary image by using a face region detection model preset by a system to obtain at least one face region in the binary image, storing the start point coordinates X, Y of the detected face region and the width and height of the face region, calculating the area of the face region according to the start point coordinates X, Y of the face region and the width and height of the face region, and selecting the face region with the largest area in the binary image as a target face image.
With reference to the first aspect, in a second implementation manner of the first aspect, the extracting feature key points of the target face image, and acquiring coordinates of the feature key points includes: and detecting the target face image by calling a face feature key point detection model preset by a system, extracting a plurality of feature key points in the target face image, and caching the coordinates of each feature key point.
With reference to the second aspect, in a first implementation manner of the second aspect, performing image edge fusion by affine-mapping the plurality of triangular regions of the first target face image onto the plurality of triangular regions of the second target face image, according to the plurality of triangular regions of the first target face image and the second target face image, to obtain a face synthesis image includes: according to the plurality of triangular regions of the first target face image and the second target face image, affine-mapping the first target face image onto the second target face image through a perspective transformation matrix algorithm to obtain a third target face image, performing graying processing on the third target face image, and performing edge fusion by calling a system-preset seamless fusion model to obtain a fourth target face image, wherein the fourth target face image is the face synthesis image.
According to a third aspect, the embodiment of the invention discloses a device for extracting key points of human face features, comprising: the system comprises a first receiving module, a processing module, a detection module, a feature key point extraction module and a subdivision module, wherein the first receiving module is used for receiving collected image information, the processing module is used for carrying out binarization processing on the image information to obtain a binarized image, the detection module is used for detecting a target face image of the image information, the feature key point extraction module is used for extracting feature key points of the target face image to obtain coordinates of the feature key points, and the subdivision module is used for carrying out triangulation on the target face image according to the coordinates to obtain a plurality of triangular areas.
According to a fourth aspect, an embodiment of the present invention discloses a face image synthesis apparatus, including: a second receiving module, configured to receive the first image and the second image respectively, a first face feature key point extracting module, configured to perform face feature key point extraction on the first image by using the face feature key point extracting device of the third aspect to obtain a plurality of triangular regions of a first target face image, a second face feature key point extracting module, configured to perform face feature key point extraction on the second image by using the face feature key point extracting device of the third aspect to obtain a plurality of triangular regions of a second target face image, and an affine module, configured to perform image edge fusion by affine-matching the plurality of triangular regions of the first target face image to the corresponding plurality of triangular regions of the second target face image according to the plurality of triangular regions of the first target face image and the second target face image, and obtaining a human face synthetic image.
According to a fifth aspect, the embodiment of the invention discloses a system for extracting face feature key points, which comprises: at least one control device, configured to execute the method for extracting key points of a face according to the first aspect or any embodiment of the first aspect, and extract coordinates of key points of a target face image in received image information.
According to a sixth aspect, an embodiment of the present invention discloses a face image synthesis system, including: at least one control device, configured to execute the method for synthesizing a face image according to the second aspect or any embodiment of the second aspect, and synthesize a destination face image in the received image information.
According to a seventh aspect, the embodiment of the present invention discloses a computer-readable and writable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the face feature key point extraction method according to the first aspect or any embodiment of the first aspect, or the steps of the face image synthesis method according to the second aspect or any embodiment of the second aspect.
The technical scheme of the invention has the following advantages:
the invention provides a face feature key point extraction method and a face image synthesis method and apparatus. The face feature key point extraction method receives collected image information; performs binarization processing on the image information to obtain a binarized image; detects a target face image in the image information; extracts feature key points of the target face image and acquires their coordinates; and triangulates the target face image according to the feature key point coordinates to obtain a plurality of triangular regions. By acquiring the coordinate information of the feature key points of the target face image, the requirement on the form of the extracted target face image during key point extraction is lowered: a fairly frontal face image is no longer needed, the extraction efficiency of face feature key points is improved, and the face recognition efficiency is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a specific example of a method for extracting key points of facial features in embodiment 1 of the present invention;
fig. 2 is a flowchart of a specific example of a face image synthesis method according to embodiment 2 of the present invention;
fig. 3 is a specific flowchart of detecting a face image in image information in the face feature key point extraction method according to embodiment 1 of the present invention;
fig. 4 is a specific flowchart of obtaining feature key point coordinates in a method for extracting face feature key points according to embodiment 1 of the present invention;
fig. 5 is a specific flowchart of obtaining a face synthesis image in a face image synthesis method according to embodiment 2 of the present invention;
fig. 6 is a flowchart of a specific example of an apparatus for extracting key points of facial features according to embodiment 3 of the present invention;
fig. 7 is a flowchart of a specific example of a face image synthesis apparatus according to embodiment 4 of the present invention;
fig. 8 is a block diagram of a control apparatus in embodiment 5 of the present invention;
fig. 9 is a block diagram of a first controller in the control device in embodiment 5 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
The embodiment of the invention provides a face feature key point extraction method, which is applied to the process of extracting face feature key points, in particular to the application scenario in which one wants to extract feature values of the face region in collected image information. As shown in fig. 1, the method in this embodiment comprises the following steps:
step S11: and receiving the acquired image information. The image information collected here is specifically divided into two types, the first type may be image information including a face image collected by the image pickup device, for example, shot image information, the shot image information may be a main face image in a synthesized image or image information including face information preset by the system, and the second type may be an image including face image information as a background image, and similarly, may be a background image including face information collected by the image pickup device or a background image including face information preset by the system.
Step S12: and carrying out binarization processing on the image information to obtain a binarized image. Specifically, the image binarization processing here means that the gray values of all pixel points on the image are set to be 0 or 255, that is, the whole image can exhibit an obvious black-and-white effect, and the data amount in the image can be significantly reduced.
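The binarization described in step S12 can be sketched with plain NumPy (a minimal illustration; a real implementation would more likely call OpenCV's cv2.threshold, possibly with the Otsu flag mentioned in the background section; the fixed threshold of 128 below is an assumption, not a value from the patent):

```python
import numpy as np

def binarize(gray_image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Set every pixel gray value to 0 or 255, giving a pure black-and-white image."""
    # Pixels at or above the threshold become 255 (white), the rest 0 (black).
    return np.where(gray_image >= threshold, 255, 0).astype(np.uint8)

img = np.array([[10, 200], [128, 90]], dtype=np.uint8)
print(binarize(img).tolist())  # [[0, 255], [255, 0]]
```

Every output pixel is 0 or 255, so the whole image exhibits the obvious black-and-white effect described above while the amount of data to process drops significantly.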
Step S13: detecting the target face image in the image information. Specifically, the image information directly collected by the camera device generally contains many kinds of information, for example face image information, human body image information, background image information, and any other kind of information the camera device may capture; from all of this, the system detects a face image bearing face feature key points.
In an embodiment, the step S13 may specifically include, as shown in fig. 3, the following steps in the execution process:
step S131: and detecting the binary image by using a face region detection model preset by the system to obtain at least one face region in the binary image. Specifically, the face region detection model preset by the system is a face detection model carried in an Open Computer Source code Vision Library (OPENCV), and the face region detection model is operated to detect that any region exists on an image.
Step S132: saving the start point coordinates and the width and height of the detected face region. Specifically, the face region detection model in system-preset OpenCV returns each detected face region as a rectangular, circular, elliptical, or triangular region. For example, if the model detects two regions in the image, the system-preset model converts them into two rectangular regions, a first rectangular region and a second rectangular region; at this time, the first origin coordinate, width W1, and height H1 of the first rectangular region, and the second origin coordinate, width W2, and height H2 of the second rectangular region are saved.
Step S133: calculating the area of the face region from the start point coordinates X, Y of the face region and the width and height of the face region. Specifically, for each of the two regions detected by the model in the collected image information, the area of the region can be calculated from its width and height using the following formula:
S = f(x)·W1 * f(x)·H1
Wherein S represents an area of a region to be calculated, W1 represents a width of a rectangular region, H1 represents a height of the rectangular region, f (x) represents a weight value, and f (x) may be set according to a specific scenario in an actual application, which is not limited by the present invention.
Step S134: and selecting the face area with the largest area in the binarized image as the target face image. In a specific embodiment, the areas of all the regions in the binarized image are respectively calculated as S1 and S2 through the above steps, the sizes of S1 and S2 are directly compared, and the region with the largest area is selected as the target face image.
In the above steps S131 to S134, the binarized image is detected by the face region detection model preset in the system, all face regions in the binarized image are obtained, then the start coordinates X, Y of the detected face region and the width and height of the face region are saved, finally, the face region area is calculated by the start coordinates X, Y of the face region and the width and height of the face region, and the face region with the largest area is determined as the target face image.
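The selection logic of steps S131 to S134 can be sketched as a small helper. The (x, y, w, h) tuple format is an assumption modeled on what OpenCV cascade detectors return, and the weight f(x) from the area formula is taken to be the constant 1 here, since the patent leaves it scenario-specific:

```python
def largest_face_region(regions, weight=1.0):
    """Pick the detected region with the largest (weighted) area as the target face.

    Each region is an (x, y, w, h) tuple: the start point coordinates X, Y plus
    the width and height saved in step S132.
    """
    if not regions:
        return None
    # Area per the formula S = f(x)*W * f(x)*H, with f(x) assumed constant.
    return max(regions, key=lambda r: (weight * r[2]) * (weight * r[3]))

regions = [(10, 10, 40, 50), (100, 80, 60, 60)]  # areas 2000 and 3600
print(largest_face_region(regions))  # (100, 80, 60, 60)
```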
Step S14: and extracting the characteristic key points of the face image and acquiring the coordinates of the characteristic key points. Specifically, there are generally 68 feature key points of the face image, and the coordinates of the 68 feature key points are extracted and cached.
In an embodiment, the step S14 may specifically include, as shown in fig. 4, the following steps in the execution process:
step S141: and detecting a target face image by calling a system preset face feature key point detection model, extracting a plurality of feature key points in the face image, and caching the coordinates of each feature key point. Specifically, what is called is a system preset face feature key point detection model carried in OPENCV, and the model is called to detect information of face feature key points in a face image, for example, the face feature key points include: eyebrows; eyes, e.g., upper eyelid, lower eyelid; a nose, e.g., the nasal bridge, nostrils; mouth, e.g., upper lip, lower lip; the outline of the face, for example, two cheek edge positions, a chin edge position. In general, the face has 68 feature key points, and if the face feature key point detection model in other open source libraries is called, the feature key points of the face may be 72 points, 96 points or a plurality of points.
By implementing the step S141, the face feature key points are extracted by using the face key point detection model, and the coordinates of the feature key points are cached, so that the feature key points and the feature key point coordinates of the face are accurately extracted, the requirement on the face shape when the face feature key points are extracted is reduced, and the efficiency of extracting the face feature key points and the coordinates thereof is improved.
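The feature groups enumerated in step S141 correspond to the widely used 68-point facial landmark annotation scheme (the one dlib's shape predictor is trained on, for example). The index ranges below come from that public scheme and are illustrative, not quoted from the patent:

```python
# Index ranges of the common 68-point facial landmark scheme (0-based).
LANDMARK_GROUPS = {
    "jaw":           range(0, 17),   # face outline: cheek edges and chin
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),  # nasal bridge and nostrils
    "right_eye":     range(36, 42),  # upper and lower eyelid points
    "left_eye":      range(42, 48),
    "mouth":         range(48, 68),  # upper and lower lip, inner and outer
}

total = sum(len(r) for r in LANDMARK_GROUPS.values())
print(total)  # 68
```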
Step S15: triangulating the face image according to the coordinates of the feature key points to obtain a plurality of triangular regions. In this embodiment, the feature key point coordinates may be the face feature key point coordinates extracted in any of the above embodiments, and the face image is the face image region extracted by the method of any of the above embodiments. The principle of triangulation is as follows: assume V is a finite point set in the two-dimensional real domain, an edge e is a closed line segment whose endpoints are points of the set, and E is the set of such edges. Then a triangulation T = (V, E) of the point set V is a planar graph G that simultaneously satisfies the following conditions: 1. no edge in the graph contains any point of the point set other than its endpoints; 2. no edges intersect; 3. all faces of the graph are triangular, and the union of all the triangular faces is the convex hull of the scattered point set V. Specifically, the finite point set V in the two-dimensional real domain may be the set of feature key points in the face image, and the planar graph is then the plurality of triangular regions obtained after triangulation.
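In practice the triangulation of step S15 is usually a Delaunay triangulation of the key point set. A minimal sketch with SciPy (OpenCV's cv2.Subdiv2D is another common choice; this is an assumed illustration, not the patent's implementation):

```python
import numpy as np
from scipy.spatial import Delaunay

# A toy "finite point set V": four landmarks in convex position.
points = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0], [2.0, 2.0]])

tri = Delaunay(points)
# Every face of the resulting planar graph is a triangle (3 vertex indices),
# and together the triangles cover the convex hull of V.
print(tri.simplices.shape)  # (2, 3): two triangular regions
```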
The face feature key point extraction method in the embodiment of the invention obtains a binarized image by binarizing the received collected image information, detects the face image in the image information, extracts the face image feature key points and stores their coordinates, and finally triangulates the target face image according to those coordinates to obtain a plurality of triangular regions. Because only the coordinates of the face feature key points are obtained, and these are coordinates relative to a coordinate system, the absolute coordinates of the key points change when the face pose is not frontal, but their relative coordinates do not; triangulation then directly yields a face image composed of a plurality of triangular regions. The requirement on the form of the face image is therefore low, the efficiency of extracting face image feature key points is improved, and the face recognition efficiency is further improved.
Example 2
The embodiment of the invention provides a face image synthesis method, which is applied to specific application scenarios such as role playing, face changing, and fun photography, in which the face information in a second image is replaced with the face information in a first image. As shown in fig. 2, the face image synthesis method in this embodiment comprises the following steps:
step S21: the first and second captured images are received, respectively. In this embodiment, the first image may be image information including a face image acquired by an image pickup device, such as shot image information, and the shot image information may be a main face image in a synthesized image, or may also be image information including face information preset by a system, that is, the first image specifically provides a face image with significant features for the synthesized image, and may be, for example, a face image of a friend, a colleague, or the like actually existing in life; the second image may be an image including face image information as a background image, where the second image information provides a background and an approximate outline of a face for the composite image, and similarly, the second image may be a background image including face information acquired by an image pickup device, or a system-preset background image including face information, such as a face image that can be searched on a network, and may be a star, hero, or a face image actually existing in life.
Step S22: the method described in embodiment 1 is adopted to extract the key points of the face features of the first image, and a plurality of triangular regions of the first target face image are obtained. Specifically, by using the method for extracting key points of human face features described in the above embodiment, coordinates of key points of human face features are extracted from received first image information, and triangulation is performed on first target image information according to the coordinates of the key points of features, so as to obtain a human face image composed of a plurality of triangular regions.
Step S23: and extracting the key points of the face features of the second image by adopting the method in the embodiment 1 to obtain a plurality of triangular areas of the second target face image. Specifically, by using the method for extracting facial feature key points in the above embodiment, coordinates of a facial region and feature key points of the received second image information are extracted, and triangulation is performed on the second image information according to the feature key point coordinates, so as to obtain a facial image composed of a plurality of triangular regions.
The steps S22 and S23 may be performed in either order or simultaneously.
Step S24: affine-mapping the plurality of triangular regions of the first target face image onto the corresponding plurality of triangular regions of the second target face image, according to the plurality of triangular regions in the first target face image and the second target face image, and performing image edge fusion to obtain a face synthesis image.
In an embodiment, the step S24 may specifically include, as shown in fig. 5, the following steps in the execution process:
step S241: and according to the plurality of triangular areas of the first target face image and the second target face image, simulating the first target face image to the second target face image through a perspective transformation matrix algorithm to obtain a third target face image. Specifically, the first target face image and the second target face image are triangulated images obtained through the steps in the above embodiments, respectively, the perspective transformation matrix algorithm is one of the trigonometric radiation transformation, and the perspective transformation energy-saving maintains the "linearity" of the images, that is, the straight line in the original image remains as a straight line after the perspective transformation.
Step S242: graying the third target face image, and performing edge processing by calling the system-preset seamless fusion model to obtain a fourth target face image, the fourth target face image being the face synthesis image. In this embodiment, graying means that in the RGB model, a color with R = G = B is a grayscale color, and its gray value can be computed from the original color RGB(R, G, B), for example by the floating-point formula Gray = R×0.3 + G×0.59 + B×0.11, so that each pixel of the grayscale image needs to store only one byte of gray value, with a gray range of 0-255. Specifically, there are many methods of graying an image; for example, the component method uses the brightness of the three components of the color image as the gray values of three grayscale images, one of which can be selected according to application requirements, calculated according to the following formulas:
f1(i, j) = R(i, j),  f2(i, j) = G(i, j),  f3(i, j) = B(i, j)
where fk(i, j) (k = 1, 2, 3) is the grayscale value of the k-th converted grayscale image at (i, j).
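By way of a non-limiting sketch, the floating-point weighting and the component method described above can be expressed in NumPy as follows (the function names are illustrative):

```python
import numpy as np

def to_gray(rgb):
    """Weighted ('floating-point') grayscale: Gray = R*0.3 + G*0.59 + B*0.11.

    rgb: H x W x 3 uint8 array in R, G, B channel order.
    Returns an H x W uint8 image; each pixel stores one byte in 0-255.
    """
    weights = np.array([0.3, 0.59, 0.11])
    gray = rgb.astype(np.float64) @ weights
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)

def component_gray(rgb, k):
    """Component method: use channel k (0=R, 1=G, 2=B) directly,
    i.e. f_k(i, j) = channel_k(i, j)."""
    return rgb[..., k].copy()
```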
Edge fusion is then performed on the synthetic image through the seamless fusion model in OpenCV preset by the system. After the first face image is affine-mapped onto the second face image, the skin tones of the two face images differ slightly, and a purely affine paste leaves a hard seam at the edge joint; therefore, the seamless fusion model is adopted to perform edge fusion on the synthetic image, obtaining a face synthetic image with natural edge transition of the face image.
The face image synthesis method in the embodiment of the invention performs the face feature key point extraction method of the above embodiment on the collected first image and second image respectively to obtain the plurality of triangular regions of the first target face image and of the second target face image, then affine-maps the plurality of triangular regions of the first target face image onto the corresponding plurality of triangular regions of the second target face image with image edge fusion according to those triangular regions, and finally obtains the face synthetic image. Because feature key point coordinates are extracted from each of the two face images, one face image can be directly affine-mapped onto the other, so that the synthesized face image is obtained quickly and accurately; the defect of high requirements on the form of the face images to be synthesized in the face synthesis process is overcome, and the efficiency and accuracy of face synthesis are improved.
Example 3
The present embodiment provides an apparatus for extracting key points of human face features, as shown in fig. 6, including:
the first receiving module 61 is configured to receive the acquired image information, and details of implementation may be referred to in the related description of step S11 of the foregoing method embodiment.
The processing module 62 is configured to perform binarization processing on the image information to obtain a binarized image, and details of implementation may be referred to in the related description of step S12 of the foregoing method embodiment.
The detection module 63 is configured to detect the target face image in the image information, and details of implementation may be referred to in the related description of step S13 of the foregoing method embodiment.
The feature key point extracting module 64 is configured to extract feature key points of the target face image, and obtain coordinates of the feature key points, and details of implementation may be referred to in the related description of step S14 in the foregoing method embodiment.
The subdivision module 65 is configured to triangulate the target face image according to the coordinates of the feature key points to obtain a plurality of triangular regions, and the detailed implementation contents may refer to the related description of step S15 in the foregoing method embodiment.
Because the triangulation operation only requires acquiring the coordinates of the face feature key points, the extraction apparatus for face feature key points in the embodiment of the invention overcomes the defect in the related art of high requirements on the form of the face image, and improves the efficiency of extracting face image feature key points and the efficiency of face recognition.
Example 4
The present embodiment provides a face image synthesis apparatus, as shown in fig. 7, including:
the second receiving module 71 is configured to receive the first image and the second image respectively, and the detailed implementation contents may be referred to in the related description of step S21 of the above method embodiment.
The first facial feature keypoint extraction module 72 is configured to perform facial feature keypoint extraction on the first image by using the facial feature keypoint extraction device in embodiment 3 to obtain a plurality of triangular regions of the first target facial image, and the detailed implementation content may be referred to the related description of step S22 in the foregoing method embodiment.
The second facial feature keypoint extraction module 73 is configured to perform facial feature keypoint extraction on the second image by using the facial feature keypoint extraction device in embodiment 3, so as to obtain a plurality of triangular regions of the second target facial image, and the detailed implementation content may refer to the related description of step S23 in the foregoing method embodiment.
The affine module 74 is configured to, according to the plurality of triangular regions of the first target face image and the second target face image, affine-map the plurality of triangular regions of the first target face image onto the corresponding plurality of triangular regions of the second target face image and perform image edge fusion to obtain a face synthetic image, and details of implementation may be referred to in the related description of step S24 of the foregoing method embodiment.
The face image synthesis apparatus in the embodiment of the invention extracts the face feature key points of the two face images respectively to obtain face images recorded with feature key point coordinates, so that one face image can be directly affine-mapped onto the other. This overcomes the defect in the existing related face synthesis technology of high requirements on the form of the face images to be synthesized, and the synthesized face image is obtained quickly and accurately, improving the efficiency, accuracy and flexibility of face synthesis.
Example 5
The present embodiment provides a control apparatus, as shown in fig. 8, including:
the first communication module 811: the system is used for transmitting data, receiving the acquired image information or respectively receiving the acquired first image information and second image information; the first communication module can be a Bluetooth module and a Wi-Fi module, and then communication is carried out through a set wireless communication protocol.
The first controller 812: connected to the first communication module 811 and, as shown in fig. 9, including: at least one processor 91; and a memory 92 communicatively coupled to the at least one processor 91. The memory 92 stores instructions executable by the at least one processor 91; upon receiving data information, the at least one processor 91 executes the method for extracting face feature key points shown in fig. 1 or the method for synthesizing face images shown in fig. 2. In fig. 9, one processor is taken as an example, and the processor 91 and the memory 92 are connected by a bus 90. In this embodiment, the first communication module may be a wireless communication module, such as a Bluetooth module or a Wi-Fi module, or may be a wired communication module; in the illustrated example, transmission between the first controller 812 and the first communication module 811 is wireless.
The memory 92 is a non-transitory computer readable storage medium, and can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for extracting key points of facial features or the method for synthesizing facial images in the embodiments of the present application. The processor 91 executes various functional applications of the server and data processing, i.e., a method of extracting key points of facial features or a method of synthesizing facial images in the above-described embodiments, by running a non-transitory software program, instructions, and modules stored in the memory 92.
The memory 92 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of a processing device operated by the server, and the like. Further, memory 92 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 92 may optionally include memory located remotely from the processor 91, which may be connected to a network connection device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 92, which when executed by the one or more processors 91 perform the method described in any of the above embodiments.
The embodiment of the present invention further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for extracting face feature key points or the method for synthesizing face images described in any one of the above embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); the storage medium may also comprise a combination of memories of the above kinds.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. A method for extracting key points of human face features is characterized by comprising the following steps:
receiving collected image information;
carrying out binarization processing on the image information to obtain a binarized image;
detecting a target face image in the image information;
extracting the characteristic key points of the target face image, and acquiring the coordinates of the characteristic key points;
and triangulating the target face image according to the coordinates to obtain a plurality of triangular areas.
2. A method for synthesizing a face image, comprising:
respectively receiving a first image and a second image which are acquired;
extracting key points of human face features from the first image by adopting the method of claim 1 to obtain a plurality of triangular areas of a first target human face image;
extracting the key points of the face features of the second image by adopting the method of claim 1 to obtain a plurality of triangular areas of a second target face image;
and according to the plurality of triangular regions of the first target face image and the second target face image, affine-mapping the plurality of triangular regions of the first target face image onto the corresponding plurality of triangular regions of the second target face image and performing image edge fusion to obtain a face synthetic image.
3. The method for extracting key points of human face features according to claim 1, wherein the detecting the target face image in the image information comprises:
detecting the binary image by using a face region detection model preset by a system to obtain at least one face region in the binary image;
saving X, Y coordinates of the start point of the detected face region and the width and height of the face region;
calculating the area of the face region according to the coordinates X, Y of the starting point of the face region and the width and height of the face region;
and selecting the face area with the largest area in the binarized image as a target face image.
4. The method for extracting key points of human face features according to claim 1, wherein the extracting key points of features of the target human face image and the obtaining coordinates of the key points of features comprises:
and detecting the target face image by calling a face feature key point detection model preset by a system, extracting a plurality of feature key points in the target face image, and caching the coordinates of each feature key point.
5. The method for synthesizing a face image according to claim 2, wherein the affine-mapping, according to the plurality of triangular regions of the first target face image and the second target face image, the plurality of triangular regions of the first target face image onto the corresponding plurality of triangular regions of the second target face image and performing image edge fusion to obtain the face synthetic image comprises:
according to the plurality of triangular regions of the first target face image and the second target face image, affine-mapping the first target face image onto the second target face image through a perspective transformation matrix algorithm to obtain a third target face image;
and carrying out graying processing on the third target face image, and carrying out edge fusion by calling a seamless fusion model preset by a system to obtain a fourth target face image, wherein the fourth target face image is the face synthetic image.
6. An extraction device for key points of human face features, characterized by comprising:
the first receiving module is used for receiving the acquired image information;
the processing module is used for carrying out binarization processing on the image information to obtain a binarized image;
the detection module is used for detecting a target face image in the image information;
the characteristic key point extracting module is used for extracting characteristic key points of the target face image and acquiring coordinates of the characteristic key points;
and the subdivision module is used for triangulating the target face image according to the coordinates to obtain a plurality of triangular areas.
7. A face image synthesis apparatus, comprising:
the second receiving module is used for respectively receiving the acquired first image and the second image;
a first facial feature keypoint extraction module, configured to perform facial feature keypoint extraction on the first image by using the apparatus according to claim 6, so as to obtain a plurality of triangular regions of a first target facial image;
a second facial feature key point extraction module, configured to perform facial feature key point extraction on the second image by using the apparatus according to claim 6, to obtain a plurality of triangular regions of a second target facial image;
and the affine module is used for affine-mapping, according to the plurality of triangular regions of the first target face image and the second target face image, the plurality of triangular regions of the first target face image onto the corresponding plurality of triangular regions of the second target face image and performing image edge fusion to obtain a face synthetic image.
8. A face feature key point extraction system is characterized by comprising:
at least one control device, which is used for executing the extraction method of the human face feature key points as claimed in any one of claims 1, 3 and 4, and extracting the feature key point coordinates of the target human face image in the received image information.
9. A face image synthesis system, comprising:
at least one control device for executing the method of face image synthesis according to any one of claims 2 and 5, synthesizing a target face image from the received image information.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for extracting key points of facial features according to any one of claims 1, 3 and 4 or the method for synthesizing facial images according to any one of claims 2 and 5.
CN201911128636.3A 2019-11-18 2019-11-18 Face feature key point extraction method and face image synthesis method Active CN110879983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911128636.3A CN110879983B (en) 2019-11-18 2019-11-18 Face feature key point extraction method and face image synthesis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911128636.3A CN110879983B (en) 2019-11-18 2019-11-18 Face feature key point extraction method and face image synthesis method

Publications (2)

Publication Number Publication Date
CN110879983A true CN110879983A (en) 2020-03-13
CN110879983B CN110879983B (en) 2023-07-25

Family

ID=69728942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911128636.3A Active CN110879983B (en) 2019-11-18 2019-11-18 Face feature key point extraction method and face image synthesis method

Country Status (1)

Country Link
CN (1) CN110879983B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915479A (en) * 2020-07-15 2020-11-10 北京字节跳动网络技术有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN113658035A (en) * 2021-08-17 2021-11-16 北京百度网讯科技有限公司 Face transformation method, device, equipment, storage medium and product

Citations (6)

Publication number Priority date Publication date Assignee Title
CN107506693A (en) * 2017-07-24 2017-12-22 深圳市智美达科技股份有限公司 Distort face image correcting method, device, computer equipment and storage medium
WO2018001092A1 (en) * 2016-06-29 2018-01-04 中兴通讯股份有限公司 Face recognition method and apparatus
WO2018188453A1 (en) * 2017-04-11 2018-10-18 腾讯科技(深圳)有限公司 Method for determining human face area, storage medium, and computer device
CN108876705A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 Image synthetic method, device and computer storage medium
CN108876718A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 The method, apparatus and computer storage medium of image co-registration
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
WO2018001092A1 (en) * 2016-06-29 2018-01-04 中兴通讯股份有限公司 Face recognition method and apparatus
WO2018188453A1 (en) * 2017-04-11 2018-10-18 腾讯科技(深圳)有限公司 Method for determining human face area, storage medium, and computer device
CN107506693A (en) * 2017-07-24 2017-12-22 深圳市智美达科技股份有限公司 Distort face image correcting method, device, computer equipment and storage medium
CN108876705A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 Image synthetic method, device and computer storage medium
CN108876718A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 The method, apparatus and computer storage medium of image co-registration
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Non-Patent Citations (2)

Title
R. HASSANPOUR ET AL.: "Delaunay Triangulation based 3D Human Face Modeling from Uncalibrated Images", 《IEEE XPLORE》 *
SONG Dingli; YANG Bingru; YU Fuxing: "A key-point-matching 3D face recognition method", Application Research of Computers, no. 11

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN111915479A (en) * 2020-07-15 2020-11-10 北京字节跳动网络技术有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN113658035A (en) * 2021-08-17 2021-11-16 北京百度网讯科技有限公司 Face transformation method, device, equipment, storage medium and product
CN113658035B (en) * 2021-08-17 2023-08-08 北京百度网讯科技有限公司 Face transformation method, device, equipment, storage medium and product

Also Published As

Publication number Publication date
CN110879983B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN108765278B (en) Image processing method, mobile terminal and computer readable storage medium
WO2021000702A1 (en) Image detection method, device, and system
TWI395145B (en) Hand gesture recognition system and method
US10674135B2 (en) Handheld portable optical scanner and method of using
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
JP5873442B2 (en) Object detection apparatus and object detection method
JP6685827B2 (en) Image processing apparatus, image processing method and program
EP1969559B1 (en) Contour finding in segmentation of video sequences
JP5715833B2 (en) Posture state estimation apparatus and posture state estimation method
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
US20180025252A1 (en) Template creation device and template creation method
EP2987322A1 (en) Handheld portable optical scanner and method of using
EP1969562A1 (en) Edge-guided morphological closing in segmentation of video sequences
CN107491744B (en) Human body identity recognition method and device, mobile terminal and storage medium
WO2006087581A1 (en) Method for facial features detection
KR20050022306A (en) Method and Apparatus for image-based photorealistic 3D face modeling
CN109937434B (en) Image processing method, device, terminal and storage medium
KR20120138627A (en) A face tracking method and device
JPWO2012077287A1 (en) Posture state estimation apparatus and posture state estimation method
CN110781770B (en) Living body detection method, device and equipment based on face recognition
CN110879983A (en) Face feature key point extraction method and face image synthesis method
CN108274476B (en) Method for grabbing ball by humanoid robot
JP2021517281A (en) Multi-gesture fine division method for smart home scenes
CN109063598A (en) Face pore detection method, device, computer equipment and storage medium
CN110610131A (en) Method and device for detecting face motion unit, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant