CN106980818B - Personalized preprocessing method, system and terminal for face image - Google Patents


Info

Publication number: CN106980818B
Application number: CN201710122371.0A
Authority: CN (China)
Prior art keywords: face, image, face region, personalized, gray
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN106980818A
Inventor: 曹耀和
Current assignee: Zhejiang Zhibei Information Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Zhejiang Zhibei Information Technology Co., Ltd.
Application filed by Zhejiang Zhibei Information Technology Co., Ltd., with priority to CN201710122371.0A
Publications: CN106980818A (application), CN106980818B (grant)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/168: Feature extraction; Face representation

Abstract

The invention belongs to the technical field of image processing and provides a personalized preprocessing method for a face image, comprising the following specific steps: identifying and extracting key points of the face grayscale image; establishing a face personalized feature model; performing edge filling on the face grayscale image using the face personalized feature model; and normalizing the face region image and performing personalized alignment on it. In the embodiment of the invention, the personalized features of the face image are obtained simply and quickly by extracting face key points, and the key points are converted into personalized gray values through a mathematical model, effectively enhancing the personalized features in the face grayscale image. A personalized alignment step is also adopted, so that the processed face grayscale image has clear gradations and the face is located at the center of the image. The method is particularly suitable for preprocessing face images in real-time video.

Description

Personalized preprocessing method, system and terminal for face image
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a personalized preprocessing method and system for face images, and a terminal thereof.
Background
With the rapid growth of application demand in security access control and financial transactions, biometric identification has attracted renewed attention, and face recognition is among the most widely applied of all biometric methods. As the technology has matured, face recognition has come into wide use in fields such as public security, finance, network security, property management, and attendance checking.
In recent years, recognition based on real-time video has become one of the main development directions of face recognition technology. Because the face region in real-time video changes constantly, video-based face recognition places relatively low demands on illumination and pose but requires high recognition accuracy on the face region; recognition schemes based on still face photos, which tolerate pose variation well but are not accurate enough, cannot meet this requirement. Moreover, unlike face photos, real-time video contains a large amount of redundant information, and performing face recognition directly on the video easily incurs a large amount of unnecessary resource overhead.
In summary, existing face recognition technology suffers from low recognition accuracy and the inability to perform personalized processing before recognition.
Disclosure of Invention
The embodiment of the invention provides a personalized preprocessing method for a face image, and aims to solve the technical problems that in the prior art, the recognition accuracy is low and personalized processing cannot be performed before recognition.
The embodiment of the invention is realized in such a way that the personalized preprocessing method for the face image comprises the following specific steps:
analyzing a face gray image, and identifying and extracting key points of the face gray image;
establishing a human face personalized feature model according to the key points of the human face gray level image;
performing edge filling on the face gray level image by using the face personalized feature model to obtain a face region image;
and normalizing the face region image according to the key points of the face region image, and performing personalized alignment on the face region image.
The embodiment of the invention also provides a personalized preprocessing system for the face image, which comprises:
the key point identification unit is used for analyzing the face gray level image, identifying and extracting key points of the face gray level image;
the characteristic model unit is used for establishing a human face personalized characteristic model according to the key points of the human face gray level image;
the filling unit is used for carrying out edge filling on the face gray level image by utilizing the face personalized feature model to obtain a face region image; and
and the adjusting unit is used for carrying out normalization processing on the face region image according to the key points of the face region image and carrying out personalized alignment adjustment on the face region image.
The embodiment of the invention also provides a personalized preprocessing terminal for the face image, which comprises the preprocessing system, wherein the preprocessing system is used for processing the face gray level image in the preprocessing terminal.
The invention discloses a preprocessing method for the face region. By extracting face key points, the personalized features of a face image are obtained simply and quickly; a mathematical model converts the key points into personalized gray values, so that different face images are filled with different gray values, effectively enhancing the personalized features in the face grayscale image and thereby greatly improving the accuracy of the normalization process and of subsequent face recognition. A personalized alignment step further ensures that the processed face grayscale image has clear gradations and that the face lies at the center of the image. The method is particularly suitable for preprocessing face images in real-time video.
Drawings
Fig. 1 is a working environment diagram of a method for personalized preprocessing of face images according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for personalized preprocessing of face images according to an embodiment of the present invention;
fig. 3 is a flowchart for performing edge filling on the face gray level image by using the face personalized feature model to obtain a face region image according to the embodiment of the present invention;
fig. 4 is a flowchart illustrating normalization processing of the face region image according to key points of the face region image and personalized alignment of the face region image according to the embodiment of the present invention;
fig. 5 is a flowchart illustrating that the face region is moved according to the key points of the face region image, so that the center point of the face region coincides with the center point of the face region image according to the embodiment of the present invention;
fig. 6 is another flowchart illustrating that the face region is moved according to the key points of the face region image, so that the center point of the face region coincides with the center point of the face region image according to the embodiment of the present invention;
fig. 7 is a flowchart of a personalized preprocessing method for face images in practical use according to an embodiment of the present invention;
FIG. 8 is a block diagram of a system for personalized preprocessing of face images according to an embodiment of the present invention;
FIG. 9 is a block diagram of a fill cell provided by an embodiment of the present invention;
FIG. 10 is a block diagram of an adjustment unit provided by an embodiment of the present invention;
FIG. 11 is a block diagram of a position adjustment module provided in an embodiment of the present invention;
fig. 12 is another structural diagram of a position adjustment module according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
According to the embodiment of the invention, a face personalized feature model is built from the salient features of the face region image, and personalized gray values are filled in during graying, so that each grayscale image differs from the grayscale images of other faces. This greatly improves the accuracy of key point localization and of subsequent face recognition, realizing high-precision face recognition on real-time video.
Fig. 1 illustrates a working environment of a method for personalized preprocessing of a face image according to an embodiment of the present invention.
The preprocessing device is connected to the camera through the server to acquire data containing the face region from the video stream, preprocesses the data, and then transmits the processed image data to the recognition device for face recognition.
The preprocessing method thus serves preparatory steps, such as removing redundant information from the face region image and highlighting the face, before the recognition device performs face recognition on the video stream.
Example 1:
fig. 2 shows a flow of a personalized preprocessing method for a face image according to an embodiment of the present invention, which is detailed as follows:
in step S201, a face grayscale image is analyzed, and key points of the face grayscale image are identified and extracted.
The embodiment of the invention operates on grayscale images that already contain the converted face, preprocessing the face region before image recognition. That is, in practical use, the input data stream should consist of grayscale images.
The key points comprise 68 key points on the edge contours of four regions: eyes, nose, mouth, and chin. The embodiment of the invention uses the dlib library to detect and locate the 68 face key points.
Of course, the user may employ other key point detection methods, such as the ASM (Active Shape Model) algorithm, or adjust the number of key points appropriately; any approach falls within the scope of the invention as long as the key point localization is based on the cropped face region frame.
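As a rough illustration, the 68 key points can be grouped into the four regions named above. The index ranges below follow the commonly used 68-point annotation convention associated with dlib's pretrained predictor; they are an assumption for illustration, not taken from the patent text.

```python
# Sketch: grouping 68 dlib-style landmarks into the regions the patent names.
# Index ranges follow the common 68-point convention (an assumption here):
# jaw/chin 0-16, nose 27-35, eyes 36-47, mouth 48-67.
LANDMARK_REGIONS = {
    "chin":  list(range(0, 17)),    # jaw line / chin contour
    "nose":  list(range(27, 36)),   # nose bridge and nostrils
    "eyes":  list(range(36, 48)),   # left and right eye contours
    "mouth": list(range(48, 68)),   # outer and inner lip contours
}

def region_points(landmarks, region):
    """Return the (x, y) key points belonging to one facial region."""
    return [landmarks[i] for i in LANDMARK_REGIONS[region]]
```

A completeness check over these groups would also support the discard-incomplete-frame strategy described below.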
The key to face recognition technology is finding where the face in an image differs from other people's faces so that identities can be distinguished. The embodiment of the invention therefore uses key point recognition to find the distinguishing features of an image simply and quickly, greatly facilitating the subsequent face recognition process.
Because the pose of a face in real-time video changes constantly, the face region in the face region frame can easily be incomplete or occluded, so some key points may be missing during key point localization. In that case, the user may choose to discard the face grayscale image with incomplete key points, or even terminate preprocessing of that image and move on to the next unoccluded frame.
In step S202, a personalized facial feature model is established according to the key points of the facial gray image.
The face personalized feature model is:

Ratio_i = λ · W_i / H_i

In the embodiment of the invention, λ takes values in the range 0.75 ≤ λ ≤ 1.25. Ratio_i is the feature ratio of the ith face grayscale image, W_i is the distance (width) between key points of the grayscale image in the horizontal direction, and H_i is the distance (height) between key points of the grayscale image in the vertical direction.
In the embodiment of the present invention, the face personalized feature model converts the key points from step S201 into the quantifiable Ratio_i, making the personalized features in the face region image clearer and thereby improving the accuracy of face recognition.
In actual use, W_i is generally taken as the spacing between the outer eye corners or the mouth width, and H_i as the distance from the eye corners to the nose, mouth, or chin, that is, a distance between salient regions. If these parts are not distinctive enough to serve as salient features, the width and height of a single salient region such as the mouth or nose may be used instead, or the average of the feature ratios of the individual salient regions may serve as the feature ratio of the whole face region image.
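The feature-ratio model above reduces to a one-line computation; the sketch below encodes it together with the stated constraint on λ (the function name and error handling are ours, not the patent's).

```python
def feature_ratio(w_i, h_i, lam=1.0):
    """Personalized feature ratio Ratio_i = lam * W_i / H_i.

    w_i: horizontal key-point distance, e.g. outer eye-corner spacing or mouth width
    h_i: vertical key-point distance, e.g. eye corner to nose, mouth, or chin
    lam: weighting factor, constrained to [0.75, 1.25] per the model
    """
    if not 0.75 <= lam <= 1.25:
        raise ValueError("lambda must lie in [0.75, 1.25]")
    if h_i <= 0:
        raise ValueError("vertical distance must be positive")
    return lam * w_i / h_i
```

For a face with 60 px eye spacing and 80 px eye-to-chin distance, feature_ratio(60, 80) gives 0.75.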
In step S203, edge filling is performed on the face grayscale image by using the face personalized feature model, so as to obtain a face region image.
To further enhance the salient features of the face region image, the embodiment of the invention uses edge filling to surround the face region with a gray value derived from the feature ratio, removing background unrelated to the face information, making the face more prominent, and improving the accuracy of subsequent key point localization.
In step S204, the face region image is normalized according to the key points of the face region image, and the face region image is personalized and aligned.
Aiming at the technical problems in the prior art of low face recognition accuracy and the inability to perform personalized processing before recognition, the embodiment of the invention extracts face key points to obtain the personalized features of a face image simply and quickly, and converts the key points into personalized gray values through a mathematical model so that different face images are filled with different gray values. Together with the personalized alignment step, this effectively enhances the personalized features in the face grayscale image, greatly improving the accuracy of the normalization process and of subsequent face recognition.
Of course, the preprocessing method of the embodiment of the invention is not limited to images in a real-time video stream; in fact, wherever face image recognition is used, the user can apply the invention to improve recognition accuracy and enhance the personalized features in face images.
Example 2:
fig. 3 shows a process of performing edge filling on the face gray image by using the face personalized feature model to obtain a face region image, which is provided by the embodiment of the present invention and is detailed as follows:
in step S301, an edge contour of a human face is extracted according to key points in the human face grayscale image, and the human face grayscale image is divided into a human face region and a non-human face region.
In step S302, according to the feature ratio in the face personalized feature model, the non-face region in the face grayscale image is filled according to the following formula to obtain the face region image:

(formula shown only as image BDA0001237374460000051 in the original document)

where G_i is the fill value used in gray-level filling of the ith face grayscale image, and the symbol shown as image BDA0001237374460000052 denotes the average gray value of the ith face grayscale image.
Aiming at the problem that existing gray-level filling does not highlight features, the embodiment of the invention uses the feature ratio, which expresses the features of the face, as a factor in gray-level filling. This realizes a personalized gray-level filling mode in which different face grayscale images receive different fill values, effectively enhancing the distinguishing features of the face region image and improving the accuracy of subsequent face recognition.
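A rough sketch of this personalized filling is given below. Since the patent's fill formula is reproduced only as an image, the expression used here, G_i = Ratio_i times the mean gray value, clipped to [0, 255], is an illustrative assumption, not the patented formula.

```python
import numpy as np

def pad_with_personalized_gray(gray, ratio_i, pad=16):
    """Pad a face grayscale image on all four sides with a personalized
    gray fill value derived from the feature ratio.

    Assumption for illustration only: G_i = ratio_i * mean(gray),
    clipped to the valid 8-bit range.
    """
    g_i = int(np.clip(ratio_i * float(gray.mean()), 0, 255))  # fill value G_i
    # Constant-value padding surrounds the face region with G_i,
    # replacing background unrelated to the face.
    return np.pad(gray, pad, mode="constant", constant_values=g_i)
```

Because G_i depends on both the face's feature ratio and its own mean gray level, two different faces generally receive different border values, which is the personalization the text describes.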
Example 3:
fig. 4 shows a process of normalizing the face region image according to the key points of the face region image and performing personalized alignment on the face region image according to the embodiment of the present invention, which is detailed as follows:
in step S401, the pupil position of the face region in the face region image is identified and acquired, and affine transformation is performed on the face region image, so that the distance between the left-eye pupil and the right-eye pupil is a constant D.
Because the distance and pose of the face in a real-time video stream are unstable, the embodiment of the invention first normalizes the face region image so that the face is balanced in shape and presents a better pose, which facilitates learning face features during recognition.
Here, D may be obtained through training, may be a fixed value preset by the user, or may even be a value proportional to the width of the face region image.
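The affine transformation of step S401 can be sketched as a uniform scale about the eye midpoint that makes the interpupillary distance exactly D; the default D = 64 pixels below is illustrative, since the patent leaves D to training, presetting, or proportional choice.

```python
import numpy as np

def pupil_normalize_matrix(left_pupil, right_pupil, D=64.0):
    """Build a 2x3 affine matrix (uniform scale about the eye midpoint)
    that makes the interpupillary distance equal to the constant D.

    D = 64.0 is an illustrative default, not a value from the patent.
    """
    lp = np.asarray(left_pupil, dtype=float)
    rp = np.asarray(right_pupil, dtype=float)
    s = D / np.linalg.norm(rp - lp)   # uniform scale factor
    cx, cy = (lp + rp) / 2.0          # scale about the eye midpoint
    # x' = s*x + (1-s)*cx,  y' = s*y + (1-s)*cy
    return np.array([[s, 0.0, (1.0 - s) * cx],
                     [0.0, s, (1.0 - s) * cy]])
```

The resulting matrix can be handed to a standard affine warp (for instance OpenCV's warpAffine) to resample the face region image.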
In step S402, the face region is moved according to the key points of the face region image, so that the center point of the face region coincides with the center point of the face region image.
If the size of the face region image has not been adjusted, its key points coincide with the key points of the face grayscale image in step S301, that is, the face region in step S402 is the same as the face region in step S301; if the size of the face region image has been adjusted, the face region in step S402 is determined from the newly located key points.
The embodiment of the invention aims at the face region image of the real-time video, not only adopts a standard normalization mode, but also aligns the face region according to the personalized features of the face region, effectively enhances the personalized features of the face region image, and thus ensures that the accuracy of the subsequent face recognition is higher.
Example 4:
fig. 5 shows a flow of moving the face region according to the key points of the face region image, so that the center point of the face region coincides with the center point of the face region image, which is detailed as follows:
in step S501, a lead straight line segment is marked as p on the face region imageiThe length of the vertical line segment is the height of the face region image, and another vertical line segment is made on the face region and is recorded as p'iAnd the length of the human face area is the height between the key point of the eyebrow peak and the key point of the lower jaw bottom end in the vertical direction.
In the examples of the present invention, piHeight, p 'for representing face region image'iFor indicating the overall height of the face region.
In step S502, p is takeniAnd p'iAnd adjusting the face region in the vertical direction so that p'iMidpoint and piAre on the same horizontal line.
In order to further improve the accuracy of face recognition, the embodiment of the invention performs more detailed adjustment on the face region image, so that the face region is positioned at the midpoint of the face region image, and meanwhile, the adjustment process is divided into vertical movement and horizontal movement, thereby simplifying the adjustment process.
Example 5:
fig. 6 shows another flow for moving the face region according to the key points of the face region image, so that the center point of the face region coincides with the center point of the face region image, which is detailed as follows:
in step S601, a horizontal line segment is drawn on the face region image and recorded as qiThe length of the horizontal line segment is the height of the face region image in the vertical direction, and the horizontal line segment is marked as q 'on the face region'iThe length of the left face key point is the width between the left face key point and the right face key point in the horizontal direction.
In an embodiment of the present invention, qiWidth for representing face region imageDegree, q'iFor indicating the overall width of the face region.
In step S602, q is takeniAnd q'iAnd adjusting the face region in a horizontal direction so that q'iMid-point of (a) and qiAre on the same vertical line.
In an embodiment of the present invention, the steps of vertical movement and horizontal movement do not necessarily occur simultaneously, and the face region may be moved to the center of the image using only vertical movement.
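The vertical and horizontal centering of Examples 4 and 5 reduce to two midpoint offsets; the helper below is a sketch of that computation (the parameter names are ours, not the patent's).

```python
def centering_shift(img_h, img_w, top, bottom, left, right):
    """Offsets (dx, dy) that move the face region's center onto the
    center of the face region image.

    top/bottom: vertical coordinates of the brow-peak and jaw-bottom
                key points (segment p'_i); img_h is the length of p_i.
    left/right: horizontal coordinates of the left-face and right-face
                key points (segment q'_i); img_w is the length of q_i.
    """
    dy = img_h / 2.0 - (top + bottom) / 2.0    # vertical move (Example 4)
    dx = img_w / 2.0 - (left + right) / 2.0    # horizontal move (Example 5)
    return dx, dy
```

Applying only dy realizes the vertical-only variant mentioned above; applying both centers the face on both axes.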
Example 6:
fig. 7 shows a flow of the personalized preprocessing method for face images in actual use according to the embodiment of the present invention, which is detailed as follows:
in step S701, the face grayscale image is analyzed, and key points of the face grayscale image are identified and extracted.
In step S702, a personalized facial feature model is established according to the key points of the facial gray image.
In step S703, an edge contour of a face is extracted according to the key points in the face grayscale image, and the face grayscale image is divided into a face region and a non-face region.
In step S704, according to the feature ratio in the face personalized feature model, the non-face region in the face grayscale image is filled according to the following formula to obtain the face region image:

(formula shown only as image BDA0001237374460000071 in the original document)

where G_i is the fill value used in gray-level filling of the ith face grayscale image, and the symbol shown as image BDA0001237374460000072 denotes the average gray value of the ith face grayscale image.
In step S705, the pupil positions of the face regions in the face region image are identified and acquired, and the face region image is affine transformed so that the distance between the left-eye pupil and the right-eye pupil is a constant D.
In step S706, a vertical line segment, denoted p_i, is drawn on the face region image; its length is the height of the face region image. Another vertical line segment, denoted p'_i, is drawn on the face region; its length is the vertical height from the brow-peak key point to the key point at the bottom of the jaw.
In step S707, the midpoints of p_i and p'_i are taken, and the face region is adjusted in the vertical direction so that the midpoint of p'_i and the midpoint of p_i lie on the same horizontal line.
In step S708, a horizontal line segment, denoted q_i, is drawn on the face region image; its length is the width of the face region image in the horizontal direction. Another horizontal line segment, denoted q'_i, is drawn on the face region; its length is the horizontal width between the left-face key point and the right-face key point.
In step S709, the midpoints of q_i and q'_i are taken, and the face region is adjusted in the horizontal direction so that the midpoint of q'_i and the midpoint of q_i lie on the same vertical line.
The embodiment of the invention extracts face key points to obtain the personalized features of the face image simply and quickly, and ensures that each grayscale image differs from the grayscale images of other faces, thereby greatly improving the accuracy of key point localization and of subsequent face recognition.
It will be understood by those skilled in the art that all or part of the steps in the above method embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, or a flash disk.
Example 7:
fig. 8 shows a structure of a personalized preprocessing system for a face image according to an embodiment of the present invention, and for convenience of description, only the parts related to the embodiment of the present invention are shown.
In the embodiment of the present invention, the personalized preprocessing system for a face image includes a key point identification unit 801, a feature model unit 802, a filling unit 803, and an adjustment unit 804, where:
the key point identification unit 801 is configured to analyze a face grayscale image, and identify and extract key points of the face grayscale image.
The embodiment of the invention operates on grayscale images that already contain the converted face, preprocessing the face region before image recognition. That is, in practical use, the input data stream should consist of grayscale images.
The key points comprise 68 key points on the edge contours of four regions: eyes, nose, mouth, and chin. The embodiment of the invention uses the dlib library to detect and locate the 68 face key points.
Of course, the user may employ other key point detection methods, such as the ASM (Active Shape Model) algorithm, or adjust the number of key points appropriately; any approach falls within the scope of the invention as long as the key point localization is based on the cropped face region frame.
The key to face recognition technology is finding where the face in an image differs from other people's faces so that identities can be distinguished. The embodiment of the invention therefore uses key point recognition to find the distinguishing features of an image simply and quickly, greatly facilitating the subsequent face recognition process.
Because the pose of a face in real-time video changes constantly, the face region in the face region frame can easily be incomplete or occluded, so some key points may be missing during key point localization. In that case, the user may choose to discard the face grayscale image with incomplete key points, or even terminate preprocessing of that image and move on to the next unoccluded frame.
And the feature model unit 802 is configured to establish a personalized feature model of the human face according to the key points of the gray level image of the human face.
The face personalized feature model is:

Ratio_i = λ · W_i / H_i

In the embodiment of the invention, λ takes values in the range 0.75 ≤ λ ≤ 1.25. Ratio_i is the feature ratio of the ith face grayscale image, W_i is the distance (width) between key points of the grayscale image in the horizontal direction, and H_i is the distance (height) between key points of the grayscale image in the vertical direction.
In the embodiment of the present invention, the face personalized feature model converts the extracted key points into the quantifiable Ratio_i, making the personalized features in the face region image clearer and thereby improving the accuracy of face recognition.
In actual use, W_i is generally taken as the spacing between the outer eye corners or the mouth width, and H_i as the distance from the eye corners to the nose, mouth, or chin, that is, a distance between salient regions. If these parts are not distinctive enough to serve as salient features, the width and height of a single salient region such as the mouth or nose may be used instead, or the average of the feature ratios of the individual salient regions may serve as the feature ratio of the whole face region image.
A filling unit 803, configured to perform edge filling on the face grayscale image by using the face personalized feature model, so as to obtain a face region image.
To further enhance the salient features of the face region image, the embodiment of the invention uses edge filling to surround the face region with a gray value derived from the feature ratio, removing background unrelated to the face information, making the face more prominent, and improving the accuracy of subsequent key point localization.
And the adjusting unit 804 is configured to normalize the face region image according to the key points of the face region image, and perform personalized alignment on the face region image.
Aiming at the technical problems in the prior art of low face recognition accuracy and the inability to perform personalized processing before recognition, the embodiment of the invention extracts face key points to obtain the personalized features of a face image simply and quickly, and converts the key points into personalized gray values through a mathematical model so that different face images are filled with different gray values. Together with the personalized alignment step, this effectively enhances the personalized features in the face grayscale image, greatly improving the accuracy of the normalization process and of subsequent face recognition.
Of course, the preprocessing of the embodiment of the invention is not limited to images in a real-time video stream; in fact, wherever face image recognition is used, the user can apply the invention to improve recognition accuracy and enhance the personalized features in face images.
Example 8:
fig. 9 illustrates a structure of a filling unit 803 provided in an embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present invention is illustrated, in which:
an edge extraction module 901, configured to extract an edge contour of a human face according to key points in the human face grayscale image, and divide the human face grayscale image into a human face region and a non-human face region.
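One way to realize this division into face and non-face regions is to rasterize the contour key points into a boolean mask; a minimal even-odd (ray-casting) sketch in plain NumPy, assuming the contour points are ordered along the face edge:

```python
import numpy as np

def face_mask_from_contour(h, w, contour):
    """Rasterize an ordered face edge contour into a boolean mask.

    True marks the face region, False the non-face region. Uses a simple
    even-odd (ray-casting) test per pixel; `contour` is a list of (x, y)
    key points along the face edge, assumed to be ordered.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    n = len(contour)
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]
        # toggle pixels whose rightward ray crosses this contour edge
        crosses = ((y0 > ys) != (y1 > ys)) & (
            xs < (x1 - x0) * (ys - y0) / (y1 - y0 + 1e-12) + x0)
        mask ^= crosses
    return mask
```

In practice a polygon-fill routine from an imaging library would serve the same purpose; the point is only that the contour key points fully determine the face/non-face split.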
A gray filling unit 902, configured to fill the non-face region of the face grayscale image according to the feature ratio in the face personalized feature model, using the following formula, to obtain the face region image:
G_i = Ratio_i · ḡ_i

wherein G_i is the fill value of the i-th face grayscale image in gray filling, and ḡ_i is the average gray value of the i-th face grayscale image.
Addressing the problem that features are not distinctive in existing gray filling, the embodiment of the invention uses the feature ratio, which expresses the characteristics of the face, as the factor of the gray filling. This realizes a personalized gray filling in which the fill values of different face grayscale images differ, thereby effectively enhancing the distinguishing features of the face region image and improving the accuracy of subsequent face recognition.
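Reading the fill value as the feature ratio multiplied by the image's average gray value (the patent gives its fill formula only as an embedded image, so this product form is an assumption drawn from the surrounding description), a minimal sketch of the personalized fill:

```python
import numpy as np

def fill_non_face(gray, face_mask, ratio):
    """Fill the non-face region with a personalized gray value.

    Assumes the fill value is G_i = ratio * mean_gray, with mean_gray the
    average gray value of the whole image; `face_mask` is True inside the
    face region, as produced by the edge extraction step.
    """
    out = gray.astype(np.float64)      # copy, so the input is untouched
    g_mean = out.mean()                # average gray value of the image
    out[~face_mask] = ratio * g_mean   # personalized fill outside the face
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the fill value depends on the image's own feature ratio and mean gray, two different faces receive different backgrounds, which is exactly the "personalized" property the paragraph above claims.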
Example 9:
fig. 10 shows a structure of an adjusting unit 804 provided in an embodiment of the present invention, and for convenience of description, only a part related to the embodiment of the present invention is shown, where:
the size adjustment module 1001 is configured to identify and obtain a pupil position of a face region in the face region image, and perform affine transformation on the face region image, so that a distance between a left-eye pupil and a right-eye pupil is a constant D.
Because the distance and pose of the face in a real-time video stream are unstable, the embodiment of the invention first normalizes the face region image so that the face is well proportioned and presents a better pose, which facilitates learning of face features during face recognition.
In this case, D may be obtained through training and learning, may be a fixed value preset by the user, or may even be a value proportional to the width of the face region image.
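A minimal sketch of the scaling step that fixes the inter-pupil distance at D, assuming pure uniform scaling about the pupil midpoint (a full affine normalization would also rotate the eye line to horizontal; D = 60.0 is a hypothetical preset value):

```python
import numpy as np

def normalize_pupil_distance(points, left_pupil, right_pupil, D=60.0):
    """Scale landmarks so the inter-pupil distance becomes the constant D.

    A minimal similarity-transform sketch: uniform scaling about the pupil
    midpoint. A full affine normalization would also rotate the eye line
    to horizontal; D=60.0 is a hypothetical preset value.
    """
    lp = np.asarray(left_pupil, dtype=float)
    rp = np.asarray(right_pupil, dtype=float)
    center = (lp + rp) / 2.0
    scale = D / np.linalg.norm(rp - lp)     # current distance -> D
    pts = np.asarray(points, dtype=float)
    return center + scale * (pts - center)  # scale about the midpoint
```

The same scale factor would be applied to the image itself (e.g., via an affine warp) so that the pupils of every processed face end up a constant D pixels apart.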
The position adjusting module 1002 is configured to move the face region according to the key point of the face region image, so that a center point of the face region coincides with a center point of the face region image.
If the size of the face region image has not been adjusted, its key points coincide with the key points of the face grayscale image, i.e., the face region is unchanged; if the size has been adjusted, the face region is determined from the new key points after adjustment.
For face region images from real-time video, the embodiment of the invention not only applies standard normalization but also aligns the face region according to its personalized features, effectively enhancing the personalized features of the face region image and thus making subsequent face recognition more accurate.
Example 10:
fig. 11 shows a structure of a position adjustment module 1002 provided in an embodiment of the present invention, and for convenience of description, only a part related to the embodiment of the present invention is shown, where:
a lead line segment sub-module 1101 for making a lead line segment p on the face region imageiThe length of the vertical line segment is the height of the face region image, and another vertical line segment is made on the face region and is recorded as p'iAnd the length of the human face area is the height between the key point of the eyebrow peak and the key point of the lower jaw bottom end in the vertical direction.
In the examples of the present invention, piHeight, p 'for representing face region image'iFor indicating the overall height of the face region.
A vertical adjustment submodule 1102 for taking p respectivelyiAnd p'iAnd adjusting the face region in the vertical direction so that p'iMidpoint and piAre on the same horizontal line.
To further improve the accuracy of face recognition, the embodiment of the invention adjusts the face region image in finer detail so that the face region sits at the midpoint of the face region image. The adjustment is split into a vertical movement and a horizontal movement, which simplifies the process.
Example 11:
fig. 12 shows another structure of a position adjustment module 1002 provided in an embodiment of the present invention, and for convenience of description, only the portions related to the embodiment of the present invention are shown, where:
a horizontal line segment submodule 1201, configured to draw a horizontal line segment on the face region image and record the horizontal line segment as qiThe length of the horizontal line segment is the height of the face region image in the vertical direction, and the horizontal line segment is marked as q 'on the face region'iThe length of the left face key point is the width between the left face key point and the right face key point in the horizontal direction.
In an embodiment of the present invention, qiWidth, q 'for representing face region image'iFor indicating the overall width of the face region.
A horizontal adjustment submodule 1202 for taking q respectivelyiAnd q'iAnd adjusting the face region in a horizontal direction so that q'iMid-point of (a) and qiAre on the same vertical line.
In the embodiment of the present invention, the vertical movement and the horizontal movement need not both be performed; the face region may be moved to the center of the image using the vertical movement alone.
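The vertical and horizontal adjustments described above reduce to two independent shifts; a sketch, assuming top-left pixel coordinates and leaving the actual translation of the pixels to the caller:

```python
def center_face(image_h, image_w, face_top, face_bottom, face_left, face_right):
    """Compute the shift that centers the face region in the image.

    Mirrors the p_i / p'_i and q_i / q'_i constructions: align the midpoint
    of the face's vertical and horizontal extents with the image midpoints.
    Coordinates are assumed pixel offsets from the top-left corner.
    Returns (dy, dx); applying the translation is left to the caller.
    """
    # midpoint of p'_i (brow peak to jaw bottom) vs midpoint of p_i (image height)
    dy = image_h / 2.0 - (face_top + face_bottom) / 2.0
    # midpoint of q'_i (left to right face edge) vs midpoint of q_i (image width)
    dx = image_w / 2.0 - (face_left + face_right) / 2.0
    return dy, dx
```

Setting dx to zero reproduces the vertical-only variant mentioned above; computing both gives the full centering of the face region on the image midpoint.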
Example 12:
the embodiment of the invention provides a personalized preprocessing terminal for a face image, which comprises a preprocessing system, wherein the preprocessing system is used for processing the face gray level image in the preprocessing terminal.
The preprocessing terminal comprises a mobile phone, a tablet personal computer or a personal computer.
This embodiment is a practical application of the foregoing embodiments; here the preprocessing terminal is an all-in-one machine, i.e., the preprocessing system is installed in the preprocessing terminal itself.
In the several embodiments provided in the present application, it should be understood that the division of the modules and units is only one logical division, and there may be other divisions in actual implementation, for example, multiple units may be combined or may be integrated into another system, or some features may be omitted or not executed. Furthermore, functional units and modules in the embodiments of the present invention may be integrated into one processing unit, or each unit and module may exist alone physically, or two or more units and modules may be integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part thereof, which essentially contributes to the prior art, can be embodied in the form of a software product stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A personalized preprocessing method for a face image is characterized by comprising the following specific steps:
analyzing a face gray image, and identifying and extracting key points of the face gray image;
establishing a human face personalized feature model according to the key points of the human face gray level image;
performing edge filling on the face gray level image by using the face personalized feature model to obtain a face region image;
normalizing the face region image according to the key points of the face region image, and performing personalized alignment on the face region image;
the key points include 68 key points of edge contours of four regions of eyes, nose, mouth and chin;
the human face personalized feature model is as follows:
Ratio_i = λW_i/H_i
wherein λ is a grayscale adjustment coefficient obtained through training and learning, Ratio_i is the feature ratio of the i-th face grayscale image, W_i is the distance or width between key points of the grayscale image in the horizontal direction, and H_i is the distance or height between key points of the grayscale image in the vertical direction;
the edge filling is carried out on the face gray level image by using the face personalized feature model to obtain a face region image, and the method specifically comprises the following steps:
extracting the edge contour of the face according to key points in the face gray level image, and dividing the face gray level image into a face area and a non-face area;
according to the characteristic proportion in the human face personalized characteristic model, filling a non-human face region in the human face gray level image according to the following formula to obtain a human face region image:
G_i = Ratio_i · ḡ_i
wherein G_i is the fill value of the i-th face grayscale image in gray filling, and ḡ_i is the average gray value of the i-th face grayscale image;
the method comprises the following steps of normalizing the face region image according to the key points of the face region image, and performing personalized alignment on the face region image, and specifically comprises the following steps:
identifying and acquiring pupil positions of a face area in the face area image, and carrying out affine transformation on the face area image to enable the distance between the left eye pupil and the right eye pupil to be a constant D;
and moving the face region according to the key points of the face region image, so that the central point of the face region is superposed with the central point of the face region image.
2. The method of claim 1, wherein λ is in a range of 0.75 ≦ λ ≦ 1.25.
3. The method according to claim 1, wherein the moving the face region according to the key points of the face region image to make the center point of the face region coincide with the center point of the face region image, further comprising the following steps:
drawing a vertical line segment on the face region image, denoted p_i, whose length is the height of the face region image, and drawing another vertical line segment on the face region, denoted p'_i, whose length is the height between the eyebrow-peak key point and the key point at the bottom end of the lower jaw in the vertical direction;
taking the midpoints of p_i and p'_i respectively, and adjusting the face region in the vertical direction so that the midpoint of p'_i and the midpoint of p_i lie on the same horizontal line.
4. The method according to claim 1, wherein the moving the face region according to the key points of the face region image to make the center point of the face region coincide with the center point of the face region image, further comprising the following steps:
drawing a horizontal line segment on the face region image, denoted q_i, whose length is the width of the face region image in the horizontal direction, and drawing another horizontal line segment on the face region, denoted q'_i, whose length is the width between the left-face key point and the right-face key point in the horizontal direction;
taking the midpoints of q_i and q'_i respectively, and adjusting the face region in the horizontal direction so that the midpoint of q'_i and the midpoint of q_i lie on the same vertical line.
5. A personalized preprocessing system for facial images, comprising:
the key point identification unit is used for analyzing the face gray level image, identifying and extracting key points of the face gray level image;
the characteristic model unit is used for establishing a human face personalized characteristic model according to the key points of the human face gray level image;
the filling unit is used for carrying out edge filling on the face gray level image by utilizing the face personalized feature model to obtain a face region image; and
the adjusting unit is used for normalizing the face region image according to the key points of the face region image and performing personalized alignment on the face region image;
the key points include 68 key points of edge contours of four regions of eyes, nose, mouth and chin;
the human face personalized feature model is as follows:
Ratio_i = λW_i/H_i
wherein λ is a grayscale adjustment coefficient obtained through training and learning, Ratio_i is the feature ratio of the i-th face grayscale image, W_i is the distance or width between key points of the grayscale image in the horizontal direction, and H_i is the distance or height between key points of the grayscale image in the vertical direction;
the filling unit specifically comprises:
the edge extraction module is used for extracting the edge contour of the face according to key points in the face gray level image and dividing the face gray level image into a face area and a non-face area;
the gray level filling unit is used for filling a non-face area in the face gray level image according to a characteristic proportion in the face personalized characteristic model and the following formula to obtain a face area image:
G_i = Ratio_i · ḡ_i
wherein G_i is the fill value of the i-th face grayscale image in gray filling, and ḡ_i is the average gray value of the i-th face grayscale image;
the adjusting unit specifically includes:
the size adjusting module is used for identifying and acquiring the pupil position of a human face area in the human face area image, and carrying out affine transformation on the human face area image to enable the distance between the left eye pupil and the right eye pupil to be a constant D;
and the position adjusting module is used for moving the face region according to the key points of the face region image, so that the central point of the face region is superposed with the central point of the face region image.
6. The system of claim 5, wherein λ is in a range of 0.75 ≦ λ ≦ 1.25.
7. The system of claim 5, wherein the position adjustment module specifically comprises:
a lead straight line segment submodule for making a lead straight line segment p on the face region imageiThe length of the vertical line segment is the height of the face area image, and another vertical line segment is marked as p' on the face areaiThe length of the human face area is the height between the key point of the eyebrow peak and the key point of the lower jaw bottom end in the vertical direction; and
vertical adjustment submodule for taking p respectivelyiAnd p' are usediAnd adjusting the face area in the vertical direction such that p ″iMidpoint and piAre on the same horizontal line.
8. The system of claim 5, wherein the position adjustment module further comprises:
a horizontal line segment submodule for making a horizontal line segment on the face region image and recording the horizontal line segment as qiThe length of the horizontal line is the height of the face area image in the vertical direction, and another horizontal line segment is marked as q' on the face areaiThe length of the left face key point is the width of the left face key point and the right face key point in the horizontal direction; and
a horizontal adjustment submodule for respectively taking qiAnd q' areiAnd adjusting the face region in a horizontal direction such that q ″iMid-point of (a) and qiAre on the same vertical line.
9. A personalized preprocessing terminal for face images, characterized by comprising a preprocessing system according to any one of claims 5 to 8, the preprocessing system being configured to process face grayscale images in the preprocessing terminal.
CN201710122371.0A 2017-03-03 2017-03-03 Personalized preprocessing method, system and terminal for face image Active CN106980818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710122371.0A CN106980818B (en) 2017-03-03 2017-03-03 Personalized preprocessing method, system and terminal for face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710122371.0A CN106980818B (en) 2017-03-03 2017-03-03 Personalized preprocessing method, system and terminal for face image

Publications (2)

Publication Number Publication Date
CN106980818A CN106980818A (en) 2017-07-25
CN106980818B true CN106980818B (en) 2020-06-26

Family

ID=59338205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710122371.0A Active CN106980818B (en) 2017-03-03 2017-03-03 Personalized preprocessing method, system and terminal for face image

Country Status (1)

Country Link
CN (1) CN106980818B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945188A (en) * 2017-11-20 2018-04-20 北京奇虎科技有限公司 Personage based on scene cut dresss up method and device, computing device
CN110427907B (en) * 2019-08-09 2023-04-07 上海天诚比集科技有限公司 Face recognition preprocessing method for gray level image boundary detection and noise frame filling
CN115879776B (en) * 2023-03-02 2023-06-06 四川宏华电气有限责任公司 Dangerous area early warning method and system applied to petroleum drilling machine

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7155036B2 (en) * 2000-12-04 2006-12-26 Sony Corporation Face detection under varying rotation
CN104657458B (en) * 2015-02-06 2018-02-23 腾讯科技(深圳)有限公司 The methods of exhibiting and device of the target information of foreground target in scene image
CN105740851A (en) * 2016-03-16 2016-07-06 中国科学院上海生命科学研究院 Three-dimensional face automatic positioning method and curved surface registration method and system
CN106056064B (en) * 2016-05-26 2019-10-11 汉王科技股份有限公司 A kind of face identification method and face identification device
CN106446779B (en) * 2016-08-29 2017-07-04 深圳市软数科技有限公司 Personal identification method and device

Also Published As

Publication number Publication date
CN106980818A (en) 2017-07-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310053 Room B2090, 2nd floor, 368 Liuhe Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Zhibei Information Technology Co., Ltd.

Address before: 310053 Room B2090, 2nd floor, 368 Liuhe Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: Hangzhou wisdom Mdt InfoTech Ltd

GR01 Patent grant