WO2017106996A1 - Human facial recognition method and human facial recognition device - Google Patents


Info

Publication number
WO2017106996A1
Authority
WO
WIPO (PCT)
Prior art keywords
dcp
face image
feature
image
inner circle
Prior art date
Application number
PCT/CN2015/098018
Other languages
French (fr)
Chinese (zh)
Inventor
陈书楷
杨奇
Original Assignee
厦门中控生物识别信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 厦门中控生物识别信息技术有限公司
Priority to CN201580001105.1A (CN107135664B)
Priority to PCT/CN2015/098018 (WO2017106996A1)
Publication of WO2017106996A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/169 - Holistic features and representations, i.e. based on the facial image taken as a whole
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/285 - Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering

Definitions

  • the embodiments of the present invention relate to the field of biometrics and computer technology, and in particular, to a method for recognizing a face and a face recognition device.
  • Face recognition technology has developed rapidly in recent years. Face recognition technology is based on human facial features and recognizes input face images or video streams. First, it is judged whether or not there is a human face, and if there is a human face, the position and size of each face and the position information of each main facial organ are further given. Based on this information, the identity features contained in each face are further extracted and compared with known faces to identify the identity of each face.
  • the face image file archive is first established; that is, a camera collects face image files of the personnel of a unit, or takes their photos to form image files, and these face image files are stored to obtain face codes. The current face image is then obtained, that is, the face image of the person currently seeking access is captured by the camera and its pattern is coded for retrieval.
  • Embodiments of the present invention provide a method for recognizing a face and a face recognition device, which effectively recognize a face by extracting a DCP feature, so that face recognition is more robust, the practicability of the scheme is increased, and the user experience is enhanced.
  • the first aspect of the present invention provides a method for face recognition, including:
  • the performing filtering processing on the face image and obtaining a target face image includes:
  • the filtered gradient image of the face image is calculated as follows:
  • FDG(θ) = (∇G · v_θ) * I
  • where FDG(θ) represents the filtered gradient image of the face image corresponding to the direction of the θ angle, v_θ represents the unit direction vector of the filter at angle θ, G represents the two-dimensional Gaussian filter, I represents the face image, * denotes convolution, and ∇ represents the gradient operator.
  • the θ angle takes four values: 0 degrees, 45 degrees, 90 degrees, and 135 degrees.
  • the extracting the concentric double cross mode DCP feature from the target facial image includes:
  • the inner circle sampling point and the outer circle sampling point are respectively encoded, and the DCP feature is obtained.
  • the respectively encoding the inner circle sampling points and the outer circle sampling points and obtaining the DCP feature includes:
  • the DCP codes at the inner circle sampling points and the outer circle sampling points are respectively calculated as follows:
  • DCP_i = S(I_Ai - I_O) × 2 + S(I_Bi - I_Ai), i = 0, 1, …, 7
  • where DCP_i represents the DCP code of the i-th sampling point, S(x) represents the gray-scale intensity function, and I_Ai, I_Bi and I_O respectively represent the gray values of the sampling points A_i, B_i and O;
  • the DCP feature is calculated as follows:
  • DCP = {DCP-1, DCP-2}, where DCP-1 = {DCP_0, DCP_2, DCP_4, DCP_6} and DCP-2 = {DCP_1, DCP_3, DCP_5, DCP_7}
  • where DCP represents the DCP feature and i represents the i-th sampling point; DCP-1 represents the DCP codes of the sampling points in the horizontal and vertical directions, and DCP-2 represents the DCP codes of the sampling points in the diagonal directions.
  • the value of the gray-scale intensity function S(x) is calculated from a constant value function b(x) and fuzzy membership functions f_1,d(x) and f_0,d(x), where d is a boundary threshold.
  • a second aspect of the present invention provides a face recognition device, including:
  • An obtaining module configured to acquire a face image of the user
  • a filtering module configured to perform filtering processing on the face image acquired by the acquiring module, and obtain a target face image
  • An extraction module configured to extract a concentric double cross mode DCP feature from the target face image obtained by filtering by the filtering module;
  • a calculation module configured to calculate, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image extracted by the extraction module and the DCP feature corresponding to the original face image, and to identify the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
  • the filtering module includes:
  • a calculating unit configured to calculate a filtered gradient image of the face image as follows:
  • FDG(θ) = (∇G · v_θ) * I
  • where FDG(θ) represents the filtered gradient image of the face image corresponding to the direction of the θ angle, v_θ represents the unit direction vector of the filter at angle θ, G represents the two-dimensional Gaussian filter, I represents the face image, * denotes convolution, and ∇ represents the gradient operator.
  • the θ angle takes four values: 0 degrees, 45 degrees, 90 degrees, and 135 degrees.
  • the extracting module includes:
  • a first acquiring unit configured to obtain an inner circle and an outer circle having different radii by using a center point of the target face image as a center;
  • a second acquiring unit configured to acquire, from the inner circle acquired by the first acquiring unit, eight inner circle sampling points with equal angular intervals;
  • a third acquiring unit configured to acquire, from the outer circle acquired by the first acquiring unit, eight outer circle sampling points with equal angular intervals, wherein the inner circle sampling points and the outer circle sampling points have a correspondence relationship;
  • a coding unit configured to respectively encode the inner circle sampling point acquired by the second acquiring unit and the outer circle sampling point acquired by the third acquiring unit, and obtain the DCP feature.
  • the coding unit includes:
  • a calculating subunit configured to calculate the DCP code of each sampling point as follows:
  • DCP_i = S(I_Ai - I_O) × 2 + S(I_Bi - I_Ai), i = 0, 1, …, 7
  • where DCP_i represents the DCP code of the i-th sampling point, S(x) represents the gray-scale intensity function, and I_Ai, I_Bi and I_O respectively represent the gray values of the sampling points A_i, B_i and O;
  • the DCP feature is calculated as follows:
  • DCP = {DCP-1, DCP-2}, where DCP-1 = {DCP_0, DCP_2, DCP_4, DCP_6} and DCP-2 = {DCP_1, DCP_3, DCP_5, DCP_7}
  • where DCP represents the DCP feature and i represents the i-th sampling point; DCP-1 represents the DCP codes of the sampling points in the horizontal and vertical directions, and DCP-2 represents the DCP codes of the sampling points in the diagonal directions.
  • the calculating subunit is further configured to calculate the value of the gray-scale intensity function S(x) from a constant value function b(x) and fuzzy membership functions f_1,d(x) and f_0,d(x), where d is a boundary threshold.
  • a third aspect of the present invention provides a face recognition device, including:
  • the memory is used to store a program
  • the processor is configured to execute the program in the memory, such that the face recognition device performs the face recognition method according to the first aspect of the present invention or any one of the first to fifth possible implementation manners of the first aspect.
  • a fourth aspect of the present invention provides a storage medium storing one or more programs, including:
  • the one or more programs include instructions that, when executed by a face recognition device including one or more processors, cause the face recognition device to perform the face recognition method of any one of the claims.
  • a method for face recognition is provided. First, a face image of a user is acquired; the face image is then filtered to obtain a target face image; the DCP feature is extracted from the target face image; finally, the chi-square test is used to calculate a similarity score between the DCP feature corresponding to the target face image and the DCP feature corresponding to the original face image, and the target face image is identified according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
  • extracting the DCP feature in this way effectively identifies the face, which makes face recognition more robust, increases the practicability of the solution, and enhances the user experience.
  • FIG. 1 is a schematic diagram of an embodiment of a method for recognizing a face according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of local sampling of DCP features in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of two modes of adopting DCP features in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an extraction process of a DCP feature according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of the FAR and FRR evaluation index results of Experiment (1);
  • FIG. 6 is a schematic diagram of the FAR and FRR evaluation index results of Experiment (2);
  • FIG. 8 is a schematic diagram of the FAR and FRR evaluation index results of Experiment (4);
  • FIG. 10 is a schematic diagram of an embodiment of a face recognition device according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of another embodiment of a face recognition device according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of another embodiment of a face recognition device according to an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of another embodiment of a face recognition device according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of a face recognition device according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method for recognizing a face and a face recognition device, which effectively recognize a face by extracting a DCP feature, so that face recognition is more robust, the practicability of the scheme is increased, and the user experience is enhanced.
  • an embodiment of a method for recognizing a face includes:
  • the camera in the face recognition device captures a face image of the user, wherein the face image should include at least an eye, a nose, and a mouth.
  • Obtaining a user's face image mainly includes the following two steps:
  • Face image acquisition: different face images can be collected through the camera lens, for example static images, dynamic images, different positions, and different expressions.
  • the acquisition device automatically searches for and captures the user's face image.
  • Face detection: in practice, face detection is mainly used in the preprocessing stage of face recognition, that is, the position and size of the face are accurately calibrated in the image.
  • the pattern features contained in a face image are very rich, such as histogram features, color features, template features, structural features, and rectangular features (English name: Haar). Face detection picks out the useful information among these and uses such features to achieve detection.
  • the mainstream face detection method applies an adaptive boosting algorithm (English full name: Adaptive Boosting, English abbreviation: AdaBoost) to the above features. The AdaBoost algorithm is a classification method that combines several weak classifiers into a new strong classifier.
  • Adaboost algorithm is used to select some Haar features that best represent the face.
  • the weak classifiers are combined into a strong classifier according to a weighted voting method, and then several strong classifiers obtained by training are connected in series to form a cascade classifier, which effectively improves the detection speed of the classifier.
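  • The weak-to-strong combination described above can be sketched as follows, with simple decision stumps standing in for Haar features (an illustrative simplification; a real detector trains on Haar responses and cascades multiple strong classifiers):

```python
import math

def adaboost_train(xs, ys, stumps, rounds):
    # xs: one feature value per sample; ys: labels in {-1, +1};
    # stumps: candidate weak classifiers mapping a feature value to {-1, +1}
    n = len(xs)
    w = [1.0 / n] * n                      # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        # pick the stump with the lowest weighted error
        best, best_err = None, float('inf')
        for h in stumps:
            err = sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)
            if err < best_err:
                best, best_err = h, err
        # weighted-voting coefficient for this weak classifier
        alpha = 0.5 * math.log((1 - best_err) / max(best_err, 1e-10))
        ensemble.append((alpha, best))
        # re-weight: boost the misclassified samples
        w = [wi * math.exp(-alpha * y * best(x)) for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def adaboost_predict(ensemble, x):
    # strong classifier: sign of the weighted vote
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```

A cascade would then chain several such strong classifiers, letting early stages reject obvious non-faces cheaply.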
  • the acquired face image needs to be pre-processed; the preprocessing operates on the detected face image and ultimately serves feature extraction.
  • the original image acquired by the system is often not directly used due to various conditions and random interference.
  • image preprocessing such as gradation correction and noise filtering must therefore be performed in the early stage of image processing.
  • the preprocessing process mainly includes light compensation, gradation transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening of the face image; the result, the target face image, can finally be used for feature extraction.
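  • As one example of the preprocessing steps listed above, a minimal histogram-equalization sketch is given below (an illustrative routine, not the patent's specific procedure):

```python
import numpy as np

def equalize_histogram(img):
    # Histogram equalization for a uint8 grayscale image: map each gray
    # level through the normalized cumulative distribution function (CDF).
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()           # first nonzero CDF value
    denom = max(cdf[-1] - cdf_min, 1)      # guard against constant images
    lut = np.clip((cdf - cdf_min) * 255.0 / denom, 0, 255).astype(np.uint8)
    return lut[img]
```

Equalization stretches the used gray levels across the full range, which mitigates uneven lighting before feature extraction.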
  • the feature of the concentric double cross mode (Dual-Cross Patterns, English abbreviation: DCP) is extracted from the target face image.
  • the DCP feature is an improvement on the local binary pattern feature (English full name: Local Binary Patterns, English abbreviation: LBP).
  • the single-circle 8-neighborhood of LBP is extended to a double-circle 8-neighborhood; two groups of sub-DCP features of the same dimension are then extracted according to a local quaternary coding method in the horizontal-vertical and diagonal directions, and finally the two groups are concatenated to form the DCP feature.
  • the features that the face recognition device can use include visual features, pixel statistical features, face image transform coefficient features, and face image algebra features. Face feature extraction is performed on certain features of the face. Face feature extraction, also known as face representation, is a process of character modeling a face. The methods of face feature extraction are summarized into two categories: one is based on knowledge representation and the other is based on algebraic features or statistical learning.
  • the similarity score between the extracted DCP feature and the DCP feature corresponding to the original face image stored in the database is calculated by using the chi-square test, and a threshold value is set; when the similarity score exceeds this threshold, the matching result is output.
  • Face recognition compares the face features to be recognized with the stored face feature templates and judges the identity information of the face according to the degree of similarity. This process falls into two categories: one is verification, a one-to-one image comparison process; the other is identification, a one-to-many image matching and comparison process.
  • the specific matching method is not limited here.
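  • As an illustration of the matching step, a minimal sketch of the chi-square distance between two DCP feature histograms follows; the mapping from distance to a similarity score is an assumption for illustration, not the patent's exact scoring formula:

```python
def chi_square_distance(h1, h2, eps=1e-10):
    # Chi-square distance between two histograms: sum of (a - b)^2 / (a + b).
    # eps avoids division by zero on empty bins.
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def similarity_score(h1, h2):
    # Illustrative monotone mapping from distance to a similarity in (0, 1].
    return 1.0 / (1.0 + chi_square_distance(h1, h2))
```

A threshold on the score then decides acceptance, as described above.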
  • a method for face recognition is provided. First, a face image of a user is acquired; the face image is then filtered to obtain a target face image; the DCP feature is extracted from the target face image; finally, the chi-square test is used to calculate a similarity score between the DCP feature corresponding to the target face image and the DCP feature corresponding to the original face image, and the target face image is identified according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
  • extracting the DCP feature in this way effectively identifies the face, which makes face recognition more robust, increases the practicability of the solution, and enhances the user experience.
  • the face image is filtered and the target face image is obtained.
  • the filtered gradient image of the face image is calculated as follows:
  • FDG(θ) = (∇G · v_θ) * I
  • where FDG(θ) represents the FDG filtered gradient image of the face image corresponding to the direction of the θ angle, v_θ represents the standard (unit) direction vector of the filter at angle θ, G represents the two-dimensional Gaussian filter, I represents the face image, * denotes convolution, and ∇ represents the gradient operator.
  • G is a two-dimensional Gaussian filter whose calculation formula is as follows:
  • G(x, y) = (1 / (2πσ²)) exp(-(x² + y²) / (2σ²))
  • a Gaussian filter is a type of linear smoothing filter that selects weights based on the shape of a Gaussian function. Gaussian smoothing filters are very effective at suppressing noise that obeys a normal distribution.
  • the one-dimensional zero-mean Gaussian function is g(x) = exp(-x² / (2σ²)), where the Gaussian distribution parameter σ (sigma) determines the width of the Gaussian function.
  • a two-dimensional zero-mean discrete Gaussian function is commonly used as a smoothing filter.
  • the Gaussian function has five important properties that make it particularly useful in early image processing; these properties indicate that the Gaussian smoothing filter is a very effective low-pass filter in both the spatial and frequency domains. The five properties are:
  • the two-dimensional Gaussian function has rotational symmetry, that is, the smoothness of the filter in all directions is the same.
  • rotational symmetry means that the Gaussian smoothing filter does not deflect in either direction in subsequent edge detection.
  • the Gaussian function is a single-valued function, which indicates that the Gaussian filter replaces the pixel value of a point with a weighted mean of its neighborhood, where the weight of each neighborhood pixel decreases monotonically with its distance from the center point. This property is important because an edge is a local image feature; if the smoothing operation still had a large effect on pixels far from the operator's center, the smoothing would distort the image.
  • the Fourier transform spectrum of the Gaussian function is single-lobed. This property is a direct consequence of the fact that the Fourier transform of a Gaussian is a Gaussian. Images are often contaminated by unwanted high-frequency signals (such as noise and fine texture), while desired image features (such as edges) contain both low-frequency and high-frequency components. The single lobe means that the smoothed image is not contaminated by unwanted high-frequency signals while most of the desired signal is retained.
  • the Gaussian filter width (which determines the degree of smoothing) is characterized by the parameter σ, and the relationship between σ and the degree of smoothing is very simple: the larger σ is, the wider the frequency band of the Gaussian filter and the greater the smoothing.
  • the two-dimensional Gaussian convolution can be performed in two steps: first the image is convolved with a one-dimensional Gaussian function, and then the result is convolved with the same one-dimensional Gaussian function in the perpendicular direction. Therefore, the computational cost of two-dimensional Gaussian filtering grows linearly with the width of the filter template rather than quadratically.
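  • The separability property above can be sketched as follows; the kernel radius and σ are illustrative choices, not values taken from the patent:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    # Discrete zero-mean 1-D Gaussian kernel, normalized to sum to 1.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_2d_separable(img, sigma=1.0, radius=3):
    # 2-D Gaussian smoothing as two 1-D passes: rows first, then columns.
    # Cost per pixel is O(kernel width), not O(width squared).
    k = gaussian_kernel_1d(sigma, radius)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)
```

Because the 2-D Gaussian is the outer product of two 1-D Gaussians, the two-pass result matches full 2-D convolution away from the borders.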
  • a method for obtaining a target face image by filtering the face image is provided: a filtered gradient image is calculated separately in each of four directions using the corresponding formula, and the filtering is optimized accordingly. This suppresses the influence of noise and illumination changes, improves the quality of the face image, facilitates the extraction of image features, and enhances the practicability of the scheme.
  • the θ angle takes four values: 0 degrees, 45 degrees, 90 degrees, and 135 degrees.
  • the FDG filtered gradient image is composed of F_X and F_Y, where F_X represents FDG gradient filtering in the horizontal direction and F_Y represents FDG gradient filtering in the vertical direction.
  • the filters in the four directions are only an example; in practical applications, the filter may be configured with other parameters, which is not limited herein.
  • the filtering process for the face image may specifically involve filtering in four directions, namely 0 degrees, 45 degrees, 90 degrees, and 135 degrees; experiments verify that filtering the input face image from these four angles achieves a cost-effective result.
  • filtering in more directions at a smaller angular interval can better suppress noise and illumination changes, but it increases the computational burden, which is not conducive to practical applications; conversely, filtering at a larger angular interval may weaken the suppression of noise and illumination variations, which is not conducive to image processing. Therefore, the filtering in the four directions provided by the solution of the present invention is operable and practical.
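  • A minimal sketch of a directional filtered gradient follows, using finite-difference gradients; this assumes the image has already been Gaussian-smoothed, and the patent's exact FDG filter construction may differ:

```python
import numpy as np

def directional_gradient(img, theta_deg):
    # Project the image gradient onto the unit vector at angle theta.
    # np.gradient returns (d/d_row, d/d_column) via central differences.
    gy, gx = np.gradient(img.astype(float))
    t = np.deg2rad(theta_deg)
    return gx * np.cos(t) + gy * np.sin(t)

# Four filtered gradient images, one per direction used in the patent:
# fdg = {th: directional_gradient(smoothed_img, th) for th in (0, 45, 90, 135)}
```

On a left-to-right intensity ramp, the 0-degree response is the ramp slope and the 90-degree response is zero, which matches the intuition of projecting the gradient onto each direction.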
  • the extracting a concentric double cross mode DCP feature from the target face image may include:
  • the inner circle sampling point and the outer circle sampling point are respectively coded, and the DCP feature is obtained.
  • the DCP feature is extracted by sampling 8 neighborhoods on a double circle.
  • the diagonals of the target face image are connected, and the intersection of the diagonals, that is, the center point, is obtained.
  • Two circles with different radii are drawn with the center point as the center, the inner circle is the smaller radius, and the outer circle is the larger radius.
  • sampling points are taken on the circumferences of both the inner circle and the outer circle.
  • FIG. 2 is a schematic diagram of local sampling of DCP features according to an embodiment of the present invention.
  • the center point is O
  • the inner circle sampling points are A_i
  • the outer circle sampling points are B_i
  • A_0 and B_0 have a corresponding relationship, both corresponding to the 0 degree angle; A_1 corresponds to B_1, both corresponding to the 45 degree angle, and so on, until A_7 and B_7, which correspond to the 315 degree angle.
  • this sampling method extracts DCP features over 8 neighborhoods of a double circle.
  • the advantage of extracting features this way is that it increases the context information of the local neighborhood and characterizes the local intensity contrast.
  • a sampling method of 8 neighborhoods over three circles was also tried, but the test results were mediocre; the reason is that there is no suitable calculation form for coding the center point together with the three points in each direction, which limited the evaluation of that sampling method.
  • the DCP feature is extracted by local sampling over 8 neighborhoods of a double circle.
  • by contrast, LBP features are extracted from 8 neighborhoods or 16 neighborhoods of a single circle.
  • the extraction of DCP features can better follow the trend of facial texture.
  • in the face there are two main pieces of key information: one is the structure of the facial organs, and the other is the shape of the facial organs.
  • the shapes of the facial organs are regular, and their ends converge substantially in the diagonal directions, so that features can be extracted along the diagonals.
  • the wrinkles on the forehead are flat, but they are convex or inclined on the cheeks. Therefore, local sampling of 8 neighborhoods on the double circle can better describe the main texture information of the face and improves the feasibility of the scheme.
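  • The double-circle sampling geometry can be sketched as follows; the radii are free parameters not fixed by this extract:

```python
import math

def dcp_sampling_points(cx, cy, r_in, r_out):
    # 8 equally spaced angles (0, 45, ..., 315 degrees); A_i lies on the
    # inner circle and its corresponding B_i on the outer circle, both on
    # the same ray from the center point O = (cx, cy).
    points = []
    for i in range(8):
        t = math.radians(45 * i)
        a = (cx + r_in * math.cos(t), cy + r_in * math.sin(t))
        b = (cx + r_out * math.cos(t), cy + r_out * math.sin(t))
        points.append((a, b))
    return points
```

In practice the returned coordinates would be rounded or interpolated to pixel positions before reading gray values.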
  • on the basis of the third optional embodiment corresponding to the foregoing figure, the respectively encoding the inner circle sampling points and the outer circle sampling points and obtaining the DCP feature may include:
  • DCP_i = S(I_Ai - I_O) × 2 + S(I_Bi - I_Ai), i = 0, 1, …, 7
  • where DCP_i represents the DCP code of the i-th sampling point, S(x) represents the gray-scale intensity function, and I_Ai, I_Bi and I_O respectively represent the gray values of the sampling points A_i, B_i and O;
  • the DCP feature is calculated as follows:
  • DCP = {DCP-1, DCP-2}
  • where DCP represents the DCP feature and i represents the i-th sampling point; DCP-1 represents the DCP codes of the sampling points in the horizontal and vertical directions, and DCP-2 represents the DCP codes of the sampling points in the diagonal directions.
  • the inner circle sampling point and the outer circle sampling point are respectively coded, and the DCP feature is obtained, which may be specifically divided into the following two steps.
  • the first step is to independently encode the sampling points on the inner circle and the outer circle in each of the eight directions;
  • the second step is to connect the 8 direction codes on the inner circle and the 8 direction codes on the outer circle to obtain DCP coding.
  • the DCP encoding is calculated as:
  • DCP_i = S(I_Ai - I_O) × 2 + S(I_Bi - I_Ai), i = 0, 1, …, 7
  • where DCP_i represents the DCP code of the i-th sampling point, S(x) represents the gray-scale intensity function, and I_Ai, I_Bi and I_O respectively represent the gray values of the sampling points A_i, B_i and O.
  • FIG. 3 is a schematic diagram of two modes of adopting DCP features according to an embodiment of the present invention. As shown in the figure, according to the above analysis, DCP coding can be divided into the following two groups:
  • Horizontal and vertical directions: DCP-1 = {DCP_0, DCP_2, DCP_4, DCP_6}
  • Diagonal directions: DCP-2 = {DCP_1, DCP_3, DCP_5, DCP_7}
  • the final composition of the DCP feature is the concatenation DCP = {DCP-1, DCP-2}.
  • FIG. 4 is a schematic diagram of a process for extracting a DCP feature according to an embodiment of the present invention.
  • a face image is acquired, and a target face image is obtained by filtering.
  • the target face image is sampled over 8 neighborhoods on two circles, two groups of four-direction DCP codes are obtained, and finally the DCP feature of the target face image is formed.
  • Face recognition is performed by comparison with DCP features in the database.
  • the inner circle sampling point and the outer circle sampling point are respectively encoded, and a DCP feature is obtained.
  • the sampling points on the inner circle and the outer circle are independently coded in 8 directions, and the 8 direction codes on the inner circle are connected with the 8 direction codes on the outer circle to obtain the DCP coding.
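  • A sketch of the DCP encoding as defined in the Dual-Cross Patterns literature, matching the two groups above; the hard-threshold S(x) shown here is the baseline that the patent's soft-boundary variant replaces:

```python
def s(x):
    # Hard-threshold gray-intensity function (baseline; the patent uses
    # a fuzzy "soft boundary" variant instead).
    return 1 if x >= 0 else 0

def dcp_code(i_a, i_b, i_o):
    # DCP_i = S(I_Ai - I_O) * 2 + S(I_Bi - I_Ai): a quaternary code in {0,1,2,3}.
    return s(i_a - i_o) * 2 + s(i_b - i_a)

def dcp_feature(inner, outer, center):
    # inner/outer: gray values at A_0..A_7 and B_0..B_7; center: gray value at O.
    codes = [dcp_code(a, b, center) for a, b in zip(inner, outer)]
    dcp1 = codes[0::2]   # horizontal/vertical directions: i = 0, 2, 4, 6
    dcp2 = codes[1::2]   # diagonal directions:            i = 1, 3, 5, 7
    return dcp1, dcp2
```

In a full pipeline, these per-pixel codes would be histogrammed over image blocks before the chi-square comparison.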
  • the value of the gray-scale intensity function S(x) is calculated from a constant value function b(x) and fuzzy membership functions f_1,d(x) and f_0,d(x), where d is a boundary threshold;
  • the fuzzy membership functions f_1,d(x) and f_0,d(x) are calculated piecewise with respect to the boundary threshold d, which affects the degree of membership; in this embodiment, d is 0.0005.
  • the intensity contrast is susceptible to noise, so the "soft boundary" coding method is used to improve the gray intensity function: when the gray value of a pixel is close to that of the center point, the code is not easily flipped by noise, which makes the extraction of the DCP feature more robust and improves the feasibility and practicability of the solution of the present invention.
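  • The patent's exact membership formulas are rendered as images in the original and are not reproduced in this extract; one common piecewise-linear form consistent with the description (a soft boundary of width 2d around zero) might look like:

```python
def f1(x, d):
    # Fuzzy membership toward code "1": 0 below -d, 1 above +d,
    # and a linear ramp across the soft boundary [-d, d].
    if x >= d:
        return 1.0
    if x <= -d:
        return 0.0
    return 0.5 + x / (2 * d)

def f0(x, d):
    # Complementary membership toward code "0".
    return 1.0 - f1(x, d)
```

With d = 0.0005 as in the embodiment, gray-value differences far smaller than d contribute nearly equal membership to both codes, so noise near the boundary no longer flips the code outright.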
  • FIG. 6 is a schematic diagram of the FAR and FRR evaluation index results of Experiment (2); it shows the FAR and FRR evaluation results obtained on the face registration database.
  • FIG. 7 is a schematic diagram of the FAR and FRR evaluation index results of Experiment (3).
  • FIG. 9 is a schematic diagram of the FAR and FRR evaluation index results of Experiment (5). It can be understood that if the face image is not aligned, it is necessary to first perform a similarity transformation and an affine transformation on the face image, and then perform image cropping.
  • the face recognition device 200 in the embodiment of the present invention includes:
  • the obtaining module 201 is configured to acquire a face image of the user
  • the filtering module 202 is configured to perform filtering processing on the face image acquired by the acquiring module 201, and obtain a target face image;
  • the extraction module 203 is configured to extract a concentric double cross mode DCP feature from the target face image obtained by filtering by the filtering module 202;
  • a calculation module 204 configured to calculate, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image extracted by the extraction module 203 and the DCP feature corresponding to the original face image, and to identify the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
  • In the embodiment of the present invention, the acquiring module 201 acquires a face image of the user; the filtering module 202 performs filtering processing on the face image acquired by the obtaining module 201 and obtains a target face image; the extraction module 203 extracts the concentric double cross mode DCP feature from the target face image obtained by the filtering module 202; and the calculation module 204 uses the chi-square test to calculate the similarity score between the DCP feature corresponding to the target face image extracted by the extraction module 203 and the DCP feature corresponding to the original face image, and identifies the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
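The chi-square comparison the calculation module applies to two DCP feature histograms can be sketched in Python as follows. The histogram values are illustrative stand-ins, and the mapping from chi-square distance to a recognition decision (here, simple thresholding of the distance) is an assumption, not fixed by this text:

```python
def chi_square_distance(h1, h2):
    """Chi-square distance between two feature histograms.

    Smaller values mean the histograms (and hence the DCP features)
    are more similar.
    """
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

# Illustrative normalized DCP histograms (not real features)
h_target = [0.2, 0.3, 0.1, 0.4]
h_original = [0.25, 0.25, 0.1, 0.4]

d = chi_square_distance(h_target, h_original)
is_same_person = d < 0.5  # hypothetical decision threshold
```

In practice the DCP codes of all pixels are accumulated into such histograms (often per image block, then concatenated) before the comparison is made.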
  • In the embodiment of the present invention, a face recognition method is provided. First, a face image of a user is acquired; the face image is then filtered to obtain a target face image; the DCP feature is then extracted from the target face image; finally, the chi-square test is used to calculate the similarity score between the DCP feature corresponding to the target face image and the DCP feature corresponding to the original face image, and the target face image is identified according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
  • The extraction of the DCP feature is used to effectively identify the face, which makes face recognition more robust, increases the practicability of the solution, and enhances the user experience.
  • another embodiment of the face recognition device of the present invention includes:
  • the obtaining module 201 is configured to acquire a face image of the user;
  • the filtering module 202 is configured to perform filtering processing on the face image acquired by the acquiring module 201, and obtain a target face image;
  • the extraction module 203 is configured to extract a concentric double cross mode DCP feature from the target face image obtained by filtering by the filtering module 202;
  • a calculation module 204, configured to calculate, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image extracted by the extraction module 203 and the DCP feature corresponding to the original face image, and to identify the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance;
  • the filtering module 202 includes:
  • the calculating unit 2021 is configured to calculate a filtered gradient image of the face image as follows:
  • FDG(θ) represents the filtered gradient image of the face image corresponding to the direction of the angle θ;
  • G represents the two-dimensional Gaussian filter;
  • ∇ represents the gradient operator symbol.
  • In the embodiment of the present invention, a method for obtaining a target face image by filtering the face image is provided: the filtered gradient image is calculated separately for four directions by using the above formula, and the corresponding filtering is performed.
  • This optimization can suppress the influence of noise and illumination changes, improve the quality of the face image, facilitate the extraction of image features, and enhance the practicability of the scheme.
  • On the basis of the foregoing embodiment, the angle θ takes four values, namely 0 degrees, 45 degrees, 90 degrees, and 135 degrees.
  • In the embodiment of the present invention, the filtering process for the face image may specifically involve filtering in four directions, namely 0 degrees, 45 degrees, 90 degrees, and 135 degrees. It is verified by experiments that filtering the input face image from these four angles achieves a highly cost-effective result.
  • Although filtering in more directions at smaller angular intervals can better suppress noise and illumination changes, the calculation load increases, which is not conducive to practical applications.
  • Conversely, filtering with larger angular intervals weakens the suppression of noise and illumination variations, which is not conducive to image processing. Therefore, the four-direction filtering of the face image provided by the solution of the present invention is both operable and practical.
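The four-direction filtering step can be illustrated with a small self-contained sketch: smooth the image with a Gaussian-like kernel (standing in for the two-dimensional Gaussian filter G), then project the gradient onto the unit direction vector for each angle θ. The 3x3 kernel, finite-difference gradient, and test image are assumptions for illustration only:

```python
import math

def smooth(img):
    """3x3 Gaussian-like smoothing (illustrative stand-in for the 2-D Gaussian filter G)."""
    h, w = len(img), len(img[0])
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        wgt = k[dy + 1][dx + 1]
                        acc += wgt * img[yy][xx]
                        norm += wgt
            out[y][x] = acc / norm
    return out

def filtered_gradient(img, theta_deg):
    """Gradient of the smoothed image projected onto the direction vector for theta."""
    s = smooth(img)
    h, w = len(s), len(s[0])
    ux, uy = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (s[y][x + 1] - s[y][x - 1]) / 2.0  # central difference in x
            gy = (s[y + 1][x] - s[y - 1][x]) / 2.0  # central difference in y
            out[y][x] = gx * ux + gy * uy
    return out

# Tiny synthetic image with a vertical edge: strong response at theta = 0,
# near-zero response at theta = 90 (the edge has no vertical gradient).
img = [[0, 0, 10, 10]] * 4
fdg0 = filtered_gradient(img, 0)
fdg90 = filtered_gradient(img, 90)
```

In the patented scheme this would be repeated for θ = 0, 45, 90, and 135 degrees to produce the four filtered gradient images.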
  • another embodiment of the face recognition device of the present invention includes:
  • the obtaining module 201 is configured to acquire a face image of the user;
  • the filtering module 202 is configured to perform filtering processing on the face image acquired by the acquiring module 201, and obtain a target face image;
  • the extraction module 203 is configured to extract a concentric double cross mode DCP feature from the target face image obtained by filtering by the filtering module 202;
  • a calculation module 204, configured to calculate, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image extracted by the extraction module 203 and the DCP feature corresponding to the original face image, and to identify the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance;
  • the extraction module 203 includes:
  • the first obtaining unit 2031 is configured to obtain an inner circle and an outer circle having different radii, respectively, with the center point of the target face image as a center;
  • a second acquiring unit 2032 configured to acquire, from the inner circle acquired by the first acquiring unit 2031, eight inner circle sampling points with equal angular intervals;
  • a third obtaining unit 2033, configured to acquire, from the outer circle acquired by the first acquiring unit 2031, eight outer circle DCP sampling points with equal angular intervals, wherein the inner circle sampling points and the outer circle sampling points have a corresponding relationship;
  • the encoding unit 2034 is configured to respectively encode the inner circle sampling point acquired by the second acquiring unit 2032 and the outer circle sampling point acquired by the third acquiring unit 2033, and obtain the DCP feature.
  • In the embodiment of the present invention, the DCP feature is extracted by local sampling from 8 neighborhoods on each of two concentric circles,
  • whereas the LBP feature is extracted from 8 neighborhoods or 16 neighborhoods of a single circle.
  • DCP features can better cater to the direction of facial texture.
  • In a face image there are two main pieces of key information: one is the structure of the facial organs, and the other is the shape of the facial organs.
  • the shape of the facial organs is regular, and their ends converge substantially in a diagonal direction so that features can be extracted from the diagonal direction.
  • For example, the wrinkles on the forehead are flat, while those on the cheeks are convex or inclined. Therefore, local sampling of 8 neighborhoods on the double circle can better describe the main texture information of the face and improve the feasibility of the scheme.
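The double-circle sampling geometry can be sketched as follows; the radii are free parameters here, since the text only requires two concentric circles of different radii with 8 equally spaced points on each:

```python
import math

def dcp_sampling_points(cx, cy, r_in, r_out):
    """8 inner-circle points A_i and the corresponding outer-circle points B_i,
    at equal 45-degree intervals around the center point (cx, cy).

    r_in and r_out are illustrative parameters: the scheme only requires
    two concentric circles of different radii.
    """
    points = []
    for i in range(8):
        theta = math.radians(45 * i)
        a_i = (cx + r_in * math.cos(theta), cy + r_in * math.sin(theta))
        b_i = (cx + r_out * math.cos(theta), cy + r_out * math.sin(theta))
        points.append((a_i, b_i))
    return points

pts = dcp_sampling_points(0.0, 0.0, 1.0, 2.0)
```

Each pair (A_i, B_i) lies on the same ray from the center point, which is the "corresponding relationship" between inner-circle and outer-circle sampling points.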
  • another embodiment of the face recognition device of the present invention includes:
  • the obtaining module 201 is configured to acquire a face image of the user;
  • the filtering module 202 is configured to perform filtering processing on the face image acquired by the acquiring module 201, and obtain a target face image;
  • the extraction module 203 is configured to extract a concentric double cross mode DCP feature from the target face image obtained by filtering by the filtering module 202;
  • a calculation module 204, configured to calculate, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image extracted by the extraction module 203 and the DCP feature corresponding to the original face image, and to identify the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance;
  • the extraction module 203 includes:
  • the first obtaining unit 2031 is configured to obtain an inner circle and an outer circle having different radii, respectively, with the center point of the target face image as a center;
  • a second acquiring unit 2032 configured to acquire, from the inner circle acquired by the first acquiring unit 2031, eight inner circle sampling points with equal angular intervals;
  • a third obtaining unit 2033, configured to acquire, from the outer circle acquired by the first acquiring unit 2031, eight outer circle DCP sampling points with equal angular intervals, wherein the inner circle sampling points and the outer circle sampling points have a corresponding relationship;
  • the encoding unit 2034 is configured to respectively encode the inner circle sampling point acquired by the second acquiring unit 2032 and the outer circle sampling point acquired by the third acquiring unit 2033, and obtain the DCP feature;
  • the coding unit 2034 includes:
  • a calculating subunit 20341, configured to separately calculate the DCP codes at the inner circle sampling points and the outer circle sampling points as follows:
  • DCP i represents the DCP code of the ith sample point
  • S(x) represents the gray scale intensity function
  • I Ai , I Bi and I O respectively represent the gray values of the sampling points A i , B i and O;
  • the DCP features are calculated as follows:
  • DCP represents the DCP feature and i represents the ith sample point.
  • one term represents the DCP code of the sampling points in the horizontal and vertical directions,
  • and the other term represents the DCP code of the sampling points in the diagonal directions.
  • the inner circle sampling point and the outer circle sampling point are respectively encoded, and a DCP feature is obtained.
  • Specifically, the sampling points on the inner circle and the outer circle are independently coded in 8 directions, and the 8 direction codes on the inner circle and the 8 direction codes on the outer circle are combined to obtain the DCP code.
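As a sketch of the encode-and-combine step, the following uses the dual-cross-patterns formulation common in the literature (a hard 0/1 intensity function, with the codes for the horizontal/vertical points and the diagonal points combined separately); the patent's exact equations are given only as images and may differ in detail:

```python
def s(x):
    """Hard gray-scale intensity (sign) function; the patent also describes
    a 'soft boundary' variant of this function."""
    return 1 if x >= 0 else 0

def dcp_codes(center, inner, outer):
    """Per-direction 2-bit codes from the center gray value O, the inner-circle
    gray values A_i, and the outer-circle gray values B_i (i = 0..7)."""
    return [s(a - center) * 2 + s(b - a) for a, b in zip(inner, outer)]

def dcp_feature(center, inner, outer):
    """Combine the 8 direction codes into two sub-patterns: one for the
    horizontal/vertical directions (i = 0, 2, 4, 6) and one for the
    diagonal directions (i = 1, 3, 5, 7)."""
    codes = dcp_codes(center, inner, outer)
    dcp_hv = sum(codes[i] * 4 ** (i // 2) for i in (0, 2, 4, 6))
    dcp_diag = sum(codes[i] * 4 ** ((i - 1) // 2) for i in (1, 3, 5, 7))
    return dcp_hv, dcp_diag

# Illustrative gray values (not from a real image)
O = 100
A = [110, 90, 105, 95, 100, 120, 80, 101]
B = [120, 85, 100, 99, 90, 130, 70, 150]
hv, diag = dcp_feature(O, A, B)
```

Each direction code is in 0..3, so each sub-pattern falls in 0..255, keeping the per-pixel code compact compared with jointly encoding all 8 directions.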
  • the obtaining module 201 is configured to acquire a face image of the user;
  • the filtering module 202 is configured to perform filtering processing on the face image acquired by the acquiring module 201, and obtain a target face image;
  • the extraction module 203 is configured to extract a concentric double cross mode DCP feature from the target face image obtained by filtering by the filtering module 202;
  • the calculating module 204 is configured to calculate, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image extracted by the extraction module 203 and the DCP feature corresponding to the original face image, and to identify the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance;
  • the extraction module 203 includes:
  • the first obtaining unit 2031 is configured to obtain an inner circle and an outer circle having different radii, respectively, with the center point of the target face image as a center;
  • a second acquiring unit 2032 configured to acquire, from the inner circle acquired by the first acquiring unit 2031, eight inner circle sampling points with equal angular intervals;
  • a third obtaining unit 2033, configured to acquire, from the outer circle acquired by the first acquiring unit 2031, eight outer circle DCP sampling points with equal angular intervals, wherein the inner circle sampling points and the outer circle sampling points have a corresponding relationship;
  • the encoding unit 2034 is configured to respectively encode the inner circle sampling point acquired by the second acquiring unit 2032 and the outer circle sampling point acquired by the third acquiring unit 2033, and obtain the DCP feature;
  • the coding unit 2034 includes:
  • the calculating subunit 20341 is configured to separately calculate DCP codes on the inner circle sampling point and the outer circle sampling point as follows:
  • DCP i represents the DCP code of the ith sample point
  • S(x) represents the gray scale intensity function
  • I Ai , I Bi and I O respectively represent the gray values of the sampling points A i , B i and O;
  • the DCP features are calculated as follows:
  • DCP represents the DCP feature and i represents the ith sample point.
  • one term represents the DCP code of the sampling points in the horizontal and vertical directions,
  • and the other term represents the DCP code of the sampling points in the diagonal directions.
  • the calculating subunit 20341 is further configured to calculate the value of the gray level intensity function S(x) as follows:
  • S(x) represents a grayscale intensity function
  • b(x) represents a constant value function
  • f 1,d (x) and f 0,d (x) represent fuzzy membership functions;
  • d is a boundary threshold.
  • In the embodiment of the present invention, the intensity contrast is susceptible to noise, so the "soft boundary" coding method is used to improve the gray intensity function: when the gray value of a pixel point is close to that of the center point, the code is not susceptible to noise, which makes the extraction of the DCP feature more robust and improves the feasibility and practicability of the solution of the present invention.
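The "soft boundary" idea can be illustrated with a simple piecewise function: outside the band |x| > d the code takes the hard 0/1 value, while inside the band it varies smoothly. The exact constant-value function b(x) and fuzzy membership functions f 1,d and f 0,d are not reproduced in this text, so the linear ramp below is an assumption:

```python
def soft_s(x, d):
    """Soft gray-scale intensity function (illustrative).

    |x| > d: hard decision (1 for positive differences, 0 for negative).
    |x| <= d: linear ramp between 0 and 1 (assumed fuzzy membership), so
    differences close to zero — a pixel whose gray value is close to that
    of the center point — change the code gradually instead of flipping it
    under small amounts of noise.
    """
    if x > d:
        return 1.0
    if x < -d:
        return 0.0
    return 0.5 + x / (2.0 * d)

# Near-zero differences no longer flip abruptly between 0 and 1
assert soft_s(10, d=5) == 1.0
assert soft_s(-10, d=5) == 0.0
assert soft_s(0, d=5) == 0.5
```

With the hard sign function, a difference of +1 and -1 produce opposite bits; with the soft version both map near 0.5, which is the robustness property the text describes.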
  • FIG. 14 is a schematic structural diagram of a face recognition device 30 according to an embodiment of the present invention.
  • the face recognition device 30 can include an input device 310, an output device 320, a processor 330, and a memory 340.
  • the output device in the embodiment of the present invention may be a display device.
  • Memory 340 can include read only memory and random access memory and provides instructions and data to processor 330. A portion of the memory 340 may also include a non-volatile random access memory (English name: Non-Volatile Random Access Memory, English abbreviation: NVRAM).
  • the memory 340 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set thereof:
  • Operation instructions include various operation instructions for implementing various operations.
  • Operating system Includes a variety of system programs for implementing various basic services and handling hardware-based tasks.
  • the processor 330 is configured to:
  • the processor 330 controls the operation of the face recognition device 30.
  • the processor 330 may also be referred to as a central processing unit (English full name: Central Processing Unit, English abbreviation: CPU).
  • Memory 340 can include read only memory and random access memory and provides instructions and data to processor 330. A portion of the memory 340 may also include an NVRAM.
  • the components of the face recognition device 30 are coupled together by a bus system 350.
  • the bus system 350 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus system 350 in the figure.
  • Processor 330 may be an integrated circuit chip with signal processing capabilities.
  • each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 330 or an instruction in a form of software.
  • the processor 330 may be a general-purpose processor, a digital signal processor (English full name: Digital Signal Processor, English abbreviation: DSP), an application-specific integrated circuit (English full name: Application Specific Integrated Circuit, English abbreviation: ASIC), a field-programmable gate array (English full name: Field-Programmable Gate Array, English abbreviation: FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention may be implemented or carried out.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in memory 340, and processor 330 reads the information in memory 340 and, in conjunction with its hardware, performs the steps of the above method.
  • the processor 330 is further configured to:
  • the filtered gradient image of the face image is calculated as follows:
  • FDG(θ) represents the filtered gradient image of the face image corresponding to the direction of the angle θ, the direction vector term indicates the standard direction vector in the filter, G denotes the two-dimensional Gaussian filter, and ∇ denotes the gradient operator symbol.
  • the processor 330 is further configured to:
  • the inner circle sampling point and the outer circle sampling point are respectively encoded, and the DCP feature is obtained.
  • the processor 330 is further configured to:
  • the DCP codes on the inner circle sampling point and the outer circle sampling point are respectively calculated as follows:
  • DCP i represents the DCP code of the ith sample point
  • S(x) represents the gray scale intensity function
  • I Ai , I Bi and I O respectively represent the gray values of the sampling points A i , B i and O;
  • the DCP features are calculated as follows:
  • DCP represents the DCP feature and i represents the ith sample point.
  • one term represents the DCP code of the sampling points in the horizontal and vertical directions,
  • and the other term represents the DCP code of the sampling points in the diagonal directions.
  • the processor 330 is further configured to:
  • S(x) represents a grayscale intensity function
  • b(x) represents a constant value function
  • f 1,d (x) and f 0,d (x) represent fuzzy membership functions;
  • d is a boundary threshold.
  • The related description of FIG. 14 can be understood by referring to the related description and effects of the method part in FIG. 1, and details are not described herein again.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (English full name: Read-Only Memory, English abbreviation: ROM), a random access memory (English full name: Random Access Memory, English abbreviation: RAM), a magnetic disk, an optical disc, or any other medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A human facial recognition method and a human facial recognition device, the method comprising: acquiring a facial image of a user (101); performing a filtering process on the facial image to acquire a target facial image (102); extracting a concentric dual cross pattern (DCP) feature from the target facial image (103); using a chi-squared test to calculate a similarity score between the DCP feature corresponding to the target facial image and a DCP feature corresponding to an original facial image, and using the similarity score to perform facial recognition of the target facial image, wherein the DCP feature corresponding to the original facial image is acquired in advance (104). The method extracts the DCP feature to achieve effective facial recognition, raise the robustness of facial recognition, enhance the practicality of the facial recognition solution, and improve the user experience.

Description

Face recognition method and face recognition device

Technical field

The embodiments of the present invention relate to the field of biometrics and the field of computer technology, and in particular, to a face recognition method and a face recognition device.
Background technique

Face recognition technology has developed rapidly in recent years. It is based on human facial features and recognizes input face images or video streams: first, it is judged whether a human face is present; if so, the position and size of each face and the position information of the main facial organs are further given. Based on this information, the identity features contained in each face are extracted and compared with known faces to identify the identity of each face.

In the prior art, a face image archive is first established: a camera collects face image files of the faces of personnel, or their photos are taken to form face image files, and face-print codes generated from these files are stored. The current face image is then acquired, that is, the face image of a person currently entering or leaving is captured by a camera or input as a photo, and a face-print code is generated from the current face image file. The current face-print code is then compared, by retrieval, against the face-print codes in the archive.

However, in the prior art, although face-print coding can characterize the texture structure of a face, the influence of noise and illumination changes has not been well addressed, and when the facial pose and expression change it is also difficult to perform face recognition with simple computation; its robustness is therefore not good enough.
Summary of the invention

Embodiments of the present invention provide a face recognition method and a face recognition device, which effectively recognize a face by extracting DCP features, making face recognition more robust, increasing the practicability of the solution, and improving the user experience.

In view of this, a first aspect of the present invention provides a face recognition method, including:

acquiring a face image of a user;

filtering the face image to obtain a target face image;

extracting a concentric dual cross pattern (DCP) feature from the target face image;

calculating, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image and a DCP feature corresponding to an original face image, and identifying the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
With reference to the first aspect of the embodiments of the present invention, in a first possible implementation manner, the filtering the face image to obtain a target face image includes:

calculating a filtered gradient image of the face image as follows:

[equation image: FDG(θ) expressed in terms of the two-dimensional Gaussian filter G, the gradient operator ∇, and the standard direction vector]

where FDG(θ) represents the filtered gradient image of the face image corresponding to the direction of the angle θ, the direction vector term represents the standard direction vector in the filter, G represents the two-dimensional Gaussian filter, and ∇ represents the gradient operator symbol.
With reference to the first possible implementation manner of the first aspect of the embodiments of the present invention, in a second possible implementation manner, the angle θ takes four values, namely 0 degrees, 45 degrees, 90 degrees, and 135 degrees.

With reference to the first aspect of the embodiments of the present invention, in a third possible implementation manner, the extracting a concentric dual cross pattern DCP feature from the target face image includes:

taking the center point of the target face image as a center, respectively obtaining an inner circle and an outer circle with different radii;

obtaining 8 inner-circle sampling points with equal angular intervals from the inner circle;

obtaining 8 outer-circle DCP sampling points with equal angular intervals from the outer circle, wherein the inner-circle sampling points and the outer-circle sampling points have a corresponding relationship;

encoding the inner-circle sampling points and the outer-circle sampling points respectively, and obtaining the DCP feature.
With reference to the third possible implementation manner of the first aspect of the embodiments of the present invention, in a fourth possible implementation manner, the encoding the inner-circle sampling points and the outer-circle sampling points respectively, and obtaining the DCP feature, includes:

calculating the DCP codes at the inner-circle sampling points and the outer-circle sampling points as follows:

[equation image: first form of the DCP code DCP i]

or,

[equation image: second form of the DCP code DCP i]

where DCP i represents the DCP code of the i-th sampling point, S(x) represents the gray-scale intensity function, and I Ai , I Bi and I O respectively represent the gray values of the sampling points A i , B i and O;

calculating the DCP feature as follows:

[equation image: the DCP feature combined from the per-point DCP codes]

where DCP represents the DCP feature, i represents the i-th sampling point, one term represents the DCP code of the sampling points in the horizontal and vertical directions, and the other term represents the DCP code of the sampling points in the diagonal directions.
With reference to the fourth possible implementation manner of the first aspect of the embodiments of the present invention, in a fifth possible implementation manner, the value of the gray-scale intensity function S(x) is calculated as follows:

[equation image: piecewise "soft boundary" definition of S(x)]

where S(x) represents the gray-scale intensity function, b(x) represents a constant-value function, f 1,d (x) and f 0,d (x) represent fuzzy membership functions, and d is a boundary threshold.
A second aspect of the present invention provides a face recognition device, including:

an obtaining module, configured to acquire a face image of a user;

a filtering module, configured to filter the face image acquired by the obtaining module and obtain a target face image;

an extraction module, configured to extract a concentric dual cross pattern DCP feature from the target face image obtained by the filtering module;

a calculation module, configured to calculate, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image extracted by the extraction module and a DCP feature corresponding to an original face image, and to identify the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
With reference to the second aspect of the embodiments of the present invention, in a first possible implementation manner, the filtering module includes:

a calculating unit, configured to calculate a filtered gradient image of the face image as follows:

[equation image: FDG(θ) expressed in terms of the two-dimensional Gaussian filter G, the gradient operator ∇, and the standard direction vector]

where FDG(θ) represents the filtered gradient image of the face image corresponding to the direction of the angle θ, the direction vector term represents the standard direction vector in the filter, G represents the two-dimensional Gaussian filter, and ∇ represents the gradient operator symbol.
With reference to the first possible implementation of the second aspect of the embodiments of the present invention, in a second possible implementation, the angle θ takes four values: 0 degrees, 45 degrees, 90 degrees, and 135 degrees.
With reference to the second aspect of the embodiments of the present invention, in a third possible implementation, the extraction module comprises:
a first acquiring unit, configured to obtain an inner circle and an outer circle with different radii, both centered at the center point of the target face image;
a second acquiring unit, configured to obtain, on the inner circle obtained by the first acquiring unit, eight inner-circle sampling points spaced at equal angles;
a third acquiring unit, configured to obtain, on the outer circle obtained by the first acquiring unit, eight outer-circle DCP sampling points spaced at equal angles, wherein the inner-circle sampling points correspond to the outer-circle sampling points;
an encoding unit, configured to encode the inner-circle sampling points obtained by the second acquiring unit and the outer-circle sampling points obtained by the third acquiring unit, respectively, to obtain the DCP features.
With reference to the third possible implementation of the second aspect of the embodiments of the present invention, in a fourth possible implementation, the encoding unit comprises:
a calculating subunit, configured to calculate the DCP codes at the inner-circle sampling points and the outer-circle sampling points as follows:
Figure PCTCN2015098018-appb-000013
or,
Figure PCTCN2015098018-appb-000014
wherein DCPi denotes the DCP code of the i-th sampling point, S(x) denotes the grayscale intensity function, and
Figure PCTCN2015098018-appb-000015
Figure PCTCN2015098018-appb-000016
and IO denote the gray values of the sampling points Ai, Bi, and O, respectively;
The DCP features are calculated as follows:
Figure PCTCN2015098018-appb-000017
wherein DCP denotes the DCP feature, i denotes the i-th sampling point,
Figure PCTCN2015098018-appb-000018
denotes the DCP codes of the sampling points in the horizontal and vertical directions, and
Figure PCTCN2015098018-appb-000019
denotes the DCP codes of the sampling points in the diagonal directions.
With reference to the fourth possible implementation of the second aspect of the embodiments of the present invention, in a fifth possible implementation,
the calculating subunit is further configured to calculate the value of the grayscale intensity function S(x) as follows:
Figure PCTCN2015098018-appb-000020
wherein S(x) denotes the grayscale intensity function, b(x) denotes a constant-valued function, f1,d(x) and f0,d(x) denote fuzzy membership functions, and d is a boundary threshold.
A third aspect of the present invention provides a face recognition device, comprising:
a processor and a memory;
wherein the memory is configured to store a program; and
the processor is configured to execute the program in the memory, so that the face recognition device performs the face recognition method according to the first aspect of the present invention or any one of the first to fifth possible implementations of the first aspect.
A fourth aspect of the present invention provides a storage medium storing one or more programs, wherein:
the one or more programs comprise instructions that, when executed by the face recognition device comprising one or more processors, cause the face recognition device to perform the face recognition method according to any one of claims 1 to 6.
It can be seen from the above technical solutions that the embodiments of the present invention have the following advantages:
In the embodiments of the present invention, a face recognition method is provided. A face image of a user is first acquired; the face image is then filtered to obtain a target face image; DCP features are extracted from the target face image; finally, a chi-square test is used to calculate a similarity score between the DCP features of the target face image and the DCP features of an original face image, and the target face image is recognized according to the similarity score, wherein the DCP features of the original face image are obtained in advance. Recognizing faces by extracting DCP features makes face recognition more robust, increases the practicality of the solution, and improves the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person skilled in the art may derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of an embodiment of a face recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of local sampling of DCP features according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of two sampling modes of DCP features according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the DCP feature extraction process according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the FAR and FRR evaluation results of experiment (1);
FIG. 6 is a schematic diagram of the FAR and FRR evaluation results of experiment (2);
FIG. 7 is a schematic diagram of the FAR and FRR evaluation results of experiment (3);
FIG. 8 is a schematic diagram of the FAR and FRR evaluation results of experiment (4);
FIG. 9 is a schematic diagram of the FAR and FRR evaluation results of experiment (5);
FIG. 10 is a schematic diagram of an embodiment of a face recognition device according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another embodiment of a face recognition device according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of another embodiment of a face recognition device according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of another embodiment of a face recognition device according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of a face recognition device according to an embodiment of the present invention.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", and so on (if any) in the specification, claims, and accompanying drawings of the present invention are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable in appropriate circumstances, so that the embodiments of the present invention described herein can, for example, be implemented in orders other than those illustrated or described herein. Moreover, the terms "include" and "have" and any variants thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product, or device.
Embodiments of the present invention provide a face recognition method and a face recognition device, which recognize faces effectively by extracting DCP features, so that face recognition is more robust, the practicality of the solution is increased, and the user experience is improved.
The face recognition method in the present invention is described in detail below. Referring to FIG. 1, an embodiment of the face recognition method provided by an embodiment of the present invention comprises:
101. Acquire a face image of a user.
In this embodiment, a camera in the face recognition device captures a face image of the user, wherein the face image should include at least the eyes, the nose, and the mouth.
Acquiring the face image of the user mainly comprises the following two steps.
First, face image acquisition: different face images can all be captured well through the camera lens, including static images, dynamic images, different positions, and different expressions. When the user is within the shooting range of the acquisition device, the acquisition device automatically searches for and captures the user's face image.
Second, face detection: in practice, face detection mainly serves as preprocessing for face recognition, that is, accurately locating the position and size of the face in the image. The pattern features contained in a face image are very rich, such as histogram features, color features, template features, structural features, and Haar rectangular features. Face detection picks out the useful information among these and uses such features to detect faces.
Mainstream face detection methods apply the adaptive boosting algorithm (Adaboost) to the above features. Adaboost is a classification method that combines several relatively weak classifiers into a new, strong classifier. In the face detection process, the Adaboost algorithm selects the Haar features that best represent the face, constructs a strong classifier from weak classifiers by weighted voting, and then connects several trained strong classifiers in series into a cascaded classifier, which effectively improves the detection speed of the classifier.
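The weighted-vote and cascade structure described above can be sketched as follows. This is a toy illustration rather than the patent's trained detector: the stump thresholds, stage weights, and feature values are invented for the example, whereas a real detector learns them from labeled training windows.

```python
# Toy sketch of Adaboost-style detection: decision stumps on hypothetical
# Haar-like feature values, combined by weighted voting into a strong
# classifier, then chained into a cascade that rejects windows early.

def stump(value, threshold, polarity):
    """Weak classifier: outputs 1 ('face-like') if polarity*value >= polarity*threshold."""
    return 1 if polarity * value >= polarity * threshold else 0

def strong_classify(features, stumps):
    """Weighted vote of weak classifiers; stumps = [(feature_index, threshold, polarity, alpha)]."""
    score = sum(alpha * stump(features[i], t, p) for (i, t, p, alpha) in stumps)
    total = sum(alpha for (_, _, _, alpha) in stumps)
    return 1 if score >= 0.5 * total else 0

def cascade_classify(features, stages):
    """A window must pass every stage; non-faces are mostly rejected by early, cheap stages."""
    for stumps in stages:
        if strong_classify(features, stumps) == 0:
            return 0  # rejected early, later stages never run
    return 1

# Hypothetical two-stage cascade over three feature values.
stages = [
    [(0, 0.3, 1, 1.0)],                     # cheap first stage: a single stump
    [(1, 0.5, 1, 0.7), (2, 0.2, -1, 0.4)],  # stricter second stage
]
```

In a real detector the early stages are deliberately cheap, so the expensive later stages only run on the few candidate windows that survive, which is where the speed-up described above comes from.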
102. Filter the face image to obtain a target face image.
In this embodiment, the acquired face image first needs to be preprocessed, that is, processed on the basis of the face image so as ultimately to serve feature extraction. The original image acquired by the system often cannot be used directly because of various constraints and random interference, and must undergo image preprocessing such as grayscale correction, noise filtering, and smoothing in the early stages of image processing. For a face image, preprocessing mainly includes illumination compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening, finally yielding a target face image that can be used for feature extraction.
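Of the preprocessing steps listed above, histogram equalization is simple enough to sketch directly. The sketch below assumes an 8-bit grayscale image stored as a list of rows of integer pixel values; the tiny low-contrast image at the end is invented for illustration.

```python
# Minimal histogram equalization: remap gray levels through the
# normalized cumulative histogram so the output uses the full 0..255 range.

def equalize_histogram(image, levels=256):
    pixels = [v for row in image for v in row]
    n = len(pixels)
    # Histogram of gray levels.
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    # Cumulative distribution function.
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    # Standard equalization mapping for a level v present in the image.
    def remap(v):
        return round((cdf[v] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
    return [[remap(v) for v in row] for row in image]

# A dark, low-contrast 2x4 toy image: equalization stretches its few
# levels across the full dynamic range.
dark = [[10, 10, 12, 12], [14, 14, 16, 16]]
flat = equalize_histogram(dark)
```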
103. Extract dual-cross pattern (DCP) features from the target face image.
In this embodiment, dual-cross pattern (DCP) features are extracted from the target face image. The DCP feature is an improvement on the local binary pattern (LBP) feature: the single 8-neighborhood is first replaced by 8-neighborhood sampling on each of two circles; then, along the horizontal, vertical, and diagonal directions, two groups of sub-DCP features of the same dimension are extracted by local quaternary (base-4) encoding and finally concatenated to form the DCP feature.
It can be understood that, in addition to DCP features, the features the face recognition device can use include visual features, pixel statistical features, face image transform coefficient features, and face image algebraic features. Face feature extraction targets certain features of the face. Face feature extraction, also known as face representation, is the process of modeling the features of a face. Face feature extraction methods fall into two broad categories: knowledge-based representation methods, and representation methods based on algebraic features or statistical learning.
104. Use a chi-square test to calculate a similarity score between the DCP features of the target face image and the DCP features of an original face image, and recognize the target face image according to the similarity score, wherein the DCP features of the original face image are obtained in advance.
In this embodiment, a chi-square test is used to calculate the similarity score between the DCP features of the target face image and the DCP features of the original face image stored in a database. A threshold is set, and when the similarity score exceeds this threshold, the matching result is output.
Face recognition compares the face features to be recognized with the face feature templates already obtained, and judges the identity of the face according to the degree of similarity. This process falls into two categories: verification, a one-to-one image comparison process; and identification, a one-to-many image matching process. The specific matching method is not limited here.
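The matching step above can be sketched as follows, assuming the DCP features are compared as normalized histograms. The histogram values, the distance-to-score mapping, and the threshold are illustrative assumptions; the patent does not fix them here.

```python
# Chi-square comparison of two DCP histograms, converted into a
# similarity score that is then compared with a decision threshold.

def chi_square_distance(h1, h2):
    """Sum over bins of (a - b)^2 / (a + b), skipping empty bin pairs; 0 means identical."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def is_same_person(probe_hist, gallery_hist, threshold=0.5):
    """Smaller chi-square distance means more similar; map distance into a score in (0, 1]."""
    score = 1.0 / (1.0 + chi_square_distance(probe_hist, gallery_hist))
    return score, score >= threshold

probe   = [0.30, 0.20, 0.25, 0.25]  # hypothetical normalized DCP histogram
gallery = [0.28, 0.22, 0.26, 0.24]
score, accepted = is_same_person(probe, gallery)
```

In a verification setting this comparison runs once against a single template; in an identification setting it runs against every template in the database and the best-scoring match above the threshold is output.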
In the embodiments of the present invention, a face recognition method is provided. A face image of a user is first acquired; the face image is then filtered to obtain a target face image; DCP features are extracted from the target face image; finally, a chi-square test is used to calculate a similarity score between the DCP features of the target face image and the DCP features of an original face image, and the target face image is recognized according to the similarity score, wherein the DCP features of the original face image are obtained in advance. Recognizing faces by extracting DCP features makes face recognition more robust, increases the practicality of the solution, and improves the user experience.
Optionally, on the basis of the embodiment corresponding to FIG. 1 above, in a first optional embodiment of the face recognition method provided by the embodiments of the present invention, filtering the face image to obtain the target face image may comprise:
calculating the filtered gradient image of the face image as follows:
Figure PCTCN2015098018-appb-000021
wherein FDG(θ) denotes the filtered gradient image of the face image in the direction of angle θ,
Figure PCTCN2015098018-appb-000022
denotes the standard direction vector used in filtering, G denotes a two-dimensional Gaussian filter, and ∇ denotes the gradient operator.
In this embodiment, to suppress the effects of noise and illumination changes, the first derivative of Gaussian (FDG) operator is applied to the input face image in the directions of multiple angles to obtain FDG filtered gradient images, with the following formula:
Figure PCTCN2015098018-appb-000023
FDG(θ) denotes the FDG filtered gradient image of the face image in the direction of angle θ,
Figure PCTCN2015098018-appb-000024
denotes the standard direction vector used in filtering, G denotes a two-dimensional Gaussian filter, and ∇ denotes the gradient operator.
Here,
Figure PCTCN2015098018-appb-000025
is the standard direction vector used in filtering, expressed as:
Figure PCTCN2015098018-appb-000026
G is a two-dimensional Gaussian filter, calculated as follows:
Figure PCTCN2015098018-appb-000027
A Gaussian filter is a linear smoothing filter whose weights are chosen according to the shape of the Gaussian function. Gaussian smoothing filters are very effective at suppressing noise that follows a normal distribution. The one-dimensional zero-mean Gaussian function is:
Figure PCTCN2015098018-appb-000028
Here, the Gaussian distribution parameter sigma determines the width of the Gaussian function. For image processing, a two-dimensional zero-mean discrete Gaussian function is commonly used as the smoothing filter. The Gaussian function has five important properties that make it particularly useful in early image processing; they show that the Gaussian smoothing filter is a very effective low-pass filter in both the spatial domain and the frequency domain. The five properties are:
(1) The two-dimensional Gaussian function is rotationally symmetric, that is, the filter smooths equally in all directions. In general, the edge directions of an image are not known in advance, so it cannot be determined before filtering that one direction needs more smoothing than another. Rotational symmetry means that the Gaussian smoothing filter will not bias subsequent edge detection toward any direction.
(2) The Gaussian function is a single-valued function. This means that the Gaussian filter replaces the pixel value at a point with a weighted mean of its neighborhood, where the weight of each neighborhood pixel decreases monotonically with its distance from the center point. This property is important because an edge is a local image feature; if the smoothing operation still strongly affected pixels far from the operator's center, the smoothing operation would distort the image.
(3) The Fourier transform spectrum of the Gaussian function has a single lobe. This property is a direct consequence of the fact that the Fourier transform of a Gaussian is itself a Gaussian. Images are often contaminated by undesirable high-frequency signals (such as noise and fine texture), while the desired image features (such as edges) contain both low-frequency and high-frequency components. The single lobe of the Gaussian's Fourier transform means that the smoothed image is not contaminated by unwanted high-frequency signals while most of the desired signal is preserved.
(4) The width of the Gaussian filter (which determines the degree of smoothing) is characterized by the parameter σ, and the relationship between σ and the degree of smoothing is very simple: the larger σ is, the wider the Gaussian filter's frequency band and the greater the smoothing. By adjusting the smoothing parameter σ, a compromise can be reached between over-blurring image features (over-smoothing) and leaving too many undesirable abrupt variations caused by noise and fine texture in the smoothed image (under-smoothing).
(5) Because the Gaussian function is separable, larger Gaussian filters can be implemented efficiently. Two-dimensional Gaussian convolution can be performed in two steps: the image is first convolved with a one-dimensional Gaussian function, and the result is then convolved with the same one-dimensional Gaussian function oriented perpendicular to the first. Therefore, the computational cost of two-dimensional Gaussian filtering grows linearly with the width of the filter template rather than quadratically.
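Property (5) can be checked numerically: a 2D Gaussian kernel built as the outer product of a 1D kernel gives exactly the same result as two 1D passes. The kernel size, sigma, and test image below are illustrative choices.

```python
# Demonstration of Gaussian separability: one 2D convolution pass versus
# a horizontal 1D pass followed by a vertical 1D pass.
import math

def gauss1d(sigma, radius):
    """Normalized 1D Gaussian kernel of width 2*radius + 1."""
    k = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def conv2d_valid(img, ker):
    """'Valid' 2D convolution of a list-of-rows image with a list-of-rows kernel."""
    kh, kw = len(ker), len(ker[0])
    return [[sum(ker[u][v] * img[i + u][j + v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

sigma, radius = 1.0, 2                      # 5x5 kernel, as in the text
g = gauss1d(sigma, radius)
g2d = [[a * b for b in g] for a in g]       # outer product: separable 2D kernel

img = [[(i * 7 + j * 3) % 13 for j in range(8)] for i in range(8)]
direct = conv2d_valid(img, g2d)             # single 2D pass: 25 multiplies per pixel
rows = conv2d_valid(img, [g])               # 1D horizontal pass
both = conv2d_valid(rows, [[v] for v in g]) # then 1D vertical pass: 5 + 5 multiplies
```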
Further, the embodiments of the present invention provide a method of filtering the face image to obtain the target face image: filtered gradient images are calculated in four directions using the above formulas, and the filter is optimized accordingly, so that the effects of noise and illumination changes are suppressed, the quality of the face image is improved, image feature extraction is facilitated, and the practicality of the solution is enhanced.
Optionally, on the basis of the first optional embodiment corresponding to FIG. 1 above, in a second optional embodiment of the face recognition method provided by the embodiments of the present invention, the angle θ takes four values: 0 degrees, 45 degrees, 90 degrees, and 135 degrees.
In this embodiment, the filters for θ equal to 0 degrees, 45 degrees, 90 degrees, and 135 degrees are introduced.
According to the first optional embodiment corresponding to FIG. 1 above, the formula for the FDG filtered gradient image is as follows:
Figure PCTCN2015098018-appb-000029
When calculating the FDG gradient-filtered images at 0 degrees and 90 degrees, the corresponding values can be substituted directly into the formula. When calculating the FDG gradient-filtered images at 45 degrees and 135 degrees, however, the following formula can be used:
Fθ = FX cos θ + FY sin θ
wherein FX denotes the FDG gradient filtering in the horizontal direction, and FY denotes the FDG gradient filtering in the vertical direction.
Because the FDG filter has been optimized, experiments verify that the filter works well with a size of 5×5, a mathematical expectation μ = 4, and a variance σ = 1 in the two-dimensional Gaussian filter. Filters in four directions are thereby obtained, expressed respectively as:
Horizontal direction:
Figure PCTCN2015098018-appb-000030
Vertical direction:
Figure PCTCN2015098018-appb-000031
Diagonal direction:
Figure PCTCN2015098018-appb-000032
Diagonal direction:
Figure PCTCN2015098018-appb-000033
It should be noted that the four directional filters obtained above are merely illustrative; in practical applications, filters formed from other parameters may also be used, which is not limited here.
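The relation Fθ = FX cos θ + FY sin θ follows from the linearity of convolution and can be checked numerically. The derivative-of-Gaussian kernels below (σ = 1, 5×5, zero-centered) are illustrative stand-ins, not the patent's exact filter coefficients.

```python
# Steering check: filtering with cos(45)*Gx + sin(45)*Gy gives the same
# response as combining the separate horizontal and vertical responses.
import math

def dog_kernels(sigma=1.0, radius=2):
    """First-derivative-of-Gaussian kernels along x (horizontal) and y (vertical)."""
    gx, gy = [], []
    for y in range(-radius, radius + 1):
        rx, ry = [], []
        for x in range(-radius, radius + 1):
            g = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            rx.append(-x / sigma ** 2 * g)   # d/dx of the Gaussian
            ry.append(-y / sigma ** 2 * g)   # d/dy of the Gaussian
        gx.append(rx)
        gy.append(ry)
    return gx, gy

def conv_valid(img, ker):
    """'Valid' convolution with a square kernel."""
    k = len(ker)
    return [[sum(ker[u][v] * img[i + u][j + v] for u in range(k) for v in range(k))
             for j in range(len(img[0]) - k + 1)]
            for i in range(len(img) - k + 1)]

gx, gy = dog_kernels()
theta = math.radians(45)
k45 = [[math.cos(theta) * a + math.sin(theta) * b for a, b in zip(ra, rb)]
       for ra, rb in zip(gx, gy)]

img = [[(3 * i + 5 * j) % 17 for j in range(7)] for i in range(7)]
fx, fy = conv_valid(img, gx), conv_valid(img, gy)      # F_X and F_Y
f45_combined = [[math.cos(theta) * a + math.sin(theta) * b for a, b in zip(ra, rb)]
                for ra, rb in zip(fx, fy)]             # F_X cos45 + F_Y sin45
f45_direct = conv_valid(img, k45)                      # direct 45-degree filter
```

The same combination with θ = 135 degrees yields the other diagonal response, so only the two axis-aligned filter passes need to touch the image.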
Further, in the embodiments of the present invention, the filtering of the face image may specifically involve four directions: 0 degrees, 45 degrees, 90 degrees, and 135 degrees. Experiments verify that filtering the input face image from these four angles achieves a good trade-off between processing cost and effectiveness. With smaller angular intervals, filtering in more directions can suppress noise and illumination changes better, but it increases the computational burden, which is unfavorable for practical applications. Conversely, filtering with larger angular intervals may weaken the suppression of noise and illumination changes, which is unfavorable for image processing. Therefore, the four-direction filtering of the face image provided by the solution of the present invention is both operable and practical.
Optionally, on the basis of the embodiment corresponding to FIG. 1 above, in a third optional embodiment of the face recognition method provided by the embodiments of the present invention, extracting the dual-cross pattern (DCP) features from the target face image may comprise:
obtaining an inner circle and an outer circle with different radii, both centered at the center point of the target face image;
obtaining, on the inner circle, eight inner-circle sampling points spaced at equal angles;
obtaining, on the outer circle, eight outer-circle DCP sampling points spaced at equal angles, wherein the inner-circle sampling points correspond to the outer-circle sampling points;
encoding the inner-circle sampling points and the outer-circle sampling points, respectively, to obtain the DCP features.
In this embodiment, the DCP features are extracted by sampling 8 neighborhoods on each of two circles. First, the diagonals of the target face image are connected to obtain their intersection, i.e., the center point. Two circles with different radii are drawn around this center point: the one with the smaller radius is the inner circle, and the one with the larger radius is the outer circle. On both the inner and outer circles, two points are sampled in each 45-degree direction, at the inner and outer radii respectively. Specifically, referring to FIG. 2, which is a schematic diagram of local sampling of DCP features in an embodiment of the present invention, the center point is O, the inner-circle sampling points are Ai, and the outer-circle sampling points are Bi, i = 0, 1, ..., 7, arranged symmetrically in 8 directions; that is, the inner-circle sampling points correspond to the outer-circle sampling points. For example, A0 corresponds to B0, both at the 0-degree angle; A1 corresponds to B1, both at the 45-degree angle; and so on, until A7 corresponds to B7, both at the 315-degree angle.
In this scheme, the sampling mode extracts the DCP feature from eight neighborhoods on two circles; compared with extracting features from a single circle, its advantage is that it adds contextual information about the local neighborhood and characterizes local intensity contrast. During development, a three-circle, eight-neighborhood sampling mode was also tried, but its test performance was mediocre: no suitable computational form was found for encoding the center point together with the three points in each direction, which limited the evaluation of that sampling method.

In addition, an elliptical eight-neighborhood sampling mode was tried, but the test results showed no obvious improvement. Sixteen-neighborhood sampling is inconsistent with the eight directions studied in this scheme and is therefore not described.
It can be understood that better test results are obtained when the radius of the inner circle is 4 pixels and the radius of the outer circle is 6 pixels.

Finally, the inner-circle sampling points and the outer-circle sampling points are encoded respectively, and the DCP feature is obtained.
Secondly, in this embodiment of the present invention, the DCP feature is extracted by local sampling of eight neighborhoods on two circles. Compared with the prior-art extraction of LBP features from eight or sixteen neighborhoods on a single circle, DCP feature extraction better follows the orientation of facial texture. A face carries two main kinds of key information: one is the structure of the facial organs, and the other is their shape. In general, the shapes of the facial organs are regular, and their ends converge roughly in the diagonal directions, so features can be extracted along the diagonals. Furthermore, the wrinkles on the forehead are flat, whereas those on the cheeks are raised or slanted; local sampling of eight neighborhoods on two circles therefore describes the main facial texture information well and improves the feasibility of the scheme.
Optionally, on the basis of the third optional embodiment corresponding to FIG. 1 above, in a fourth optional embodiment of the face recognition method provided by this embodiment of the present invention, encoding the inner-circle sampling points and the outer-circle sampling points respectively to obtain the DCP feature may include:

calculating the DCP codes of the inner-circle sampling points and the outer-circle sampling points as follows:
DCP_i = 2 × S(I_{A_i} − I_O) + S(I_{B_i} − I_{A_i})

or,

DCP_i = S(I_{A_i} − I_O) + 2 × S(I_{B_i} − I_{A_i})
where DCP_i denotes the DCP code of the i-th sampling point, S(x) denotes the gray-level intensity function, and I_{A_i}, I_{B_i}, and I_O denote the gray values of the sampling points A_i, B_i, and O, respectively;
calculating the DCP feature as follows:
DCP = {DCP-1, DCP-2}
where DCP denotes the DCP feature, i denotes the i-th sampling point, DCP-1 denotes the DCP codes of the sampling points in the horizontal and vertical directions, and DCP-2 denotes the DCP codes of the sampling points in the diagonal directions.
In this embodiment, encoding the inner-circle sampling points and the outer-circle sampling points respectively to obtain the DCP feature may specifically be completed in the following two steps: in the first step, the sampling points on the inner circle and the outer circle are each encoded independently in the eight directions; in the second step, the eight direction codes on the inner circle and the eight direction codes on the outer circle are concatenated to obtain the DCP code.

In each direction, the DCP code is calculated as:
DCP_i = 2 × S(I_{A_i} − I_O) + S(I_{B_i} − I_{A_i})

or,

DCP_i = S(I_{A_i} − I_O) + 2 × S(I_{B_i} − I_{A_i})
The similarity scores calculated from the two formulas are identical, where DCP_i denotes the DCP code of the i-th sampling point, S(x) denotes the gray-level intensity function, and I_{A_i}, I_{B_i}, and I_O denote the gray values of the sampling points A_i, B_i, and O, respectively;
The gray-level intensity function S(x) is calculated as follows:
S(x) = 1, if x ≥ 0; S(x) = 0, if x < 0
According to the formula for the DCP code, DCP_i can take the four values 0, 1, 2, and 3, i.e. it is a quaternary code. Encoding all eight directions jointly would give 4^8 = 65536 dimensions, which is difficult to use in practical applications. If the directions are instead divided into two groups of four — the horizontal and vertical directions (0, π/2, π, 3π/2) and the diagonal directions (π/4, 3π/4, 5π/4, 7π/4) — there are 4^4 × 2 = 512 dimensions in total, which greatly reduces the dimensionality.
The above grouped encoding strategy makes use of the maximum joint entropy theory. To reduce the loss of information, the four directions within each group must be maximally separated, i.e. mutually perpendicular, so that each group has a degree of independence. Moreover, for an image, the more sparsely the pixels are scattered, the stronger the independence between them, and the larger the joint entropy can become. Referring to FIG. 3, which is a schematic diagram of the two sampling modes of the DCP feature in an embodiment of the present invention, the DCP codes can, according to the above analysis, be divided into the following two groups:
Horizontal and vertical directions: DCP-1 = {DCP_0, DCP_2, DCP_4, DCP_6}

Diagonal directions: DCP-2 = {DCP_1, DCP_3, DCP_5, DCP_7}
The respective calculation formulas are then:
DCP-1 = Σ_{i=0}^{3} DCP_{2i} × 4^i

DCP-2 = Σ_{i=0}^{3} DCP_{2i+1} × 4^i
The final DCP feature is then composed as:
DCP = {DCP-1, DCP-2}
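To make the two-step encoding concrete, the following sketch (an illustration, not the claimed implementation) computes the per-direction quaternary codes and the two four-direction groups described in this embodiment; the bit ordering inside the per-direction code is one of the two equivalent choices the text mentions, and the function names are assumptions.

```python
def S(x):
    """Hard-threshold gray-level intensity function: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def dcp_codes(I_O, I_A, I_B):
    """Per-direction quaternary codes DCP_i in {0, 1, 2, 3}.
    I_O is the gray value at the center O; I_A[i] and I_B[i] are the
    gray values at the inner point A_i and the outer point B_i."""
    return [2 * S(I_A[i] - I_O) + S(I_B[i] - I_A[i]) for i in range(8)]

def dcp_feature(codes):
    """Group the eight codes into DCP-1 (horizontal/vertical directions,
    even i) and DCP-2 (diagonal directions, odd i). Each group is a
    base-4 number in [0, 255], giving 4**4 * 2 = 512 bins in total."""
    dcp1 = sum(codes[2 * i] * 4 ** i for i in range(4))
    dcp2 = sum(codes[2 * i + 1] * 4 ** i for i in range(4))
    return dcp1, dcp2
```

The split into two groups is what brings the dimensionality down from 4^8 = 65536 to 512.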
Referring to FIG. 4, which is a schematic diagram of the DCP feature extraction process in an embodiment of the present invention: a face image is first acquired and filtered to obtain the target face image. The target face image is sampled over the double-circle eight-neighborhood layout to obtain two groups of DCP codes over four directions each, which finally compose the DCP feature of the target face image. Face recognition is performed by comparing this feature against the DCP features in the database.
Again, in this embodiment of the present invention, the inner-circle sampling points and the outer-circle sampling points are encoded respectively, and the DCP feature is obtained: the sampling points on the inner circle and the outer circle are each encoded independently in the eight directions, and the eight direction codes on the inner circle are concatenated with the eight direction codes on the outer circle to obtain the DCP code. Obtaining the DCP feature in this way greatly reduces the computational dimensionality and improves computational efficiency. Although this split-encoding strategy loses some texture information, it makes the DCP code a more compact and more robust representation of the face.
Optionally, on the basis of the fourth optional embodiment corresponding to FIG. 1 above, in a fifth optional embodiment of the face recognition method provided by this embodiment of the present invention,

the value of the gray-level intensity function S(x) is calculated as follows:
S(x) = b(x), if |x| > d; S(x) = f_{1,d}(x), if |x| ≤ d
where S(x) denotes the gray-level intensity function, b(x) denotes a constant-value function, f_{1,d}(x) and f_{0,d}(x) denote fuzzy membership functions, and d is the boundary threshold.

In this embodiment, in the double-circle eight-neighborhood scheme, when the gray value of a pixel is close to that of the center point, the intensity contrast is easily affected by noise; a "soft boundary" encoding is therefore used to improve the gray-level intensity function. Specifically, the value of the gray-level intensity function S(x) can be calculated with the following formula:
S(x) = b(x), if |x| > d; S(x) = f_{1,d}(x), if |x| ≤ d
where the constant-value function b(x) is calculated as follows:
b(x) = 1, if x ≥ 0; b(x) = 0, if x < 0
and the fuzzy membership functions f_{1,d}(x) and f_{0,d}(x) are calculated as follows:
f_{1,d}(x) = 1, if x > d; f_{1,d}(x) = (x + d) / (2d), if −d ≤ x ≤ d; f_{1,d}(x) = 0, if x < −d
f_{0,d}(x) = 1 − f_{1,d}(x)
where d is the boundary threshold, which affects the fuzzy membership degree; in the experiments, d was measured to be 0.0005.
Further, in this embodiment of the present invention, when the gray value of a pixel is close to that of the center point, the intensity contrast is easily affected by noise; the "soft boundary" encoding therefore improves the gray-level intensity function so that pixels whose gray values are close to the center point's are less susceptible to noise. This makes the DCP feature extraction process more robust and improves the feasibility and practicability of the solution of the present invention.
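The soft-boundary intensity function can be sketched as below. This is an assumed reconstruction for illustration: the published text gives the exact forms of b(x) and f_{1,d}(x) only as images, so the linear ramp across the boundary band is an assumption; only the threshold d = 0.0005 and the relation f_{0,d} = 1 − f_{1,d} come from the description.

```python
D = 0.0005  # boundary threshold d measured in the experiments

def f1(x, d=D):
    """Fuzzy membership toward the code value 1. A linear ramp across
    the band [-d, d] is assumed here; the text only states that d is
    the boundary threshold and that f0 = 1 - f1."""
    if x > d:
        return 1.0
    if x < -d:
        return 0.0
    return (x + d) / (2 * d)

def soft_S(x, d=D):
    """Soft-boundary gray-level intensity function: the hard threshold
    outside the band, the fuzzy membership inside it, so gray values
    close to the center's are no longer flipped by small noise."""
    if abs(x) > d:
        return 1.0 if x >= 0 else 0.0
    return f1(x, d)
```

Outside the band the function behaves exactly like the hard threshold, so only near-tie comparisons are softened.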
For ease of understanding, the face recognition method of the present invention is described in detail below by way of an experimental procedure, specifically as follows:

This experiment is mainly used to verify the optimal parameters involved in the above embodiments. In the experiment, the image is mainly partitioned into 3×3 blocks; other block combinations are not adopted because their test performance was mediocre. Each block is assigned a corresponding weight during feature matching. If a Difference of Gaussian (DoG) feature-detection operator is applied to the face image before the FDG operator, the recognition performance is worse. On a face registration database containing 38225 face images, the tested False Rejection Rate (FRR) and False Acceptance Rate (FAR) evaluation results are shown in FIG. 5; referring to FIG. 5, FIG. 5 is a schematic diagram of the FAR and FRR evaluation index results of experiment (1). FRR and FAR are the two main parameters used to evaluate the performance of fingerprint or face recognition algorithms.
However, after switching to the filter of the present solution, referring to FIG. 6, FIG. 6 is a schematic diagram of the FAR and FRR evaluation index results of experiment (2), showing the FAR and FRR evaluation results obtained on the face registration database.

After the weight of each block was optimized during feature matching, the resulting FAR and FRR evaluation results are shown in FIG. 7; FIG. 7 is a schematic diagram of the FAR and FRR evaluation index results of experiment (3).

After adjusting the 3×3 block size and optimizing the weight of each block, the FAR and FRR evaluation index results of experiment (4), corresponding to FIG. 8, were obtained.

After adopting the soft-boundary fuzzy membership function, referring to FIG. 9, FIG. 9 is a schematic diagram of the FAR and FRR evaluation index results of experiment (5). It can be understood that if the face image has not been aligned, a similarity transformation and an affine transformation need to be applied to the face image first, followed by image cropping.
The face recognition device of the present invention is described in detail below. Referring to FIG. 10, the face recognition device 200 in this embodiment of the present invention includes:
an acquisition module 201, configured to acquire a face image of a user;

a filtering module 202, configured to filter the face image acquired by the acquisition module 201 to obtain a target face image;

an extraction module 203, configured to extract a concentric dual-cross pattern (DCP) feature from the target face image obtained by the filtering module 202;

a calculation module 204, configured to calculate, by means of a chi-square test, a similarity score between the DCP feature of the target face image extracted by the extraction module 203 and the DCP feature of an original face image, and to recognize the target face image according to the similarity score, wherein the DCP feature of the original face image is obtained in advance.
In this embodiment, the acquisition module 201 acquires a face image of the user; the filtering module 202 filters the face image acquired by the acquisition module 201 to obtain a target face image; the extraction module 203 extracts a concentric dual-cross pattern (DCP) feature from the target face image obtained by the filtering module 202; and the calculation module 204 calculates, by means of a chi-square test, a similarity score between the DCP feature of the target face image extracted by the extraction module 203 and the DCP feature of the original face image, and recognizes the target face image according to the similarity score, wherein the DCP feature of the original face image is obtained in advance.
This embodiment of the present invention provides a face recognition method: a face image of the user is first acquired; the face image is filtered to obtain a target face image; the DCP feature is then extracted from the target face image; and finally a chi-square test is used to calculate the similarity score between the DCP feature of the target face image and the DCP feature of the original face image, the target face image being recognized according to the similarity score, wherein the DCP feature of the original face image is obtained in advance. Recognizing faces effectively through DCP feature extraction makes face recognition more robust, increases the practicability of the solution, and improves the user experience.
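A minimal sketch of the chi-square comparison performed by the calculation module 204 is given below. The per-block weighting and the small smoothing constant are assumptions for illustration; the description states only that a chi-square test yields the similarity score and that blocks carry weights during matching.

```python
def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two DCP histograms of equal length;
    a smaller value means the two faces are more similar."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def weighted_score(blocks1, blocks2, weights):
    """Weighted sum of per-block chi-square distances, e.g. over the
    3x3 image blocks whose weights are tuned during feature matching."""
    return sum(w * chi_square_distance(b1, b2)
               for w, b1, b2 in zip(weights, blocks1, blocks2))
```

Recognition then reduces to comparing the resulting score against a decision threshold, which trades off FAR against FRR.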
Referring to FIG. 11, another embodiment of the face recognition device of the present invention includes:

an acquisition module 201, configured to acquire a face image of a user;

a filtering module 202, configured to filter the face image acquired by the acquisition module 201 to obtain a target face image;

an extraction module 203, configured to extract a concentric dual-cross pattern (DCP) feature from the target face image obtained by the filtering module 202;

a calculation module 204, configured to calculate, by means of a chi-square test, a similarity score between the DCP feature of the target face image extracted by the extraction module 203 and the DCP feature of an original face image, and to recognize the target face image according to the similarity score, wherein the DCP feature of the original face image is obtained in advance;
wherein the filtering module 202 includes:

a calculating unit 2021, configured to calculate a filtered gradient image of the face image as follows:
FDG(θ) = u_θ · ∇G
where FDG(θ) denotes the filtered gradient image of the face image corresponding to the direction of angle θ, u_θ denotes the standard (unit) direction vector used in the filtering, G denotes the two-dimensional Gaussian filter, and ∇ denotes the gradient operator.
Secondly, this embodiment of the present invention provides a method of obtaining the target face image by filtering the face image: the filtered gradient images are calculated from four directions using the above formula, and the filter is optimized accordingly, so that the effects of noise and illumination variation can be suppressed, the quality of the face image improved, and image feature extraction facilitated, enhancing the practicability of the scheme.
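As an illustrative reconstruction only (the exact filter definition appears as an image in the published text), a directional first-derivative-of-Gaussian kernel corresponding to u_θ · ∇G can be built as follows; the values of sigma and the kernel radius are assumptions.

```python
import math

def fdg_kernel(theta_deg, sigma=1.0, radius=3):
    """Kernel for the directional derivative of a 2-D Gaussian,
    u_theta . grad(G); convolving the face image with it yields the
    filtered gradient image FDG(theta) for that direction."""
    t = math.radians(theta_deg)
    ux, uy = math.cos(t), math.sin(t)
    kernel = []
    for y in range(-radius, radius + 1):
        row = []
        for x in range(-radius, radius + 1):
            g = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
            # grad G = (-x, -y) / sigma^2 * G, projected onto u_theta
            row.append((ux * -x + uy * -y) / (sigma * sigma) * g)
        kernel.append(row)
    return kernel

# one kernel per direction used in this embodiment
kernels = {a: fdg_kernel(a) for a in (0, 45, 90, 135)}
```

Each kernel is antisymmetric along its direction, so it responds to intensity gradients while the Gaussian envelope suppresses noise.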
Optionally, on the basis of the embodiment corresponding to FIG. 11 above, in a first optional embodiment of the face recognition device provided by this embodiment of the present invention, the angle θ takes four values: 0 degrees, 45 degrees, 90 degrees, and 135 degrees.

Again, in this embodiment of the present invention, the filtering of the face image may specifically involve filtering in four directions: 0 degrees, 45 degrees, 90 degrees, and 135 degrees. Experiments verify that filtering the input face image from these four angles achieves a cost-effective balance between processing quality and efficiency. Although filtering in more directions at smaller angular intervals can better suppress noise and illumination variation, it increases the computational burden and is unfavorable for practical application; conversely, filtering in directions separated by larger angular intervals may weaken the suppression of noise and illumination variation, which is unfavorable for image processing. Therefore, the four-direction filtering of the face image provided by the solution of the present invention is both operable and practical.
Referring to FIG. 12, another embodiment of the face recognition device of the present invention includes:

an acquisition module 201, configured to acquire a face image of a user;

a filtering module 202, configured to filter the face image acquired by the acquisition module 201 to obtain a target face image;

an extraction module 203, configured to extract a concentric dual-cross pattern (DCP) feature from the target face image obtained by the filtering module 202;

a calculation module 204, configured to calculate, by means of a chi-square test, a similarity score between the DCP feature of the target face image extracted by the extraction module 203 and the DCP feature of an original face image, and to recognize the target face image according to the similarity score, wherein the DCP feature of the original face image is obtained in advance;

wherein the extraction module 203 includes:
a first acquisition unit 2031, configured to obtain, centered on the center point of the target face image, an inner circle and an outer circle of different radii;

a second acquisition unit 2032, configured to acquire eight equally angularly spaced inner-circle sampling points from the inner circle obtained by the first acquisition unit 2031;

a third acquisition unit 2033, configured to acquire eight equally angularly spaced outer-circle DCP sampling points from the outer circle obtained by the first acquisition unit 2031, wherein the inner-circle sampling points correspond one-to-one with the outer-circle sampling points;

an encoding unit 2034, configured to encode the inner-circle sampling points acquired by the second acquisition unit 2032 and the outer-circle sampling points acquired by the third acquisition unit 2033, respectively, to obtain the DCP feature.
Secondly, in this embodiment of the present invention, the DCP feature is extracted by local sampling of eight neighborhoods on two circles. Compared with the prior-art extraction of LBP features from eight or sixteen neighborhoods on a single circle, DCP feature extraction better follows the orientation of facial texture. A face carries two main kinds of key information: one is the structure of the facial organs, and the other is their shape. In general, the shapes of the facial organs are regular, and their ends converge roughly in the diagonal directions, so features can be extracted along the diagonals. Furthermore, the wrinkles on the forehead are flat, whereas those on the cheeks are raised or slanted; local sampling of eight neighborhoods on two circles therefore describes the main facial texture information well and improves the feasibility of the scheme.
Referring to FIG. 13, another embodiment of the face recognition device of the present invention includes:

an acquisition module 201, configured to acquire a face image of a user;

a filtering module 202, configured to filter the face image acquired by the acquisition module 201 to obtain a target face image;

an extraction module 203, configured to extract a concentric dual-cross pattern (DCP) feature from the target face image obtained by the filtering module 202;

a calculation module 204, configured to calculate, by means of a chi-square test, a similarity score between the DCP feature of the target face image extracted by the extraction module 203 and the DCP feature of an original face image, and to recognize the target face image according to the similarity score, wherein the DCP feature of the original face image is obtained in advance;

wherein the extraction module 203 includes:
a first acquisition unit 2031, configured to obtain, centered on the center point of the target face image, an inner circle and an outer circle of different radii;

a second acquisition unit 2032, configured to acquire eight equally angularly spaced inner-circle sampling points from the inner circle obtained by the first acquisition unit 2031;

a third acquisition unit 2033, configured to acquire eight equally angularly spaced outer-circle DCP sampling points from the outer circle obtained by the first acquisition unit 2031, wherein the inner-circle sampling points correspond one-to-one with the outer-circle sampling points;

an encoding unit 2034, configured to encode the inner-circle sampling points acquired by the second acquisition unit 2032 and the outer-circle sampling points acquired by the third acquisition unit 2033, respectively, to obtain the DCP feature;

wherein the encoding unit 2034 includes:
a calculating subunit 20341, configured to calculate the DCP codes of the inner-circle sampling points and the outer-circle sampling points as follows:
DCP_i = 2 × S(I_{A_i} − I_O) + S(I_{B_i} − I_{A_i})

or,

DCP_i = S(I_{A_i} − I_O) + 2 × S(I_{B_i} − I_{A_i})
where DCP_i denotes the DCP code of the i-th sampling point, S(x) denotes the gray-level intensity function, and I_{A_i}, I_{B_i}, and I_O denote the gray values of the sampling points A_i, B_i, and O, respectively;
the DCP feature being calculated as follows:
DCP = {DCP-1, DCP-2}
where DCP denotes the DCP feature, i denotes the i-th sampling point, DCP-1 denotes the DCP codes of the sampling points in the horizontal and vertical directions, and DCP-2 denotes the DCP codes of the sampling points in the diagonal directions.
Again, in this embodiment of the present invention, the inner-circle sampling points and the outer-circle sampling points are encoded respectively, and the DCP feature is obtained: the sampling points on the inner circle and the outer circle are each encoded independently in the eight directions, and the eight direction codes on the inner circle are concatenated with the eight direction codes on the outer circle to obtain the DCP code. Obtaining the DCP feature in this way greatly reduces the computational dimensionality and improves computational efficiency. Although this split-encoding strategy loses some texture information, it makes the DCP code a more compact and more robust representation of the face.
Optionally, on the basis of the embodiment corresponding to FIG. 13 above, in a second optional embodiment of the face recognition device provided by this embodiment of the present invention:

an acquisition module 201, configured to acquire a face image of a user;

a filtering module 202, configured to filter the face image acquired by the acquisition module 201 to obtain a target face image;

an extraction module 203, configured to extract a concentric dual-cross pattern (DCP) feature from the target face image obtained by the filtering module 202;

a calculation module 204, configured to calculate, by means of a chi-square test, a similarity score between the DCP feature of the target face image extracted by the extraction module 203 and the DCP feature of an original face image, and to recognize the target face image according to the similarity score, wherein the DCP feature of the original face image is obtained in advance;

wherein the extraction module 203 includes:
a first acquisition unit 2031, configured to obtain, centered on the center point of the target face image, an inner circle and an outer circle of different radii;

a second acquisition unit 2032, configured to acquire eight equally angularly spaced inner-circle sampling points from the inner circle obtained by the first acquisition unit 2031;

a third acquisition unit 2033, configured to acquire eight equally angularly spaced outer-circle DCP sampling points from the outer circle obtained by the first acquisition unit 2031, wherein the inner-circle sampling points correspond one-to-one with the outer-circle sampling points;

an encoding unit 2034, configured to encode the inner-circle sampling points acquired by the second acquisition unit 2032 and the outer-circle sampling points acquired by the third acquisition unit 2033, respectively, to obtain the DCP feature;
wherein the encoding unit 2034 includes:
a computing subunit 20341, configured to compute the DCP codes at the inner-circle sampling points and at the outer-circle sampling points as follows:
DCP_i = S(I_{A_i} − I_O) × 2 + S(I_{B_i} − I_{A_i}),  i = 0, 1, …, 7
or,
DCP_i = S(I_{B_i} − I_{A_i}) × 2 + S(I_{A_i} − I_O),  i = 0, 1, …, 7
where DCP_i denotes the DCP code at the i-th sampling point, S(x) denotes the gray-level intensity function, and I_{A_i}, I_{B_i}, and I_O denote the gray values at the sampling points A_i, B_i, and the center point O, respectively;
and to compute the DCP feature as follows:
DCP = {DCP_{h,v}, DCP_{diag}} = { Σ_{i ∈ {0,2,4,6}} DCP_i × 4^{i/2},  Σ_{i ∈ {1,3,5,7}} DCP_i × 4^{(i−1)/2} }
where DCP denotes the DCP feature, i denotes the i-th sampling point, DCP_{h,v} denotes the DCP codes at the horizontal and vertical sampling points, and DCP_{diag} denotes the DCP codes at the diagonal sampling points.
The computing subunit 20341 is further configured to compute the value of the gray-level intensity function S(x) as follows:
S(x) = { b(x), |x| ≥ d;  f_{1,d}(x), 0 ≤ x < d;  f_{0,d}(x), −d < x < 0 }
where S(x) denotes the gray-level intensity function, b(x) denotes a constant-valued function, f_{1,d}(x) and f_{0,d}(x) denote fuzzy membership functions, and d is a boundary threshold.
Further, in this embodiment of the present invention, when the gray value of a pixel is close to that of the center point, the intensity comparison is easily affected by noise. The gray-level intensity function is therefore improved with a "soft boundary" coding scheme, so that the comparison is less sensitive to noise when the gray value of a pixel is close to that of the center point. This makes the DCP feature extraction process more robust and improves the feasibility and practicability of the solution of the present invention.
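A minimal sketch of such a soft-boundary intensity function, assuming a linear ramp as the fuzzy membership function f_{1,d} (the exact shape of the membership functions is an assumption of this sketch):

```python
def soft_S(x, d=5.0):
    """'Soft boundary' gray-level intensity function (illustrative).

    Outside the boundary threshold d the function behaves like the hard
    0/1 comparison; inside (-d, d) a linear fuzzy membership ramp is
    used, so near-equal gray values are not flipped by small noise.
    """
    if x >= d:
        return 1.0
    if x <= -d:
        return 0.0
    # Ambiguous zone: interpolate instead of making a hard decision.
    return (x + d) / (2.0 * d)

print(soft_S(0.0))   # prints: 0.5 (fully ambiguous comparison)
print(soft_S(10.0))  # prints: 1.0
```

A gray difference of exactly 0 now maps to 0.5 rather than jumping between 0 and 1, which is what makes the comparison stable under small noise.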
FIG. 14 is a schematic structural diagram of a face recognition apparatus 30 according to an embodiment of the present invention. The face recognition apparatus 30 may include an input device 310, an output device 320, a processor 330, and a memory 340. The output device in this embodiment of the present invention may be a display device.
The memory 340 may include a read-only memory and a random access memory, and provides instructions and data to the processor 330. A part of the memory 340 may further include a non-volatile random access memory (NVRAM).
The memory 340 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof:
operation instructions, including various operation instructions used to implement various operations; and
an operating system, including various system programs used to implement various basic services and to process hardware-based tasks.
In this embodiment of the present invention, the processor 330 is configured to:
obtain a face image of a user;
filter the face image to obtain a target face image;
extract a concentric dual-cross pattern (DCP) feature from the target face image; and
compute, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image and the DCP feature corresponding to an original face image, and recognize the target face image according to the similarity score, where the DCP feature corresponding to the original face image is obtained in advance.
The processor 330 controls the operation of the face recognition apparatus 30. The processor 330 may also be referred to as a central processing unit (CPU). The memory 340 may include a read-only memory and a random access memory, and provides instructions and data to the processor 330. A part of the memory 340 may further include an NVRAM. In a specific application, the components of the face recognition apparatus 30 are coupled together by a bus system 350. In addition to a data bus, the bus system 350 may include a power bus, a control bus, a status signal bus, and the like. For clarity of description, however, the various buses are all labeled as the bus system 350 in the figure.
The methods disclosed in the foregoing embodiments of the present invention may be applied to the processor 330, or implemented by the processor 330. The processor 330 may be an integrated circuit chip with signal processing capabilities. In an implementation process, the steps of the foregoing methods may be completed by an integrated logic circuit of hardware in the processor 330 or by instructions in the form of software. The processor 330 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present invention may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 340, and the processor 330 reads the information in the memory 340 and completes the steps of the foregoing methods in combination with its hardware.
Optionally, the processor 330 is further configured to:
compute a filtered gradient image of the face image as follows:
FDG(θ) = (∇G · u_θ) * I
where FDG(θ) denotes the filtered gradient image of the face image in the direction of angle θ, u_θ denotes the standard direction vector used in the filtering, G denotes a two-dimensional Gaussian filter, ∇ denotes the gradient operator, and I denotes the face image.
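The directional filtering step can be sketched in Python by convolving the face image with the directional first derivative of a 2D Gaussian, implemented as a pair of separable 1D convolutions and a projection onto the direction vector u_θ; the kernel radius and σ are illustrative choices:

```python
import numpy as np

def _gaussian_pair(sigma=1.0, radius=3):
    """1D Gaussian kernel and its first derivative (assumed scale/size)."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    dg = -x / sigma ** 2 * g  # first derivative of the Gaussian
    return g, dg

def _conv_rows(img, k):
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

def _conv_cols(img, k):
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

def filtered_gradient_image(img, theta_deg, sigma=1.0):
    """FDG(theta): project the Gaussian-derivative responses onto u_theta."""
    img = np.asarray(img, dtype=np.float64)
    g, dg = _gaussian_pair(sigma)
    gx = _conv_cols(_conv_rows(img, dg), g)  # derivative along x, smoothed in y
    gy = _conv_rows(_conv_cols(img, dg), g)  # derivative along y, smoothed in x
    theta = np.deg2rad(theta_deg)
    return np.cos(theta) * gx + np.sin(theta) * gy

# Four directions as in the embodiment: 0, 45, 90 and 135 degrees.
face = np.random.rand(64, 64)
fdg_images = [filtered_gradient_image(face, t) for t in (0.0, 45.0, 90.0, 135.0)]
```

Applying the four directions yields the four filtered gradient images from which DCP features are subsequently extracted.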
Optionally, the processor 330 is further configured to:
obtain, with the center point of the target face image as the center, an inner circle and an outer circle of different radii;
obtain, on the inner circle, 8 inner-circle sampling points at equal angular intervals;
obtain, on the outer circle, 8 outer-circle DCP sampling points at equal angular intervals, where the inner-circle sampling points correspond to the outer-circle sampling points; and
encode the inner-circle sampling points and the outer-circle sampling points, respectively, to obtain the DCP feature.
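The geometry of the sampling steps above can be sketched as follows; the concrete radii and the (x, y) coordinate convention are illustrative assumptions:

```python
import math

def dcp_sampling_points(center, r_in, r_ex):
    """Return the 8 inner-circle points A_i and 8 outer-circle points B_i.

    Points are placed at equal 45-degree angular intervals around the
    center O, so each A_i and its corresponding B_i lie on the same ray.
    """
    cx, cy = center
    inner, outer = [], []
    for i in range(8):
        angle = math.radians(45.0 * i)  # 8 directions at equal spacing
        inner.append((cx + r_in * math.cos(angle), cy + r_in * math.sin(angle)))
        outer.append((cx + r_ex * math.cos(angle), cy + r_ex * math.sin(angle)))
    return inner, outer

# Example: center O at the origin, inner radius 2, outer radius 4.
inner, outer = dcp_sampling_points((0.0, 0.0), 2.0, 4.0)
```

In practice the fractional coordinates would be rounded or bilinearly interpolated when reading gray values from the image grid.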
Optionally, the processor 330 is further configured to:
compute the DCP codes at the inner-circle sampling points and at the outer-circle sampling points as follows:
DCP_i = S(I_{A_i} − I_O) × 2 + S(I_{B_i} − I_{A_i}),  i = 0, 1, …, 7
or,
DCP_i = S(I_{B_i} − I_{A_i}) × 2 + S(I_{A_i} − I_O),  i = 0, 1, …, 7
where DCP_i denotes the DCP code at the i-th sampling point, S(x) denotes the gray-level intensity function, and I_{A_i}, I_{B_i}, and I_O denote the gray values at the sampling points A_i, B_i, and the center point O, respectively;
and to compute the DCP feature as follows:
DCP = {DCP_{h,v}, DCP_{diag}} = { Σ_{i ∈ {0,2,4,6}} DCP_i × 4^{i/2},  Σ_{i ∈ {1,3,5,7}} DCP_i × 4^{(i−1)/2} }
where DCP denotes the DCP feature, i denotes the i-th sampling point, DCP_{h,v} denotes the DCP codes at the horizontal and vertical sampling points, and DCP_{diag} denotes the DCP codes at the diagonal sampling points.
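The per-direction encoding and the feature assembly can be put into code as a minimal sketch; the hard 0/1 sign function, the 2-bit per-direction code, and the base-4 weighting of the two cross subsets are assumptions of this sketch, and the gray values are made-up example data:

```python
def S(x):
    """Hard gray-level intensity function; the soft-boundary variant
    described in this document could be substituted here."""
    return 1 if x >= 0 else 0

def dcp_codes(I_O, I_A, I_B):
    """Per-direction codes DCP_i = S(I_A_i - I_O) * 2 + S(I_B_i - I_A_i)
    for the 8 directions (I_A and I_B are length-8 gray-value lists)."""
    return [S(a - I_O) * 2 + S(b - a) for a, b in zip(I_A, I_B)]

def dcp_feature(codes):
    """Split the 8 codes into the horizontal/vertical subset {0, 2, 4, 6}
    and the diagonal subset {1, 3, 5, 7}, each packed with base-4 weights."""
    dcp_hv = sum(codes[i] * 4 ** (i // 2) for i in (0, 2, 4, 6))
    dcp_diag = sum(codes[i] * 4 ** ((i - 1) // 2) for i in (1, 3, 5, 7))
    return dcp_hv, dcp_diag

codes = dcp_codes(100, [90, 110, 95, 120, 80, 105, 100, 130],
                       [85, 120, 100, 110, 70, 115, 90, 140])
hv, diag = dcp_feature(codes)  # each value lies in the range 0..255
```

Because each subset packs four base-4 digits, the two resulting codes are bounded by 4^4 = 256, matching the compactness argument made earlier for split coding.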
Optionally, the processor 330 is further configured to:
compute the value of the gray-level intensity function S(x) as follows:
S(x) = { b(x), |x| ≥ d;  f_{1,d}(x), 0 ≤ x < d;  f_{0,d}(x), −d < x < 0 }
where S(x) denotes the gray-level intensity function, b(x) denotes a constant-valued function, f_{1,d}(x) and f_{0,d}(x) denote fuzzy membership functions, and d is a boundary threshold.
For the related descriptions of FIG. 14, refer to the related descriptions and effects of the method part in FIG. 1; details are not repeated herein.
A person skilled in the art may clearly understand that, for convenience and brevity of description, for the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples: the unit division is merely logical function division and may be other division in actual implementation; a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces; the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The face recognition method provided by the present invention is described in detail above. The principles and implementations of the present invention are described herein through specific examples, and the descriptions of the foregoing embodiments are merely intended to help understand the method and core idea of the present invention. Meanwhile, a person skilled in the art may make changes to the specific implementations and application scope according to the idea of the embodiments of the present invention. In conclusion, the content of this specification shall not be construed as a limitation on the present invention.

Claims (14)

1. A face recognition method, comprising:
    obtaining a face image of a user;
    filtering the face image to obtain a target face image;
    extracting a concentric dual-cross pattern (DCP) feature from the target face image; and
    computing, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image and the DCP feature corresponding to an original face image, and recognizing the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
2. The method according to claim 1, wherein the filtering the face image to obtain a target face image comprises:
    computing a filtered gradient image of the face image as follows:
    FDG(θ) = (∇G · u_θ) * I
    wherein FDG(θ) denotes the filtered gradient image of the face image in the direction of angle θ, u_θ denotes the standard direction vector used in the filtering, G denotes a two-dimensional Gaussian filter, ∇ denotes the gradient operator, and I denotes the face image.
3. The method according to claim 2, wherein the angle θ takes four values: 0 degrees, 45 degrees, 90 degrees, and 135 degrees.
4. The method according to claim 1, wherein the extracting a concentric dual-cross pattern (DCP) feature from the target face image comprises:
    obtaining, with the center point of the target face image as the center, an inner circle and an outer circle of different radii;
    obtaining, on the inner circle, 8 inner-circle sampling points at equal angular intervals;
    obtaining, on the outer circle, 8 outer-circle DCP sampling points at equal angular intervals, wherein the inner-circle sampling points correspond to the outer-circle sampling points; and
    encoding the inner-circle sampling points and the outer-circle sampling points, respectively, to obtain the DCP feature.
5. The method according to claim 4, wherein the encoding the inner-circle sampling points and the outer-circle sampling points, respectively, to obtain the DCP feature comprises:
    computing the DCP codes at the inner-circle sampling points and at the outer-circle sampling points as follows:
    DCP_i = S(I_{A_i} − I_O) × 2 + S(I_{B_i} − I_{A_i}),  i = 0, 1, …, 7
    or,
    DCP_i = S(I_{B_i} − I_{A_i}) × 2 + S(I_{A_i} − I_O),  i = 0, 1, …, 7
    wherein DCP_i denotes the DCP code at the i-th sampling point, S(x) denotes the gray-level intensity function, and I_{A_i}, I_{B_i}, and I_O denote the gray values at the sampling points A_i, B_i, and the center point O, respectively; and
    computing the DCP feature as follows:
    DCP = {DCP_{h,v}, DCP_{diag}} = { Σ_{i ∈ {0,2,4,6}} DCP_i × 4^{i/2},  Σ_{i ∈ {1,3,5,7}} DCP_i × 4^{(i−1)/2} }
    wherein DCP denotes the DCP feature, i denotes the i-th sampling point, DCP_{h,v} denotes the DCP codes at the horizontal and vertical sampling points, and DCP_{diag} denotes the DCP codes at the diagonal sampling points.
6. The method according to claim 5, wherein
    the value of the gray-level intensity function S(x) is computed as follows:
    S(x) = { b(x), |x| ≥ d;  f_{1,d}(x), 0 ≤ x < d;  f_{0,d}(x), −d < x < 0 }
    wherein S(x) denotes the gray-level intensity function, b(x) denotes a constant-valued function, f_{1,d}(x) and f_{0,d}(x) denote fuzzy membership functions, and d is a boundary threshold.
7. A face recognition apparatus, comprising:
    an obtaining module, configured to obtain a face image of a user;
    a filtering module, configured to filter the face image obtained by the obtaining module to obtain a target face image;
    an extraction module, configured to extract a concentric dual-cross pattern (DCP) feature from the target face image obtained after filtering by the filtering module; and
    a computing module, configured to compute, by using a chi-square test, a similarity score between the DCP feature corresponding to the target face image extracted by the extraction module and the DCP feature corresponding to an original face image, and to recognize the target face image according to the similarity score, wherein the DCP feature corresponding to the original face image is obtained in advance.
8. The face recognition apparatus according to claim 7, wherein the filtering module comprises:
    a computing unit, configured to compute a filtered gradient image of the face image as follows:
    FDG(θ) = (∇G · u_θ) * I
    wherein FDG(θ) denotes the filtered gradient image of the face image in the direction of angle θ, u_θ denotes the standard direction vector used in the filtering, G denotes a two-dimensional Gaussian filter, ∇ denotes the gradient operator, and I denotes the face image.
9. The face recognition apparatus according to claim 8, wherein the angle θ takes four values: 0 degrees, 45 degrees, 90 degrees, and 135 degrees.
10. The face recognition apparatus according to claim 7, wherein the extraction module comprises:
    a first obtaining unit, configured to obtain, with the center point of the target face image as the center, an inner circle and an outer circle of different radii;
    a second obtaining unit, configured to obtain, on the inner circle obtained by the first obtaining unit, 8 inner-circle sampling points at equal angular intervals;
    a third obtaining unit, configured to obtain, on the outer circle obtained by the first obtaining unit, 8 outer-circle DCP sampling points at equal angular intervals, wherein the inner-circle sampling points correspond to the outer-circle sampling points; and
    an encoding unit, configured to encode the inner-circle sampling points obtained by the second obtaining unit and the outer-circle sampling points obtained by the third obtaining unit, respectively, to obtain the DCP feature.
11. The face recognition apparatus according to claim 10, wherein the encoding unit comprises:
    a computing subunit, configured to compute the DCP codes at the inner-circle sampling points and at the outer-circle sampling points as follows:
    DCP_i = S(I_{A_i} − I_O) × 2 + S(I_{B_i} − I_{A_i}),  i = 0, 1, …, 7
    or,
    DCP_i = S(I_{B_i} − I_{A_i}) × 2 + S(I_{A_i} − I_O),  i = 0, 1, …, 7
    wherein DCP_i denotes the DCP code at the i-th sampling point, S(x) denotes the gray-level intensity function, and I_{A_i}, I_{B_i}, and I_O denote the gray values at the sampling points A_i, B_i, and the center point O, respectively; and
    to compute the DCP feature as follows:
    DCP = {DCP_{h,v}, DCP_{diag}} = { Σ_{i ∈ {0,2,4,6}} DCP_i × 4^{i/2},  Σ_{i ∈ {1,3,5,7}} DCP_i × 4^{(i−1)/2} }
    wherein DCP denotes the DCP feature, i denotes the i-th sampling point, DCP_{h,v} denotes the DCP codes at the horizontal and vertical sampling points, and DCP_{diag} denotes the DCP codes at the diagonal sampling points.
12. The face recognition apparatus according to claim 11, wherein
    the computing subunit is further configured to compute the value of the gray-level intensity function S(x) as follows:
    S(x) = { b(x), |x| ≥ d;  f_{1,d}(x), 0 ≤ x < d;  f_{0,d}(x), −d < x < 0 }
    wherein S(x) denotes the gray-level intensity function, b(x) denotes a constant-valued function, f_{1,d}(x) and f_{0,d}(x) denote fuzzy membership functions, and d is a boundary threshold.
13. A face recognition apparatus, comprising:
    a processor and a memory, wherein
    the memory is configured to store a program; and
    the processor is configured to execute the program in the memory, so that the face recognition apparatus performs the face recognition method according to any one of claims 1 to 6.
14. A storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a face recognition apparatus comprising one or more processors, cause the face recognition apparatus to perform the face recognition method according to any one of claims 1 to 6.
PCT/CN2015/098018 2015-12-21 2015-12-21 Human facial recognition method and human facial recognition device WO2017106996A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201580001105.1A CN107135664B (en) 2015-12-21 2015-12-21 Face recognition method and face recognition device
PCT/CN2015/098018 WO2017106996A1 (en) 2015-12-21 2015-12-21 Human facial recognition method and human facial recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/098018 WO2017106996A1 (en) 2015-12-21 2015-12-21 Human facial recognition method and human facial recognition device

Publications (1)

Publication Number Publication Date
WO2017106996A1 true WO2017106996A1 (en) 2017-06-29

Family

ID=59088743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/098018 WO2017106996A1 (en) 2015-12-21 2015-12-21 Human facial recognition method and human facial recognition device

Country Status (2)

Country Link
CN (1) CN107135664B (en)
WO (1) WO2017106996A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109800643A (en) * 2018-12-14 2019-05-24 天津大学 A kind of personal identification method of living body faces multi-angle
CN110009052A (en) * 2019-04-11 2019-07-12 腾讯科技(深圳)有限公司 A kind of method of image recognition, the method and device of image recognition model training
CN111079700A (en) * 2019-12-30 2020-04-28 河南中原大数据研究院有限公司 Three-dimensional face recognition method based on fusion of multiple data types
CN111898465A (en) * 2020-07-08 2020-11-06 北京捷通华声科技股份有限公司 Method and device for acquiring face recognition model
CN112889061A (en) * 2018-12-07 2021-06-01 北京比特大陆科技有限公司 Method, device and equipment for evaluating quality of face image and storage medium
CN113553961A (en) * 2021-07-27 2021-10-26 北京京东尚科信息技术有限公司 Training method and device of face recognition model, electronic equipment and storage medium

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN107947883A (en) * 2017-11-27 2018-04-20 戴惠英 Radio intelligent monitoring system
CN109842454A (en) * 2017-11-27 2019-06-04 戴惠英 A kind of radio Intellectualized monitoring method
CN107947809A (en) * 2017-11-30 2018-04-20 周小凤 A kind of method for the automation level that radio is provided
CN109861707A (en) * 2017-11-30 2019-06-07 周小凤 Solar power supply type radio
CN109063555B (en) * 2018-06-26 2021-07-02 杭州电子科技大学 Multi-pose face recognition method based on low-rank decomposition and sparse representation residual error comparison
CN109299653A (en) * 2018-08-06 2019-02-01 重庆邮电大学 A kind of human face expression feature extracting method based on the complete three value mode of part of improvement
CN109344758B (en) * 2018-09-25 2022-07-08 厦门大学 Face recognition method based on improved local binary pattern
CN111914632B (en) * 2020-06-19 2024-01-05 广州杰赛科技股份有限公司 Face recognition method, device and storage medium
CN112507315B (en) * 2021-02-05 2021-06-18 红石阳光(北京)科技股份有限公司 Personnel passing detection system based on intelligent brain
CN113218970A (en) * 2021-03-17 2021-08-06 上海师范大学 BGA packaging quality automatic detection method based on X-ray

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101408936A (en) * 2007-10-12 2009-04-15 三星Techwin株式会社 Method of controlling digital image processing apparatus for face detection, and digital image processing apparatus employing the method
CN102150180A (en) * 2008-10-14 2011-08-10 松下电器产业株式会社 Face recognition apparatus and face recognition method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN103150561A (en) * 2013-03-19 2013-06-12 华为技术有限公司 Face recognition method and equipment

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101408936A (en) * 2007-10-12 2009-04-15 三星Techwin株式会社 Method of controlling digital image processing apparatus for face detection, and digital image processing apparatus employing the method
CN102150180A (en) * 2008-10-14 2011-08-10 松下电器产业株式会社 Face recognition apparatus and face recognition method

Non-Patent Citations (2)

Title
DING, CHANGXING ET AL.: "Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition", 21 January 2014 (2014-01-21), pages 2-4 and 11, XP080003060, Retrieved from the Internet <URL:http://arxiv.org/abs/1401.5311> *
DING, CHANGXING ET AL.: "Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 6 August 2015 (2015-08-06), pages 2-4 and 11, XP055407559 *

Cited By (11)

Publication number Priority date Publication date Assignee Title
CN112889061A (en) * 2018-12-07 2021-06-01 北京比特大陆科技有限公司 Method, device and equipment for evaluating quality of face image and storage medium
CN109800643A (en) * 2018-12-14 2019-05-24 天津大学 Identity recognition method for living human face in multiple angles
CN109800643B (en) * 2018-12-14 2023-03-31 天津大学 Identity recognition method for living human face in multiple angles
CN110009052A (en) * 2019-04-11 2019-07-12 腾讯科技(深圳)有限公司 A kind of method of image recognition, the method and device of image recognition model training
CN110009052B (en) * 2019-04-11 2022-11-18 腾讯科技(深圳)有限公司 Image recognition method, image recognition model training method and device
CN111079700A (en) * 2019-12-30 2020-04-28 河南中原大数据研究院有限公司 Three-dimensional face recognition method based on fusion of multiple data types
CN111079700B (en) * 2019-12-30 2023-04-07 陕西西图数联科技有限公司 Three-dimensional face recognition method based on fusion of multiple data types
CN111898465A (en) * 2020-07-08 2020-11-06 北京捷通华声科技股份有限公司 Method and device for acquiring face recognition model
CN111898465B (en) * 2020-07-08 2024-05-14 北京捷通华声科技股份有限公司 Method and device for acquiring face recognition model
CN113553961A (en) * 2021-07-27 2021-10-26 北京京东尚科信息技术有限公司 Training method and device of face recognition model, electronic equipment and storage medium
CN113553961B (en) * 2021-07-27 2023-09-05 北京京东尚科信息技术有限公司 Training method and device of face recognition model, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107135664B (en) 2020-09-11
CN107135664A (en) 2017-09-05

Similar Documents

Publication Publication Date Title
WO2017106996A1 (en) Human facial recognition method and human facial recognition device
CN105138972B (en) Face authentication method and device
CN105917353B (en) Feature extraction and matching and template update for biometric authentication
WO2015149534A1 (en) Gabor binary pattern-based face recognition method and device
JP5121506B2 (en) Image processing apparatus, image processing method, program, and storage medium
KR101224408B1 (en) A distance iris recognition system
CN110569756A (en) face recognition model construction method, recognition method, device and storage medium
Iwahori et al. Automatic detection of polyp using Hessian filter and HOG features
CN108416291B (en) Face detection and recognition method, device and system
Barpanda et al. Iris recognition with tunable filter bank based feature
Banerjee et al. ARTeM: A new system for human authentication using finger vein images
Choudhary et al. A survey: Feature extraction methods for iris recognition
CN111199197B (en) Image extraction method and processing equipment for face recognition
CN112597812A (en) Finger vein identification method and system based on convolutional neural network and SIFT algorithm
Huo et al. An effective feature descriptor with Gabor filter and uniform local binary pattern transcoding for Iris recognition
Taha et al. Iris features extraction and recognition based on the local binary pattern technique
Raffei et al. Fusion iris and periocular recognitions in non-cooperative environment
Latha et al. A robust person authentication system based on score level fusion of left and right irises and retinal features
Mani et al. Design of a novel shape signature by farthest point angle for object recognition
JP3962517B2 (en) Face detection method and apparatus, and computer-readable medium
Goswami et al. Kernel group sparse representation based classifier for multimodal biometrics
CN111914585A (en) Iris identification method and system
Das et al. Person identification through IRIS recognition
CN112380966A (en) Monocular iris matching method based on feature point reprojection
CN111723612A (en) Face recognition and face recognition network training method and device, and storage medium

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 15910996; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 15910996; Country of ref document: EP; Kind code of ref document: A1