CN110084135B - Face recognition method, device, computer equipment and storage medium - Google Patents

Face recognition method, device, computer equipment and storage medium

Info

Publication number
CN110084135B
CN110084135B (application CN201910268066.1A)
Authority
CN
China
Prior art keywords
image
binary pattern
local binary
histogram
chi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910268066.1A
Other languages
Chinese (zh)
Other versions
CN110084135A (en)
Inventor
曹靖康 (Cao Jingkang)
郑权 (Zheng Quan)
王义文 (Wang Yiwen)
王健宗 (Wang Jianzong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910268066.1A priority Critical patent/CN110084135B/en
Publication of CN110084135A publication Critical patent/CN110084135A/en
Priority to PCT/CN2019/103136 priority patent/WO2020199475A1/en
Application granted granted Critical
Publication of CN110084135B publication Critical patent/CN110084135B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/467: Encoded features or binary features, e.g. local binary patterns [LBP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of biometric recognition, and in particular to face recognition. The method comprises the following steps: acquiring an image to be detected; performing color space conversion on the image to be detected to obtain a target image corresponding to the image to be detected in a preset color space; extracting local binary pattern feature values from the target image and performing histogram statistics on them to obtain a local binary pattern histogram; inputting the obtained local binary pattern histogram into a pre-trained classification model for classification detection to obtain a classification result; and performing face image recognition according to the classification result. The method can improve the efficiency and accuracy of face recognition and prevent face recognition spoofing attacks.

Description

Face recognition method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of face recognition technologies, and in particular, to a face recognition method, apparatus, computer device, and storage medium.
Background
Currently, with the continuing spread of online payment, face recognition technology keeps advancing; for example, paying by "scanning one's face" requires face recognition. However, existing face recognition technology can only identify the face image and cannot reliably distinguish whether the input face is genuine, so face recognition carries serious security risks: an attacker can forge a legitimate user's face with photos, face-swapping, masks, occlusion, electronic screens and the like to make illegal payments, causing economic loss to the legitimate user and undermining a safe payment environment.
Disclosure of Invention
The application provides a face recognition method, a face recognition device, computer equipment and a storage medium, which are used for preventing spoofing attack in face recognition and improving the safety of user information.
In a first aspect, the present application provides a face recognition method, the method comprising:
acquiring an image to be detected;
performing color space conversion on the image to be detected to obtain a target image corresponding to the image to be detected in a preset color space;
Extracting a local binary pattern feature value of the target image, and carrying out histogram statistics according to the local binary pattern feature value to obtain a local binary pattern histogram;
Inputting the obtained local binary pattern histogram into a pre-trained classification model for classification detection to obtain a classification result;
And carrying out face image recognition according to the classification result.
In a second aspect, the present application also provides a face recognition device, including:
the image acquisition unit is used for acquiring an image to be detected;
The image conversion unit is used for carrying out color space conversion on the image to be detected so as to obtain a target image corresponding to the image to be detected in a preset color space;
the image processing unit is used for extracting local binary pattern characteristic values of the target image, and carrying out histogram statistics according to the local binary pattern characteristic values to obtain a local binary pattern histogram;
the classification detection unit is used for inputting the obtained local binary pattern histogram into a pre-trained classification model for classification detection so as to obtain a classification result;
And the face recognition unit is used for recognizing the face image according to the classification result.
In a third aspect, the present application also provides a computer device comprising a memory and a processor; the memory is used for storing a computer program; the processor is configured to execute the computer program and implement the face recognition method as described above when the computer program is executed.
In a fourth aspect, the present application also provides a computer readable storage medium storing a computer program, which when executed by a processor causes the processor to implement a face recognition method as described above.
The application discloses a face recognition method, apparatus, device and storage medium: color space conversion is performed on the acquired image to be detected to obtain a target image in a preset color space; local binary pattern feature values are extracted from the target image and histogram statistics are performed on them to obtain a local binary pattern histogram; the obtained local binary pattern histogram is input into a pre-trained classification model for classification detection to obtain a classification result; and face image recognition is performed according to the classification result. The method can improve the efficiency and accuracy of face recognition and prevent face recognition spoofing attacks.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of sub-steps of the face recognition method of fig. 1;
fig. 3 is an effect schematic diagram of a face recognition method according to an embodiment of the present application;
fig. 4 is a schematic flow chart of a face recognition method according to an embodiment of the present application;
Fig. 5 is a schematic block diagram of a face recognition device according to an embodiment of the present application;
Fig. 6 is a schematic block diagram of another face recognition device according to an embodiment of the present application;
fig. 7 is a schematic block diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
The embodiment of the application provides a face recognition method, a face recognition device, computer equipment and a storage medium. The face recognition method can be applied to a terminal or a server to accurately recognize face verification information of a user, so that spoofing attack in face recognition is prevented, and the safety of the user information is further improved.
For example, the face recognition method can be used for unlock recognition, payment recognition or information verification on a mobile terminal, for unlock recognition in access control systems, and in other similar fields.
The servers may be independent servers or may be server clusters. The terminal can be electronic equipment such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, wearable equipment and the like.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present application. As shown in fig. 1, the face recognition method specifically includes steps S101 to S105.
S101, acquiring an image to be detected.
The image to be detected is an image used for face recognition, typically collected in real time; for example, a mobile phone camera captures the user's face image, and face recognition on that image then enables some function, such as unlocking the phone, launching an application, or making an online payment. The collected image to be detected is usually an image in RGB space, though other formats are possible.
S102, performing color space conversion on the image to be detected to obtain a target image corresponding to the image to be detected in a preset color space.
In this embodiment, the preset color space includes an HSV color space or a YCbCr color space, and performing color space conversion refers to converting an image to be detected into an image of the HSV color space or an image of the YCbCr color space.
Specifically, the image to be detected is converted with a conversion algorithm for the preset color space to obtain a target image in that space (HSV or YCbCr). For example, if the image to be detected is an RGB image, an RGB-to-HSV conversion algorithm converts it into a target image in the HSV color space. A conversion function may be called directly, or the conversion may be performed with image processing tools such as MATLAB or Photoshop.
In the HSV model, the parameters are: H for hue, S for saturation, and V for value (brightness). YCrCb (also written YCbCr), like YUV, is mainly used to optimize the transmission of color video signals. "Y" denotes luminance (Luminance or Luma), i.e. the gray-scale value, while "Cb" and "Cr" denote chrominance (Chrominance or Chroma), which describes the color and saturation of a given pixel. Luminance is built from the RGB input signals by summing weighted portions of the R, G and B components. Chrominance defines two aspects of color, hue and saturation, represented by Cr and Cb respectively: Cr reflects the difference between the red component of the RGB input signal and the RGB signal's luminance value, and Cb reflects the difference between the blue component and the luminance value.
Images are generally stored in RGB space, but a face's skin color in RGB space is strongly affected by brightness, which makes skin-color points hard to separate from non-skin-color points: after processing in RGB space, the skin-color points come out scattered, with many non-skin points interleaved among them, which complicates calibrating skin-color regions (locating the face, the eyes, and so on).
In this embodiment, converting the RGB image into the HSV or YCrCb color space lets the brightness effect be largely ignored, because skin color is only weakly affected by brightness in those spaces and skin-tone points cluster well. The three-dimensional YCrCb space can thus be reduced to the two-dimensional CrCb plane, in which skin-color points form recognizable shapes: a face region appears as a face, an arm appears as an arm, which helps pattern recognition. If the CrCb values of a point satisfy 133 ≤ Cr ≤ 173 and 77 ≤ Cb ≤ 127, the point may be considered a skin-tone point; otherwise it is considered a non-skin-tone point. Color space conversion of the image to be detected therefore makes it easy to remove the influence of brightness on skin color.
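As an illustrative sketch (not the patent's reference implementation), the RGB-to-YCbCr conversion and the Cr/Cb skin-tone rule above can be written as follows; the coefficients assume the full-range BT.601 conversion, which may differ slightly from the conversion the patent uses:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Full-range BT.601 RGB -> YCbCr; img is an HxWx3 array of uint8 RGB values."""
    f = img.astype(np.float64)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(ycbcr):
    """Boolean mask from the rule in the text: 133 <= Cr <= 173 and 77 <= Cb <= 127."""
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
```

In practice a library routine such as OpenCV's cv2.cvtColor would typically perform the conversion; the explicit arithmetic is shown only to make the Cr/Cb thresholding concrete.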
S103, extracting local binary pattern feature values of the target image, and carrying out histogram statistics according to the local binary pattern feature values to obtain a local binary pattern histogram.
Specifically, a local binary pattern feature value of the target image is extracted by using an algorithm of a local binary pattern (Local Binary Pattern, LBP), and then histogram statistics is performed according to the local binary pattern feature value to obtain a local binary pattern histogram.
In one embodiment, the image to be detected is converted from RGB space into a target image in the HSV or YCrCb color space, skin-color points are determined in that target image, and local binary pattern feature values are extracted from those skin-color points. This speeds up face recognition.
Taking YCrCb space as an example, pixels whose CrCb values fall within a preset CrCb range are first marked as skin-color points in the image to be detected; the corresponding local binary pattern (Local Binary Pattern, LBP) feature values are then quickly extracted from the YCrCb-space image, and an LBP histogram with N bins is built from the LBP feature values, where N is typically 64, 128, or 256.
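A minimal single-channel sketch of the 3×3 LBP operator and the N-bin histogram described above (a hypothetical illustration; the patent does not publish reference code):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: compare each interior pixel's 8 neighbours with the
    centre pixel and pack the comparison bits into an 8-bit code (0..255)."""
    g = gray.astype(np.int32)
    h, w = g.shape
    centre = g[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray, n_bins=256):
    """Histogram of LBP codes, normalised to sum to 1."""
    hist, _ = np.histogram(lbp_image(gray), bins=n_bins, range=(0, 256))
    return hist / max(hist.sum(), 1)
```

Libraries such as scikit-image provide equivalent (and faster) LBP implementations; this sketch only fixes the idea of "LBP feature value followed by histogram statistics".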
In one embodiment, in order to improve the accuracy of face recognition, as shown in fig. 2, step S103 specifically includes the following:
S103a, extracting local binary pattern feature values corresponding to multiple channels of the target image in the preset color space.
S103b, performing histogram statistics on the local binary pattern feature values of each channel to obtain that channel's local binary pattern histogram.
S103c, merging the local binary pattern histograms of the channels to generate the final local binary pattern histogram.
Specifically, as shown in fig. 3, the converted target image comprises multiple channel images; for example, a target image converted into the YCbCr color space contains three channel images: a Y image, a Cb image, and a Cr image. LBP feature values are extracted for each of the three channels (Y, Cb, Cr), histogram statistics are performed per channel to obtain three channel LBP histograms, and the three histograms are merged into the final LBP histogram. Converting the color space and building a multi-channel LBP histogram gives better robustness: images that are hard to distinguish in RGB color space often show clear texture differences once converted to the HSV or YCbCr color space, which improves recognition accuracy.
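Steps S103a to S103c amount to concatenating per-channel histograms; a small hedged helper (the parameter `lbp_histogram_fn` stands for any single-channel LBP histogram function and is our naming, not the patent's):

```python
import numpy as np

def multichannel_lbp_histogram(target, lbp_histogram_fn):
    """Compute an LBP histogram per channel (e.g. Y, Cb, Cr) and merge them
    into one feature vector, as in steps S103a-S103c."""
    return np.concatenate([lbp_histogram_fn(target[..., c])
                           for c in range(target.shape[-1])])
```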
S104, inputting the obtained local binary pattern histogram into a pre-trained classification model for classification detection to obtain a classification result.
In this embodiment, the classification model includes a support vector machine classifier (Support Vector Machine, SVM), and of course, a deep learning model may also be used as the classifier, for example, a convolutional neural network is used for model training, so as to obtain a neural network model with a classification function as the classification model.
Specifically, the LBP histogram obtained in the previous step is input into a pre-trained classification model for classification, producing a binary (two-class) result. For example, as shown in fig. 3, if the SVM outputs Real, the image to be detected corresponds to a living face; if the SVM outputs Fake, it corresponds to a non-living image, which may be a print attack or a video (replay) attack.
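A toy sketch of training and querying such an SVM classifier with scikit-learn; the randomly generated "histograms" and labels below are stand-ins for illustration only, not the patent's training data:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in data: each row plays the role of an LBP histogram;
# label 1 = Real (live), 0 = Fake (non-live).
rng = np.random.default_rng(0)
live = rng.dirichlet(np.full(8, 5.0), size=40)   # flatter histograms
fake = rng.dirichlet(np.full(8, 0.5), size=40)   # spikier histograms
X = np.vstack([live, fake])
y = np.array([1] * 40 + [0] * 40)

clf = SVC(kernel="rbf").fit(X, y)

def classify(hist):
    """Map the SVM's two-class output to the Real/Fake labels of fig. 3."""
    return "Real" if clf.predict([hist])[0] == 1 else "Fake"
```

A deep model (e.g. a small CNN) could replace the SVC here, as the text notes; the interface of `classify` would stay the same.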
S105, face image recognition is carried out according to the classification result.
Specifically, if the classification result is a living body image, performing image recognition on a face image in the image to be detected; and if the classification result is a non-living body image, outputting verification failure information, such as that the face verification fails.
The image recognition of the face image in the image to be detected comprises: determining a face image in the target image, and comparing the face image against pre-collected face features. The feature comparison can also be performed directly on the target image in the converted color space to obtain a recognition result, such as recognition success or recognition failure; this improves the accuracy and speed of face recognition and keeps it real-time.
According to the face recognition method disclosed in this embodiment, the image to be detected is converted into a target image in a preset color space (HSV or YCbCr); local binary pattern feature values are extracted from the target image and histogram statistics are performed on them to obtain an LBP histogram; the LBP histogram is input into a pre-trained classification model (an SVM binary classifier) for classification detection to obtain a classification result; and the face image is recognized according to the classification result output by the model. The method can effectively prevent spoofing attacks in face recognition and thereby improve the safety of user information.
Referring to fig. 4, fig. 4 is a schematic flowchart of another face recognition method according to an embodiment of the present application. As shown in fig. 4, the face recognition method specifically includes steps S201 to S206.
S201, acquiring an image to be detected.
The image to be detected is an image used for face recognition, typically collected in real time; for example, a mobile phone camera captures the user's face image. The collected image to be detected is generally an image in RGB space.
S202, performing color space conversion on the image to be detected to obtain a target image corresponding to the image to be detected in a preset color space.
The preset color space comprises an HSV color space or a YCbCr color space, and the color space conversion is to convert the image to be detected into an image of the HSV color space or an image of the YCbCr color space.
Specifically, the image to be detected is converted with a conversion algorithm for the preset color space to obtain a target image in that space (HSV or YCbCr). For example, if the image to be detected is an RGB image, an RGB-to-HSV conversion algorithm converts it into a target image in the HSV color space.
S203, extracting local binary pattern feature values of the target image, and carrying out histogram statistics according to the local binary pattern feature values to obtain a local binary pattern histogram.
Specifically, a local binary pattern feature value of the target image is extracted by using an algorithm of a local binary pattern (Local Binary Pattern, LBP), and then histogram statistics is performed according to the local binary pattern feature value to obtain a local binary pattern histogram.
S204, calculating the chi-square distance between the target image and the image in the preset data set according to the local binary pattern histogram, wherein the preset data set comprises a living body data set and a non-living body data set.
Specifically, step S204 includes the following: and calculating the chi-square distance between the target image and the image in the preset data set by using a preset chi-square distance formula according to the local binary pattern histogram.
Wherein, the preset chi-square distance formula is:
d(H_x, H_r, H_f) = d_γ(H_x, H_r) - d_γ(H_x, H_f)  (1)
In formula (1), d(H_x, H_r, H_f) is the chi-square score of the target image against the preset data set; H_x is the LBP histogram of the target image, and H_r and H_f are reference LBP histograms representing the living and non-living data sets, respectively; d_γ is the pairwise chi-square distance given by the chi-square distance calculation formula below.
In one embodiment, before calculating the chi-square distance between the target image and the image in the preset dataset according to the LBP histogram, the method further comprises: a preset data set is established, the preset data set including a living data set and a non-living data set.
First, a data set containing only real, live face images (the living data set) is built; the chi-square distance between the LBP histogram of each live face image and those of the other live face images is computed with the chi-square distance formula, and the average of these distances is taken as the distance threshold of the living data set.
Then, a data set containing only non-living face images (the non-living data set) is built in the same way: the chi-square distance between the LBP histogram of each non-living face image and those of the other non-living face images is computed with the chi-square distance formula, and the average of these distances is taken as the distance threshold of the non-living data set. The living and non-living data sets together can also be used to train the SVM classifier that decides whether an input image is a living or non-living image.
The chi-square distance calculation formula is:
d_γ(H_x, H_y) = Σ_{i=1}^{N} (H_x(i) - H_y(i))² / (H_x(i) + H_y(i))  (2)
Wherein H_x(i) is the value of the target image's LBP histogram at bin i, and H_y(i) is the value at bin i of the LBP histogram of an image in the living or non-living data set; the number of bins N of the LBP histogram is typically 64 or 128, and at most 256; d_γ(H_x, H_y) is the chi-square distance between the two face images and represents their similarity (the smaller the distance, the more similar the images).
Using the preset data sets in this way, the chi-square distance between a live photo and a non-live photo differs clearly from the distance between two live photos, so the distance is discriminative and the identification accuracy can be improved.
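The chi-square computations in formulas (1) and (2) can be sketched as follows; the small `eps` term guarding against empty histogram bins is our addition, not part of the patent's formulas:

```python
import numpy as np

def chi_square_distance(hx, hy, eps=1e-10):
    """Formula (2): d_gamma(Hx, Hy) = sum_i (Hx(i) - Hy(i))^2 / (Hx(i) + Hy(i))."""
    hx = np.asarray(hx, dtype=np.float64)
    hy = np.asarray(hy, dtype=np.float64)
    return float(np.sum((hx - hy) ** 2 / (hx + hy + eps)))

def live_vs_fake_score(hx, hr, hf):
    """Formula (1): d(Hx, Hr, Hf) = d_gamma(Hx, Hr) - d_gamma(Hx, Hf).
    More negative values mean Hx is closer to the live reference than to the fake one."""
    return chi_square_distance(hx, hr) - chi_square_distance(hx, hf)
```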
In one embodiment, the color space conversion, the multi-channel LBP histogram, and the chi-square distances computed per channel can all be used together for recognition and detection, which gives better robustness.
S205, inputting the local binary pattern histogram and the chi-square distance into a pre-trained classification model for classification detection to obtain a classification result.
In this embodiment, the classification model includes a support vector machine classifier (Support Vector Machine, SVM), and of course, a deep learning model may also be used as the classifier, for example, a convolutional neural network is used for model training, so as to obtain a neural network model with a classification function as the classification model.
Specifically, the LBP histogram and the chi-square distance obtained in the above steps are input together into a pre-trained classification model for classification, producing a binary (two-class) result. For example, as shown in fig. 3, if the SVM outputs Real, the image to be detected corresponds to a living face; if the SVM outputs Fake, it corresponds to a non-living image, which may be a print attack or a video (replay) attack.
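The second embodiment feeds both the histogram and the chi-square distances to the classifier; one plausible way to assemble that input vector (an assumption on our part — the patent does not specify the exact feature layout) is:

```python
import numpy as np

def classifier_input(hist, d_live, d_fake):
    """Concatenate the LBP histogram with the chi-square distances to the
    live and non-live reference sets into a single SVM feature vector."""
    return np.concatenate([np.asarray(hist, dtype=np.float64),
                           [d_live, d_fake]])
```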
S206, carrying out face image recognition according to the classification result.
Specifically, if the classification result is a living body image, performing image recognition on a face image in the image to be detected; and if the classification result is a non-living body image, outputting verification failure information, such as that the face verification fails.
The image recognition of the face image in the image to be detected comprises the following steps: determining a face image in the target image; and comparing and identifying the face image with the face features acquired in advance.
According to the face recognition method disclosed in this embodiment, the image to be detected is converted into a target image in a preset color space (HSV or YCbCr); local binary pattern feature values are extracted from the target image, histogram statistics are performed on them to obtain an LBP histogram, and the chi-square distance between that histogram and the preset data sets is calculated; the LBP histogram and the chi-square distance are input together into a pre-trained classification model (an SVM binary classifier) for classification detection to obtain a classification result; and the face image is recognized according to the classification result output by the model. By combining the LBP histogram with the chi-square distance, the method can effectively prevent spoofing attacks in face recognition and thereby improve the safety of user information.
Referring to fig. 5, fig. 5 is a schematic block diagram of a face recognition device according to an embodiment of the present application, where the face recognition device may be configured in a terminal or a server, for performing the face recognition method described above.
As shown in fig. 5, the face recognition apparatus 400 includes: an image acquisition unit 401, an image conversion unit 402, an image processing unit 403, a classification detection unit 404, and a face recognition unit 405.
An image acquisition unit 401 is configured to acquire an image to be detected.
An image conversion unit 402, configured to perform color space conversion on the image to be detected to obtain a target image corresponding to the image to be detected in a preset color space.
An image processing unit 403, configured to extract a local binary pattern feature value of the target image, and perform histogram statistics according to the local binary pattern feature value to obtain a local binary pattern histogram.
In one embodiment, the image processing unit 403 includes: a feature value extraction subunit 4031, a histogram statistics subunit 4032, and a histogram merging subunit 4033.
The feature value extracting subunit 4031 is configured to extract a local binary pattern feature value corresponding to the multiple channels of the target image in the preset color space.
The histogram statistics subunit 4032 is configured to perform histogram statistics on the local binary pattern feature values of each channel to obtain a local binary pattern histogram of the channel.
A histogram merging subunit 4033, configured to merge a plurality of the channel local binary pattern histograms to generate a local binary pattern histogram.
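The extract / per-channel statistics / merge steps carried out by these three subunits can be sketched as follows. This assumes the classical 8-neighbour, radius-1 LBP operator and a 256-bin histogram per channel, which the patent does not mandate; it is a simplified illustration, not the patented implementation:

```python
import numpy as np

def lbp_image(channel):
    """Basic 8-neighbour, radius-1 LBP code for every interior pixel of a
    single-channel image (corners and edges are skipped for simplicity)."""
    c = channel.astype(np.int32)
    center = c[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # Neighbour offsets, clockwise from top-left, with their bit weights
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = c[1 + dy:c.shape[0] - 1 + dy, 1 + dx:c.shape[1] - 1 + dx]
        # Set the bit where the neighbour is >= the centre pixel
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

def multichannel_lbp_histogram(image):
    """Per-channel 256-bin LBP histograms, concatenated into one merged
    local binary pattern histogram (mirroring subunits 4031-4033)."""
    hists = []
    for ch in range(image.shape[2]):
        codes = lbp_image(image[..., ch])
        hist, _ = np.histogram(codes, bins=256, range=(0, 256))
        hists.append(hist)
    return np.concatenate(hists)  # 3 channels -> 3 * 256 = 768 bins
```

On a uniform image every neighbour equals its centre, so each channel produces the single code 255 and the merged histogram has one count in the last bin of each 256-bin block.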
The classification detection unit 404 is configured to input the obtained local binary pattern histogram to a pre-trained classification model for classification detection to obtain a classification result.
And the face recognition unit 405 is configured to perform face image recognition according to the classification result.
Referring to fig. 6, fig. 6 is a schematic block diagram of another face recognition apparatus according to an embodiment of the present application, which is used to perform the face recognition method described above. The face recognition device can be configured in a server or a terminal.
As shown in fig. 6, the face recognition apparatus 500 includes: an image acquisition unit 501, an image conversion unit 502, an image processing unit 503, a distance calculation unit 504, a classification detection unit 505, and a face recognition unit 506.
An image acquisition unit 501 is configured to acquire an image to be detected.
The image conversion unit 502 is configured to perform color space conversion on the image to be detected to obtain a target image corresponding to the image to be detected in a preset color space.
An image processing unit 503, configured to extract a local binary pattern feature value of the target image, and perform histogram statistics according to the local binary pattern feature value to obtain a local binary pattern histogram.
A distance calculating unit 504, configured to calculate a chi-square distance between the target image and an image in a preset dataset according to the local binary pattern histogram, where the preset dataset includes a living dataset and a non-living dataset.
The classification detection unit 505 is configured to input the obtained local binary pattern histogram to a pre-trained classification model for classification detection to obtain a classification result.
And the face recognition unit 506 is configured to perform face image recognition according to the classification result.
It should be noted that, for convenience and brevity of description, the specific working process of the apparatus and each unit described above may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The apparatus described above may be implemented in the form of a computer program which is executable on a computer device as shown in fig. 7.
Referring to fig. 7, fig. 7 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device may be a server or a terminal.
With reference to FIG. 7, the computer device includes a processor, memory, and a network interface connected by a system bus, where the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions that, when executed, cause the processor to perform any of the face recognition methods provided by the embodiments of the present application.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The internal memory provides an environment for the execution of the computer program in the non-volatile storage medium; the computer program, when executed by the processor, causes the processor to perform any of the face recognition methods provided by the embodiments of the present application.
The network interface is used for network communication, such as transmitting assigned tasks. It will be appreciated by those skilled in the art that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the present application and does not limit the computer device to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Wherein in one embodiment the processor is configured to run a computer program stored in the memory to implement the steps of:
Acquiring an image to be detected; performing color space conversion on the image to be detected to obtain a target image corresponding to the image to be detected in a preset color space; extracting a local binary pattern feature value of the target image, and carrying out histogram statistics according to the local binary pattern feature value to obtain a local binary pattern histogram; inputting the obtained local binary pattern histogram into a pre-trained classification model for classification detection to obtain a classification result; and carrying out face image recognition according to the classification result.
In one embodiment, the processor is configured to, when implementing the extracting the local binary pattern feature value of the target image, perform histogram statistics according to the local binary pattern feature value to obtain a local binary pattern histogram, implement:
Extracting local binary pattern characteristic values corresponding to multiple channels of the target image in the preset color space; carrying out histogram statistics on the local binary pattern characteristic values of each channel to obtain a local binary pattern histogram of the channel; and combining a plurality of the channel local binary pattern histograms to generate a local binary pattern histogram.
In one embodiment, before implementing the inputting the obtained local binary pattern histogram into a pre-trained classification model for classification detection to obtain a classification result, the processor is further configured to implement:
And calculating the chi-square distance between the target image and an image in a preset data set according to the local binary pattern histogram, wherein the preset data set comprises a living body data set and a non-living body data set.
Correspondingly, when the processor inputs the obtained local binary pattern histogram to a pre-trained classification model to perform classification detection to obtain a classification result, the processor is configured to implement:
And inputting the local binary pattern histogram and the chi-square distance into a pre-trained classification model for classification detection to obtain a classification result.
In one embodiment, when implementing the computing, according to the local binary pattern histogram, a chi-square distance between the target image and an image in a preset dataset, the processor is configured to implement:
According to the local binary pattern histogram, calculating the chi-square distance between the target image and the image in the preset data set by using a preset chi-square distance formula;
Wherein, the preset chi-square distance formula is:
d(Hx, Hr, Hf) = dχ(Hx, Hr) - dχ(Hx, Hf)

wherein d(Hx, Hr, Hf) is the chi-square distance between the target image and the images in the preset data set; Hx is the local binary pattern histogram of the target image; Hr and Hf are the average local binary pattern histograms of the living data set and the non-living data set, respectively; and dχ(·,·) denotes the chi-square distance between two histograms.
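A minimal sketch of this distance computation, assuming the conventional chi-square histogram distance dχ(H1, H2) = Σi (h1i - h2i)² / (h1i + h2i), which the patent does not write out explicitly:

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms:
    d_chi(H1, H2) = sum_i (h1_i - h2_i)^2 / (h1_i + h2_i).
    eps guards against division by zero on empty bins."""
    h1 = np.asarray(h1, dtype=np.float64)
    h2 = np.asarray(h2, dtype=np.float64)
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def liveness_distance(h_x, h_r, h_f):
    """d(Hx, Hr, Hf) = d_chi(Hx, Hr) - d_chi(Hx, Hf): negative when the
    probe histogram Hx is closer to the average living histogram Hr than
    to the average non-living histogram Hf."""
    return chi_square(h_x, h_r) - chi_square(h_x, h_f)
```

A probe identical to the living average yields dχ(Hx, Hr) = 0, so the signed distance is at most zero; the sign of the result is itself a useful feature for the downstream classifier.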
In one embodiment, when implementing the face image recognition according to the classification result, the processor is configured to implement:
If the classification result is a living body image, carrying out image recognition on a face image in the image to be detected; and if the classification result is a non-living body image, outputting verification failure information.
In one embodiment, when implementing the image recognition of the face image in the image to be detected, the processor is configured to implement:
determining a face image in the target image; and comparing and identifying the face image with the face features acquired in advance.
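The comparison-and-identification step could look like the following; the patent does not specify the feature extractor, the matching metric, or a threshold, so the cosine-similarity measure and the 0.5 cutoff here are illustrative assumptions:

```python
import numpy as np

def match_face(probe_feature, enrolled_feature, threshold=0.5):
    """Compare a probe face feature vector against a pre-acquired enrolled
    feature using cosine similarity. The feature vectors, the similarity
    measure, and the 0.5 threshold are illustrative assumptions; the
    patent does not fix a matching method."""
    a = np.asarray(probe_feature, dtype=np.float64)
    b = np.asarray(enrolled_feature, dtype=np.float64)
    sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim >= threshold, sim
```

Identical vectors give similarity 1.0 (a match); orthogonal vectors give 0.0 (no match).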
In one embodiment, the preset color space comprises an HSV color space or a YCbCr color space; the classification model includes a support vector machine classifier.
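For illustration, inference with a linear support vector machine binary classifier reduces to the sign of a decision function over the input features (here, the LBP histogram concatenated with the chi-square distance). The weights below are placeholders, not the patent's trained model:

```python
import numpy as np

def svm_predict(features, w, b):
    """Decision function of a trained linear SVM binary classifier:
    sign(w . x + b) separates living from non-living samples. In the
    described method, w and b would come from training on LBP histograms
    plus chi-square distances; the values passed in here are illustrative
    placeholders only."""
    score = float(np.dot(w, np.asarray(features, dtype=np.float64)) + b)
    return "living" if score >= 0 else "non-living"
```

A kernel SVM replaces the dot product with a kernel evaluation against the support vectors, but the thresholding of the decision score is the same.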
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, the computer program comprises program instructions, and the processor executes the program instructions to realize any face recognition method provided by the embodiment of the application.
The computer readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiments, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the computer device.
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (6)

1. A face recognition method, comprising:
acquiring an image to be detected;
performing color space conversion on the image to be detected to obtain a target image corresponding to the image to be detected in a preset color space;
Extracting a local binary pattern feature value of the target image, and carrying out histogram statistics according to the local binary pattern feature value to obtain a local binary pattern histogram; the extracting the local binary pattern feature value of the target image, and performing histogram statistics according to the local binary pattern feature value to obtain a local binary pattern histogram, including: extracting local binary pattern characteristic values corresponding to multiple channels of the target image in the preset color space; carrying out histogram statistics on the local binary pattern characteristic values of each channel to obtain a local binary pattern histogram of the channel; combining a plurality of the channel local binary pattern histograms to generate a local binary pattern histogram;
Inputting the obtained local binary pattern histogram into a pre-trained classification model for classification detection to obtain a classification result; before the obtained local binary pattern histogram is input into a pre-trained classification model for classification detection to obtain a classification result, the method further comprises the following steps: calculating the chi-square distance between the target image and an image in a preset data set according to the local binary pattern histogram, wherein the preset data set comprises a living body data set and a non-living body data set; the step of inputting the obtained local binary pattern histogram to a pre-trained classification model for classification detection to obtain a classification result comprises the following steps: inputting the local binary pattern histogram and the chi-square distance into a pre-trained classification model for classification detection to obtain a classification result; the calculating the chi-square distance between the target image and the image in the preset data set according to the local binary pattern histogram comprises the following steps: according to the local binary pattern histogram, calculating the chi-square distance between the target image and the image in the preset data set by using a preset chi-square distance formula; wherein, the preset chi-square distance formula is:
d(Hx, Hr, Hf) = dχ(Hx, Hr) - dχ(Hx, Hf)
wherein d(Hx, Hr, Hf) is the chi-square distance between the target image and the images in the preset data set; Hx is the local binary pattern histogram of the target image; and Hr and Hf are the average local binary pattern histograms of the living data set and the non-living data set, respectively;
carrying out face image recognition according to the classification result; the step of performing facial image recognition according to the classification result comprises the following steps: if the classification result is a living body image, carrying out image recognition on a face image in the image to be detected; and if the classification result is a non-living body image, outputting verification failure information.
2. The face recognition method according to claim 1, wherein the performing image recognition on the face image in the image to be detected includes:
Determining a face image in the target image; and
And comparing and identifying the face image with the face features acquired in advance.
3. The face recognition method of claim 1, wherein the preset color space comprises an HSV color space or a YCbCr color space; the classification model includes a support vector machine classifier.
4. A face recognition device, comprising:
the image acquisition unit is used for acquiring an image to be detected;
The image conversion unit is used for carrying out color space conversion on the image to be detected so as to obtain a target image corresponding to the image to be detected in a preset color space;
The image processing unit is used for extracting local binary pattern characteristic values of the target image, and carrying out histogram statistics according to the local binary pattern characteristic values to obtain a local binary pattern histogram; the extracting the local binary pattern feature value of the target image, and performing histogram statistics according to the local binary pattern feature value to obtain a local binary pattern histogram, including: extracting local binary pattern characteristic values corresponding to multiple channels of the target image in the preset color space; carrying out histogram statistics on the local binary pattern characteristic values of each channel to obtain a local binary pattern histogram of the channel; combining a plurality of the channel local binary pattern histograms to generate a local binary pattern histogram;
The classification detection unit is used for inputting the obtained local binary pattern histogram into a pre-trained classification model for classification detection so as to obtain a classification result; before the obtained local binary pattern histogram is input into a pre-trained classification model for classification detection to obtain a classification result, the method further comprises the following steps: calculating the chi-square distance between the target image and an image in a preset data set according to the local binary pattern histogram, wherein the preset data set comprises a living body data set and a non-living body data set; the step of inputting the obtained local binary pattern histogram to a pre-trained classification model for classification detection to obtain a classification result comprises the following steps: inputting the local binary pattern histogram and the chi-square distance into a pre-trained classification model for classification detection to obtain a classification result; the calculating the chi-square distance between the target image and the image in the preset data set according to the local binary pattern histogram comprises the following steps: according to the local binary pattern histogram, calculating the chi-square distance between the target image and the image in the preset data set by using a preset chi-square distance formula; wherein, the preset chi-square distance formula is:
d(Hx, Hr, Hf) = dχ(Hx, Hr) - dχ(Hx, Hf)
wherein d(Hx, Hr, Hf) is the chi-square distance between the target image and the images in the preset data set; Hx is the local binary pattern histogram of the target image; and Hr and Hf are the average local binary pattern histograms of the living data set and the non-living data set, respectively;
the face recognition unit is used for recognizing the face image according to the classification result; the step of performing facial image recognition according to the classification result comprises the following steps: if the classification result is a living body image, carrying out image recognition on a face image in the image to be detected; and if the classification result is a non-living body image, outputting verification failure information.
5. A computer device, the computer device comprising a memory and a processor;
The memory is used for storing a computer program;
the processor being configured to execute the computer program and to implement the face recognition method according to any one of claims 1 to 3 when the computer program is executed.
6. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which when executed by a processor causes the processor to implement the face recognition method according to any one of claims 1 to 3.
CN201910268066.1A 2019-04-03 2019-04-03 Face recognition method, device, computer equipment and storage medium Active CN110084135B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910268066.1A CN110084135B (en) 2019-04-03 2019-04-03 Face recognition method, device, computer equipment and storage medium
PCT/CN2019/103136 WO2020199475A1 (en) 2019-04-03 2019-08-28 Facial recognition method and apparatus, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910268066.1A CN110084135B (en) 2019-04-03 2019-04-03 Face recognition method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110084135A CN110084135A (en) 2019-08-02
CN110084135B true CN110084135B (en) 2024-04-23

Family

ID=67414217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910268066.1A Active CN110084135B (en) 2019-04-03 2019-04-03 Face recognition method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110084135B (en)
WO (1) WO2020199475A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084135B (en) * 2019-04-03 2024-04-23 平安科技(深圳)有限公司 Face recognition method, device, computer equipment and storage medium
CN110427746A (en) * 2019-08-08 2019-11-08 腾讯科技(深圳)有限公司 Sliding block verifies code verification method, device, storage medium and computer equipment
CN110598719A (en) * 2019-09-11 2019-12-20 南京师范大学 Method for automatically generating face image according to visual attribute description
CN110717444A (en) * 2019-10-09 2020-01-21 北京明略软件系统有限公司 Lipstick number identification method and device
CN112883762A (en) * 2019-11-29 2021-06-01 广州慧睿思通科技股份有限公司 Living body detection method, device, system and storage medium
CN111696080B (en) * 2020-05-18 2022-12-30 江苏科技大学 Face fraud detection method, system and storage medium based on static texture
CN111709312B (en) * 2020-05-26 2023-09-22 上海海事大学 Local feature face recognition method based on combined main mode
CN112001785A (en) * 2020-07-21 2020-11-27 小花网络科技(深圳)有限公司 Network credit fraud identification method and system based on image identification
CN112200075B (en) * 2020-10-09 2024-06-04 西安西图之光智能科技有限公司 Human face anti-counterfeiting method based on anomaly detection
CN112200080A (en) * 2020-10-10 2021-01-08 平安国际智慧城市科技股份有限公司 Face recognition method and device, electronic equipment and storage medium
CN112465753B (en) * 2020-11-16 2024-05-28 北京工业大学 Pollen particle detection method and device and electronic equipment
CN112560742A (en) * 2020-12-23 2021-03-26 杭州趣链科技有限公司 Human face in-vivo detection method, device and equipment based on multi-scale local binary pattern
CN113221695B (en) * 2021-04-29 2023-12-12 深圳数联天下智能科技有限公司 Method for training skin color recognition model, method for recognizing skin color and related device
CN113283405A (en) * 2021-07-22 2021-08-20 第六镜科技(北京)有限公司 Mask detection method and device, computer equipment and storage medium
CN113591865B (en) * 2021-07-28 2024-03-26 深圳甲壳虫智能有限公司 Loop detection method and device and electronic equipment
CN113724091A (en) * 2021-08-13 2021-11-30 健医信息科技(上海)股份有限公司 Insurance claim settlement method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102332086A (en) * 2011-06-15 2012-01-25 夏东 Facial identification method based on dual threshold local binary pattern
KR20160044668A (en) * 2014-10-15 2016-04-26 서울시립대학교 산학협력단 Face identifying method, face identifying apparatus and computer program executing the method
CN106650669A (en) * 2016-12-27 2017-05-10 重庆邮电大学 Face recognition method for identifying counterfeit photo deception
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 A kind of face identification method and device
CN107862299A (en) * 2017-11-28 2018-03-30 电子科技大学 A kind of living body faces detection method based on near-infrared Yu visible ray binocular camera
CN108268859A (en) * 2018-02-08 2018-07-10 南京邮电大学 A kind of facial expression recognizing method based on deep learning
KR20180094453A (en) * 2017-02-15 2018-08-23 동명대학교산학협력단 FACE RECOGNITION Technique using Multi-channel Gabor Filter and Center-symmetry Local Binary Pattern
CN108921041A (en) * 2018-06-06 2018-11-30 深圳神目信息技术有限公司 A kind of biopsy method and device based on RGB and IR binocular camera
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9171226B2 (en) * 2012-09-26 2015-10-27 Carnegie Mellon University Image matching using subspace-based discrete transform encoded local binary patterns
GB2519620B (en) * 2013-10-23 2015-12-30 Imagination Tech Ltd Skin colour probability map
CN105426816A (en) * 2015-10-29 2016-03-23 深圳怡化电脑股份有限公司 Method and device of processing face images
CN108875618A (en) * 2018-06-08 2018-11-23 高新兴科技集团股份有限公司 A kind of human face in-vivo detection method, system and device
CN110084135B (en) * 2019-04-03 2024-04-23 平安科技(深圳)有限公司 Face recognition method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2020199475A1 (en) 2020-10-08
CN110084135A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN110084135B (en) Face recognition method, device, computer equipment and storage medium
Liu et al. Learning deep models for face anti-spoofing: Binary or auxiliary supervision
US11423701B2 (en) Gesture recognition method and terminal device and computer readable storage medium using the same
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
Atoum et al. Face anti-spoofing using patch and depth-based CNNs
CN105893920B (en) Face living body detection method and device
Li et al. Multi-angle head pose classification when wearing the mask for face recognition under the COVID-19 coronavirus epidemic
Boulkenafet et al. On the generalization of color texture-based face anti-spoofing
WO2017190646A1 (en) Facial image processing method and apparatus and storage medium
WO2019137178A1 (en) Face liveness detection
CN109271930B (en) Micro-expression recognition method, device and storage medium
CN111967319B (en) Living body detection method, device, equipment and storage medium based on infrared and visible light
US11315360B2 (en) Live facial recognition system and method
Hajraoui et al. Face detection algorithm based on skin detection, watershed method and gabor filters
CN111832405A (en) Face recognition method based on HOG and depth residual error network
JPWO2017061106A1 (en) Information processing apparatus, image processing system, image processing method, and program
Paul et al. Rotation invariant multiview face detection using skin color regressive model and support vector regression
Yadav et al. Fast face detection based on skin segmentation and facial features
Marasco et al. Deep color spaces for fingerphoto presentation attack detection in mobile devices
Deng et al. Attention-aware dual-stream network for multimodal face anti-spoofing
Gangopadhyay et al. FACE DETECTION AND RECOGNITION USING HAAR CLASSIFIER AND LBP HISTOGRAM.
CN113221842A (en) Model training method, image recognition method, device, equipment and medium
Youlian et al. Face detection method using template feature and skin color feature in rgb color space
Xu et al. Face detection based on skin color segmentation and AdaBoost algorithm
He et al. Face Spoofing Detection Based on Combining Different Color Space Models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant