WO2019114580A1 - Living body detection method, computer device and computer-readable storage medium - Google Patents

Living body detection method, computer device and computer-readable storage medium

Info

Publication number
WO2019114580A1
WO2019114580A1 · PCT/CN2018/119189 · CN2018119189W
Authority
WO
WIPO (PCT)
Prior art keywords
face image
feature
matrix
living body
sample
Application number
PCT/CN2018/119189
Other languages
English (en)
French (fr)
Inventor
余梓彤
严蕤
牟永强
Original Assignee
深圳励飞科技有限公司
Application filed by 深圳励飞科技有限公司
Publication of WO2019114580A1

Classifications

    • G06F 18/2414 — Pattern recognition; analysing; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on distances to training or reference patterns (distances to prototypes; distances to cluster centroïds); smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06F 18/253 — Pattern recognition; analysing; fusion techniques of extracted features
    • G06V 10/56 — Image or video recognition or understanding; extraction of image or video features relating to colour
    • G06V 40/161 — Recognition of biometric, human-related or animal-related patterns; human faces, e.g. facial parts, sketches or expressions; detection; localisation; normalisation
    • G06V 40/168 — Recognition of biometric, human-related or animal-related patterns; human faces; feature extraction; face representation
    • G06V 40/45 — Recognition of biometric, human-related or animal-related patterns; spoof detection, e.g. liveness detection; detection of the body part being alive

Definitions

  • the invention belongs to the field of face anti-counterfeiting, and in particular relates to a living body detecting method, a computer device and a computer readable storage medium.
  • Non-interactive living body detection techniques generally fall into two categories: detection based on color texture information and detection based on image motion information. The basic idea of living body detection based on color texture information is to classify and recognize faces using their color texture information; however, this approach lacks facial motion information and is easily attacked with high-definition pictures or videos.
  • The basic idea of living body detection based on image motion information is to use the micro-motion information of the face together with simple facial texture information; however, this approach lacks deep extraction of discriminative facial features and is likewise easily attacked with high-definition picture or video information. As a result, the recognition accuracy of existing living body detection systems is low and their security is poor.
  • Therefore, existing living body detection systems suffer from low recognition accuracy and poor security.
  • the invention provides a living body detecting method, a computer device and a computer readable storage medium, aiming at solving the problems of low recognition accuracy and poor safety of the existing living body detection system.
  • a first aspect of the present invention provides a living body detecting method, and the living body detecting method includes:
  • the multi-layer perceptron is trained by using a preset training set to determine a multi-layer perceptron model
  • the first color space is an RGB color space
  • the second color space is a Lab color space
  • the extracting of the texture feature of the face image of the intermediate frame converted into the second color space includes:
  • the extracting the local phase quantization texture feature of the preset neighborhood of the face image converted into the intermediate frame of the Lab color space includes:
  • the merging the texture feature with the dynamic mode feature to obtain the merged fusion feature comprises:
  • the multi-level local phase quantization texture feature of the preset neighborhood is merged with the dynamic mode feature to obtain the merged fusion feature.
  • the extracting the dynamic mode features of the consecutive N frames of the face image includes:
  • the extracting of the dynamic mode feature with the largest energy among the dynamic mode features of the consecutive N frames of face images includes:
  • a column vector of size (m*n)*1 is used to represent the m*n gray value data contained in a face image, and a first data matrix composed of the N-1 column vectors corresponding to the first N-1 frames of face images and a second data matrix composed of the N-1 column vectors corresponding to the last N-1 frames of face images are obtained, where m and n are positive integers;
  • the obtaining the adjoint matrix of the linear mapping matrix according to the first data matrix and the second data matrix comprises:
  • the inverse matrix of the upper triangular matrix, the pseudo inverse matrix of the lower triangular matrix, and the second data matrix are multiplied to obtain an adjoint matrix of the linear mapping matrix.
  • the multi-layer perceptron includes at least a first fully connected layer and a second fully connected layer, wherein the multi-layer perceptron is trained by using a preset training set, and determining the multi-layer perceptron model includes:
  • each sample in the preset training set includes a face image of at least consecutive N frames
  • the parameters of the fully connected layers are determined, thereby determining the multi-layer perceptron model.
  • the preset condition includes that the number of times the total loss has been calculated equals a preset count threshold, or that the total loss is less than or equal to a preset loss threshold.
  • a second aspect of the present invention provides a living body detection system, the living body detection system comprising:
  • a training module for training a multi-layer perceptron with a preset training set to determine a multi-layer perceptron model
  • An acquiring module configured to acquire a face image of consecutive N frames to be detected, where the N is a positive integer greater than 3;
  • a conversion module configured to convert the face image of the intermediate frame among the face images of the consecutive N frames from a first color space to a second color space, wherein, when N is odd, the face image of the intermediate frame is the face image of the (N+1)/2-th frame, and, when N is even, the face image of the intermediate frame is the face image of the N/2-th frame or the (N/2+1)-th frame;
  • a texture feature extraction module configured to extract a texture feature of the face image converted into an intermediate frame of the second color space
  • a dynamic mode feature extraction module configured to extract a dynamic mode feature of the continuous N frame face image
  • a fusion module configured to fuse the texture feature with the dynamic mode feature to obtain a merged fusion feature
  • a probability acquisition module configured to input the fusion feature to the multi-layer perceptron model, to obtain a predicted probability value of the living body tag and a predicted probability value of the non-living tag;
  • a determining module configured to determine, when the predicted probability value of the living body label is greater than a predicted probability value of the non-living label, a face image of the continuous N frame is a living face image;
  • the determining module is further configured to determine that the face images of the consecutive N frames are non-living face images when the predicted probability value of the living body label is smaller than the predicted probability value of the non-living label.
  • a third aspect of the present invention provides a computer apparatus, comprising: a processor, wherein the processor is configured to implement a living body detecting method according to any of the above embodiments when executing a computer program stored in a memory.
  • a fourth aspect of the present invention provides a computer readable storage medium having stored thereon a computer program, the computer program being executed by a processor to implement the living body detecting method according to any of the above embodiments.
  • In the present invention, the face images of the consecutive N frames are detected using their fusion feature and the trained multi-layer perceptron model, thereby determining whether the face images of the consecutive N frames are living face images or non-living face images. Since the fusion feature includes a texture feature and a dynamic mode feature, the recognition accuracy and security of the living body detection can be improved.
  • FIG. 1 is a flowchart of an implementation of a living body detecting method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of an implementation of step S105 in the living body detecting method according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of an implementation of step S101 in the living body detecting method according to an embodiment of the present invention;
  • FIG. 4 is a functional block diagram of a living body detection system according to an embodiment of the present invention.
  • FIG. 5 is a structural block diagram of a dynamic mode feature extraction module 105 in a living body detection system according to an embodiment of the present invention
  • FIG. 6 is a structural block diagram of a training module 101 in a living body detection system according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a computer apparatus according to an embodiment of the present invention.
  • FIG. 1 shows an implementation flow of a living body detecting method according to an embodiment of the present invention.
  • the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • the parts related to the embodiment of the present invention are shown, which are detailed as follows:
  • the living body detection method includes:
  • step S101 the multi-layer perceptron is trained by using the preset training set to determine the multi-layer perceptron model.
  • the preset training set is a preset training set, and the training set includes a large number of face images for training the multi-layer perceptron model.
  • the multi-layer perceptron is a feedforward artificial neural network (English name: FF-ANN) model that maps multiple input data sets onto a single output data set.
  • The multi-layer perceptron is trained using the large number of face images contained in the preset training set, and the trained multi-layer perceptron model is determined, so that the multi-layer perceptron model can subsequently be used to detect a face image and determine whether it is a living face image or a non-living face image.
  • Step S102 Acquire a face image of consecutive N frames to be detected, where N is a positive integer greater than 3.
  • For example, the camera of a mobile phone, or the image acquisition device (such as a camera) of an access control system or a face anti-counterfeiting system, may acquire face images of N consecutive frames over a certain period of time; alternatively, a scene image may be captured by a monocular camera, a face detection algorithm may detect the face in real time, and the face images of the consecutive frames may be cropped out.
  • N is a positive integer greater than 3.
  • a face image of 60 consecutive frames in a period of 1-2 seconds is acquired by the camera of the face security system to subsequently detect whether the face image of the continuous 60 frames is a living face image or a non-living face image.
  • The living body detecting method further includes: performing grayscale processing and/or normalization processing on the face images of the consecutive N frames.
  • the acquired face image may be preprocessed.
  • the acquired face image is subjected to grayscale processing or normalization processing.
  • the acquired face image can be smoothed, filtered, segmented, etc., and will not be described in detail here.
  • the face images of the consecutive N frames may be normalized according to face key point detection and face alignment.
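  • A minimal sketch of this optional pre-processing is shown below; it assumes BGR frames as delivered by an OpenCV capture and a fixed output size, and it omits the key-point-based face alignment mentioned above:

```python
import cv2
import numpy as np

def preprocess(frame, size=(64, 64)):
    """Grayscale conversion plus a simple size/intensity normalization."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # grayscale processing
    return cv2.resize(gray, size).astype(np.float32) / 255.0  # normalization
```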
  • Step S103 converting a face image of an intermediate frame in the face image of the consecutive N frames from a first color space to a second color space, wherein when N is an odd number, the face image of the middle frame is The face image of the (N+1)/2th frame, when N is an even number, the face image of the intermediate frame is the face image of the N/2th frame or the N/2+1th frame.
  • the face image of the intermediate frame is converted from the first color space to the second color space.
  • When N is odd, the face image of the intermediate frame is the face image of the (N+1)/2-th frame; for example, assuming N is 61, the 31st frame face image is the face image of the intermediate frame. When N is even, assuming the above N is 60, the face image of the intermediate frame is the 30th frame face image or the 31st frame face image.
  • The RGB color space is the most commonly used color space. Compared with the RGB color space, the Lab color space better simulates human color perception and highlights the opponent color channels (i.e., the green-red color channel a and the blue-yellow color channel b).
  • the first color space is an RGB color space
  • the second color space is a Lab color space.
  • the first color space is an RGB color space.
  • the RGB color space includes a red channel R, a green channel G, and a blue channel B.
  • The Lab color space includes a luminance channel L and two opponent color channels: a green-red color channel a and a blue-yellow color channel b.
  • The luminance channel L represents the brightness of a pixel, with a value range of [0, 100]; the green-red color channel a represents the range from red to green, with values in [127, -128]; and the blue-yellow color channel b represents the range from yellow to blue, with values in [127, -128].
  • the face image of the intermediate frame in the face image of the consecutive N frames may be converted from the RGB color space to the Lab color space according to the following transformation:
  • L, a, and b are the values of the Lab color space luminance channel, the green-red color channel, and the blue-yellow color channel, respectively
  • R, G, and B are the values of the RGB color space red channel, the green channel, and the blue channel, respectively.
  • Converting the face image of the intermediate frame among the face images of the consecutive N frames from the RGB color space to the Lab color space is not limited to the above transformation; the face image may also first be converted from the RGB color space to the XYZ color space and then from the XYZ color space to the Lab color space, which will not be described in detail here.
  • The XYZ color space is a color system established by the International Commission on Illumination (CIE) on the basis of the RGB color space, using three imaginary primary colors X, Y and Z derived from a large number of measurements and statistics of normal human vision; it will not be described in more detail here.
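  • As an illustrative sketch only (not the patent's own transform), the conversion of step S103 can be reproduced with an off-the-shelf RGB-to-Lab routine such as skimage.color.rgb2lab, which passes through the XYZ space internally as described above; 8-bit RGB input frames and the frame-indexing convention are assumptions:

```python
import numpy as np
from skimage import color

def middle_frame_to_lab(frames):
    """Pick the intermediate frame of N consecutive RGB face images and
    convert it to the Lab color space (L in [0, 100], a/b roughly [-128, 127])."""
    N = len(frames)
    mid = (N + 1) // 2 - 1 if N % 2 == 1 else N // 2 - 1      # 0-based index of frame (N+1)/2 or N/2
    rgb = np.asarray(frames[mid], dtype=np.float64) / 255.0   # assume 8-bit RGB input
    return color.rgb2lab(rgb)
```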
  • Step S104 extracting the texture feature of the face image converted into the intermediate frame of the second color space.
  • A texture feature is a visual feature that reflects homogeneity phenomena in an image; it is represented by the gray-level distribution of a pixel and its surrounding spatial neighborhood, i.e., by local texture information. Texture features describe the spatial color distribution and intensity distribution of an image or of a small region of it. After the color space conversion has been performed on the face image of the intermediate frame, the texture feature of the face image of the intermediate frame in the second color space is extracted, i.e., the texture feature of the face image of the intermediate frame converted into the Lab color space is extracted.
  • Preferably, step S104 of extracting the texture feature of the face image of the intermediate frame converted into the second color space includes: extracting the local phase quantization texture feature of a preset neighborhood of the face image of the intermediate frame converted into the Lab color space.
  • In the spatial domain, the local binary pattern (LBP) texture feature of the face image (hereinafter referred to as the LBP texture feature) may be extracted; in the frequency domain, the local phase quantization (LPQ) texture feature of the face image (hereinafter referred to as the LPQ texture feature) may be extracted.
  • In the embodiment of the present invention, the LPQ texture feature of a preset neighborhood is extracted, in the frequency domain, from the face image of the intermediate frame converted into the Lab color space.
  • the LBP texture feature of the face image converted to the intermediate frame of the Lab color space in the airspace may also be extracted, and details are not described herein again.
  • the preset neighborhood is a preset neighborhood, and is not particularly limited herein.
  • the preset neighborhood is a 3*3 neighborhood or a 5*5 neighborhood or a 7*7 neighborhood.
  • Preferably, multi-level LPQ texture features of preset neighborhoods are extracted from the face image of the intermediate frame converted into the Lab color space; for example, the LPQ texture features of the 3*3 neighborhood, the 5*5 neighborhood and the 7*7 neighborhood are extracted and then spliced together (a sketch of this multi-level extraction is given below).
  • Since an LPQ texture feature is a texture feature represented in vector form, fusing the multi-level LPQ texture features means splicing the vectors of the 3*3, 5*5 and 7*7 neighborhood LPQ texture features into a single vector whose dimension is the sum of the three; the fused multi-level LPQ texture feature is then used as the texture feature of the face image of the intermediate frame in the second color space.
  • The splicing order and the positions of the 3*3, 5*5 and 7*7 neighborhood vectors are not particularly limited and can be combined and arranged freely; for example, the LPQ texture features may be spliced in the order 3*3, 5*5, 7*7, or in the order 5*5, 3*3, 7*7.
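  • The following is a minimal sketch of such a multi-level LPQ descriptor, assuming a single-channel (e.g. luminance) image; it uses a simplified STFT-based LPQ without the usual decorrelation step, and the helper names lpq_histogram and multilevel_lpq are illustrative, not taken from the patent:

```python
import numpy as np
from scipy.signal import convolve2d

def lpq_histogram(gray, win_size=3):
    """Simplified LPQ: quantize the signs of the real/imaginary parts of four
    low-frequency local Fourier coefficients into an 8-bit code per pixel,
    then histogram the codes over the image (whitening step omitted)."""
    x = np.arange(win_size) - (win_size - 1) / 2.0
    f = 1.0 / win_size
    w0 = np.ones_like(x) + 0j            # zero frequency
    w1 = np.exp(-2j * np.pi * f * x)     # lowest non-zero frequency
    pairs = [(w1, w0), (w0, w1), (w1, w1), (np.conj(w1), w1)]  # (fx, fy) pairs
    code, bit = None, 0
    for wx, wy in pairs:
        resp = convolve2d(convolve2d(gray, wx.reshape(1, -1), mode='valid'),
                          wy.reshape(-1, 1), mode='valid')
        for part in (resp.real, resp.imag):
            b = (part > 0).astype(np.int32) << bit
            code = b if code is None else code + b
            bit += 1
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

def multilevel_lpq(gray):
    """Concatenate LPQ histograms over 3*3, 5*5 and 7*7 neighborhoods."""
    return np.concatenate([lpq_histogram(gray, w) for w in (3, 5, 7)])
```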
  • Step S105 extracting dynamic mode features of the continuous N frame face images.
  • Since the dynamic mode features of consecutive frames capture the dynamic motion information between consecutive frames, extracting them during living body detection makes image attacks easier to detect and thus improves the accuracy and security of the detection. Therefore, in the embodiment of the present invention, the dynamic mode features of the consecutive N frames of face images are extracted.
  • the dynamic mode features are also represented in the form of vectors.
  • Step S106 the texture feature is merged with the dynamic mode feature to obtain the merged fusion feature.
  • the texture feature may be merged with the dynamic mode feature to obtain the merged fusion feature.
  • the fusion feature simultaneously includes rich texture features and dynamic motion information of the face images of the consecutive N frames, and thus, the accuracy of the living body detection can be improved.
  • Preferably, in step S106 the texture feature is fused with the dynamic mode feature, and obtaining the fused fusion feature includes: fusing the multi-level local phase quantization texture features of the preset neighborhoods with the dynamic mode feature to obtain the fused fusion feature.
  • the multi-level LPQ texture feature and the dynamic mode feature are all represented in the form of a vector, and the multi-level LPQ texture feature is spliced with the dynamic mode feature to obtain a merged fusion feature.
  • The multi-level LPQ texture features may be placed first and the dynamic mode feature spliced after them, or the dynamic mode feature may be placed first and the multi-level LPQ texture features spliced after it.
  • Step S107 input the fusion feature to the multi-layer perceptron model, and obtain a predicted probability value of the living body tag and a predicted probability value of the non-living tag.
  • the fusion feature is input to the trained multi-layer perceptron model to perform feature mapping and normalization on the fusion feature, and obtain a prediction probability value of the corresponding living tag and a prediction probability of the non-living tag. value.
  • The predicted probability value of the living body label indicates the predicted probability that the face image to be detected is a living face image, and the predicted probability value of the non-living label indicates the predicted probability that the face image to be detected is a non-living face image.
  • When the predicted probability value of the living body label is greater than the predicted probability value of the non-living label, step S108 is performed to determine that the face images of the consecutive N frames are living face images; when the predicted probability value of the non-living label is greater than the predicted probability value of the living body label, step S109 is performed to determine that the face images of the consecutive N frames are non-living face images (a decision sketch is given below).
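  • A minimal sketch of steps S106 to S109, assuming the texture and dynamic mode features are already available as 1-D numpy arrays; scikit-learn's MLPClassifier is used purely as a stand-in for the patent's trained two-fully-connected-layer perceptron, and the label strings 'live' and 'spoof' are illustrative:

```python
import numpy as np

def classify_liveness(texture_feature, dynamic_mode_feature, mlp_model):
    """Fuse the two feature vectors, obtain the two label probabilities from a
    trained classifier, and keep the label with the larger probability."""
    fused = np.concatenate([texture_feature, dynamic_mode_feature])   # step S106
    proba = mlp_model.predict_proba(fused.reshape(1, -1))[0]          # step S107
    p = dict(zip(mlp_model.classes_, proba))
    return 'live' if p['live'] > p['spoof'] else 'spoof'              # steps S108/S109
```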
  • step S105 extracting dynamic mode features of the continuous N frame face image includes:
  • The face images of the consecutive N frames contain a plurality of dynamic mode features, and the dynamic mode feature with the largest energy carries the most dynamic structural information and the richest texture information between consecutive frames. Therefore, in order to improve the efficiency of the living body detection, in the embodiment of the present invention the dynamic mode feature with the largest energy among the dynamic mode features of the face images of the consecutive N frames is extracted.
  • In the embodiment of the present invention, the trained multi-layer perceptron model is used to detect the face images of the consecutive N frames according to their fusion feature, thereby determining whether the face images of the consecutive N frames are living face images or non-living face images. Since the fusion feature in the embodiment of the present invention includes both the texture feature and the dynamic mode feature of the face images of the consecutive N frames, the recognition accuracy and security of the living body detection can be improved.
  • FIG. 2 shows the implementation flow of extracting, in step S105 of the living body detecting method provided by the embodiment of the present invention, the dynamic mode feature with the largest energy among the dynamic mode features of the consecutive N frames of face images. The order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted. For convenience of description, only the parts related to the embodiment of the present invention are shown, detailed as follows:
  • Step S1051: representing the m*n gray value data contained in a face image as a column vector of size (m*n)*1, and obtaining a first data matrix composed of the N-1 column vectors corresponding to the first N-1 frames of face images and a second data matrix composed of the N-1 column vectors corresponding to the last N-1 frames of face images, where m and n are positive integers.
  • The m*n gray value data contained in each of the consecutive N frames of face images is represented by a column vector of size (m*n)*1, where m and n are positive integers; that is, the r-th frame face image is represented as a column vector p_r of size (m*n)*1, where r is a positive integer less than or equal to N. The column vectors corresponding to the first N-1 frames of face images are then assembled in order into the first data matrix P1, and the column vectors corresponding to the last N-1 frames of face images are assembled in order into the second data matrix P2; that is, the first data matrix P1 and the second data matrix P2 are obtained.
  • Step S1052: obtaining an adjoint matrix H of a linear mapping matrix A according to the first data matrix P1 and the second data matrix P2, wherein the linear mapping matrix A is the matrix obtained by multiplying the first data matrix P1 by the inverse matrix of the second data matrix P2.
  • the linear mapping matrix A includes global visual dynamic information in the face image of the consecutive N frames, and the dynamic mode feature of the face image of the consecutive N frames can be obtained through the linear mapping matrix A.
  • step S1052 obtaining the adjoint matrix of the linear mapping matrix according to the first data matrix and the second data matrix includes:
  • the first data matrix is triangulated, and an upper triangular matrix and a lower triangular matrix of the first data matrix are respectively obtained.
  • In the embodiment of the present invention, the adjoint matrix H of the linear mapping matrix A is solved by means of triangular decomposition.
  • Triangular decomposition is a kind of matrix decomposition that factorizes a matrix into the product of a lower triangular matrix and an upper triangular matrix.
  • When acquiring the adjoint matrix H of the linear mapping matrix A, other matrix decomposition methods may also be used, for example orthogonal-triangular decomposition (i.e., QR decomposition) or singular value decomposition, which will not be described in detail here.
  • Further, the inverse matrix U⁻¹ of the upper triangular matrix U is obtained, and the pseudo-inverse matrix L⁺ of the lower triangular matrix L is obtained from the lower triangular matrix L.
  • the pseudo-inverse matrix is a generalized form of the inverse matrix, also called the generalized inverse matrix.
  • the matrix X is referred to as a pseudo-inverse matrix of the matrix K.
  • The adjoint matrix H of the linear mapping matrix A is obtained by multiplying the inverse matrix U⁻¹ of the upper triangular matrix U, the pseudo-inverse matrix L⁺ of the lower triangular matrix L, and the second data matrix P2, i.e., H = U⁻¹ · L⁺ · P2.
  • Step S1053: obtaining the eigenvectors E_vec and eigenvalues E_val of the adjoint matrix H by eigenvalue decomposition.
  • Eigenvalue decomposition, also known as spectral decomposition, is a method of decomposing a matrix into a product of matrices expressed in terms of its eigenvalues and eigenvectors. Typically, a matrix has multiple eigenvalues and eigenvectors. After the adjoint matrix H is obtained, its eigenvectors E_vec and eigenvalues E_val can be obtained by eigenvalue decomposition.
  • Step S1054: determining the eigenvector E_vec(K) corresponding to the eigenvalue E_val(K) with the largest absolute value among the eigenvalues E_val.
  • Since the adjoint matrix H includes a plurality of eigenvalues, and the eigenvalue with the largest absolute value corresponds to the dynamic mode feature with the largest energy among the dynamic mode features, the absolute values of the eigenvalues E_val of the adjoint matrix H are calculated and compared, and the eigenvector corresponding to the eigenvalue with the largest absolute value is determined.
  • Specifically, the index positions of the eigenvalues E_val of the adjoint matrix may be recorded, and each eigenvalue associated with its corresponding eigenvector. Assuming that the eigenvalue with the largest absolute value is the eigenvalue E_val(K) at index position K, the eigenvector E_vec(K) corresponding to the eigenvalue E_val(K) at index position K is thereby determined.
  • Step S1055: multiplying the first data matrix P1 by the eigenvector E_vec(K) corresponding to the eigenvalue E_val(K) with the largest absolute value, and taking the absolute value of the product, to obtain the dynamic mode feature with the largest energy among the dynamic mode features of the face images of the consecutive N frames.
  • That is, the product of the first data matrix P1 and the eigenvector corresponding to the eigenvalue E_val(K) with the largest absolute value is computed, and its absolute value is taken.
  • In the embodiment of the present invention, the adjoint matrix H of the linear mapping matrix A is obtained by triangular decomposition, the eigenvalues and eigenvectors of the adjoint matrix H are obtained by eigenvalue decomposition, the eigenvector corresponding to the eigenvalue with the largest absolute value is determined, and from it the dynamic mode feature with the largest energy among the dynamic mode features of the face images of the consecutive N frames is obtained (see the sketch below).
  • In this way, the amount of matrix computation can be reduced and the matrix operations simplified; therefore, the embodiment of the present invention can improve the efficiency of the living body detection.
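  • A minimal sketch of steps S1051 to S1055 is given below, assuming N consecutive grayscale frames of identical size m*n; scipy's LU routine is used for the triangular decomposition, with the permuted-L form so that the pivoting permutation is absorbed into the pseudo-inverse (the patent's own decomposition may differ in such details), and U is assumed to be invertible:

```python
import numpy as np
from scipy.linalg import lu, pinv

def dominant_dynamic_mode(frames):
    """Largest-energy dynamic mode feature of N consecutive m x n grayscale frames."""
    N = len(frames)
    cols = [np.asarray(f, dtype=np.float64).reshape(-1, 1) for f in frames]
    P1 = np.hstack(cols[:N - 1])          # first N-1 frames, shape (m*n, N-1)
    P2 = np.hstack(cols[1:])              # last  N-1 frames, shape (m*n, N-1)

    PL, U = lu(P1, permute_l=True)        # triangular decomposition: P1 = PL @ U
    H = np.linalg.inv(U) @ pinv(PL) @ P2  # adjoint matrix of the linear mapping

    e_val, e_vec = np.linalg.eig(H)       # eigenvalue decomposition of H
    k = np.argmax(np.abs(e_val))          # index K of the eigenvalue of largest magnitude
    return np.abs(P1 @ e_vec[:, k])       # |P1 * E_vec(K)|: largest-energy dynamic mode
```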
  • FIG. 3 shows an implementation flow of step S101 in the living body detecting method according to the embodiment of the present invention.
  • the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • the parts related to the embodiment of the present invention are shown, which are detailed as follows:
  • In order to improve the recognition accuracy and security of the living body detection, as shown in FIG. 3, the multi-layer perceptron includes at least a first fully connected layer and a second fully connected layer, and step S101 of training the multi-layer perceptron with the preset training set to determine the multi-layer perceptron model includes:
  • Step S1011 The first sample and the second sample are randomly extracted from the preset training set, wherein each sample in the preset training set includes a face image of at least consecutive N frames.
  • the preset training set is a preset training set, which includes a large number of face images, that is, samples, and each sample in the preset training set includes a face image of at least consecutive N frames.
  • the first sample and the second sample are randomly extracted from the preset training set for training.
  • Step S1012 Extract the fusion feature of the first sample and the fusion feature of the second sample, respectively.
  • For details, reference may be made to steps S102 to S106 above; they will not be described again here.
  • Step S1013 Input the fusion feature of the first sample and the fusion feature of the second sample into the multi-layer perceptron respectively, and acquire a Softmax loss of the first sample and a Softmax loss of the second sample.
  • The multi-layer perceptron further includes a Softmax layer. The preset training set further includes the label category of each sample; the label categories include a living body label and a non-living label, and the label category of every sample in the preset training set is known and determined before training.
  • The output of the second fully connected layer is fed into the Softmax layer of the multi-layer perceptron, which is mainly used to normalize the input features; the normalization can be performed according to the following formula:
  • where f(z_i) and f(z_j) respectively represent the predicted probabilities of the labels of the first sample and the second sample after passing through the Softmax layer of the multi-layer perceptron, z_i and z_j respectively represent the outputs of the first sample and the second sample after passing through the second fully connected layer of the multi-layer perceptron, i and j denote the respective label categories, and k denotes the number of label categories; since there are only living body labels and non-living labels, k is 2 in the embodiment of the present invention.
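  • The normalization formula itself is not reproduced in this text; the standard Softmax normalization consistent with the symbol definitions above would be, in LaTeX notation,

    f(z_i) = \frac{e^{z_i}}{\sum_{t=1}^{k} e^{z_t}}

    so that the k outputs of the second fully connected layer (here k = 2) are mapped to probabilities that sum to 1.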
  • the Softmax loss of the first sample and the Softmax loss of the second sample can be determined. It is assumed that the preset training set contains 2M samples, and each of the 2M samples includes a face image of at least consecutive N frames, where M is a positive integer. Specifically, the Softmax loss of the first sample and the Softmax loss of the second sample may be determined according to the following formula:
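  • The loss formula is likewise not reproduced here; assuming the usual cross-entropy form of the Softmax loss, one consistent choice would be

    L_s(i) = -\log f\big(z_{y_i}\big),

    where y_i is the known label category of the first sample (and analogously L_s(j) for the second sample). This form is an assumption for illustration, not the patent's literal formula.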
  • Step S1014 determining a contrast loss of the first sample and the second sample.
  • Contrastive loss expresses well the degree of matching between paired samples; it can also be used to train feature-extraction models and is mainly applied in dimensionality reduction.
  • the contrast loss of the first sample and the second sample may be determined according to the following formula:
  • where L_c represents the contrastive loss of the first sample and the second sample, M represents the number of batch pairs in the preset training set, y_n is 1 when the first sample and the second sample belong to the same label category and 0 when they belong to different label categories (i.e., y_n indicates whether the first sample and the second sample match), and d represents the Euclidean distance between the first sample and the second sample.
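  • The contrastive loss formula is not reproduced in this text; the commonly used margin-based form that matches the symbol definitions above is

    L_c = \frac{1}{2M} \sum_{n=1}^{M} \Big[ y_n d^2 + (1 - y_n) \max(\mathrm{margin} - d, 0)^2 \Big],

    where margin is a hyperparameter not specified here; this form is an assumption consistent with the definitions of M, y_n and d.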
  • Step S1015 determining a total loss by the Softmax loss of the first sample, the Softmax loss of the second sample, and the contrast loss.
  • L = L_s(i) + L_s(j) + weight * L_c;
  • where L is the total loss of the first sample and the second sample, L_s(i) and L_s(j) are their respective Softmax losses, and weight is a preset weight parameter; preferably, weight is 0.003.
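  • A per-pair sketch of steps S1013 to S1015 under the assumptions above (standard cross-entropy Softmax loss, standard margin-based contrastive loss without the 1/(2M) batch averaging, and an illustrative label encoding of 0 = living, 1 = non-living):

```python
import numpy as np

def softmax(z):
    z = z - z.max()                       # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def total_loss(z_i, z_j, label_i, label_j, d, weight=0.003, margin=1.0):
    """z_i, z_j: length-2 outputs of the second fully connected layer for the two
    samples; label_i, label_j: their known label categories; d: Euclidean
    distance between the two samples' features."""
    loss_i = -np.log(softmax(z_i)[label_i])           # Softmax loss of the first sample
    loss_j = -np.log(softmax(z_j)[label_j])           # Softmax loss of the second sample
    y = 1.0 if label_i == label_j else 0.0            # y_n: 1 if the pair matches
    l_c = 0.5 * (y * d**2 + (1.0 - y) * max(margin - d, 0.0)**2)
    return loss_i + loss_j + weight * l_c             # L = L_s(i) + L_s(j) + weight * L_c
```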
  • When the total loss does not satisfy the preset condition of loss convergence, step S1016 is performed: the parameters of the first fully connected layer and the parameters of the second fully connected layer in the multi-layer perceptron are adjusted through back-propagation using a stochastic gradient descent method, and the process returns to step S1011, so that steps S1011 through S1015 are performed again.
  • the stochastic gradient descent is mainly used to perform weight update in the neural network model, and to update and adjust the parameters of the model in one direction to minimize the loss function.
  • Back-propagation computes, during forward propagation, the products of the input signals and their corresponding weights and applies the activation function to the sum of these products; during backward propagation it propagates the resulting error back through the network model, and stochastic gradient descent updates the weights by computing the gradient of the error function with respect to the weight parameters and moving them in the direction opposite to the gradient of the loss function.
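  • As a minimal illustration of the update rule described above (the learning rate lr is an assumed hyperparameter, not a value from this text):

```python
def sgd_step(weights, grads, lr=0.01):
    """One stochastic-gradient-descent update: each parameter moves in the
    direction opposite to the gradient of the loss with respect to it."""
    return [w - lr * g for w, g in zip(weights, grads)]
```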
  • The preset condition of loss convergence is a loss convergence condition that is set in advance. Preferably, the preset condition includes: the number of times the total loss has been calculated equals a preset count threshold, or the total loss is less than or equal to a preset loss threshold. That is, the number of calculations of the total loss can itself be used as the loss convergence condition. The preset count threshold and the preset loss threshold are thresholds that are set in advance and are not particularly limited herein.
  • step S1017 When the total loss satisfies the preset condition of the loss convergence, step S1017 is performed, and the parameter of the first fully connected layer and the parameter of the second fully connected layer of the last calculation process before the preset condition satisfying the loss convergence are taken as The parameters of the first fully connected layer and the parameters of the second fully connected layer of the multilayer perceptron model are determined to determine the multilayer perceptron model.
  • That is, the parameters of the first fully connected layer and of the second fully connected layer from the last calculation before the preset condition of loss convergence is satisfied are used as the parameters of the first fully connected layer and of the second fully connected layer of the multi-layer perceptron model, thereby determining the trained multi-layer perceptron model.
  • Preferably, step S1012 of separately extracting the fusion feature of the first sample and the fusion feature of the second sample includes the following. For extracting the local phase quantization texture feature of the intermediate frame of the first sample or the second sample, reference may be made to the content related to step S104 above; for extracting the dynamic mode feature with the largest energy of the first sample or the second sample, reference may be made to the content related to step S105 above; and for fusing the texture feature of the intermediate frame of the first sample or the second sample with the dynamic mode feature of the first sample or the second sample, reference may be made to step S106 above; details are not repeated here.
  • In the embodiment of the present invention, the multi-layer perceptron is trained using the fusion features of the samples, the parameters of its fully connected layers are adjusted through back-propagation using a stochastic gradient descent method, and, when the total loss satisfies the preset condition of loss convergence, the trained multi-layer perceptron model is determined.
  • the fusion feature of the sample in the embodiment of the present invention includes the multi-level texture feature of the sample and the dynamic mode feature with the largest energy, the recognition accuracy and safety of the living body detection can be improved.
  • Moreover, compared with other gradient-based methods, the stochastic gradient descent method computes faster and converges quickly; therefore, the embodiment of the present invention can improve the efficiency of the living body detection.
  • FIG 4 shows the functional modules of the living body detection system provided by the embodiments of the present invention. For the convenience of description, only the parts related to the embodiments of the present invention are shown, which are as follows:
  • each module included in the living body detection system 10 is used to perform various steps in the corresponding embodiment of FIG. 1.
  • The living body detection system 10 includes a training module 101, an acquisition module 102, a conversion module 103, a texture feature extraction module 104, a dynamic mode feature extraction module 105, a fusion module 106, a probability acquisition module 107, and a determining module 108.
  • the training module 101 is configured to train a multi-layer perceptron by using a preset training set to determine a multi-layer perceptron model.
  • the acquiring module 102 is configured to acquire a face image of consecutive N frames to be detected, where the N is a positive integer greater than 3.
  • The conversion module 103 is configured to convert the face image of the intermediate frame among the face images of the consecutive N frames from a first color space to a second color space, where, when N is odd, the face image of the intermediate frame is the face image of the (N+1)/2-th frame, and, when N is even, the face image of the intermediate frame is the face image of the N/2-th frame or the (N/2+1)-th frame.
  • the texture feature extraction module 104 is configured to extract a texture feature of the face image converted into an intermediate frame of the second color space.
  • the dynamic mode feature extraction module 105 is configured to extract a dynamic mode feature of the continuous N frame face image.
  • the fusion module 106 is configured to fuse the texture feature with the dynamic mode feature to obtain the merged fusion feature.
  • the probability acquisition module 107 is configured to input the fusion feature to the multi-layer perceptron model, and obtain a predicted probability value of the living body tag and a predicted probability value of the non-living tag.
  • The determining module 108 is configured to determine that the face images of the consecutive N frames are living face images when the predicted probability value of the living body label is greater than the predicted probability value of the non-living label, and is further configured to determine that the face images of the consecutive N frames are non-living face images when the predicted probability value of the living body label is smaller than the predicted probability value of the non-living label.
  • In the embodiment of the present invention, the trained multi-layer perceptron model detects the face images of the consecutive N frames on the basis of their fusion feature, and the determining module 108 then determines whether the face images of the consecutive N frames are living face images or non-living face images. Since the fusion feature in the embodiment of the present invention includes both the texture feature and the dynamic mode feature of the face images of the consecutive N frames, the recognition accuracy and security of the living body detection can be improved.
  • FIG. 5 is a structural block diagram of a dynamic mode feature extraction module 105 in a living body detection system according to an embodiment of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown, which are as follows:
  • each unit included in the dynamic mode feature extraction module 105 is used to perform various steps in the corresponding embodiment of FIG. 2.
  • the dynamic mode feature extraction module 105 includes a data matrix acquisition unit 1051, an adjoint matrix acquisition unit 1052, an eigenvalue decomposition unit 1053, a feature vector determination unit 1054, and a dynamic mode feature acquisition unit 1055.
  • The data matrix obtaining unit 1051 is configured to represent the m*n gray value data contained in a face image as a column vector of size (m*n)*1, and to obtain a first data matrix composed of the N-1 column vectors corresponding to the first N-1 frames of face images and a second data matrix composed of the N-1 column vectors corresponding to the last N-1 frames of face images, where m and n are positive integers.
  • The adjoint matrix obtaining unit 1052 is configured to obtain an adjoint matrix of a linear mapping matrix according to the first data matrix and the second data matrix, where the linear mapping matrix is the matrix obtained by multiplying the first data matrix by the inverse matrix of the second data matrix.
  • The eigenvalue decomposition unit 1053 is configured to obtain the eigenvectors and eigenvalues of the adjoint matrix by eigenvalue decomposition.
  • The feature vector determination unit 1054 is configured to determine the eigenvector corresponding to the eigenvalue with the largest absolute value among the eigenvalues.
  • The dynamic mode feature acquiring unit 1055 is configured to multiply the first data matrix by the eigenvector corresponding to the eigenvalue with the largest absolute value, and to take the absolute value of the result, thereby obtaining the dynamic mode feature with the largest energy among the dynamic mode features of the face images of the consecutive N frames.
  • In the embodiment of the present invention, the adjoint matrix of the linear mapping matrix is first acquired by the adjoint matrix obtaining unit 1052, the eigenvalue decomposition unit 1053 obtains the eigenvalues and eigenvectors of the adjoint matrix by eigenvalue decomposition, the eigenvector corresponding to the eigenvalue with the largest absolute value is determined, and from it the dynamic mode feature of the face images of the consecutive N frames is obtained. Since the dynamic mode feature in the embodiment of the present invention is the one with the largest energy, the embodiment of the invention can further improve the recognition accuracy and security of the living body detection.
  • FIG. 6 is a structural block diagram of a training module 101 in a living body detection system according to an embodiment of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown, which are described in detail as follows:
  • each unit included in the training module 101 is used to perform various steps in the corresponding embodiment of FIG. 3 .
  • the training module 101 includes: a sample extraction unit 1011, a fusion feature extraction unit 1012, a Softmax loss determination unit 1013, a comparison loss determination unit 1014, a total loss determination unit 1015, a parameter adjustment unit 1016, and The model determination unit 1017.
  • the sample extracting unit 1011 is configured to randomly extract a first sample and a second sample from a preset training set, where each sample in the preset training set includes a face image of at least consecutive N frames.
  • the fusion feature extraction unit 1012 is configured to separately extract the fusion feature of the first sample and the fusion feature of the second sample.
  • the Softmax loss determining unit 1013 is configured to input the fusion feature of the first sample and the fusion feature of the second sample into the multi-layer perceptron respectively, and acquire a Softmax loss of the first sample and the first Two samples of Softmax loss.
  • the comparison loss determining unit 1014 is configured to determine a contrast loss of the first sample and the second sample.
  • the total loss determining unit 1015 is configured to determine a total loss by a Softmax loss of the first sample, a Softmax loss of the second sample, and the contrast loss.
  • the parameter adjustment unit 1016 is configured to adjust a parameter of the first fully connected layer in the multi-layer perceptron by a process of back propagation by using a stochastic gradient descent method when the total loss does not satisfy a preset condition of loss convergence And parameters of the second fully connected layer.
  • The model determining unit 1017 is configured to, when the total loss satisfies the preset condition of loss convergence, use the parameters of the first fully connected layer and of the second fully connected layer from the last calculation before the condition is satisfied as the parameters of the first fully connected layer and of the second fully connected layer of the multi-layer perceptron model, thereby determining the multi-layer perceptron model.
  • In the embodiment of the present invention, the multi-layer perceptron is trained using the fusion features of the samples: the parameter adjustment unit 1016 adjusts the parameters of the fully connected layers of the multi-layer perceptron through back-propagation using a stochastic gradient descent method, and the model determining unit 1017 determines the trained multi-layer perceptron model when the total loss satisfies the preset condition of loss convergence.
  • the fusion feature of the sample in the embodiment of the present invention includes the multi-level texture feature of the sample and the dynamic mode feature with the largest energy, the recognition accuracy and safety of the living body detection can be improved.
  • Moreover, compared with other gradient-based methods, the stochastic gradient descent method computes faster and converges quickly; therefore, the embodiment of the present invention can improve the efficiency of the living body detection.
  • FIG. 7 is a schematic structural diagram of a computer device 1 according to a preferred embodiment of a method for detecting a living body according to an embodiment of the present invention.
  • the computer device 1 includes a memory 11, a processor 12, and an input/output device 13.
  • The computer device 1 is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
  • the computer device 1 can be any electronic product that can interact with a user, such as a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), a game machine, an interactive network television. (Internet Protocol Television, IPTV), smart wearable devices, etc.
  • the computer device 1 may be a server, including but not limited to a single network server, a server group composed of a plurality of network servers, or a cloud computing-based cloud composed of a large number of hosts or network servers, wherein the cloud Computation is a type of distributed computing, a super-virtual computer consisting of a cluster of loosely coupled computers.
  • the network in which the computer device 1 is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
  • the memory 11 is used to store programs of the living body detecting method and various data, and realizes high-speed, automatic completion of access of programs or data during the operation of the computer device 1.
  • the memory 11 may be an external storage device and/or an internal storage device of the computer device 1. Further, the memory 11 may be a circuit having a storage function in a physical form, such as a RAM (Random-Access Memory), a FIFO (First In First Out), or the like, or the memory 11 It may be a storage device having a physical form, such as a memory stick, a TF card (Trans-flash Card), or the like.
  • the processor 12 can be a Central Processing Unit (CPU).
  • the CPU is a very large-scale integrated circuit, which is the computing core (Core) and the Control Unit of the computer device 1.
  • the processor 12 can execute an operating system of the computer device 1 and various types of installed applications, program codes, and the like, for example, execute an operating system in each module or unit in the living body detecting system 10, and various types of installed applications and program codes. To achieve a living body detection method.
  • the input/output device 13 is mainly used to implement an input/output function of the computer device 1, such as transceiving input digital or character information, or displaying information input by a user or information provided to a user and various menus of the computer device 1.
  • the modules/units integrated by the computer device 1 can be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the present invention implements all or part of the processes in the foregoing embodiments, and may also be completed by a computer program to instruct related hardware.
  • the computer program may be stored in a computer readable storage medium. The steps of the various method embodiments described above may be implemented when the program is executed by the processor.
  • the computer program comprises computer program code, which may be in the form of source code, object code form, executable file or some intermediate form.
  • the computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM). , random access memory (RAM, Random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction, for example, in some jurisdictions, according to legislation and patent practice, computer readable media Does not include electrical carrier signals and telecommunication signals.
  • The above-described features of the present invention may also be realized by an integrated circuit that controls the functions of the living body detecting method described in any of the above embodiments. That is, the integrated circuit according to the present invention is mounted in the computer device 1 so that the computer device 1 performs the following functions: training the multi-layer perceptron with the preset training set to determine the multi-layer perceptron model, and the remaining steps of the living body detecting method described above. In other words, the functions of the living body detecting method in any of the embodiments can be installed in the computer device 1 by means of the integrated circuit of the present invention, so that the computer device 1 can perform the living body detecting method in any of the embodiments; the functions implemented will not be detailed here.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional module in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software function modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A living body detection method, a computer device and a computer-readable storage medium. The method includes: training a multi-layer perceptron with a preset training set to determine a multi-layer perceptron model (S101); acquiring face images of N consecutive frames to be detected (S102); converting the face image of the intermediate frame of the N consecutive frames from a first color space to a second color space (S103); extracting the texture feature of the converted face image of the intermediate frame and the dynamic mode feature of the N consecutive frames of face images (S104, S105); fusing the texture feature with the dynamic mode feature to obtain a fusion feature (S106); using the multi-layer perceptron model to perform feature mapping on the fusion feature, outputting the mapped features and normalizing them to obtain a predicted probability value of the living body label and a predicted probability value of the non-living label (S107); and then determining whether the face images of the N consecutive frames are living or non-living face images (S108, S109). Since the fusion feature in the living body detection method, computer device and computer-readable storage medium includes texture features and dynamic mode features, the recognition accuracy and security of living body detection can be improved.

Description

活体检测方法、计算机装置及计算机可读存储介质
本申请要求于2017年12月13日提交中国专利局,申请号为201711330349.1、发明名称为“活体检测方法、计算机装置及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明属于人脸防伪领域,尤其涉及一种活体检测方法、计算机装置及计算机可读存储介质。
背景技术
在人脸识别或者人脸防伪系统中,一般需要采用活体检测技术,以防止不法分子利用他人的图像或者视频信息进行攻击。现有的活体检测技术一般分为交互式和非交互式两种方法。交互式的活体检测技术需要用户配合完成相应的动作,例如眨眼、摇头、微笑等,导致用户的体验较差且识别效果不甚理想。非交互式的活体检测技术一般分为基于颜色纹理信息进行检测和基于图像运动信息进行检测两种。其中,基于颜色纹理信息的活体检测技术的基本思想是利用人脸颜色纹理信息进行分类识别,但该方法缺乏人脸动作信息,容易被高清图片或者视频攻击。另外,基于图像运动信息的活体检测技术的基本思想是利用人脸的微运动信息及简单的人脸纹理信息,但该方法缺乏对人脸可判别性特征的深刻提取,也容易被高清图片或者视频信息攻击。由此,导致现有的活体检测系统的识别准确率低且安全性较差。
因此,现有的活体检测系统存在识别准确率低、安全性差的问题。
发明内容
本发明提供一种活体检测方法、计算机装置及计算机可读存储介质,旨在解决现有的活体检测系统存在的识别准确率低、安全性差的问题。
本发明第一方面提供一种活体检测方法,所述活体检测方法包括:
利用预设训练集训练多层感知器,确定多层感知器模型;
获取待检测的连续N帧的人脸图像,其中,所述N为大于3的正整数;
将所述连续N帧的人脸图像中的中间帧的人脸图像由第一颜色空间转换为第二颜色空间,其中,当N为奇数,则所述中间帧的人脸图像为第(N+1)/2帧的人脸图像,当N为偶数,则所述中间帧的人脸图像为第N/2帧或者第N/2+1帧的人脸图像;
提取所述转换为第二颜色空间的中间帧的人脸图像的纹理特征;
提取所述连续N帧人脸图像的动态模式特征;
将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特征;
将所述融合特征输入至所述多层感知器模型,获得活体标签的预测概率值和非活体标签的预测概率值;
当所述活体标签的预测概率值大于所述非活体标签的预测概率值,则确定所述连续N帧的人脸图像为活体人脸图像;
当所述非活体标签的预测概率值大于所述活体标签的预测概率值，则确定所述连续N帧的人脸图像为非活体人脸图像。
在较优的一实施例中,所述第一颜色空间为RGB颜色空间,所述第二颜色空间为Lab颜色空间,所述提取所述转换为第二颜色空间的中间帧的人脸图像的纹理特征包括:
提取所述转换为Lab颜色空间的中间帧的人脸图像的预设邻域的局部相位量化纹理特征。
在较优的一实施例中,所述提取所述转换为Lab颜色空间的中间帧的人脸图像的预设邻域的局部相位量化纹理特征包括:
提取所述转换为Lab颜色空间的中间帧的人脸图像的预设邻域的多级局部相位量化纹理特征;
所述将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特 征包括:
将所述预设邻域的多级局部相位量化纹理特征与所述动态模式特征进行融合,获取融合后的融合特征。
在较优的一实施例中,所述提取所述连续N帧人脸图像的动态模式特征包括:
提取所述连续N帧人脸图像的动态模式特征中能量最大的动态模式特征。
在较优的一实施例中,所述提取所述连续N帧人脸图像的动态模式特征中能量最大的动态模式特征包括:
采用(m*n)*1的列向量表示人脸图像所包含的m*n个灰度值数据,获取由前N-1帧人脸图像所对应的N-1个列向量组成的第一数据矩阵及由后N-1帧人脸图像所对应的N-1个列向量组成的第二数据矩阵,其中,m、n为正整数;
根据所述第一数据矩阵和所述第二数据矩阵获取线性映射矩阵的伴随矩阵,其中,所述线性映射矩阵为所述第一数据矩阵与所述第二数据矩阵的逆矩阵相乘后的矩阵;
通过特征值分解获取所述伴随矩阵的特征向量和特征值;
确定所述特征值中绝对值最大的特征值所对应的特征向量;
将所述第一数据矩阵与所述绝对值最大的特征值所对应的特征向量相乘,并对相乘后的结果取绝对值,获取所述连续N帧的人脸图像的动态模式特征中能量最大的动态模式特征。
在较优的一实施例中,所述根据所述第一数据矩阵和所述第二数据矩阵获取线性映射矩阵的伴随矩阵包括:
对所述第一数据矩阵进行三角分解,并分别获得所述第一数据矩阵的上三角矩阵和下三角矩阵;
获取所述上三角矩阵的逆矩阵以及所述下三角矩阵的伪逆矩阵;
将所述上三角矩阵的逆矩阵、所述下三角矩阵的伪逆矩阵以及所述第二数据矩阵相乘,获取所述线性映射矩阵的伴随矩阵。
在较优的一实施例中,所述多层感知器至少包括第一全连接层和第二全连接层,所述利用预设训练集训练多层感知器,确定多层感知器模型包括:
从预设训练集中随机抽取第一样本和第二样本,其中,所述预设训练集中的每个样本均包含至少连续N帧的人脸图像;
分别提取所述第一样本的融合特征和所述第二样本的融合特征;
将所述第一样本的融合特征和第二样本的融合特征分别输入所述多层感知器,获取所述第一样本的Softmax损失和所述第二样本的Softmax损失;
确定所述第一样本和所述第二样本的对比损失;
通过所述第一样本的Softmax损失、所述第二样本的Softmax损失以及所述对比损失确定总损失;
当所述总损失不满足损失收敛的预设条件,则利用随机梯度下降法通过反向传播的过程调整所述多层感知器中第一全连接层的参数和所述第二全连接层的参数;
重复上述过程,直至所述总损失满足损失收敛的预设条件;
将满足损失收敛的预设条件之前的最后一次迭代过程的第一全连接层的参数和第二全连接层的参数作为所述多层感知器模型的第一全连接层的参数和第二全连接层的参数,确定所述多层感知器模型。
在较优的一实施例中,所述预设条件包括所述总损失的计算次数等于预设次数阈值或者所述总损失小于或者等于预设损失阈值。
本发明第二方面提供一种活体检测系统,所述活体检测系统包括:
训练模块,用于利用预设训练集训练多层感知器,确定多层感知器模型;
获取模块,用于获取待检测的连续N帧的人脸图像,其中,所述N为大于3的正整数;
转换模块,用于将所述连续N帧的人脸图像中的中间帧的人脸图像由第一颜色空间转换为第二颜色空间,其中,当N为奇数,则所述中间帧的人脸图像为第(N+1)/2帧的人脸图像,当N为偶数,则所述中间帧的人脸图像为第N/2 帧或者第N/2+1帧的人脸图像;
纹理特征提取模块,用于提取所述转换为第二颜色空间的中间帧的人脸图像的纹理特征;
动态模式特征提取模块,用于提取所述连续N帧人脸图像的动态模式特征;
融合模块,用于将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特征;
概率获取模块,用于将所述融合特征输入至所述多层感知器模型,获得活体标签的预测概率值和非活体标签的预测概率值;
确定模块,用于当所述活体标签的预测概率值大于所述非活体标签的预测概率值,则确定所述连续N帧的人脸图像为活体人脸图像;
所述确定模块，还用于当所述非活体标签的预测概率值大于所述活体标签的预测概率值，则确定所述连续N帧的人脸图像为非活体人脸图像。
本发明第三方面提供一种计算机装置,所述计算机装置包括处理器,所述处理器用于执行存储器中存储的计算机程序时实现上述任一实施例所述活体检测方法。
本发明第四方面提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现上述任一实施例所述活体检测方法。
在本发明中,利用连续N帧的人脸图像的融合特征以及训练好的多层感知器模型,对所述连续N帧的人脸图像进行检测,进而确定确定所述连续N帧的人脸图像为活体人脸图像或者非活体人脸图像,鉴于融合特征包括纹理特征和动态模式特征,因此,可以提高活体检测的识别准确率以及安全性。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的活体检测方法的实现流程图;
图2是本发明实施例提供的活体检测方法中步骤S105的实现流程图;
图3是本发明实施例提供的活体检测方法中步骤S101的实现流程图;
图4是本发明实施例提供的活体检测系统的功能模块图;
图5是本发明实施例提供的活体检测系统中动态模式特征提取模块105的结构框图;
图6是本发明实施例提供的活体检测系统中训练模块101的结构框图;
图7是本发明实施例提供的计算机装置的结构示意图。
具体实施方式
为了使本发明的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本发明进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
图1示出了本发明实施例提供的活体检测方法的实现流程,根据不同的需求,该流程图中步骤的顺序可以改变,某些步骤可以省略。为了便于说明,仅示出了与本发明实施例相关的部分,详述如下:
如图1所示,活体检测方法包括:
步骤S101,利用预设训练集训练多层感知器,确定多层感知器模型。
所述预设训练集为预先设置的训练集,训练集中包含了大量的用于训练多层感知器模型的人脸图片。多层感知器是一种前馈人工神经网络(英文全称:Feedforward Artificial Neural Networks,简称FF-ANN)模型,其将输入的多个数据集映射到单一的输出的数据集上。在本发明实施例中,利用所述预设训练集中包含的大量的人脸图片训练多层感知器,确定训练后的多层感知器模型,以便利用所述多层感知器模型对人脸图片进行检测,以判断人脸图片为活体人 脸图片或者非活体人脸图片。
步骤S102,获取待检测的连续N帧的人脸图像,其中,所述N为大于3的正整数。
为了检测人脸图像是活体人脸图像还是非活体人脸图像,首先需要通过图像获取设备获取连续N帧的人脸图像。例如,可以通过手机的摄像头或者门禁识别系统或者人脸防伪系统的图像获取设备,如摄像头等获取一定时间内的连续N帧的人脸图像;或者通过单目摄像头捕捉场景图像,使用人脸检测算法实时检测人脸图像,截取连续多帧的人脸图像。其中,N为大于3的正整数。例如,通过人脸防伪系统的摄像头获取到1-2秒时间内的连续60帧的人脸图像,以便后续检测该连续60帧的人脸图像是活体人脸图像还是非活体人脸图像。
在较优的一实施例中,为了进一步提高活体检测的识别准确率以及安全性,所述活体检测方法还包括:对所述连续N帧的人脸图像进行灰度化处理和/或归一化处理。
在获取到连续N帧的人脸图像后,可以对获取到的人脸图像进行预处理。例如,对获取到的人脸图像进行灰度化处理或者归一化处理。除此之外,还可以对获取到的人脸图像进行平滑、滤波、分割等预处理,此处不再详细赘述。另外,在对所述连续N帧的人脸图像进行归一化处理时可以根据人脸关键点检测和人脸对齐对所述连续N帧的人脸图像进行归一化处理。
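作为一个便于理解的示意，下述Python示例给出"采集连续N帧人脸图像并做灰度化、尺寸及像素归一化预处理"的一种可能实现。其中使用OpenCV自带的Haar级联人脸检测器、取N=60、人脸尺寸归一化为64*64等均为示例性假设，并非本方法所限定的检测算法或参数。

```python
import cv2

def capture_face_frames(n_frames=60, size=(64, 64)):
    # 示意性示例：连续采集 N 帧人脸区域，返回灰度帧(用于动态模式特征)
    # 与彩色帧(中间帧用于 Lab 颜色空间转换)。检测器、N=60、64*64 尺寸均为示例性假设。
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    gray_frames, color_frames = [], []
    while len(gray_frames) < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)                 # 灰度化
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])             # 取面积最大的人脸
        color_frames.append(cv2.resize(frame[y:y + h, x:x + w], size))
        gray_frames.append(cv2.resize(gray[y:y + h, x:x + w], size))   # 尺寸归一化
    cap.release()
    return gray_frames, color_frames
```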
步骤S103,将所述连续N帧的人脸图像中的中间帧的人脸图像由第一颜色空间转换为第二颜色空间,其中,当N为奇数,则所述中间帧的人脸图像为第(N+1)/2帧的人脸图像,当N为偶数,则所述中间帧的人脸图像为第N/2帧或者第N/2+1帧的人脸图像。
在获取到连续N帧的人脸图像后,将其中中间帧的人脸图像由第一颜色空间转换为第二颜色空间。对于中间帧的人脸图像的确定,当N为奇数,则中间帧的人脸图像为第(N+1)/2帧的人脸图像,假设以N为61为例,则此处第 31帧人脸图像即为中间帧的人脸图像;当N为偶数,假设以上述N为60为例,则中间帧的人脸图像为第30帧人脸图像或者第31帧人脸图像。
鉴于RGB颜色空间为最为常用的颜色空间,且Lab颜色空间相比于RGB颜色空间,能够更好的模拟人对颜色的感知、突出对立颜色空间(即红颜色通道a和蓝黄颜色通道b)的强选择性,因此,在较优的一实施例中,所述第一颜色空间为RGB颜色空间,所述第二颜色空间为Lab颜色空间。其中,RGB颜色空间包括红色通道R、绿色通道G以及蓝色通道B。Lab颜色空间包括亮度通道L以及绿红颜色通道a和蓝黄颜色通道b两个对立颜色通道,亮度通道L表示像素的亮度,取值范围是[0,100],绿红颜色通道a表示从红色到绿色的范围,取值范围是[127,-128],蓝黄颜色通道b表示从黄色到蓝色的范围,取值范围是[127,-128]。具体的,可以根据下述变换,将所述连续N帧的人脸图像中的中间帧的人脸图像由RGB颜色空间转换为Lab颜色空间:
L=0.2126*R+0.7152*G+0.0722*B;
a=1.4749*(2.2213*R-0.339*G+0.1177*B)+128;
b=0.6245*(0.1949*R+0.6057*G-0.8006*B)+128;
其中,L、a以及b分别是Lab颜色空间亮度通道、绿红颜色通道以及蓝黄颜色通道的值,R、G以及B分别是RGB颜色空间红色通道、绿色通道以及蓝色通道的值。
另外,将所述连续N帧的人脸图像中的中间帧的人脸图像由RGB颜色空间转换为Lab颜色空间不限于上述变换,还可以通过先将人脸图片由RGB颜色空间转换为XYZ颜色空间,再由XYZ颜色空间转换为Lab颜色空间,此处不再详细赘述。其中,XYZ颜色空间是国际照明委员会在RGB颜色空间的基础上,通过大量正常人视觉测量和统计,改用三个假想的原色X、Y以及Z建立的一个新的色度系统,此处不再详细赘述。
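下述示例按照上文给出的变换公式，用Python对中间帧人脸图像实现由RGB颜色空间到Lab颜色空间的转换(仅作示意)；实际工程中也可以直接调用OpenCV等图像库自带的颜色空间转换接口。

```python
import numpy as np

def rgb_to_lab(face_rgb):
    # 按上文给出的变换公式将人脸图像由 RGB 颜色空间转换为 Lab 颜色空间，
    # face_rgb 为 H*W*3 的 RGB 图像(通道顺序 R、G、B，取值 0~255)。
    rgb = face_rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    L = 0.2126 * R + 0.7152 * G + 0.0722 * B
    a = 1.4749 * (2.2213 * R - 0.339 * G + 0.1177 * B) + 128
    b = 0.6245 * (0.1949 * R + 0.6057 * G - 0.8006 * B) + 128
    return np.stack([L, a, b], axis=-1)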
步骤S104，提取所述转换为第二颜色空间的中间帧的人脸图像的纹理特征。
纹理特征是一种反映图像中同质现象的视觉特征,通过像素及其周围空间邻域的灰度分布,即局部纹理信息,来表现。纹理特征描述的是图像或其中小块区域的空间颜色分布和光强分布。在对中间帧的人脸图像进行颜色空间转换后,即提取转换后的第二颜色空间的中间帧的人脸图像的纹理特征,即提取转换为Lab颜色空间的中间帧的人脸图像的纹理特征。
在较优的一实施例中,为了提高活体检测的效率,以及进一步提高活体检测的识别准确率以及安全性,步骤S104,提取所述转换为第二颜色空间的中间帧的人脸图像的纹理特征包括:提取所述转换为Lab颜色空间的中间帧的人脸图像的预设邻域的局部相位量化纹理特征。
在提取人脸图像的纹理特征时,可以在空域提取人脸图像的局部二值模式(英文全称:Local binary pattern,简称LBP)纹理特征(以下简称LBP纹理特征),也可以在频域提取人脸图像的局部相位量化(英文全称:Local phase quantization,简称LPQ)纹理特征(以下简称LPQ纹理特征)。为了提高活体检测的效率和准确率,在本发明实施例中,提取所述转换为Lab颜色空间的中间帧的人脸图像在频域上预设邻域的LPQ纹理特征。在其它实施例中,也可以提取所述转换为Lab颜色空间的中间帧的人脸图像在空域上的LBP纹理特征,此处不再详细赘述。
所述预设邻域为预先设置的邻域,此处并不做特别的限制。在较优的一实施例中,所述预设邻域为3*3邻域或者5*5邻域或者7*7邻域。另外,在较优的一实施例中,为了进一步提高活体检测的准确率,对于所述转换为Lab颜色空间的中间帧的人脸图像,提取预设邻域的多级LPQ纹理特征,例如,分别提取所述转换为Lab颜色空间的中间帧的人脸图像的3*3邻域、5*5邻域以及7*7邻域的LPQ纹理特征,并将提取的多级LPQ纹理特征拼接、进行融合,在本发明实施例中,LPQ纹理特征为向量形式表现的纹理特征,此处将多级的LPQ纹理特征进行融合是指:将3*3邻域、5*5邻域以及7*7邻域的LPQ纹理特征 的向量进行拼接,形成向量维度为三者之和的拼接后的向量,即为融合后的多级LPQ纹理特征,作为所述转换为第二颜色空间的中间帧的人脸图像的最终的纹理特征。另外,3*3邻域、5*5邻域以及7*7邻域的LPQ纹理特征拼接的顺序和向量的位置并不做特别的限制,可以自由组合和安排。例如,依序将3*3邻域、5*5邻域以及7*7邻域的LPQ纹理特征进行拼接或者依序将5*5邻域、3*3邻域以及7*7邻域的LPQ纹理特征等。
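由于本文并未限定LPQ纹理特征的具体实现方式，下述Python示例仅给出一种常见的LPQ实现思路作为示意：在预设邻域窗口内计算四个低频点的短时傅里叶变换响应，对其实部、虚部的符号量化为8位编码，再统计256维直方图；并按3*3、5*5、7*7三个邻域分别提取后拼接为多级LPQ纹理特征。示例中仅对单个通道计算，实际中对Lab各通道分别提取再拼接亦属常见做法，以上细节均为示例性假设。

```python
import numpy as np
from scipy.signal import convolve2d

def lpq_histogram(channel, win=3):
    # 常见的局部相位量化(LPQ)实现思路(示意)：win*win 邻域内的短时傅里叶变换，
    # 取四个低频点的实部/虚部符号量化为 8 位编码，统计 256 维归一化直方图。
    r = (win - 1) // 2
    x = np.arange(-r, r + 1)[np.newaxis, :]
    w0 = np.ones_like(x, dtype=np.complex128)      # 零频一维核
    w1 = np.exp(-2j * np.pi * x / win)             # 频率 1/win 处的一维核
    w2 = np.conj(w1)
    img = channel.astype(np.float64)

    def conv(row_k, col_k):
        # 先沿列方向、再沿行方向做一维卷积，得到对应频率点的复数响应
        return convolve2d(convolve2d(img, row_k.T, mode="same"), col_k, mode="same")

    resp = [conv(w0, w1), conv(w1, w0), conv(w1, w1), conv(w1, w2)]
    code = np.zeros(img.shape, dtype=np.int32)
    bit = 0
    for f in resp:
        code += (np.real(f) > 0).astype(np.int32) << bit
        bit += 1
        code += (np.imag(f) > 0).astype(np.int32) << bit
        bit += 1
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / (hist.sum() + 1e-12)             # 归一化直方图即该邻域的 LPQ 纹理特征

def multi_level_lpq(channel):
    # 分别提取 3*3、5*5、7*7 邻域的 LPQ 纹理特征并依序拼接成多级纹理特征
    return np.concatenate([lpq_histogram(channel, w) for w in (3, 5, 7)])
```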
步骤S105,提取所述连续N帧人脸图像的动态模式特征。
鉴于连续帧的动态模式特征包含了连续帧之间的动态运动信息,在活体检测时提取连续帧的动态模式特征,能够更好的检测出图像攻击,因此,为了提高活体检测的准确率和安全性,在本发明实施例中,提取所述连续N帧人脸图像的动态模式特征。在本发明实施例中,动态模式特征同样是以向量的形式表示。
步骤S106,将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特征。
在分别获取到人脸图像的纹理特征和动态模式特征之后,即可将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特征。融合特征同时包含了所述连续N帧的人脸图像的丰富的纹理特征和动态运动信息,因此,可以提高活体检测的准确率。
在较优的一实施例中,步骤S106,将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特征包括:将所述预设邻域的多级局部相位量化纹理特征与所述动态模式特征进行融合,获取融合后的融合特征。
所述多级LPQ纹理特征和所述动态模式特征均是以向量的形式表示，将多级LPQ纹理特征与所述动态模式特征进行拼接，获得融合后的融合特征。另外，在进行拼接融合时，可以按照多级LPQ纹理特征在前、动态模式特征在后的顺序进行拼接，或者按照动态模式特征在前、多级LPQ纹理特征在后的顺序进行拼接融合。
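拼接融合的一个最小示意如下(假设两种特征均已表示为一维向量)：

```python
import numpy as np

def fuse_features(lpq_feature, dm_feature):
    # 将多级 LPQ 纹理特征与动态模式特征按"纹理在前、动态模式在后"的顺序拼接；
    # 调换两者顺序同样可行，只要训练与检测阶段保持一致。
    return np.concatenate([np.ravel(lpq_feature), np.ravel(dm_feature)])
```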
步骤S107,将所述融合特征输入至所述多层感知器模型,获得活体标签的预测概率值和非活体标签的预测概率值。
在获取到上述融合特征之后,即将所述融合特征输入至训练后的多层感知器模型对融合特征进行特征映射以及归一化,获取相应的活体标签的预测概率值和非活体标签的预测概率值。其中,所述活体标签的预测概率值表示待检测的人脸图片为活体人脸图片的预测概率,所述非活体标签的预测概率值表示待检测的人脸图片为非活体人脸图片的预测概率。对于利用所述多层感知器模型对融合特征进行特征映射以及归一化,可参照下文中多层感知器训练的相关内容,此处不再详细赘述。
当所述活体标签的预测概率值大于所述非活体标签的预测概率值,则执行步骤S108,确定所述连续N帧的人脸图像为活体人脸图像。
当所述非活体标签的预测概率值大于所述活体标签的预测概率值，则执行步骤S109，确定所述连续N帧的人脸图像为非活体人脸图像。
对获得的活体标签的预测概率值和非活体标签的预测概率值进行比较，在所述活体标签的预测概率值大于所述非活体标签的预测概率值时，确定所述连续N帧的人脸图像为活体人脸图像；在所述非活体标签的预测概率值大于所述活体标签的预测概率值时，确定所述连续N帧的人脸图像为非活体人脸图像。
在较优的一实施例中,为了进一步提高活体检测的识别准确率以及安全性,所述活体检测方法还包括:对所述动态模式特征进行归一化处理,获取归一化处理后的动态模式特征。
在较优的一实施例中,为了进一步提高活体检测的识别效率,步骤S105,提取所述连续N帧人脸图像的动态模式特征包括:
提取所述连续N帧人脸图像的动态模式特征中能量最大的动态模式特征。
鉴于所述连续N帧的人脸图像的动态模式特征包含很多个动态模式特征,其中,能量最大的动态模式特征包含了连续帧之间最大的动态结构化信息和最丰富的纹理信息。因此,为了提高活体检测的效率,在本发明实施例中,提取所述连续N帧人脸图像的动态模式特征中能量最大的动态模式特征。
在本发明实施例中，利用训练好的多层感知器模型，根据连续N帧的人脸图像的融合特征，对所述连续N帧的人脸图像进行检测，确定所述连续N帧的人脸图像为活体人脸图像或者非活体人脸图像，鉴于本发明实施例中的融合特征包括所述连续N帧的人脸图像的纹理特征和动态模式特征，因此，可以提高活体检测的识别准确率以及安全性。
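为便于整体理解步骤S102至步骤S109的串联关系，下述Python示例把本文前后给出的示意函数(capture_face_frames、rgb_to_lab、multi_level_lpq、fuse_features以及后文示例中的max_energy_dynamic_mode)连接为一个简化流程；其中mlp_model.predict_proba为假想的推理接口，中间帧仅取Lab亮度通道计算纹理特征，均属示例性假设。

```python
def liveness_check(gray_frames, color_frames, mlp_model):
    # 串联本文各示例中的示意函数，演示步骤 S102~S109 的整体流程(示意)。
    mid = len(color_frames) // 2                      # 中间帧(N 为偶数时对应第 N/2+1 帧)
    lab = rgb_to_lab(color_frames[mid][..., ::-1])    # OpenCV 帧为 BGR，先转为 RGB 再转 Lab
    texture = multi_level_lpq(lab[..., 0])            # 仅用亮度通道 L，属示例性假设
    dm = max_energy_dynamic_mode(gray_frames)         # 能量最大的动态模式特征(见后文示例)
    fused = fuse_features(texture, dm)                # 融合特征
    p_live, p_spoof = mlp_model.predict_proba(fused)  # 活体/非活体标签的预测概率值(假想接口)
    return p_live > p_spoof                           # True 表示判定为活体人脸图像
```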
图2示出了本发明实施例提供的活体检测方法中步骤S105包含的:提取所述连续N帧人脸图像的动态模式特征中能量最大的动态模式特征的实现流程,根据不同的需求,该流程图中步骤的顺序可以改变,某些步骤可以省略。为了便于说明,仅示出了与本发明实施例相关的部分,详述如下:
在较优的一实施例中,如图2所示,步骤S105包含的:所述提取所述连续N帧人脸图像的动态模式特征中能量最大的动态模式特征包括:
步骤S1051,采用(m*n)*1的列向量表示人脸图像所包含的m*n个灰度值数据,获取由前N-1帧人脸图像所对应的N-1个列向量组成的第一数据矩阵及由后N-1帧人脸图像所对应的N-1个列向量组成的第二数据矩阵,其中,m、n为正整数。
将所述连续N帧的每个人脸图像所包含的m*n个灰度值数据采用(m*n)*1的列向量表示，其中，m、n为正整数。即将第r帧的人脸图像表示为(m*n)*1的列向量p_r，其中，r为小于或者等于N的正整数。之后将前N-1帧的人脸图像所对应的列向量依序组成第一数据矩阵P1，将后N-1帧的人脸图像所对应的列向量依序组成第二数据矩阵P2，即获得第一数据矩阵P1和第二数据矩阵P2。
步骤S1052，根据所述第一数据矩阵P1和所述第二数据矩阵P2获取线性映射矩阵A的伴随矩阵H，其中，所述线性映射矩阵A为所述第一数据矩阵P1与所述第二数据矩阵P2的逆矩阵P2^(-1)相乘后的矩阵。
在获得上述所述第一数据矩阵P1和所述第二数据矩阵P2后，即可根据所述第一数据矩阵P1和所述第二数据矩阵P2获取所述线性映射矩阵A的伴随矩阵H，其中，所述线性映射矩阵A为所述第一数据矩阵P1与所述第二数据矩阵P2的逆矩阵P2^(-1)相乘后的矩阵，即：
A=P1*P2^(-1)
所述线性映射矩阵A包含有所述连续N帧的人脸图像中全局的视觉动态信息,可以通过所述线性映射矩阵A获得所述连续N帧的人脸图像的动态模式特征。
在较优的一实施例中,为了提高活体检测方法的识别效率,步骤S1052,根据所述第一数据矩阵和所述第二数据矩阵获取线性映射矩阵的伴随矩阵包括:
对所述第一数据矩阵进行三角分解,并分别获得所述第一数据矩阵的上三角矩阵和下三角矩阵。
鉴于三角分解主要用于简化矩阵、特别是维度较大的矩阵的计算过程，可以提高计算效率，进而提高活体检测的效率，因此，在本发明实施例中，采用三角分解求解线性映射矩阵A的伴随矩阵H。三角分解(即LU分解，LU Decomposition)是矩阵分解的一种，其可以将一个矩阵分解为单位上三角矩阵和单位下三角矩阵的乘积。在本发明实施例中，对所述第一数据矩阵P1进行三角分解，获取所述第一数据矩阵P1的上三角矩阵U和下三角矩阵L，即：P1=L*U。
另外,在获取所述线性映射矩阵A的伴随矩阵H时,也可以通过其他的矩阵分解的方法求得,例如,正交三角分解(即QR分解)、奇异值分解,此处不再详细赘述。
获取所述上三角矩阵U的逆矩阵U^(-1)以及所述下三角矩阵L的伪逆矩阵L^(+)。
在获得所述第一数据矩阵P1的上三角矩阵U后，进而获得所述上三角矩阵U的逆矩阵U^(-1)；另外，根据所述下三角矩阵L获得所述下三角矩阵L的伪逆矩阵L^(+)。伪逆矩阵是逆矩阵的广义形式，也称广义逆矩阵。当存在一个与矩阵K的逆矩阵K^(-1)同型的矩阵X满足K*X*K=K，且X*K*X=X，此时称矩阵X为矩阵K的伪逆矩阵。
将所述上三角矩阵U的逆矩阵U^(-1)、所述下三角矩阵L的伪逆矩阵L^(+)以及所述第二数据矩阵P2相乘，获取所述线性映射矩阵A的伴随矩阵H。
在获得所述上三角矩阵U的逆矩阵U^(-1)、所述下三角矩阵L的伪逆矩阵L^(+)以及所述第二数据矩阵P2后，将逆矩阵U^(-1)、伪逆矩阵L^(+)以及第二数据矩阵P2相乘，获得所述线性映射矩阵A的伴随矩阵H，即：H=U^(-1)*L^(+)*P2。
步骤S1053，通过特征值分解获取所述伴随矩阵H的特征向量E_vec和特征值E_val。
特征值分解，又称为谱分解，其是将矩阵分解为由矩阵的特征值和特征向量表示的矩阵之积的方法。通常情况下，矩阵包含有多个特征值和特征向量。在获得伴随矩阵H后，可通过特征值分解求得伴随矩阵H的特征向量E_vec和特征值E_val。
步骤S1054，确定所述特征值E_val中绝对值最大的特征值E_val(K)所对应的特征向量E_vec(K)。
鉴于所述伴随矩阵H包含了多个特征值，且所述特征值中绝对值最大的特征值和动态模式特征中能量最大的动态模式特征相对应，此处，分别计算所述伴随矩阵H所包含的特征值E_val的绝对值，并对所有的绝对值进行比较，确定所述特征值中绝对值最大的特征值所对应的特征向量。例如，可以标记伴随矩阵的特征值E_val的索引位置，并将特征值和对应的特征向量对应起来。假设绝对值最大的特征值为索引位置为K的特征值E_val(K)，则在确定索引位置为K的特征值E_val(K)后，即确定索引位置为K的特征值E_val(K)所对应的特征向量E_vec(K)。
步骤S1055，将所述第一数据矩阵P1与所述特征值中绝对值最大的特征值E_val(K)所对应的特征向量E_vec(K)相乘，并对相乘后的结果取绝对值，获取所述连续N帧的人脸图像的动态模式特征中能量最大的动态模式特征。
在确定上述绝对值最大的索引位置为K的特征值E_val(K)所对应的特征向量E_vec(K)后，将第一数据矩阵P1与绝对值最大的特征值E_val(K)所对应的特征向量E_vec(K)相乘，并对相乘后的向量中的元素取绝对值，假设能量最大的动态模式特征为DM，则有：DM=abs(P1*E_vec(K))，至此，即可获取所述连续N帧的人脸图像的动态模式特征中能量最大的动态模式特征。
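下述Python示例按照步骤S1051至步骤S1055的思路给出能量最大的动态模式特征的一种示意实现。其中使用scipy的LU分解(带行置换的部分主元分解，通过permute_l=True把置换并入下三角因子)，与正文中P1=L*U的记法在实现细节上略有差异；末尾的向量归一化对应前文较优实施例中对动态模式特征的可选归一化处理。

```python
import numpy as np
from scipy.linalg import lu

def max_energy_dynamic_mode(gray_frames):
    # 按步骤 S1051~S1055 的思路计算能量最大的动态模式特征(示意实现)。
    cols = [np.asarray(f, dtype=np.float64).reshape(-1, 1) for f in gray_frames]
    P1 = np.hstack(cols[:-1])                 # 前 N-1 帧列向量组成的第一数据矩阵
    P2 = np.hstack(cols[1:])                  # 后 N-1 帧列向量组成的第二数据矩阵
    PL, U = lu(P1, permute_l=True)            # 三角分解：P1 = PL * U(PL 含行置换)
    H = np.linalg.inv(U) @ np.linalg.pinv(PL) @ P2   # 伴随矩阵 H = U^(-1)*L^(+)*P2
    eig_val, eig_vec = np.linalg.eig(H)       # 特征值分解
    k = int(np.argmax(np.abs(eig_val)))       # 绝对值(模)最大的特征值的索引 K
    dm = np.abs(P1 @ eig_vec[:, k])           # DM = abs(P1*E_vec(K))
    return dm / (np.linalg.norm(dm) + 1e-12)  # 可选的归一化处理
```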
在本发明实施例中,利用三角分解获取所述线性映射矩阵A的伴随矩阵H,并通过特征值分解获取伴随矩阵H的特征值和特征向量,确定特征值中绝对值最大的特征值所对应的特征向量,进而获取所述连续N帧的人脸图像的动态模式特征中能量最大的动态模式特征,鉴于三角分解可以降低矩阵计算量,简化矩阵运算,因此,本发明实施例可以提高活体检测效率。
图3示出了本发明实施例提供的活体检测方法中步骤S101的实现流程,根据不同的需求,该流程图中步骤的顺序可以改变,某些步骤可以省略。为了便于说明,仅示出了与本发明实施例相关的部分,详述如下:
在较优的一实施例中,为了提高活体检测的识别准确率以及安全性,如图3所示,所述多层感知器至少包括第一全连接层和第二全连接层,步骤S101,利用预设训练集训练多层感知器,确定多层感知器模型包括:
步骤S1011,从预设训练集中随机抽取第一样本和第二样本,其中,所述预设训练集中的每个样本均包含至少连续N帧的人脸图像。
所述预设训练集为预先设置的训练集,其包括大量的人脸图片,即样本,且所述预设训练集中的每个样本均包含至少连续N帧的人脸图像。从所述预设训练集中随机抽取第一样本和第二样本,以进行训练。
步骤S1012,分别提取所述第一样本的融合特征和所述第二样本的融合特征。
在从所述预设训练集中抽取第一样本和第二样本之后,分别提取所述第一样本和所述第二样本的融合特征,具体可参照上述步骤S102至步骤S106的内容,此处不再详细赘述。
步骤S1013,将所述第一样本的融合特征和第二样本的融合特征分别输入所述多层感知器,获取所述第一样本的Softmax损失和所述第二样本的Softmax损失。
在本发明实施例中,所述多层感知器至少包括第一全连接层和第二全连接层,所述第一全连接层和第二全连接层用于对所述融合特征进行特征映射,具体的,所述第一全连接层和第二全连接层均采用激活函数对所述任一融合特征向量做特征映射变换。鉴于Relu(修正线性单元,英文全称:Rectified linear unit,简称ReLU)激活函数可以加速回归模型的收敛,提高回归模型训练的速度和效率,因此,在较优的一实施例中,所述第一全连接层和第二全连接层均采用Relu激活函数对所述任一融合特征向量做特征映射变换。所述多层感知器还包括Softmax层,所述预设训练集中还包括样本的标签类别,所述标签类别包括活体标签和非活体标签两类,在训练之前,所述预设训练集中每个样本的标签类别是已知的和确定的。
在经过所述多层感知器的全连接层的特征映射后,将第二全连接层的输出输入至所述多层感知器的Softmax层,所述多层感知器的Softmax层主要用于对输入的特征进行归一化处理,具体可以按照下述公式进行归一化处理:
f(z_i)=exp(z_i)/(exp(z_1)+exp(z_2)+…+exp(z_k))
以及
f(z_j)=exp(z_j)/(exp(z_1)+exp(z_2)+…+exp(z_k))
其中，f(z_i)和f(z_j)分别表示第一样本和第二样本在经过多层感知器的Softmax层之后的标签的预测概率，z_i和z_j分别表示第一样本和第二样本在经过所述多层感知器的第二全连接层后的输出，i和j分别代表标签类别，k表示标签类别数，此处只有活体标签和非活体标签，因此，在本发明实施例中，k为2。
在确定第一样本和第二样本的输出f(z_i)和f(z_j)后，即可确定所述第一样本的Softmax损失和所述第二样本的Softmax损失。假设所述预设训练集中包含2M个样本，且2M个样本中每个样本均包含至少连续N帧的人脸图像，其中，M为正整数。具体的，可以根据下述公式确定所述第一样本的Softmax损失和所述第二样本的Softmax损失：
L_s(i)=-(1/M)*∑ y_i*log(f(z_i))
以及
L_s(j)=-(1/M)*∑ y_j*log(f(z_j))
其中，L_s(i)和L_s(j)分别表示第一样本和第二样本的Softmax损失，M表示所述预设训练集中批量样本对的数量，y_i和y_j分别表示第一样本和第二样本的真实的标签类别，即在确定第一样本的Softmax损失时，对于第一样本来说，其y_i为1，而对于第一样本之外的其他样本，其y_i均为零；在确定第二样本的Softmax损失时，对于第二样本来说，其y_j为1，而对于第二样本之外的其他样本，其y_j均为零。至此，即可分别确定第一样本和第二样本的Softmax损失。
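下述Python示例以k=2为例，对上述Softmax归一化及其交叉熵形式的Softmax损失做一个数值示意；其中减去最大值属数值稳定性方面的实现细节，并非正文公式的一部分。

```python
import numpy as np

def softmax(z):
    # 对第二全连接层的输出 z(长度为 k=2 的向量)做归一化，得到各标签的预测概率
    e = np.exp(z - np.max(z))            # 减最大值以避免数值溢出(实现细节)
    return e / e.sum()

def softmax_loss(z, y_onehot):
    # Softmax 损失(交叉熵)的常见形式：y_onehot 为真实标签类别的 one-hot 向量
    p = softmax(z)
    return -float(np.sum(y_onehot * np.log(p + 1e-12)))
```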
步骤S1014,确定所述第一样本和所述第二样本的对比损失。
对比损失(英文全称:Contrastive Loss)可以很好的表达成对样本的匹配程度,也能够很好的用于训练提取特征的模型,其主要用于降维中。在本发明实施例中,可以根据如下公式确定所述第一样本和第二样本的对比损失:
L_c=(1/(2M))*∑ [ y_n*d^2+(1-y_n)*max(m_ij-d,0)^2 ]
其中，L_c表示第一样本和第二样本的对比损失，M表示所述预设训练集批量样本对的数量，y_n在第一样本和第二样本为相同的标签类别时为1，在第一样本和第二样本为不同的标签类别时为零，即y_n可以表示第一样本和第二样本是否匹配，d表示第一样本和第二样本的欧氏距离，具体的第一样本和第二样本欧氏距离的计算，此处不再详细赘述，m_ij为预设距离阈值，即预先设置的距离阈值，其能够影响多层感知器模型训练的收敛速度和性能，在较优的一实施例中，所述预设距离阈值m_ij的范围为0.01至0.1。
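下述Python示例给出单个样本对的对比损失的常见计算形式(示意)；其中margin即预设距离阈值m_ij，取0.01至0.1区间内的0.05仅作示例，对M个批量样本对求平均的步骤此处省略。

```python
import numpy as np

def contrastive_loss(feat_i, feat_j, same_label, margin=0.05):
    # same_label 为 1 表示两个样本标签类别相同，为 0 表示不同；margin 即 m_ij(示例值)
    d = np.linalg.norm(np.asarray(feat_i) - np.asarray(feat_j))   # 欧氏距离 d
    return same_label * d ** 2 + (1 - same_label) * max(margin - d, 0.0) ** 2
```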
步骤S1015,通过所述第一样本的Softmax损失、所述第二样本的Softmax损失以及所述对比损失确定总损失。
在分别获取到上述第一样本和第二样本的Softmax损失L_s(i)、L_s(j)以及第一样本和第二样本的对比损失L_c后，即可根据下述公式确定所述第一样本和所述第二样本的总损失：
L=L_s(i)+L_s(j)+weight*L_c
其中,L为第一样本和第二样本的总损失,weight为预设权重参数,即预先设置的权重参数,在较优的一实施例中,weight为0.003。
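总损失的计算可以示意如下(weight取较优实施例中的0.003)：

```python
def total_loss(ls_i, ls_j, lc, weight=0.003):
    # 总损失 L = L_s(i) + L_s(j) + weight * L_c
    return ls_i + ls_j + weight * lc
```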
当所述总损失不满足损失收敛的预设条件,则执行步骤S1016,利用随机梯度下降法通过反向传播的过程调整所述多层感知器中第一全连接层的参数和所述第二全连接层的参数。跳转至步骤S1011,执行步骤S1011至步骤S1015。
随机梯度下降主要用于在神经网络模型中进行权重更新,在一个方向上更新和调整模型的参数,来最小化损失函数。反向传播是先在前向传播中计算输入信号的乘积及其对应的权重,然后将激活函数作用于这些乘积的总和,之后在网络模型的反向传播过程中回传相关误差,使用随机梯度下降更新权重值,通过计算误差函数相对于权重参数的梯度,在损失函数梯度的相反方向上更新权重参数。在多层感知器的第一全连接层和第二全连接层满足以下公式:S=W*T+B,其中S表示输出特征,T表示输入特征,W表示全连接层中神经元的权值,B表示偏置项。因此,在本发明实施例中,在所述总损失L不满足损失收敛的预设条件时,则利用随机梯度下降法通过反向传播的过程调整所述回归模型的所述第一全连接层的参数和所述第二全连接层的参数,即调整全连接层神经元的权值W和偏置项。在调整所述回归模型的第一全连接层的参数和第二全连接层的参数后,跳转至步骤S1011,执行步骤S1011至步骤S1015。
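下述示例以PyTorch为例，示意"两个全连接层加Relu激活的多层感知器"及其一次随机梯度下降迭代。框架选择、隐藏层宽度、对比损失所使用的中间特征(此处取第一全连接层经Relu后的输出)以及margin取值均为示例性假设，并非本文限定的实现。

```python
import torch
import torch.nn as nn

class PerceptronModel(nn.Module):
    # 两个全连接层(S = W*T + B)加 Relu 激活的多层感知器，输出活体/非活体两类得分；
    # 输入维度 in_dim 与隐藏层宽度为示例性假设。
    def __init__(self, in_dim, hidden_dim=256):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)   # 第一全连接层
        self.fc2 = nn.Linear(hidden_dim, 2)        # 第二全连接层
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

def train_step(model, optimizer, fused_i, fused_j, label_i, label_j, weight=0.003):
    # 一次迭代：计算两个样本批次(形状为 批大小*特征维度)的 Softmax 损失与对比损失，
    # 求总损失后用随机梯度下降通过反向传播调整两个全连接层的参数 W 和偏置项 B。
    ce = nn.CrossEntropyLoss()                     # 内部已包含 Softmax 归一化
    z_i, z_j = model(fused_i), model(fused_j)
    ls = ce(z_i, label_i) + ce(z_j, label_j)       # L_s(i) + L_s(j)
    h_i = model.relu(model.fc1(fused_i))           # 对比损失所用的中间特征(示例性假设)
    h_j = model.relu(model.fc1(fused_j))
    d = torch.norm(h_i - h_j, dim=1)               # 欧氏距离 d
    same = (label_i == label_j).float()            # 对应公式中的 y_n
    lc = (same * d ** 2 + (1 - same) * torch.clamp(0.05 - d, min=0) ** 2).mean()
    loss = ls + weight * lc                        # 总损失 L
    optimizer.zero_grad()
    loss.backward()                                # 反向传播
    optimizer.step()                               # 随机梯度下降更新参数
    return loss.item()
```

其中optimizer可取torch.optim.SGD(model.parameters(), lr=0.01)，学习率同样仅为示例值。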
所述损失收敛的预设条件为预先设置的损失收敛的条件,在较优的一实施例中,为了进一步提高活体检测的识别效率,所述预设条件包括:所述总损失 的计算次数等于预设次数阈值或者所述总损失小于或者等于预设损失阈值。
在设置所述损失收敛的条件时,可以将总损失的计算次数,即上述过程的迭代过程的次数作为损失收敛的条件。例如,在所述总损失的计算次数等于预设次数阈值时,认为总损失满足损失收敛的预设条件,停止训练多层感知器,所述预设次数阈值为预先设置的次数阈值,此处并不做特别的限制。或者在所述总损失小于或者等于预设损失阈值时,认为所述总损失满足损失收敛的预设条件,所述预设损失阈值为预先设置的损失阈值,此处并不做特别的限制。
当所述总损失满足损失收敛的预设条件,则执行步骤S1017,将满足损失收敛的预设条件之前的最后一次计算过程的第一全连接层的参数和第二全连接层的参数作为所述多层感知器模型的第一全连接层的参数和第二全连接层的参数,确定所述多层感知器模型。
在所述总损失满足损失收敛的预设条件时,即停止训练多层感知器,将满足损失收敛的预设条件之前的最后一次计算过程的第一全连接层的参数和第二全连接层的参数作为所述多层感知器模型的第一全连接层的参数和第二全连接层的参数,以此确定训练后的多层感知器模型。
在较优的一实施例中,为了进一步提高活体检测的识别准确率以及安全性,步骤S1012,分别提取所述第一样本的融合特征和所述第二样本的融合特征包括:
分别提取所述第一样本的中间帧的局部相位量化纹理特征和所述第一样本的动态模式特征中能量最大的动态模式特征。
将所述第一样本的中间帧的局部相位量化纹理特征和所述第一样本的动态模式特征中能量最大的动态模式特征进行融合,获取所述第一样本的融合特征。
分别提取所述第二样本的中间帧的局部相位量化纹理特征和所述第二样本的动态模式特征中能量最大的动态模式特征。
将所述第二样本的中间帧的局部相位量化纹理特征和所述第二样本的动态 模式特征中能量最大的动态模式特征进行融合,获取所述第二样本的融合特征。
对于提取所述第一样本或者第二样本的中间帧的局部相位量化纹理特征,具体可以参照上述步骤S104相关的内容;对于提取所述第一样本或者所述第二样本的动态模式特征中能量最大的动态模式特征,具体可以参照上述步骤S105相关的内容;对于将所述第一样本或者第二样本的中间帧的局部相位量化纹理特征和所述第一样本或者第二样本的动态模式特征中能量最大的动态模式特征进行融合,获取所述第一样本或者第二样本的融合特征,具体请参照上述步骤S106,此处均不再详细赘述。
在本发明实施例中,利用样本的融合特征训练所述多层感知器,采用随机梯度下降法通过反向传播的过程调整所述多层感知器全连接层的参数,在所述总损失满足损失收敛的预设条件时,确定训练后的多层感知器模型。鉴于本发明实施例中的样本的融合特征包含了样本的多级纹理特征和能量最大的动态模式特征,因此,可以提高活体检测的识别准确率和安全性。另外,随机梯度下降法相比于其他的梯度下降,运算速度更快,能够达到快速收敛的目的,因此,本发明实施例还可以提高活体检测的效率。
图4示出了本发明实施例提供的活体检测系统的功能模块,为了便于说明,仅示出了与本发明实施例相关的部分,详述如下:
参考图4,所述活体检测系统10所包括的各个模块用于执行图1对应实施例中的各个步骤,具体请参阅图1以及图1对应实施例中的相关描述,此处不再赘述。在较优的一实施例中,所述活体检测系统10包括训练模块101、获取模块102、转换模块103、纹理特征提取模块104、动态模式特征提取模块105、融合模块106、概率获取模块107以及确定模块108。
所述训练模块101,用于利用预设训练集训练多层感知器,确定多层感知器模型。
所述获取模块102,用于获取待检测的连续N帧的人脸图像,其中,所述 N为大于3的正整数。
所述转换模块103,用于将所述连续N帧的人脸图像中的中间帧的人脸图像由第一颜色空间转换为第二颜色空间,其中,当N为奇数,则所述中间帧的人脸图像为第(N+1)/2帧的人脸图像,当N为偶数,则所述中间帧的人脸图像为第N/2帧或者第N/2+1帧的人脸图像。
所述纹理特征提取模块104,用于提取所述转换为第二颜色空间的中间帧的人脸图像的纹理特征。
所述动态模式特征提取模块105,用于提取所述连续N帧人脸图像的动态模式特征。
所述融合模块106,用于将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特征。
所述概率获取模块107,用于将所述融合特征输入至所述多层感知器模型,获得活体标签的预测概率值和非活体标签的预测概率值。
所述确定模块108,用于当所述活体标签的预测概率值大于所述非活体标签的预测概率值,则确定所述连续N帧的人脸图像为活体人脸图像。
所述确定模块108，还用于当所述非活体标签的预测概率值大于所述活体标签的预测概率值，则确定所述连续N帧的人脸图像为非活体人脸图像。
在本发明实施例中，利用训练好的多层感知器模型，根据连续N帧的人脸图像的融合特征，对所述连续N帧的人脸图像进行检测，确定模块108进而确定所述连续N帧的人脸图像为活体人脸图像或者非活体人脸图像，鉴于本发明实施例中的融合特征包括所述连续N帧的人脸图像的纹理特征和动态模式特征，因此，可以提高活体检测的识别准确率以及安全性。
图5示出了本发明实施例提供的活体检测系统中动态模式特征提取模块105的结构框图,为了便于说明,仅示出了与本发明实施例相关的部分,详述 如下:
参考图5,所述动态模式特征提取模块105所包括的各个单元用于执行图2对应实施例中的各个步骤,具体请参阅图2以及图2对应实施例中的相关描述,此处不再赘述。在较优的一实施例中,所述动态模式特征提取模块105包括数据矩阵获取单元1051、伴随矩阵获取单元1052、特征值分解单元1053、特征向量确定单元1054以及动态模式特征获取单元1055。
所述数据矩阵获取单元1051,用于采用(m*n)*1的列向量表示人脸图像所包含的m*n个灰度值数据,获取由前N-1帧人脸图像所对应的N-1个列向量组成的第一数据矩阵及由后N-1帧人脸图像所对应的N-1个列向量组成的第二数据矩阵。
所述伴随矩阵获取单元1052,用于根据所述第一数据矩阵和所述第二数据矩阵获取线性映射矩阵的伴随矩阵,其中,所述线性映射矩阵为所述第一数据矩阵与所述第二数据矩阵的逆矩阵相乘后的矩阵,其中,m、n为正整数。
所述特征值分解单元1053,用于通过特征值分解获取所述伴随矩阵的特征向量和特征值。
所述特征向量确定单元1054,用于确定所述特征值中绝对值最大的特征值所对应的特征向量。
所述动态模式特征获取单元1055，用于将所述第一数据矩阵与所述特征值中绝对值最大的特征值所对应的特征向量相乘，并对相乘后的结果取绝对值，获取所述连续N帧的人脸图像的动态模式特征中能量最大的动态模式特征。
在本发明实施例中,首先伴随矩阵获取单元1052获取所述线性映射矩阵的伴随矩阵,特征值分解单元1053通过特征值分解获取伴随矩阵的特征值和特征向量,确定特征值中绝对值最大的特征值所对应的特征向量,进而获取所述连续N帧的人脸图像的动态模式特征中能量最大的动态模式特征,本发明实施例中的动态模式特征为能量最大的动态模式特征,因此,本发明实施例可以进一 步提高活体检测的识别准确率以及安全性。
图6示出了本发明实施例提供的活体检测系统中训练模块101的结构框图,为了便于说明,仅示出了与本发明实施例相关的部分,详述如下:
参考图6,所述训练模块101所包括的各个单元用于执行图3对应实施例中的各个步骤,具体请参阅图3以及图3对应实施例中的相关描述,此处不再赘述。在较优的一实施例中,所述训练模块101包括:样本抽取单元1011、融合特征提取单元1012、Softmax损失确定单元1013、对比损失确定单元1014、总损失确定单元1015、参数调整单元1016以及模型确定单元1017。
所述样本抽取单元1011,用于从预设训练集中随机抽取第一样本和第二样本,其中,所述预设训练集中的每个样本均包含至少连续N帧的人脸图像。
所述融合特征提取单元1012,用于分别提取所述第一样本的融合特征和所述第二样本的融合特征。
所述Softmax损失确定单元1013,用于将所述第一样本的融合特征和第二样本的融合特征分别输入所述多层感知器,获取所述第一样本的Softmax损失和所述第二样本的Softmax损失。
所述对比损失确定单元1014,用于确定所述第一样本和所述第二样本的对比损失。
所述总损失确定单元1015,用于通过所述第一样本的Softmax损失、所述第二样本的Softmax损失以及所述对比损失确定总损失。
所述参数调整单元1016,用于在所述总损失不满足损失收敛的预设条件时,利用随机梯度下降法通过反向传播的过程调整所述多层感知器中第一全连接层的参数和所述第二全连接层的参数。
所述模型确定单元1017,用于在所述总损失满足损失收敛的预设条件时,将满足损失收敛的预设条件之前的最后一次计算过程的第一全连接层的参数和 第二全连接层的参数作为所述多层感知器模型的第一全连接层的参数和第二全连接层的参数,确定所述多层感知器模型。
在本发明实施例中,利用样本的融合特征训练所述多层感知器,参数调整单元1016采用随机梯度下降法通过反向传播的过程调整所述多层感知器全连接层的参数,模型确定单元1017在所述总损失满足损失收敛的预设条件时,确定训练后的多层感知器模型。鉴于本发明实施例中样本的融合特征包含了样本的多级纹理特征和能量最大的动态模式特征,因此,可以提高活体检测的识别准确率和安全性。另外,随机梯度下降法相比于其他的梯度下降,运算速度更快,能够达到快速收敛的目的,因此,本发明实施例还可以提高活体检测的效率。
图7是本发明实施例提供的实现活体检测方法的较佳实施例的计算机装置1的结构示意图。如图7所示,计算机装置1包括存储器11、处理器12及输入输出设备13。
所述计算机装置1是一种能够按照事先设定或存储的指令,自动进行数值计算和/或信息处理的设备,其硬件包括但不限于微处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程门阵列(Field-Programmable Gate Array,FPGA)、数字处理器(Digital Signal Processor,DSP)、嵌入式设备等。
所述计算机装置1可以是任何一种可与用户进行人机交互的电子产品,例如,个人计算机、平板电脑、智能手机、个人数字助理(Personal Digital Assistant,PDA)、游戏机、交互式网络电视(Internet Protocol Television,IPTV)、智能式穿戴式设备等。所述计算机装置1可以是服务器,所述服务器包括但不限于单个网络服务器、多个网络服务器组成的服务器组或基于云计算(Cloud Computing)的由大量主机或网络服务器构成的云,其中,云计算是分布式计算的一种,由一群松散耦合的计算机集组成的一个超级虚拟计算机。所述计算机装置1所处的网络包括但不限于互联网、广域网、城域网、局域网、虚拟专用网络(Virtual  Private Network,VPN)等。
存储器11用于存储活体检测方法的程序和各种数据,并在计算机装置1运行过程中实现高速、自动地完成程序或数据的存取。存储器11可以是计算机装置1的外部存储设备和/或内部存储设备。进一步地,存储器11可以是集成电路中没有实物形式的具有存储功能的电路,如RAM(Random-Access Memory,随机存取存储设备)、FIFO(First In First Out,)等,或者,存储器11也可以是具有实物形式的存储设备,如内存条、TF卡(Trans-flash Card)等等。
处理器12可以是中央处理器(CPU,Central Processing Unit)。CPU是一块超大规模的集成电路,是计算机装置1的运算核心(Core)和控制核心(Control Unit)。处理器12可执行计算机装置1的操作系统以及安装的各类应用程序、程序代码等,例如执行活体检测系统10中的各个模块或者单元中的操作系统以及安装的各类应用程序、程序代码,以实现活体检测方法。
输入输出设备13主要用于实现计算机装置1的输入输出功能,比如收发输入的数字或字符信息,或显示由用户输入的信息或提供给用户的信息以及计算机装置1的各种菜单。
所述计算机装置1集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实现上述实施例方法中的全部或部分流程,也可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。需要说明的是,所述计算机可读介质包含的内 容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读介质不包括电载波信号和电信信号。
以上说明的本发明的特征性的手段可以通过集成电路来实现,并控制实现上述任意实施例中所述活体检测方法的功能。即,本发明的集成电路安装于所述计算机装置1中,使所述计算机装置1发挥如下功能:
利用预设训练集训练多层感知器,确定多层感知器模型;
获取待检测的连续N帧的人脸图像,其中,所述N为大于3的正整数;
将所述连续N帧的人脸图像中的中间帧的人脸图像由第一颜色空间转换为第二颜色空间,其中,当N为奇数,则所述中间帧的人脸图像为第(N+1)/2帧的人脸图像,当N为偶数,则所述中间帧的人脸图像为第N/2帧或者第N/2+1帧的人脸图像;
提取所述转换为第二颜色空间的中间帧的人脸图像的纹理特征;
提取所述连续N帧人脸图像的动态模式特征;
将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特征;
将所述融合特征输入至所述多层感知器模型,获得活体标签的预测概率值和非活体标签的预测概率值;
当所述活体标签的预测概率值大于所述非活体标签的预测概率值,则确定所述连续N帧的人脸图像为活体人脸图像;
当所述非活体标签的预测概率值大于所述活体标签的预测概率值，则确定所述连续N帧的人脸图像为非活体人脸图像。
在任意实施例中所述活体检测方法所能实现的功能都能通过本发明的集成电路安装于所述计算机装置1中,使所述计算机装置1发挥任意实施例中所述活体检测方法所能实现的功能,在此不再详述。
在本发明所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能模块的形式实现。
对于本领域技术人员而言,显然本发明不限于上述示范性实施例的细节,而且在不背离本发明的精神或基本特征的情况下,能够以其他的具体形式实现本发明。因此,无论从哪一点来看,均应将实施例看作是示范性的,而且是非限制性的,本发明的范围由所附权利要求而不是上述说明限定,因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本发明内。不应将权利要求中的任何附关联图标记视为限制所涉及的权利要求。此外,显然“包括”一词不排除其他单元或步骤,单数不排除复数。系统权利要求中陈述的多个模块或装置也可以由一个模块或装置通过软件或者硬件来实现。第一,第二等词语用来表示名称,而并不表示任何特定的顺序。
最后应说明的是,以上实施例仅用以说明本发明的技术方案而非限制,尽管参照较佳实施例对本发明进行了详细说明,本领域的普通技术人员应当理解,可以对本发明的技术方案进行修改或等同替换,而不脱离本发明技术方案的精神和范围。

Claims (10)

  1. 一种活体检测方法,其特征在于,所述活体检测方法包括:
    利用预设训练集训练多层感知器,确定多层感知器模型;
    获取待检测的连续N帧的人脸图像,其中,所述N为大于3的正整数;
    将所述连续N帧的人脸图像中的中间帧的人脸图像由第一颜色空间转换为第二颜色空间,其中,当N为奇数,则所述中间帧的人脸图像为第(N+1)/2帧的人脸图像,当N为偶数,则所述中间帧的人脸图像为第N/2帧或者第N/2+1帧的人脸图像;
    提取所述转换为第二颜色空间的中间帧的人脸图像的纹理特征;
    提取所述连续N帧人脸图像的动态模式特征;
    将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特征;
    将所述融合特征输入至所述多层感知器模型,获得活体标签的预测概率值和非活体标签的预测概率值;
    当所述活体标签的预测概率值大于所述非活体标签的预测概率值,则确定所述连续N帧的人脸图像为活体人脸图像;
    当所述非活体标签的预测概率值大于所述活体标签的预测概率值，则确定所述连续N帧的人脸图像为非活体人脸图像。
  2. 如权利要求1所述的活体检测方法,其特征在于,所述第一颜色空间为RGB颜色空间,所述第二颜色空间为Lab颜色空间,所述提取所述转换为第二颜色空间的中间帧的人脸图像的纹理特征包括:
    提取所述转换为Lab颜色空间的中间帧的人脸图像的预设邻域的局部相位量化纹理特征。
  3. 如权利要求2所述的活体检测方法,其特征在于,所述提取所述转换为Lab颜色空间的中间帧的人脸图像的预设邻域的局部相位量化纹理特征包括:
    提取所述转换为Lab颜色空间的中间帧的人脸图像的预设邻域的多级局部相位量化纹理特征;
    所述将所述纹理特征与所述动态模式特征进行融合,获取融合后的融合特 征包括:
    将所述预设邻域的多级局部相位量化纹理特征与所述动态模式特征进行融合,获取融合后的融合特征。
  4. 如权利要求1所述的活体检测方法,其特征在于,所述提取所述连续N帧人脸图像的动态模式特征包括:
    提取所述连续N帧人脸图像的动态模式特征中能量最大的动态模式特征。
  5. 如权利要求4所述的活体检测方法,其特征在于,所述提取所述连续N帧人脸图像的动态模式特征中能量最大的动态模式特征包括:
    采用(m*n)*1的列向量表示人脸图像所包含的m*n个灰度值数据,获取由前N-1帧人脸图像所对应的N-1个列向量组成的第一数据矩阵及由后N-1帧人脸图像所对应的N-1个列向量组成的第二数据矩阵,其中,m、n为正整数;
    根据所述第一数据矩阵和所述第二数据矩阵获取线性映射矩阵的伴随矩阵,其中,所述线性映射矩阵为所述第一数据矩阵与所述第二数据矩阵的逆矩阵相乘后的矩阵;
    通过特征值分解获取所述伴随矩阵的特征向量和特征值;
    确定所述特征值中绝对值最大的特征值所对应的特征向量;
    将所述第一数据矩阵与所述绝对值最大的特征值所对应的特征向量相乘,并对相乘后的结果取绝对值,获取所述连续N帧的人脸图像的动态模式特征中能量最大的动态模式特征。
  6. 如权利要求5所述的活体检测方法,其特征在于,所述根据所述第一数据矩阵和所述第二数据矩阵获取线性映射矩阵的伴随矩阵包括:
    对所述第一数据矩阵进行三角分解,并分别获得所述第一数据矩阵的上三角矩阵和下三角矩阵;
    获取所述上三角矩阵的逆矩阵以及所述下三角矩阵的伪逆矩阵;
    将所述上三角矩阵的逆矩阵、所述下三角矩阵的伪逆矩阵以及所述第二数据矩阵相乘,获取所述线性映射矩阵的伴随矩阵。
  7. 如权利要求1所述的活体检测方法,其特征在于,所述多层感知器至少包括第一全连接层和第二全连接层,所述利用预设训练集训练多层感知器,确定多层感知器模型包括:
    从预设训练集中随机抽取第一样本和第二样本,其中,所述预设训练集中的每个样本均包含至少连续N帧的人脸图像;
    分别提取所述第一样本的融合特征和所述第二样本的融合特征;
    将所述第一样本的融合特征和第二样本的融合特征分别输入所述多层感知器,获取所述第一样本的Softmax损失和所述第二样本的Softmax损失;
    确定所述第一样本和所述第二样本的对比损失;
    通过所述第一样本的Softmax损失、所述第二样本的Softmax损失以及所述对比损失确定总损失;
    当所述总损失不满足损失收敛的预设条件,则利用随机梯度下降法通过反向传播的过程调整所述多层感知器中第一全连接层的参数和所述第二全连接层的参数;
    直至所述总损失满足损失收敛的预设条件,将满足损失收敛的预设条件之前的最后一次计算过程的第一全连接层的参数和第二全连接层的参数作为所述多层感知器模型的第一全连接层的参数和第二全连接层的参数,确定所述多层感知器模型。
  8. 如权利要求7所述的活体检测方法,其特征在于,所述预设条件包括所述总损失的计算次数等于预设次数阈值或者所述总损失小于或者等于预设损失阈值。
  9. 一种计算机装置,其特征在于,所述计算机装置包括处理器,所述处理器用于执行存储器中存储的计算机程序时实现如权利要求1至8中任意一项所述活体检测方法。
  10. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至8中任意一项所述活体检测 方法。
PCT/CN2018/119189 2017-12-13 2018-12-04 活体检测方法、计算机装置及计算机可读存储介质 WO2019114580A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711330349.1A CN107992842B (zh) 2017-12-13 2017-12-13 活体检测方法、计算机装置及计算机可读存储介质
CN201711330349.1 2017-12-13

Publications (1)

Publication Number Publication Date
WO2019114580A1 true WO2019114580A1 (zh) 2019-06-20

Family

ID=62038296

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/119189 WO2019114580A1 (zh) 2017-12-13 2018-12-04 活体检测方法、计算机装置及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN107992842B (zh)
WO (1) WO2019114580A1 (zh)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427828A (zh) * 2019-07-05 2019-11-08 中国平安人寿保险股份有限公司 人脸活体检测方法、装置及计算机可读存储介质
CN110458024A (zh) * 2019-07-11 2019-11-15 阿里巴巴集团控股有限公司 活体检测方法及装置和电子设备
CN110675312A (zh) * 2019-09-24 2020-01-10 腾讯科技(深圳)有限公司 图像数据处理方法、装置、计算机设备以及存储介质
CN110929680A (zh) * 2019-12-05 2020-03-27 四川虹微技术有限公司 一种基于特征融合的人脸活体检测方法
CN111105438A (zh) * 2019-11-12 2020-05-05 安徽大学 基于动态模式分解的运动检测方法、终端设备及计算机可读存储介质
CN111160216A (zh) * 2019-12-25 2020-05-15 开放智能机器(上海)有限公司 一种多特征多模型的活体人脸识别方法
CN111259831A (zh) * 2020-01-20 2020-06-09 西北工业大学 基于重组颜色空间的虚假人脸判别方法
CN111368764A (zh) * 2020-03-09 2020-07-03 零秩科技(深圳)有限公司 一种基于计算机视觉与深度学习算法的虚假视频检测方法
CN111814682A (zh) * 2020-07-09 2020-10-23 泰康保险集团股份有限公司 人脸活体检测方法及装置
CN111968152A (zh) * 2020-07-15 2020-11-20 桂林远望智能通信科技有限公司 一种动态身份识别方法及装置
CN112036339A (zh) * 2020-09-03 2020-12-04 福建库克智能科技有限公司 人脸检测的方法、装置和电子设备
CN112183422A (zh) * 2020-10-09 2021-01-05 成都奥快科技有限公司 一种基于时空特征的人脸活体检测方法、装置、电子设备及存储介质
CN112541470A (zh) * 2020-12-22 2021-03-23 杭州趣链科技有限公司 基于超图的人脸活体检测方法、装置及相关设备
CN112613407A (zh) * 2020-12-23 2021-04-06 杭州趣链科技有限公司 基于联邦学习的人脸活体检测训练优化方法、装置及设备
CN112633113A (zh) * 2020-12-17 2021-04-09 厦门大学 跨摄像头的人脸活体检测方法及系统
CN112699811A (zh) * 2020-12-31 2021-04-23 中国联合网络通信集团有限公司 活体检测方法、装置、设备、储存介质及程序产品
CN112836625A (zh) * 2021-01-29 2021-05-25 汉王科技股份有限公司 人脸活体检测方法、装置、电子设备
CN113221767A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 训练活体人脸识别模型、识别活体人脸的方法及相关装置
CN113255400A (zh) * 2020-02-10 2021-08-13 深圳市光鉴科技有限公司 活体人脸识别模型的训练、识别方法、系统、设备及介质
CN113269010A (zh) * 2020-02-14 2021-08-17 深圳云天励飞技术有限公司 一种人脸活体检测模型的训练方法和相关装置
CN113283388A (zh) * 2021-06-24 2021-08-20 中国平安人寿保险股份有限公司 活体人脸检测模型的训练方法、装置、设备及存储介质
CN113378715A (zh) * 2021-06-10 2021-09-10 北京华捷艾米科技有限公司 一种基于彩色人脸图像的活体检测方法及相关设备
CN113705362A (zh) * 2021-08-03 2021-11-26 北京百度网讯科技有限公司 图像检测模型的训练方法、装置、电子设备及存储介质
CN113887408A (zh) * 2021-09-30 2022-01-04 平安银行股份有限公司 活化人脸视频的检测方法、装置、设备及存储介质
CN114445898A (zh) * 2022-01-29 2022-05-06 北京百度网讯科技有限公司 人脸活体检测方法、装置、设备、存储介质及程序产品
CN114565918A (zh) * 2022-02-24 2022-05-31 阳光暖果(北京)科技发展有限公司 一种基于多特征提取模块的人脸静默活体检测方法与系统
WO2022156562A1 (zh) * 2021-01-19 2022-07-28 腾讯科技(深圳)有限公司 基于超声回波的对象识别方法、装置及存储介质
CN117437675A (zh) * 2023-10-23 2024-01-23 长讯通信服务有限公司 一种基于成分分解与重建的人脸静默活体检测方法、装置、计算机设备和存储介质

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992842B (zh) * 2017-12-13 2020-08-11 深圳励飞科技有限公司 活体检测方法、计算机装置及计算机可读存储介质
CN108446687B (zh) * 2018-05-28 2022-02-01 唯思电子商务(深圳)有限公司 一种基于移动端和后台互联的自适应人脸视觉认证方法
CN110580523B (zh) * 2018-06-07 2022-08-02 清华大学 一种模拟神经网络处理器的误差校准方法及装置
CN108960080B (zh) * 2018-06-14 2020-07-17 浙江工业大学 基于主动防御图像对抗攻击的人脸识别方法
CN109101925A (zh) * 2018-08-14 2018-12-28 成都智汇脸卡科技有限公司 活体检测方法
CN109344716A (zh) * 2018-08-31 2019-02-15 深圳前海达闼云端智能科技有限公司 活体检测模型的训练方法、检测方法、装置、介质及设备
CN109376592B (zh) * 2018-09-10 2021-04-27 创新先进技术有限公司 活体检测方法、装置和计算机可读存储介质
CN109543593A (zh) * 2018-11-19 2019-03-29 华勤通讯技术有限公司 回放攻击的检测方法、电子设备及计算机可读存储介质
CN109766785B (zh) * 2018-12-21 2023-09-01 中国银联股份有限公司 一种人脸的活体检测方法及装置
CN109711358B (zh) * 2018-12-28 2020-09-04 北京远鉴信息技术有限公司 神经网络训练方法、人脸识别方法及系统和存储介质
CN109886275A (zh) * 2019-01-16 2019-06-14 深圳壹账通智能科技有限公司 翻拍图像识别方法、装置、计算机设备和存储介质
CN110135259A (zh) * 2019-04-15 2019-08-16 深圳壹账通智能科技有限公司 静默式活体图片识别方法、装置、计算机设备和存储介质
CN110378219B (zh) * 2019-06-13 2021-11-19 北京迈格威科技有限公司 活体检测方法、装置、电子设备及可读存储介质
CN110298312B (zh) * 2019-06-28 2022-03-18 北京旷视科技有限公司 活体检测方法、装置、电子设备及计算机可读存储介质
CN110334637A (zh) * 2019-06-28 2019-10-15 百度在线网络技术(北京)有限公司 人脸活体检测方法、装置及存储介质
CN110502998B (zh) * 2019-07-23 2023-01-31 平安科技(深圳)有限公司 车辆定损方法、装置、设备和存储介质
CN111160299A (zh) * 2019-12-31 2020-05-15 上海依图网络科技有限公司 活体识别方法及装置
CN112269976A (zh) * 2020-03-31 2021-01-26 周亚琴 物联网人工智能人脸验证方法及系统
CN111597938B (zh) * 2020-05-07 2022-02-22 马上消费金融股份有限公司 活体检测、模型训练方法及装置
CN111881726B (zh) * 2020-06-15 2022-11-25 马上消费金融股份有限公司 一种活体检测方法、装置及存储介质
CN111860357B (zh) * 2020-07-23 2024-05-14 中国平安人寿保险股份有限公司 基于活体识别的出勤率计算方法、装置、终端及存储介质
CN112084917B (zh) * 2020-08-31 2024-06-04 腾讯科技(深圳)有限公司 一种活体检测方法及装置
CN112200075B (zh) * 2020-10-09 2024-06-04 西安西图之光智能科技有限公司 一种基于异常检测的人脸防伪方法
CN114913565B (zh) * 2021-01-28 2023-11-17 腾讯科技(深圳)有限公司 人脸图像检测方法、模型训练方法、装置及存储介质
CN113221655B (zh) * 2021-04-12 2022-09-30 重庆邮电大学 基于特征空间约束的人脸欺骗检测方法
CN113255531B (zh) * 2021-05-31 2021-11-09 腾讯科技(深圳)有限公司 活体检测模型的处理方法、装置、计算机设备和存储介质
CN113221842B (zh) * 2021-06-04 2023-12-29 第六镜科技(北京)集团有限责任公司 模型训练方法、图像识别方法、装置、设备及介质
CN113269149B (zh) * 2021-06-24 2024-06-07 中国平安人寿保险股份有限公司 活体人脸图像的检测方法、装置、计算机设备及存储介质
CN113705425B (zh) * 2021-08-25 2022-08-16 北京百度网讯科技有限公司 活体检测模型的训练方法和活体检测的方法、装置、设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101350065A (zh) * 2008-09-05 2009-01-21 哈尔滨工业大学 基于舌体特征的身份识别方法
CN105320950A (zh) * 2015-11-23 2016-02-10 天津大学 一种视频人脸活体检测方法
CN105354554A (zh) * 2015-11-12 2016-02-24 西安电子科技大学 基于颜色和奇异值特征的人脸活体检测方法
US20160217338A1 (en) * 2015-01-26 2016-07-28 Alibaba Group Holding Limited Method and device for face in-vivo detection
CN106874857A (zh) * 2017-01-19 2017-06-20 腾讯科技(上海)有限公司 一种基于视频分析的活体判别方法及系统
CN107392142A (zh) * 2017-07-19 2017-11-24 广东工业大学 一种真伪人脸识别方法及其装置
CN107992842A (zh) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 活体检测方法、计算机装置及计算机可读存储介质

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400122A (zh) * 2013-08-20 2013-11-20 江苏慧视软件科技有限公司 一种活体人脸的快速识别方法
CN103593598B (zh) * 2013-11-25 2016-09-21 上海骏聿数码科技有限公司 基于活体检测和人脸识别的用户在线认证方法及系统
US9985963B2 (en) * 2015-02-15 2018-05-29 Beijing Kuangshi Technology Co., Ltd. Method and system for authenticating liveness face, and computer program product thereof
CN104933414B (zh) * 2015-06-23 2018-06-05 中山大学 一种基于wld-top的活体人脸检测方法
CN105138967B (zh) * 2015-08-05 2018-03-27 三峡大学 基于人眼区域活动状态的活体检测方法和装置
CN111144293A (zh) * 2015-09-25 2020-05-12 北京市商汤科技开发有限公司 带交互式活体检测的人脸身份认证系统及其方法
CN105243376A (zh) * 2015-11-06 2016-01-13 北京汉王智远科技有限公司 一种活体检测方法和装置
CN105426827B (zh) * 2015-11-09 2019-03-08 北京市商汤科技开发有限公司 活体验证方法、装置和系统
CN106897659B (zh) * 2015-12-18 2019-05-24 腾讯科技(深圳)有限公司 眨眼运动的识别方法和装置
CN105956572A (zh) * 2016-05-15 2016-09-21 北京工业大学 一种基于卷积神经网络的活体人脸检测方法
CN106096519A (zh) * 2016-06-01 2016-11-09 腾讯科技(深圳)有限公司 活体鉴别方法及装置
CN106372629B (zh) * 2016-11-08 2020-02-07 汉王科技股份有限公司 一种活体检测方法和装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101350065A (zh) * 2008-09-05 2009-01-21 哈尔滨工业大学 基于舌体特征的身份识别方法
US20160217338A1 (en) * 2015-01-26 2016-07-28 Alibaba Group Holding Limited Method and device for face in-vivo detection
CN105354554A (zh) * 2015-11-12 2016-02-24 西安电子科技大学 基于颜色和奇异值特征的人脸活体检测方法
CN105320950A (zh) * 2015-11-23 2016-02-10 天津大学 一种视频人脸活体检测方法
CN106874857A (zh) * 2017-01-19 2017-06-20 腾讯科技(上海)有限公司 一种基于视频分析的活体判别方法及系统
CN107392142A (zh) * 2017-07-19 2017-11-24 广东工业大学 一种真伪人脸识别方法及其装置
CN107992842A (zh) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 活体检测方法、计算机装置及计算机可读存储介质

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427828B (zh) * 2019-07-05 2024-02-09 中国平安人寿保险股份有限公司 人脸活体检测方法、装置及计算机可读存储介质
CN110427828A (zh) * 2019-07-05 2019-11-08 中国平安人寿保险股份有限公司 人脸活体检测方法、装置及计算机可读存储介质
CN110458024A (zh) * 2019-07-11 2019-11-15 阿里巴巴集团控股有限公司 活体检测方法及装置和电子设备
CN110675312A (zh) * 2019-09-24 2020-01-10 腾讯科技(深圳)有限公司 图像数据处理方法、装置、计算机设备以及存储介质
CN110675312B (zh) * 2019-09-24 2023-08-29 腾讯科技(深圳)有限公司 图像数据处理方法、装置、计算机设备以及存储介质
CN111105438A (zh) * 2019-11-12 2020-05-05 安徽大学 基于动态模式分解的运动检测方法、终端设备及计算机可读存储介质
CN111105438B (zh) * 2019-11-12 2023-06-06 安徽大学 基于动态模式分解的运动检测方法、终端设备及计算机可读存储介质
CN110929680A (zh) * 2019-12-05 2020-03-27 四川虹微技术有限公司 一种基于特征融合的人脸活体检测方法
CN111160216B (zh) * 2019-12-25 2023-05-12 开放智能机器(上海)有限公司 一种多特征多模型的活体人脸识别方法
CN111160216A (zh) * 2019-12-25 2020-05-15 开放智能机器(上海)有限公司 一种多特征多模型的活体人脸识别方法
CN111259831A (zh) * 2020-01-20 2020-06-09 西北工业大学 基于重组颜色空间的虚假人脸判别方法
CN111259831B (zh) * 2020-01-20 2023-03-24 西北工业大学 基于重组颜色空间的虚假人脸判别方法
CN113255400A (zh) * 2020-02-10 2021-08-13 深圳市光鉴科技有限公司 活体人脸识别模型的训练、识别方法、系统、设备及介质
CN113269010A (zh) * 2020-02-14 2021-08-17 深圳云天励飞技术有限公司 一种人脸活体检测模型的训练方法和相关装置
CN113269010B (zh) * 2020-02-14 2024-03-26 深圳云天励飞技术有限公司 一种人脸活体检测模型的训练方法和相关装置
CN111368764B (zh) * 2020-03-09 2023-02-21 零秩科技(深圳)有限公司 一种基于计算机视觉与深度学习算法的虚假视频检测方法
CN111368764A (zh) * 2020-03-09 2020-07-03 零秩科技(深圳)有限公司 一种基于计算机视觉与深度学习算法的虚假视频检测方法
CN111814682A (zh) * 2020-07-09 2020-10-23 泰康保险集团股份有限公司 人脸活体检测方法及装置
CN111968152B (zh) * 2020-07-15 2023-10-17 桂林远望智能通信科技有限公司 一种动态身份识别方法及装置
CN111968152A (zh) * 2020-07-15 2020-11-20 桂林远望智能通信科技有限公司 一种动态身份识别方法及装置
CN112036339B (zh) * 2020-09-03 2024-04-09 福建库克智能科技有限公司 人脸检测的方法、装置和电子设备
CN112036339A (zh) * 2020-09-03 2020-12-04 福建库克智能科技有限公司 人脸检测的方法、装置和电子设备
CN112183422A (zh) * 2020-10-09 2021-01-05 成都奥快科技有限公司 一种基于时空特征的人脸活体检测方法、装置、电子设备及存储介质
CN112633113A (zh) * 2020-12-17 2021-04-09 厦门大学 跨摄像头的人脸活体检测方法及系统
CN112541470A (zh) * 2020-12-22 2021-03-23 杭州趣链科技有限公司 基于超图的人脸活体检测方法、装置及相关设备
CN112613407A (zh) * 2020-12-23 2021-04-06 杭州趣链科技有限公司 基于联邦学习的人脸活体检测训练优化方法、装置及设备
CN112699811B (zh) * 2020-12-31 2023-11-03 中国联合网络通信集团有限公司 活体检测方法、装置、设备、储存介质及程序产品
CN112699811A (zh) * 2020-12-31 2021-04-23 中国联合网络通信集团有限公司 活体检测方法、装置、设备、储存介质及程序产品
WO2022156562A1 (zh) * 2021-01-19 2022-07-28 腾讯科技(深圳)有限公司 基于超声回波的对象识别方法、装置及存储介质
CN112836625A (zh) * 2021-01-29 2021-05-25 汉王科技股份有限公司 人脸活体检测方法、装置、电子设备
CN113221767A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 训练活体人脸识别模型、识别活体人脸的方法及相关装置
CN113221767B (zh) * 2021-05-18 2023-08-04 北京百度网讯科技有限公司 训练活体人脸识别模型、识别活体人脸的方法及相关装置
CN113378715A (zh) * 2021-06-10 2021-09-10 北京华捷艾米科技有限公司 一种基于彩色人脸图像的活体检测方法及相关设备
CN113378715B (zh) * 2021-06-10 2024-01-05 北京华捷艾米科技有限公司 一种基于彩色人脸图像的活体检测方法及相关设备
CN113283388A (zh) * 2021-06-24 2021-08-20 中国平安人寿保险股份有限公司 活体人脸检测模型的训练方法、装置、设备及存储介质
CN113283388B (zh) * 2021-06-24 2024-05-24 中国平安人寿保险股份有限公司 活体人脸检测模型的训练方法、装置、设备及存储介质
CN113705362B (zh) * 2021-08-03 2023-10-20 北京百度网讯科技有限公司 图像检测模型的训练方法、装置、电子设备及存储介质
CN113705362A (zh) * 2021-08-03 2021-11-26 北京百度网讯科技有限公司 图像检测模型的训练方法、装置、电子设备及存储介质
CN113887408A (zh) * 2021-09-30 2022-01-04 平安银行股份有限公司 活化人脸视频的检测方法、装置、设备及存储介质
CN113887408B (zh) * 2021-09-30 2024-04-23 平安银行股份有限公司 活化人脸视频的检测方法、装置、设备及存储介质
CN114445898B (zh) * 2022-01-29 2023-08-29 北京百度网讯科技有限公司 人脸活体检测方法、装置、设备、存储介质及程序产品
CN114445898A (zh) * 2022-01-29 2022-05-06 北京百度网讯科技有限公司 人脸活体检测方法、装置、设备、存储介质及程序产品
CN114565918A (zh) * 2022-02-24 2022-05-31 阳光暖果(北京)科技发展有限公司 一种基于多特征提取模块的人脸静默活体检测方法与系统
CN117437675A (zh) * 2023-10-23 2024-01-23 长讯通信服务有限公司 一种基于成分分解与重建的人脸静默活体检测方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN107992842A (zh) 2018-05-04
CN107992842B (zh) 2020-08-11

Similar Documents

Publication Publication Date Title
WO2019114580A1 (zh) 活体检测方法、计算机装置及计算机可读存储介质
EP3084682B1 (en) System and method for identifying faces in unconstrained media
KR20230021043A (ko) 객체 인식 방법 및 장치, 및 인식기 학습 방법 및 장치
US8774499B2 (en) Embedded optical flow features
WO2018019126A1 (zh) 视频类别识别方法和装置、数据处理装置和电子设备
US20200012923A1 (en) Computer device for training a deep neural network
CN112052831B (zh) 人脸检测的方法、装置和计算机存储介质
US20140334736A1 (en) Face recognition method and device
US20120213422A1 (en) Face recognition in digital images
US8103058B2 (en) Detecting and tracking objects in digital images
Zheng et al. Attention-based spatial-temporal multi-scale network for face anti-spoofing
WO2022257487A1 (zh) 深度估计模型的训练方法, 装置, 电子设备及存储介质
US20220147735A1 (en) Face-aware person re-identification system
CN109413510B (zh) 视频摘要生成方法和装置、电子设备、计算机存储介质
Xian et al. Evaluation of low-level features for real-world surveillance event detection
JP7297989B2 (ja) 顔生体検出方法、装置、電子機器及び記憶媒体
KR20230045098A (ko) 비디오 검출 방법, 장치, 전자 기기 및 저장 매체
WO2023138376A1 (zh) 动作识别方法、模型训练方法、装置及电子设备
CN113705361A (zh) 活体检测模型的方法、装置及电子设备
CN114677730A (zh) 活体检测方法、装置、电子设备及存储介质
CN111274946B (zh) 一种人脸识别方法和系统及设备
Wang et al. Edge computing-enabled crowd density estimation based on lightweight convolutional neural network
Zhang et al. A skin color model based on modified GLHS space for face detection
Dabas et al. Implementation of image colorization with convolutional neural network
CN114445898B (zh) 人脸活体检测方法、装置、设备、存储介质及程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18888035

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/10/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18888035

Country of ref document: EP

Kind code of ref document: A1