CN112541470A - Hypergraph-based face living body detection method and device and related equipment - Google Patents
Hypergraph-based face living body detection method and device and related equipment
- Publication number: CN112541470A
- Application number: CN202011529068.0A
- Authority: CN (China)
- Prior art keywords: matrix, feature, similarity
- Legal status: Granted
Classifications
- G06V40/161—Human faces: Detection; Localisation; Normalisation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/22—Matching criteria, e.g. proximity measures
- G06F18/253—Fusion techniques of extracted features
- G06N3/045—Neural networks: Combinations of networks
- G06N3/048—Neural networks: Activation functions
- G06N3/08—Neural networks: Learning methods
- G06V10/56—Extraction of image or video features relating to colour
- G06V40/168—Human faces: Feature extraction; Face representation
- G06V40/45—Spoof detection: Detection of the body part being alive
Abstract
The invention discloses a hypergraph-based face living body detection method, which is applied in the field of face living body detection and serves to improve detection efficiency and universality. The method comprises: extracting corresponding color features, local binary pattern features and image quality features from an acquired detected face image; encapsulating the color features, local binary pattern features and image quality features into a triple; processing the triple based on a feature fusion model to obtain a feature fusion matrix; calculating a similarity matrix between the features in the feature fusion matrix; establishing an initial Laplacian matrix according to the similarity matrix; acquiring a target Laplacian matrix according to the initial Laplacian matrix and the similarity matrix; and classifying the target Laplacian matrix based on a feature classification model to obtain a living body detection result or a non-living body detection result.
Description
Technical Field
The invention relates to the field of human face living body detection, in particular to a human face living body detection method and device based on a hypergraph, computer equipment and a storage medium.
Background
Currently, face recognition technology is widely applied to edge computing devices (such as 5G edge computing devices and AI edge computing devices) for identity authentication. However, existing face recognition systems are easily misled by videos or photos and cannot tell whether the face presented belongs to a living person. Face living body detection technology was therefore developed to overcome this defect.
However, one class of existing face living body detection technologies relies on interaction with the user: after the user's face is detected, the user must perform actions such as blinking, head shaking, mouth opening or smiling according to corresponding instructions given by the system before identity verification can proceed, so detection efficiency is low. Another class requires extra hardware when performing living body detection and recognition: an infrared camera to capture an infrared image from the intensity of infrared light reflected by the face, and a 3D acquisition device (such as a 3D camera or 3D sensor) to capture a depth image whose pixel values are the distances (depths) from the device to each point of the detected face. This places high demands on device hardware and cannot be applied to devices with different hardware configurations, resulting in insufficient universality.
In conclusion, existing face living body detection technology suffers from low detection efficiency and insufficient device universality.
Disclosure of Invention
The embodiment of the invention provides a method and a device for detecting a human face living body based on a hypergraph, computer equipment and a storage medium, and aims to solve the problems of low detection efficiency and insufficient equipment universality of the existing human face living body detection technology.
A face living body detection method based on a hypergraph comprises the following steps:
extracting corresponding color features, local binary pattern features and image quality features from the acquired detected face image, and packaging the color features, the local binary pattern features and the image quality features into a triple;
processing the triple based on a feature fusion model to obtain a feature fusion matrix;
calculating a similarity matrix between the features in the feature fusion matrix;
establishing an initial Laplacian matrix according to the similarity matrix;
acquiring a target Laplacian matrix according to the initial Laplacian matrix;
and classifying the target Laplacian matrix based on a feature classification model to obtain a classification result, wherein the classification result is a living body detection result or a non-living body detection result.
A hypergraph-based face liveness detection device, comprising:
the feature extraction module is used for extracting corresponding color features, local binary pattern features and image quality features from the acquired detected face image and packaging the color features, the local binary pattern features and the image quality features into a triple;
the feature fusion module is used for processing the triple based on a feature fusion model to obtain a feature fusion matrix;
the similarity matrix calculation module is used for calculating a similarity matrix between the features in the feature fusion matrix;
the initial Laplacian matrix establishing module is used for establishing an initial Laplacian matrix according to the similarity matrix;
the target Laplacian matrix obtaining module is used for obtaining a target Laplacian matrix according to the initial Laplacian matrix;
and the classification module is used for classifying the target Laplacian matrix based on a feature classification model to obtain a classification result, wherein the classification result is a living body detection result or a non-living body detection result.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the hypergraph-based face liveness detection method when executing the computer program.
A computer-readable storage medium, storing a computer program which, when executed by a processor, implements the steps of the above-described hypergraph-based live face detection method.
According to the hypergraph-based face living body detection method and device, computer equipment and storage medium described above, corresponding color features, local binary pattern features and image quality features are extracted from the acquired detected face image and encapsulated into a triple; the triple is processed based on a feature fusion model to obtain a feature fusion matrix; a similarity matrix between the features in the feature fusion matrix is calculated; an initial Laplacian matrix is established according to the similarity matrix; a target Laplacian matrix is acquired according to the initial Laplacian matrix and the similarity matrix; and the target Laplacian matrix is classified based on a feature classification model to obtain a living body detection result or a non-living body detection result. Face living body detection efficiency and universality are thereby improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an application environment of a method for detecting a living human face based on a hypergraph according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another application environment of a hypergraph-based face in-vivo detection method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for live human face detection based on hypergraph according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a human face liveness detection device based on a hypergraph according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a computer device according to an embodiment of the invention.
FIG. 6 is a diagram of a single class matrix and a feature fusion matrix in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The hypergraph-based face living body detection method provided by the embodiment of the invention can be applied in an application environment such as the one shown in fig. 1, in which a computer device or terminal device communicates with a server through a network. The computer device or terminal device may be, but is not limited to, various personal computers, laptops, smartphones, tablets and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers.
In an embodiment, as shown in fig. 3, a method for detecting a living human face based on a hypergraph is provided, which is described by taking the method applied to the server in fig. 1 as an example, and includes the following steps:
step S301, extracting corresponding color features, local binary pattern features and image quality features from the acquired detected face image, and packaging the color features, the local binary pattern features and the image quality features into a triple.
The detected face image may be acquired by a camera, a mobile phone, a tablet or other image acquisition device before step S301.
In step S301, an MTCNN (Multi-task Cascaded Convolutional Networks) model or an adaptive boosting model is used to extract the corresponding color features, local binary pattern features and image quality features from the acquired detected face image, and the color features, local binary pattern features and image quality features are encapsulated in a struct structure of a computer language (e.g., the C language) to form a triple.
The following points deserve particular explanation: the MTCNN (Multi-task Cascaded Convolutional Networks) model is a multi-task neural network model for face detection proposed by the Shenzhen research institute of the Chinese Academy of Sciences in 2016. It mainly adopts three cascaded networks, together with the idea of applying a classifier to candidate boxes, to perform fast and efficient face detection. The three cascaded networks are P-Net, which quickly generates candidate windows; R-Net, which filters and selects high-precision candidate windows; and O-Net, which generates the final bounding boxes and facial key points. Like many convolutional neural network models for image problems, it also uses techniques such as image pyramids, bounding box regression and non-maximum suppression.
The core of the adaptive boosting model is to train different classifiers (weak classifiers) on the same training set, and then to assemble these weak classifiers into a final, strongest classifier (a strong classifier).
A struct, also referred to directly as a "structure", is a data type for describing a data object composed of related data of different types. In actual programming, it is often necessary to describe such objects; for example, describing the comprehensive information of the detected face image requires different types of data such as its color features, local binary pattern features and image quality features. Handling these related but differently typed data as separate variables makes programming extremely inconvenient, so computer languages provide the structure (struct) type to describe data objects that require different types of data.
The color feature is a set of color values describing the proportions of different colors in the whole image; in this embodiment, the color values of the detected face image are quantized into a 256-dimensional histogram.
The local binary pattern feature is an operator describing the local texture of the image; its notable advantages include rotation invariance and gray-scale invariance.
The image quality feature is a sharpness value reflecting the definition of the image plane and the sharpness of image edges; in this embodiment, the sharpness of the image is measured with a gradient histogram.
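For illustration, the following is a minimal sketch of how the three kinds of features might be packaged into a triple, analogous to the C struct mentioned above. The field names, the dimensionalities and the placeholder extractors are assumptions for the sketch, not taken from the description above:

```python
from typing import NamedTuple
import numpy as np

class FeatureTriple(NamedTuple):
    """Analogue of the struct that bundles the three feature types."""
    color: np.ndarray    # e.g. a 256-bin color histogram
    lbp: np.ndarray      # local binary pattern descriptor
    quality: np.ndarray  # gradient-histogram sharpness descriptor

def package_features(face_img: np.ndarray) -> FeatureTriple:
    # Color feature: 256-dimensional histogram of pixel intensities.
    color, _ = np.histogram(face_img, bins=256, range=(0, 256), density=True)
    # Placeholder extractors for the LBP and image quality features; a real
    # system would use a proper LBP operator and a gradient histogram.
    lbp = np.zeros(59)
    quality = np.zeros(16)
    return FeatureTriple(color=color, lbp=lbp, quality=quality)
```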
Step S302, processing the triple based on the feature fusion model to obtain a feature fusion matrix.
In step S302, the feature fusion model includes an input layer, fully connected layers and an output layer. The input layer is used to input the color features, local binary pattern features and image quality features in the triple. The fully connected layers comprise a first fully connected layer of 512 dimensions, a second fully connected layer of 1024 dimensions and a third fully connected layer of 784 dimensions; the first and second fully connected layers use the relu activation function, and the third fully connected layer uses the sigmoid activation function. The relu and sigmoid activation functions are expressed by equations (1) and (2):

relu(x) = max(0, x)   (1)

sigmoid(x) = 1 / (1 + e^(-λx))   (2)

In the above formulas, x is an input value and λ is a constant. The input value x is further explained in connection with the feature fusion model: the input matrix of the first fully connected layer is the output matrix of the input layer; the input matrix of the second fully connected layer is the output matrix of the first fully connected layer; the input matrix of the third fully connected layer is the output matrix of the second fully connected layer; and the input matrix of the output layer is the output matrix of the third fully connected layer.
For a triple containing multiple kinds of features uploaded to the server, the output matrix of a fully connected layer in the feature fusion model is defined in terms of the following quantities: P is the number of feature classes, which in the present application can default to 3; W is the optimized mapping parameter in the feature fusion model; a_ij represents the correlation between the i-th feature and the j-th feature; k represents the total number of features correlated with the feature in question; and l denotes the l-th layer of the feature fusion model, which in this application may be the first, the second or the third fully connected layer.
For the output layer, based on the definition of the fully connected layers, the output of the output layer is defined as:

(X)^l = L(W)^(l-1) (X)^(l-1)

where X is the new feature formed by merging the 3 kinds of features, L(W)^(l-1) is a weight matrix, and (X)^(l-1) is the output matrix of the third fully connected layer.
For example, suppose an image class has N features, each of which is D-dimensional, i.e. a vector of length D; these form a single-class matrix of size D × N. In this application there are 3 such single-class matrices, namely a color feature matrix, a local binary pattern feature matrix and an image quality feature matrix, which together form a feature fusion matrix of size D × N × 3. As shown in fig. 6, the left side is a single-class matrix and the right side is the feature fusion matrix.
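To make the layer dimensions concrete, the following is a minimal NumPy sketch of one forward pass through the three fully connected layers described above (512, then 1024, then 784 dimensions, with relu, relu and sigmoid). The random weights, the input dimensionality (256 + 59 + 16, matching the assumed feature sizes in the earlier sketch) and the flattening of the triple into one vector are illustrative assumptions, not the trained parameters W:

```python
import numpy as np

def relu(x):              # equation (1)
    return np.maximum(0.0, x)

def sigmoid(x, lam=1.0):  # equation (2); lam stands in for the constant λ
    return 1.0 / (1.0 + np.exp(-lam * x))

rng = np.random.default_rng(0)
x = rng.standard_normal(331)  # flattened triple (256 + 59 + 16 dims, assumed)

# Randomly initialized weight matrices standing in for the trained parameters.
W1 = rng.standard_normal((512, x.size)) * 0.01
W2 = rng.standard_normal((1024, 512)) * 0.01
W3 = rng.standard_normal((784, 1024)) * 0.01

h1 = relu(W1 @ x)        # first fully connected layer, 512-dimensional
h2 = relu(W2 @ h1)       # second fully connected layer, 1024-dimensional
fused = sigmoid(W3 @ h2) # third fully connected layer, 784-dimensional output
print(fused.shape)       # (784,)
```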
Step S303, calculating a similarity matrix between the features in the feature fusion matrix.
Specifically, step S303 includes the following steps 1 to 6:
1. and determining the nearest neighbor features adjacent to each feature in the feature fusion matrix and the number of the nearest neighbor features.
2. And calculating the distance average value according to the distance between each feature and each nearest neighbor feature and the number of the nearest neighbor features of each feature.
3. And calculating the corresponding distance standard deviation according to the distance between each feature and each nearest neighbor feature and the distance average value.
4. And acquiring corresponding similarity parameters according to Euclidean distances between each feature and each nearest neighbor feature and the distance standard deviation.
5. And acquiring corresponding non-similarity parameters for the condition that two features of the feature fusion matrix are not in adjacent relation.
6. And forming a similarity matrix between the features in the feature fusion matrix according to the similarity parameter and the non-similarity parameter.
Specifically, in step 1, a K-Nearest Neighbor algorithm may be used to determine the nearest neighbor features adjacent to each feature in the feature fusion matrix and their number. The K-Nearest Neighbor (KNN) classification algorithm is a theoretically mature method and one of the simplest machine learning algorithms. Its idea is as follows: in feature space, if most of the k nearest samples (i.e. nearest in feature space) around a sample belong to a certain class, then the sample also belongs to that class. In this embodiment, the k nearest neighbor features of each feature in the feature fusion matrix are found accordingly.
Step 2 is specifically as follows: the distance average is calculated as

d̄_n = (1/k) Σ_{i=1}^{k} d_i

In the above equation, d̄_n is the distance average of the n-th feature in the feature fusion matrix, where n = 1, 2, 3, ..., m and m is a positive integer; d_i is the distance between the n-th feature and its i-th nearest neighbor feature, where i = 1, 2, 3, ..., k and k is a positive integer; here k is also the nearest neighbor number of each feature.
Step 3 is specifically as follows: the distance standard deviation is calculated as

σ_n = sqrt( (1/k) Σ_{i=1}^{k} (d_i - d̄_n)² )

In the above equation, σ_n is the distance standard deviation of the n-th feature in the feature fusion matrix; d_i is the distance between the n-th feature and its i-th nearest neighbor feature, where i = 1, 2, 3, ..., k; and d̄_n is the distance average of the n-th feature in the feature fusion matrix, where n = 1, 2, 3, ..., m.
Step 4 is specifically as follows: the similarity parameter is calculated as

s_{i,n} = exp( -‖x_i - x_n‖₂² / (2σ_n²) )

In the above equation, s_{i,n} is the similarity parameter between the n-th feature and its i-th nearest neighbor; ‖x_i - x_n‖₂ is the Euclidean distance between the n-th feature and the i-th nearest neighbor, where i = 1, 2, 3, ..., k, k is a positive integer equal to the nearest neighbor number of each feature, n = 1, 2, 3, ..., m, and m is a positive integer; and σ_n is the distance standard deviation of the n-th feature in the feature fusion matrix.
For step 5, in the case in which two features of the feature fusion matrix are not in an adjacent relation, the corresponding non-similarity parameter may be taken to be zero.
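The following is a minimal sketch of steps 1 to 6, assuming the Gaussian form of the similarity parameter given above and a feature fusion matrix flattened to one row vector per feature; the zero entries for non-adjacent pairs realize the non-similarity parameter of step 5:

```python
import numpy as np

def similarity_matrix(feats: np.ndarray, k: int = 5) -> np.ndarray:
    """feats: (m, D) array, one row per feature; returns an (m, m) similarity matrix."""
    m = feats.shape[0]
    # Pairwise Euclidean distances between all features.
    diff = feats[:, None, :] - feats[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    S = np.zeros((m, m))  # non-adjacent pairs keep similarity 0 (step 5)
    for n in range(m):
        # Step 1: k nearest neighbors of feature n (excluding itself).
        nbrs = np.argsort(dist[n])[1:k + 1]
        d = dist[n, nbrs]
        d_mean = d.mean()                            # step 2: distance average
        sigma = np.sqrt(((d - d_mean) ** 2).mean())  # step 3: distance standard deviation
        sigma = max(sigma, 1e-12)                    # guard against zero variance
        # Step 4: Gaussian similarity to each nearest neighbor.
        S[n, nbrs] = np.exp(-(d ** 2) / (2 * sigma ** 2))
    return S
```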
Step S304, establishing an initial Laplacian matrix according to the similarity matrix.
Specifically, in step S304, similar features are identified according to the similarity parameters in the similarity matrix, and the similar features are connected by hyperedges. The initial Laplacian matrix thus comprises feature nodes and the hyperedges formed by connecting every two adjacent feature nodes, where the feature nodes correspond to the features in the feature fusion matrix.
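As a sketch of how the hypergraph structure established in step S304 might be encoded, the following builds a 0/1 incidence matrix with one hyperedge per feature node, connecting that node to its similar neighbors. The per-node hyperedge convention is an assumption consistent with the k-nearest-neighbor construction; the 0/1 entries correspond to the correlation coefficients in steps a3 to a5 below:

```python
import numpy as np

def incidence_matrix(S: np.ndarray) -> np.ndarray:
    """Build the 0/1 incidence matrix H from the similarity matrix S.

    H[v, e] = 1 if feature node v belongs to hyperedge e, where hyperedge e
    connects feature node e with every node it has nonzero similarity to.
    """
    m = S.shape[0]
    H = np.zeros((m, m))
    for e in range(m):
        H[e, e] = 1.0        # each node belongs to its own hyperedge
        H[S[e] > 0, e] = 1.0 # plus its similar (adjacent) nodes
    return H
```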
Step S305, acquiring a target Laplacian matrix according to the initial Laplacian matrix.
For step S305, the step of obtaining the target Laplacian matrix from the initial Laplacian matrix includes the following steps a to c:
a. and calculating a weight matrix of the excess edge, a feature correlation matrix between every two feature nodes and a degree matrix of the excess edge.
b. And acquiring a degree matrix of the characteristic nodes according to the weight matrix of the excess edges and the characteristic incidence matrix between every two characteristic nodes.
c. And acquiring a target Laplace matrix according to the weight matrix of the excess edge, the characteristic incidence matrix between every two characteristic nodes, the degree matrix of the excess edge and the degree matrix of the characteristic nodes.
For step a, the step of calculating the weight matrix of the hyperedges specifically includes the following steps a1 to a2:
a1. Obtain the cumulative distance sum corresponding to each feature node according to the distance between that feature node and each of its adjacent feature nodes.
a2. Form the weight matrix of the hyperedges from the cumulative distance sums corresponding to all feature nodes.
Step a1 is specifically as follows: the cumulative distance sum is calculated as

Z_n = Σ_{i=1}^{k} d_i

In the above equation, Z_n is the cumulative distance sum of the n-th feature, where n = 1, 2, 3, ..., m; and d_i is the distance between the n-th feature and its i-th nearest neighbor feature, where i = 1, 2, 3, ..., k.
For step a2 above, the weight matrix of the hyperedges is formed from the cumulative distance sums Z_n, where Z_n is the cumulative distance sum of the n-th feature, n = 1, 2, 3, ..., m.
For step a, the step of calculating the feature incidence matrix between every two feature nodes specifically includes the following steps a3 to a5:
a3. When two feature nodes are in an adjacent relation, determine the correlation coefficient between them to be the adjacent correlation coefficient.
a4. When two feature nodes are not in an adjacent relation, determine the correlation coefficient between them to be the non-adjacent correlation coefficient.
a5. Obtain the feature incidence matrix between every two feature nodes from the adjacent correlation coefficients and the non-adjacent correlation coefficients.
Step a3 is specifically as follows: when two feature nodes are in an adjacent relation, the correlation coefficient between them is determined to be the adjacent correlation coefficient, which is 1.
Step a4 is specifically as follows: when two feature nodes are not in an adjacent relation, the correlation coefficient between them is determined to be the non-adjacent correlation coefficient, which is 0.
Based on steps a3 and a4 above, the feature incidence matrix between every two feature nodes is composed of 0s and 1s.
For step a, the step of calculating the degree matrix of the hyperedges specifically includes the following steps a6 to a7:
a6. Calculate the total number of feature nodes connected by each hyperedge.
a7. Form the degree matrix of the hyperedges from the totals of feature nodes connected by all the hyperedges.
In steps a6 and a7 above, a hyperedge is an edge of the hypergraph that contains at least 2 feature nodes.
Step b may specifically include the following steps b1 to b2:
b1. When the correlation coefficient between two feature nodes is the adjacent correlation coefficient, determine the degree of the feature node to be the weight of the hyperedge corresponding to that feature node.
b2. Form the degree matrix of the feature nodes from the weights of the hyperedges corresponding to all the feature nodes.
To better understand step c above, the target Laplacian matrix can be obtained according to the following equation:

L = I - D_v^(-1/2) H W D_e^(-1) H^T D_v^(-1/2)

In this equation, I is the identity matrix; D_v is the degree matrix of the feature nodes, whose entry D_vv is the degree of the feature node in the v-th row, where v ∈ {1, 2, 3, ..., n}; H is the feature incidence matrix, whose e-th column H_e is the incidence vector of the e-th hyperedge and H_e' is the transpose of that column; W is the weight matrix of the hyperedges; and D_e is the degree matrix of the hyperedges.
Step S306, classifying the target Laplacian matrix based on the feature classification model to obtain a classification result, wherein the classification result is a living body detection result or a non-living body detection result.
In step S306, the feature classification model may be an SVM (Support Vector Machine) or softmax. It should be noted that:
A Support Vector Machine (SVM) is a binary classification model.
Softmax, also known as multinomial logistic regression, is a linear classifier used for multi-class classification.
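As an illustration of the final classification step, the following is a minimal sketch using a scikit-learn SVM on flattened target Laplacian matrices. The use of scikit-learn, the flattening of each matrix into a vector, and the toy data are illustrative assumptions; only the choice of SVM or softmax is specified above:

```python
import numpy as np
from sklearn.svm import SVC

# laplacians: one target Laplacian matrix per training image (toy placeholders);
# labels: 1 for a living body result, 0 for a non-living body result (assumed given).
laplacians = [np.eye(6) for _ in range(10)]
labels = np.array([1, 0] * 5)

X = np.stack([L.ravel() for L in laplacians])  # flatten each matrix to a vector
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))  # living body / non-living body decision for two samples
```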
In one embodiment, the hypergraph-based face living body detection method further comprises the following steps A to C:
A. When the feature fusion model needs to be updated, compare the classification result with the classification standard result of the detected face image; once the classification result is consistent with the correct classification result, obtain the update parameters of the feature fusion model.
B. Optimize the feature fusion model according to the update parameters.
C. Perform a hash calculation on the update parameters using a hash algorithm to obtain the corresponding hash value, and store the hash value in a blockchain database.
In step A above, comparing the classification result with the classification standard result of the detected face image until they are consistent and then obtaining the update parameters of the feature fusion model specifically includes:
When the feature fusion model needs to be updated, whether the face image to be detected shows a living body is confirmed manually, and the confirmed result is taken as the classification standard result.
An image confirmed to show a living body is then submitted for detection; if the detection result is inconsistent with the classification standard result, the model parameters are adjusted.
The model parameters are adjusted until the detection result is consistent with the classification standard result, which yields the update parameters of the model.
Further, based on the above steps, the updated feature fusion model is synchronized across each server of the distributed servers.
Step C is applied in the application environment shown in fig. 2. A blockchain consists of a plurality of nodes that can communicate with each other; each node can be regarded as a block store, and each block store is used for storing data. Each data node contains all of the data; block store data has a complete history and can be rapidly restored and expanded. Blockchains are divided into public chains, private chains and consortium chains. A public chain is open to any node: anyone can participate in the blockchain computation, and anyone can download and obtain the complete blockchain data. A private chain does not allow arbitrary parties to participate in the system; it is not open to the outside and is suitable for the internal data management and auditing or open testing of a specific organization. In a consortium chain, the participating nodes have completely equal authority and can exchange data trustworthily without full mutual trust; each node of a consortium chain is usually operated by a corresponding entity, and nodes can join and quit the network only after authorization. In the process of using the whole blockchain backup system, a digital signature is needed; the digital signature is designed from a hash function, the sender's public key and the sender's private key. A blockchain has a fully distributed storage characteristic: while basic data are stored in a hash-algorithm-based data structure, far larger network data storage is in fact realized.
In step C, a hash algorithm is used to hash the update parameters to obtain the corresponding hash value, and the hash value is stored in the blockchain database; the characteristics of the blockchain thus improve data security and enable the storage, protection and necessary tracing of historical models.
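A minimal sketch of step C follows, assuming the update parameters are NumPy arrays serialized to bytes before hashing with SHA-256; the serialization choice and the final storage call are illustrative assumptions, as they are not specified above:

```python
import hashlib
import numpy as np

def hash_update_parameters(params: dict) -> str:
    """Hash a dict of named parameter arrays into one SHA-256 hex digest."""
    h = hashlib.sha256()
    for name in sorted(params):  # fixed iteration order gives a stable digest
        h.update(name.encode("utf-8"))
        h.update(np.ascontiguousarray(params[name]).tobytes())
    return h.hexdigest()

update = {"W1": np.ones((4, 4)), "W2": np.zeros((2, 4))}
digest = hash_update_parameters(update)
# The digest would then be written to the blockchain database (not shown here).
print(digest)
```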
Further, when the feature fusion model is updated on the basis of distributed servers, the distributed servers select whether to train the new model according to their load, which is judged on the following bases (a sketch of this selection follows the list):
E. Whether the feature fusion model has been updated on a server within a preset time period; the longer the time without an update, the higher that server's priority. And/or
F. The number of living body detection tests performed by a server within a preset time period; the higher the number of tests, the higher that server's priority.
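The following is a sketch of the load-based selection in points E and F, assuming each server reports the time since its last model update and its recent test count; the field names and the equal weighting of the two criteria are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ServerLoad:
    name: str
    hours_since_update: float  # point E: longer since last update, higher priority
    recent_test_count: int     # point F: more liveness tests, higher priority

def pick_training_server(servers: list[ServerLoad]) -> ServerLoad:
    # Combine E and F into a single priority score; the weighting is an assumption.
    return max(servers, key=lambda s: s.hours_since_update + s.recent_test_count)

servers = [ServerLoad("edge-1", 48.0, 120), ServerLoad("edge-2", 2.0, 300)]
print(pick_training_server(servers).name)
```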
According to the hypergraph-based face living body detection method and device, computer equipment and storage medium described above, corresponding color features, local binary pattern features and image quality features are extracted from the acquired detected face image and encapsulated into a triple; the triple is processed based on a feature fusion model to obtain a feature fusion matrix; a similarity matrix between the features in the feature fusion matrix is calculated; an initial Laplacian matrix is established according to the similarity matrix; a target Laplacian matrix is acquired according to the initial Laplacian matrix and the similarity matrix; and the target Laplacian matrix is classified based on a feature classification model to obtain a living body detection result or a non-living body detection result. Face living body detection efficiency and universality are thereby improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, a hypergraph-based face living body detection device is provided, which corresponds one-to-one to the hypergraph-based face living body detection method of the above embodiment. As shown in fig. 4, the hypergraph-based face living body detection device includes a feature extraction module 40, a feature fusion module 41, a similarity matrix calculation module 42, an initial Laplacian matrix establishing module 43, a target Laplacian matrix obtaining module 44 and a classification module 45. The functional modules are explained in detail as follows:
the feature extraction module 40 is configured to extract corresponding color features, local binary pattern features, and image quality features from the acquired detected face image, and encapsulate the color features, the local binary pattern features, and the image quality features into a triplet;
a feature fusion module 41, configured to process the triplet to obtain a feature fusion matrix based on a feature fusion model;
a similarity matrix calculation module 42, configured to calculate a similarity matrix between the features in the feature fusion matrix;
an initial Laplacian matrix establishing module 43, configured to establish an initial Laplacian matrix according to the similarity matrix;
a target Laplacian matrix obtaining module 44, configured to obtain a target Laplacian matrix according to the initial Laplacian matrix;
and a classification module 45, configured to classify the target Laplacian matrix based on a feature classification model to obtain a classification result, where the classification result is a living body detection result or a non-living body detection result.
Further, the similarity matrix calculation module 42 includes a nearest neighbor feature determining unit, a distance average calculation unit, a distance standard deviation calculation unit, a similarity parameter obtaining unit, a non-similarity parameter obtaining unit and a similarity matrix forming unit. Each functional unit is described in detail as follows:
a nearest neighbor feature determining unit, configured to determine nearest neighbor features adjacent to each feature in the feature fusion matrix and the number of the nearest neighbor features.
And the distance average value calculating unit is used for calculating the distance average value according to the distance between each feature and each nearest neighbor feature and the number of the nearest neighbor features of each feature.
A distance standard deviation calculation unit, configured to calculate a corresponding distance standard deviation according to the distance between each feature and each nearest neighbor feature and the distance average;
a similarity parameter obtaining unit, configured to obtain a corresponding similarity parameter according to the euclidean distance between each feature and each nearest neighbor feature and the distance standard deviation;
a non-similarity parameter obtaining unit, configured to obtain a corresponding non-similarity parameter for a case that two features of the feature fusion matrix are not in an adjacent relationship;
and the similarity matrix forming unit is used for forming a similarity matrix between the features in the feature fusion matrix according to the similarity parameter and the non-similarity parameter.
Further, the target Laplacian matrix obtaining module 44 includes a matrix calculation unit, a degree matrix obtaining unit of the feature nodes and a target Laplacian matrix obtaining unit. Each functional unit is described in detail as follows:
The matrix calculation unit is used for calculating the weight matrix of the hyperedges, the feature incidence matrix between every two feature nodes and the degree matrix of the hyperedges.
The degree matrix obtaining unit of the feature nodes is used for obtaining the degree matrix of the feature nodes according to the weight matrix of the hyperedges and the feature incidence matrix between every two feature nodes.
The target Laplacian matrix obtaining unit is used for obtaining the target Laplacian matrix according to the weight matrix of the hyperedges, the feature incidence matrix between every two feature nodes, the degree matrix of the hyperedges and the degree matrix of the feature nodes.
Furthermore, the matrix calculation unit comprises a weight matrix calculation unit, a feature incidence matrix calculation unit and a hyperedge degree matrix calculation unit.
The weight matrix calculation unit specifically includes a cumulative distance sum obtaining unit and a hyperedge weight matrix obtaining unit. Each functional unit is described in detail as follows:
The cumulative distance sum obtaining unit is used for obtaining the cumulative distance sum corresponding to each feature node according to the distance between that feature node and each of its adjacent feature nodes.
The hyperedge weight matrix obtaining unit is used for forming the weight matrix of the hyperedges from the cumulative distance sums corresponding to all the feature nodes.
The feature incidence matrix calculation unit specifically includes an adjacent correlation coefficient determining unit, a non-adjacent correlation coefficient determining unit and a feature incidence matrix obtaining unit. Each functional unit is described in detail as follows:
The adjacent correlation coefficient determining unit is used for determining the correlation coefficient between two feature nodes to be the adjacent correlation coefficient when the two feature nodes are in an adjacent relation.
The non-adjacent correlation coefficient determining unit is used for determining the correlation coefficient between two feature nodes to be the non-adjacent correlation coefficient when the two feature nodes are not in an adjacent relation.
The feature incidence matrix obtaining unit is used for obtaining the feature incidence matrix between every two feature nodes according to the adjacent correlation coefficients and the non-adjacent correlation coefficients.
The above-mentioned hyperedge degree matrix calculation unit specifically includes a total number calculation unit and a hyperedge degree matrix forming unit. Each functional unit is described in detail as follows:
The total number calculation unit is used for calculating the total number of feature nodes connected by each hyperedge.
The hyperedge degree matrix forming unit is used for forming the degree matrix of the hyperedges from the totals of feature nodes connected by all the hyperedges.
Further, the degree matrix obtaining unit of the feature nodes comprises a weight determining unit and a degree matrix forming unit of the feature nodes. Each functional unit is described in detail as follows:
The weight determining unit is used for determining, when the correlation coefficient between two feature nodes is the adjacent correlation coefficient, the degree of the feature node to be the weight of the hyperedge corresponding to that feature node.
The degree matrix forming unit of the feature nodes is used for forming the degree matrix of the feature nodes from the weights of the hyperedges corresponding to all the feature nodes.
In an embodiment, the hypergraph-based face living body detection device further comprises a comparison module, an optimization processing module and a storage module. Each functional module is described in detail as follows:
The comparison module is used for comparing the classification result with the classification standard result of the detected face image when the feature fusion model needs to be updated, until the classification result is consistent with the correct classification result, and then obtaining the update parameters of the feature fusion model.
The optimization processing module is used for optimizing the feature fusion model according to the update parameters.
The storage module is used for performing a hash calculation on the update parameters using a hash algorithm to obtain the corresponding hash value and storing the hash value in a blockchain database.
Wherein the meaning of "first" and "second" in the above modules/units is only to distinguish different modules/units, and is not used to define which module/unit has higher priority or other defining meaning. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not explicitly listed or inherent to such process, method, article, or apparatus, and such that a division of modules presented in this application is merely a logical division and may be implemented in a practical application in a further manner.
For specific limitations of the apparatus for detecting a living human face based on a hypergraph, reference may be made to the above limitations of the method for detecting a living human face based on a hypergraph, which are not described herein again. All or part of the modules in the hypergraph-based living human face detection device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data involved in the hypergraph-based face in-vivo detection method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a hypergraph-based in-vivo human face detection method.
In one embodiment, a computer device is provided, which includes a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the steps of the hypergraph-based face liveness detection method in the above-mentioned embodiments, such as the steps 301 to 306 shown in fig. 3 and other extensions of the method and extensions of related steps. Alternatively, the processor, when executing the computer program, implements the functions of the modules/units of the hypergraph-based living human face detection apparatus in the above-described embodiment, such as the functions of the modules 40 to 45 shown in fig. 4. To avoid repetition, further description is omitted here.
The processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the computer device and connects the various parts of the overall computer device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor may implement various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, video data, etc.) created according to the use of the cellular phone, etc.
The memory may be integrated in the processor or may be provided separately from the processor.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the hypergraph-based face liveness detection method in the above-described embodiments, such as the steps 301 to 306 shown in fig. 3 and extensions of other extensions and related steps of the method. Alternatively, the computer program, when executed by the processor, implements the functions of the modules/units of the hypergraph-based living human face detection apparatus in the above-described embodiments, such as the functions of the modules 40 to 45 shown in fig. 4. To avoid repetition, further description is omitted here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (11)
1. A face living body detection method based on a hypergraph is characterized by comprising the following steps:
extracting corresponding color features, local binary pattern features and image quality features from the acquired detected face image, and packaging the color features, the local binary pattern features and the image quality features into a triple;
processing the triple based on a feature fusion model to obtain a feature fusion matrix;
calculating a similarity matrix between the features in the feature fusion matrix;
establishing an initial Laplacian matrix according to the similarity matrix;
acquiring a target Laplacian matrix according to the initial Laplacian matrix;
and classifying the target Laplacian matrix based on a feature classification model to obtain a classification result, wherein the classification result is a living body detection result or a non-living body detection result.
2. The method of claim 1, wherein the step of computing a similarity matrix between features in the feature fusion matrix comprises:
determining nearest neighbor features adjacent to each feature in the feature fusion matrix and the number of the nearest neighbor features;
calculating a distance average value according to the distance between each feature and each nearest neighbor feature and the number of the nearest neighbor features of each feature;
calculating a corresponding distance standard deviation according to the distance between each feature and each nearest neighbor feature and the distance average value;
acquiring corresponding similarity parameters according to Euclidean distances between each feature and each nearest neighbor feature and the distance standard deviation;
acquiring corresponding non-similarity parameters under the condition that two features of the feature fusion matrix are not in an adjacent relation;
and forming a similarity matrix between the features in the feature fusion matrix according to the similarity parameter and the non-similarity parameter.
3. The method of claim 1, wherein the initial Laplacian matrix comprises feature nodes and hyperedges, each hyperedge being formed by a connection between two adjacent feature nodes, the feature nodes corresponding to the features in the feature fusion matrix;
the step of acquiring a target Laplacian matrix according to the initial Laplacian matrix comprises:
calculating a weight matrix of the hyperedges, a feature incidence matrix between every two feature nodes, and a degree matrix of the hyperedges;
acquiring a degree matrix of the feature nodes according to the weight matrix of the hyperedges and the feature incidence matrix between every two feature nodes;
and acquiring the target Laplacian matrix according to the weight matrix of the hyperedges, the feature incidence matrix between every two feature nodes, the degree matrix of the hyperedges and the degree matrix of the feature nodes.
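The four matrices recited in claim 3 (hyperedge weights W, incidence matrix H, hyperedge degrees De, node degrees Dv) are exactly the inputs of the normalized hypergraph Laplacian of Zhou et al. (NIPS 2006). The sketch below assembles that standard form; whether the patent uses this precise normalization is an assumption.

```python
import numpy as np

def target_laplacian(H: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Assemble L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} from the
    incidence matrix H (nodes x hyperedges) and hyperedge weight vector w.
    The normalization is the common hypergraph Laplacian, assumed here."""
    W = np.diag(w)                          # hyperedge weight matrix
    De_inv = np.diag(1.0 / H.sum(axis=0))   # inverse hyperedge degree matrix
    dv = H @ w                              # node degrees: sum of incident hyperedge weights
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    n = H.shape[0]
    return np.eye(n) - Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
```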
4. The method of claim 3, wherein the step of calculating the weight matrix of the hyperedges comprises:
acquiring an accumulated distance sum for each feature node according to the distances between the feature node and each adjacent feature node;
and forming the weight matrix of the hyperedges according to the accumulated distance sum corresponding to each feature node.
5. The method of claim 3, wherein the step of calculating the feature incidence matrix between every two feature nodes comprises:
when two feature nodes are in an adjacent relation, determining the incidence coefficient between the two feature nodes as an adjacent incidence coefficient;
when two feature nodes are not in an adjacent relation, determining the incidence coefficient between the two feature nodes as a non-adjacent incidence coefficient;
and acquiring the feature incidence matrix between every two feature nodes according to the adjacent incidence coefficients and the non-adjacent incidence coefficients.
6. The method of claim 3, wherein the step of calculating the degree matrix of the hyperedges comprises:
calculating, for each hyperedge, the total number of feature nodes it connects;
and forming the degree matrix of the hyperedges according to the total number of feature nodes connected by each hyperedge.
7. The method according to claim 3, wherein the step of acquiring the degree matrix of the feature nodes according to the weight matrix of the hyperedges and the feature incidence matrix between every two feature nodes comprises:
when the incidence coefficient between two feature nodes is an adjacent incidence coefficient, determining the degree of the feature node according to the weight of the hyperedge corresponding to the feature node;
and forming the degree matrix of the feature nodes according to the weights of the hyperedges corresponding to all the feature nodes.
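Claims 4 through 7 each define one of the four matrices consumed above. A combined sketch follows, under the assumption (common in hypergraph learning, but not stated in the claims) that each feature node spawns one hyperedge connecting it to its k nearest neighbors, with the adjacent and non-adjacent incidence coefficients taken as 1 and 0:

```python
import numpy as np

def hypergraph_matrices(F: np.ndarray, k: int = 5):
    """Build W (claim 4), H (claim 5), De (claim 6) and Dv (claim 7),
    assuming one k-nearest-neighbor hyperedge per feature node."""
    n = F.shape[0]
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    H = np.zeros((n, n))                  # incidence matrix: nodes x hyperedges
    w = np.zeros(n)                       # one weight per hyperedge
    for i in range(n):
        nbrs = np.argsort(D[i])[1 : k + 1]
        H[i, i] = 1.0                     # adjacent incidence coefficient (claim 5)
        H[nbrs, i] = 1.0                  # non-adjacent entries stay 0
        w[i] = D[i, nbrs].sum()           # accumulated distance sum (claim 4, taken literally)
    De = np.diag(H.sum(axis=0))           # nodes connected by each hyperedge (claim 6)
    Dv = np.diag(H @ w)                   # node degrees from incident hyperedge weights (claim 7)
    return np.diag(w), H, De, Dv
```

Note that claim 4, read literally, makes a hyperedge's weight grow with the accumulated distance; many hypergraph constructions instead decay weights with distance, so this literal reading is flagged in the comment above.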
8. The method of claim 1, further comprising:
when the feature fusion model needs to be updated, comparing the classification result with the standard classification result of the face image under detection until the classification result is consistent with the correct classification result, and then obtaining update parameters of the feature fusion model;
optimizing the feature fusion model according to the update parameters;
and performing a hash calculation on the update parameters using a hash algorithm to obtain corresponding hash values, and storing the hash values in a blockchain database.
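For the hash step of claim 8, any cryptographic hash satisfies the claim language. The sketch below assumes SHA-256 over a canonical JSON serialization of the update parameters (both assumed choices), with the write to a blockchain database left to the surrounding system:

```python
import hashlib
import json

def hash_update_parameters(update_params: dict) -> str:
    """Hash the feature-fusion-model update parameters (claim 8).
    SHA-256 and JSON canonicalization are assumptions; the claim only
    requires 'a hash algorithm'."""
    canonical = json.dumps(update_params, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Example (hypothetical parameters):
# hash_update_parameters({"lr": 0.01, "fusion_weights": [0.5, 0.3, 0.2]})
```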
9. A hypergraph-based face living body detection device, characterized by comprising:
a feature extraction module, configured to extract corresponding color features, local binary pattern features and image quality features from an acquired face image under detection, and package the color features, the local binary pattern features and the image quality features into a triple;
a feature fusion module, configured to process the triple based on a feature fusion model to obtain a feature fusion matrix;
a similarity matrix calculation module, configured to calculate a similarity matrix between the features in the feature fusion matrix;
an initial Laplacian matrix establishing module, configured to establish an initial Laplacian matrix according to the similarity matrix;
a target Laplacian matrix acquiring module, configured to acquire a target Laplacian matrix according to the initial Laplacian matrix and the similarity matrix;
and a classification module, configured to classify the target Laplacian matrix based on a feature classification model to obtain a classification result, wherein the classification result is a living body detection result or a non-living body detection result.
10. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the hypergraph-based face living body detection method according to any one of claims 1 to 8.
11. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the hypergraph-based face living body detection method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011529068.0A CN112541470B (en) | 2020-12-22 | 2020-12-22 | Hypergraph-based human face living body detection method and device and related equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011529068.0A CN112541470B (en) | 2020-12-22 | 2020-12-22 | Hypergraph-based human face living body detection method and device and related equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112541470A (en) | 2021-03-23 |
CN112541470B (en) | 2024-08-13 |
Family
ID=75017102
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011529068.0A Active CN112541470B (en) | 2020-12-22 | 2020-12-22 | Hypergraph-based human face living body detection method and device and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112541470B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011013606A1 (en) * | 2009-07-31 | 2011-02-03 | FUJIFILM Corporation | Image processing device and method, data processing device and method, program, and recording medium |
US20110243428A1 (en) * | 2010-04-01 | 2011-10-06 | Mithun Das Gupta | Bi-Affinity Filter: A Bilateral Type Filter for Color Images |
WO2019114580A1 (en) * | 2017-12-13 | 2019-06-20 | Shenzhen Lifei Technology Co., Ltd. | Living body detection method, computer apparatus and computer-readable storage medium |
CN111488479A (en) * | 2019-01-25 | 2020-08-04 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Hypergraph construction method, hypergraph construction device, computer system and medium |
CN111104917A (en) * | 2019-12-24 | 2020-05-05 | Hangzhou Modian Technology Co., Ltd. | Face-based living body detection method and device, electronic equipment and medium |
CN111079701A (en) * | 2019-12-30 | 2020-04-28 | Henan Zhongyuan Big Data Research Institute Co., Ltd. | Face anti-counterfeiting method based on image quality |
CN111667453A (en) * | 2020-04-21 | 2020-09-15 | Zhejiang University of Technology | Gastrointestinal endoscope image anomaly detection method based on local feature and class mark embedded constraint dictionary learning |
Non-Patent Citations (3)
Title |
---|
ZHANG Jianxun; LI Tao; SUN Quan; XIE Tingting: "Texture feature extraction and classification of B-mode ultrasound images of pig eye muscle", Journal of Chongqing University of Technology (Natural Science), no. 02, 15 February 2013 (2013-02-15) *
ZHANG Xin; HUANG Zhenghai; LI Zhiming: "Face recognition method based on discriminative hypergraph and non-negative matrix factorization", Operations Research Transactions, no. 03, 15 September 2015 (2015-09-15) *
HU Zhengping; DU Licui; ZHAO Shuhuan: "Spherical covering classification algorithm in manifold dimensionality-reduction space based on local and global mapping functions", Pattern Recognition and Artificial Intelligence, no. 04, 15 April 2015 (2015-04-15) *
Also Published As
Publication number | Publication date |
---|---|
CN112541470B (en) | 2024-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021077984A1 (en) | Object recognition method and apparatus, electronic device, and readable storage medium | |
US11138413B2 (en) | Fast, embedded, hybrid video face recognition system | |
US11087447B2 (en) | Systems and methods for quality assurance of image recognition model | |
CN110399799B (en) | Image recognition and neural network model training method, device and system | |
Sabour et al. | Matrix capsules with EM routing | |
CN109271870B (en) | Pedestrian re-identification method, device, computer equipment and storage medium | |
CN111275685B (en) | Method, device, equipment and medium for identifying flip image of identity document | |
CN111860147B (en) | Pedestrian re-identification model optimization processing method and device and computer equipment | |
CN111191568B (en) | Method, device, equipment and medium for identifying flip image | |
CN108399052B (en) | Picture compression method and device, computer equipment and storage medium | |
CN111797983A (en) | Neural network construction method and device | |
US11941087B2 (en) | Unbalanced sample data preprocessing method and device, and computer device | |
CN112084917A (en) | Living body detection method and device | |
CN110807437B (en) | Video granularity characteristic determination method and device and computer-readable storage medium | |
KR20230169104A (en) | Personalized biometric anti-spoofing protection using machine learning and enrollment data | |
CN111968134B (en) | Target segmentation method, device, computer readable storage medium and computer equipment | |
CN111382666B (en) | Apparatus and method with user authentication | |
CN114549913A (en) | Semantic segmentation method and device, computer equipment and storage medium | |
CN111860582B (en) | Image classification model construction method and device, computer equipment and storage medium | |
CN111145106A (en) | Image enhancement method, device, medium and equipment | |
CN112733901A (en) | Structured action classification method and device based on federal learning and block chain | |
Amosov et al. | Human localization in the video stream using the algorithm based on growing neural gas and fuzzy inference | |
CN116453232A (en) | Face living body detection method, training method and device of face living body detection model | |
CN111079587B (en) | Face recognition method and device, computer equipment and readable storage medium | |
CN113674152B (en) | Image processing method, apparatus, electronic device, and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |