CN112183480A - Face recognition method, face recognition device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN112183480A
Authority
CN
China
Prior art keywords
face
feature matrix
matrix
face image
authorized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011180098.5A
Other languages
Chinese (zh)
Inventor
高通
陈碧辉
郑新莹
黄源浩
肖振中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN202011180098.5A
Publication of CN112183480A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application is applicable to the technical field of digital image processing, and provides a face recognition method, a face recognition device, terminal equipment and a storage medium. According to the embodiment of the application, a first face image is processed according to a first face recognition algorithm to generate a first meta-feature matrix; a second face image is processed according to a second face recognition algorithm to generate a second meta-feature matrix; an authorized face image is processed according to the first face recognition algorithm to generate an authorized face feature matrix, and the first meta-feature matrix and the authorized face feature matrix are mapped to obtain a first feature matrix; a face image to be detected is processed according to the second face recognition algorithm to generate a face feature matrix to be detected, and the second meta-feature matrix and the face feature matrix to be detected are mapped to obtain a second feature matrix; the similarity is then calculated from the first feature matrix and the second feature matrix, and when the similarity is greater than a preset threshold value, the face is successfully recognized, thereby improving the efficiency of face recognition across different modalities.

Description

Face recognition method, face recognition device, terminal equipment and storage medium
Technical Field
The present application belongs to the technical field of digital image processing, and in particular, to a face recognition method, apparatus, terminal device, and storage medium.
Background
With the development of society, face recognition has become increasingly common in daily life. In existing face recognition technology, the face feature data obtained by different face recognition algorithms is generally not comparable; that is, the feature descriptions of face images output by different algorithms are essentially unrelated, and feature similarity cannot be compared directly. For example, a color face image and an infrared face image of a person cannot be directly compared to determine whether the color face and the infrared face belong to the same person. If an existing cross-modal face recognition model is used for face recognition across different modalities, a large amount of cross-modal data usually needs to be obtained through hybrid training, so the time period of the whole process is long and the efficiency of face recognition across different modalities is low.
Disclosure of Invention
The embodiment of the application provides a face recognition method, a face recognition device, terminal equipment and a storage medium, which can solve the problem of low face recognition efficiency across different modalities.
In a first aspect, an embodiment of the present application provides a face recognition method, including:
acquiring a first face image of a preset target group, and processing the first face image according to a preset first face recognition algorithm to generate a first meta-feature matrix;
acquiring a second face image of the target group, and processing the second face image according to a preset second face recognition algorithm to generate a second meta-feature matrix;
obtaining an authorized face image, processing the authorized face image according to the first face recognition algorithm to generate an authorized face feature matrix, and mapping the first meta-feature matrix and the authorized face feature matrix to obtain a first feature matrix;
acquiring a face image to be detected, processing the face image to be detected according to the second face recognition algorithm to generate a face feature matrix to be detected, and mapping the second meta-feature matrix and the face feature matrix to be detected to obtain a second feature matrix;
and calculating the similarity between the face image to be detected and the authorized face image according to the first feature matrix and the second feature matrix, and when the similarity is greater than a preset threshold value, successfully identifying the face.
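The five steps above can be sketched end to end as follows. This is a minimal numpy illustration, not the patent's actual implementation: the two modality-specific recognition algorithms are replaced by hypothetical random projections onto unit-norm vectors, and the target group is reduced from 384 to 10 people for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two recognition algorithms (the
# random-projection form and the 128/256 output dimensions are assumptions):
# each maps a face image to a unit-norm feature vector.
P1 = rng.standard_normal((128, 32 * 32))   # "first" (e.g. color) algorithm
P2 = rng.standard_normal((256, 32 * 32))   # "second" (e.g. infrared) algorithm

def algo1(image):
    v = P1 @ image.ravel()
    return v / np.linalg.norm(v)

def algo2(image):
    v = P2 @ image.ravel()
    return v / np.linalg.norm(v)

# Steps 1-2: meta-feature matrices built from the SAME target group.
group = [rng.standard_normal((32, 32)) for _ in range(10)]
M1 = np.stack([algo1(img) for img in group], axis=1)   # 128 x 10
M2 = np.stack([algo2(img) for img in group], axis=1)   # 256 x 10

# Step 3: authorized face -> 128 x 1 feature, mapped via M1.
f_auth = algo1(group[0]).reshape(-1, 1)
F1 = f_auth.T @ M1                                     # 1 x 10 first feature matrix

# Step 4: probe face in the other modality -> 256 x 1 feature, mapped via M2.
f_probe = algo2(group[0]).reshape(-1, 1)
F2 = f_probe.T @ M2                                    # 1 x 10 second feature matrix

# Step 5: similarity between the two mapped feature matrices.
similarity = float(F1 @ F2.T)
```

Because both mapped matrices describe a face relative to the same reference group, they live in the same space and can be compared even though the raw 128-d and 256-d features cannot.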
Optionally, the mapping the first meta-feature matrix and the authorized face feature matrix to obtain a first feature matrix includes:
performing a transposition operation on the authorized face feature matrix to generate a first transpose matrix;
and multiplying the first transpose matrix by the first meta-feature matrix to obtain the first feature matrix.
Optionally, the mapping the second meta-feature matrix and the face feature matrix to be detected to obtain a second feature matrix includes:
performing a transposition operation on the face feature matrix to be detected to generate a second transpose matrix;
and multiplying the second transpose matrix by the second meta-feature matrix to obtain the second feature matrix.
Optionally, the calculating the similarity between the face image to be detected and the authorized face image according to the first feature matrix and the second feature matrix includes:
performing a transposition operation on the second feature matrix to generate a third transpose matrix;
and calculating the product of the third transpose matrix and the first feature matrix to obtain the similarity.
Optionally, the processing the authorized face image according to the first face recognition algorithm further includes:
and preprocessing the authorized face image in a preset mode, and processing the preprocessed authorized face image according to the first face recognition algorithm.
Optionally, the preprocessing the authorized face image in a preset manner includes:
acquiring a preset number of face feature coordinates, and performing face alignment on the authorized face image according to the face feature coordinates;
performing image scale transformation of a preset specification on the authorized face image subjected to face alignment;
and carrying out image pixel normalization processing on the authorized face image subjected to image scale transformation.
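The three preprocessing steps can be sketched as follows. This is a hedged illustration only: real systems usually align with a similarity or affine transform estimated from five landmarks, whereas here alignment is approximated by cropping the landmark bounding box, and the 112 × 112 output size and [0, 1] normalization range are assumed values.

```python
import numpy as np

def preprocess(face, landmarks, out_size=(112, 112)):
    """Sketch of: (1) face alignment from feature coordinates,
    (2) image scale transformation to a preset specification,
    (3) image pixel normalization."""
    # 1. "Alignment" stub: crop the region spanned by the feature coordinates.
    xs, ys = landmarks[:, 0], landmarks[:, 1]
    x0, x1 = int(xs.min()), int(xs.max()) + 1
    y0, y1 = int(ys.min()), int(ys.max()) + 1
    crop = face[y0:y1, x0:x1]

    # 2. Scale to the preset specification (nearest-neighbor resampling).
    h, w = crop.shape
    row_idx = np.arange(out_size[0]) * h // out_size[0]
    col_idx = np.arange(out_size[1]) * w // out_size[1]
    resized = crop[row_idx][:, col_idx].astype(np.float64)

    # 3. Pixel normalization to [0, 1].
    span = resized.max() - resized.min()
    return (resized - resized.min()) / span if span else resized

# Hypothetical 200 x 200 face image with five landmark coordinates.
face = np.arange(200 * 200, dtype=np.float64).reshape(200, 200)
pts = np.array([[40, 50], [160, 50], [100, 120], [60, 150], [140, 150]])
out = preprocess(face, pts)
```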
In a second aspect, an embodiment of the present application provides a face recognition apparatus, including:
the first meta-matrix generation module is used for acquiring a first face image of a preset target group, processing the first face image according to a preset first face recognition algorithm, and generating a first meta-feature matrix;
the second meta-matrix generation module is used for acquiring a second face image of the target group, processing the second face image according to a preset second face recognition algorithm, and generating a second meta-feature matrix;
the first matrix generation module is used for acquiring an authorized face image, processing the authorized face image according to the first face recognition algorithm to generate an authorized face feature matrix, and mapping the first meta-feature matrix and the authorized face feature matrix to obtain a first feature matrix;
the second matrix generation module is used for acquiring a face image to be detected, processing the face image to be detected according to the second face recognition algorithm to generate a face feature matrix to be detected, and mapping the second meta-feature matrix and the face feature matrix to be detected to obtain a second feature matrix;
and the similarity calculation module is used for calculating the similarity between the face image to be detected and the authorized face image according to the first feature matrix and the second feature matrix; when the similarity is greater than a preset threshold value, face recognition succeeds.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements any of the steps of the above-mentioned face recognition method when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps of any one of the above-mentioned face recognition methods.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute any one of the face recognition methods in the first aspect.
According to the embodiment of the application, a first face image of a preset target group is obtained, the first face image is processed according to a preset first face recognition algorithm, and a first meta-feature matrix is generated; a second face image of the target group is acquired, and the second face image is processed according to a preset second face recognition algorithm to generate a second meta-feature matrix; an authorized face image is obtained, the authorized face image is processed according to the first face recognition algorithm to generate an authorized face feature matrix, and the first meta-feature matrix and the authorized face feature matrix are mapped to obtain a first feature matrix; a face image to be detected is acquired, the face image to be detected is processed according to the second face recognition algorithm to generate a face feature matrix to be detected, and the second meta-feature matrix and the face feature matrix to be detected are mapped to obtain a second feature matrix; and the similarity between the face image to be detected and the authorized face image is calculated according to the first feature matrix and the second feature matrix, and when the similarity is greater than a preset threshold value, the face is successfully identified.
According to the method and the device, the first face image and the second face image of the preset target group, corresponding respectively to the first face recognition algorithm and the second face recognition algorithm, are used to obtain the meta-feature matrices produced by different face recognition algorithms for the same target group. The first face recognition algorithm and the second face recognition algorithm then process the authorized face image and the face image to be detected, which are of different modalities, to obtain the corresponding feature matrices. By mapping each feature matrix with the meta-feature matrix obtained under the same face recognition algorithm, the authorized face image and the face image to be detected of different modalities are mapped into the same space for comparison, and the similarity is obtained, thereby improving the efficiency of face recognition across different modalities.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a first flow of a face recognition method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a second flow of a face recognition method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a third method for face recognition according to an embodiment of the present application;
fig. 4 is a fourth flowchart illustrating a face recognition method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Fig. 1 is a schematic flow chart of a face recognition method in an embodiment of the present application, where an execution subject of the method may be a terminal device, and as shown in fig. 1, the face recognition method may include the following steps:
step S101, a first face image of a preset target group is obtained, the first face image is processed according to a preset first face recognition algorithm, and a first meta-feature matrix is generated.
In this embodiment, in order to enable face images of different modalities to be identified and compared, the terminal device may introduce a third-party parameter to complete the mapping of the face feature matrices. The terminal device may randomly obtain face images of a group of users, i.e., the first face images of the preset target group; the obtained images must be suitable for the current face recognition algorithm. The first face images of the target group are processed according to the first face recognition algorithm to obtain multi-dimensional face feature vectors of the target group corresponding to the first face recognition algorithm, and these multi-dimensional face feature vectors form the first meta-feature matrix. The target group is a preset number of users randomly selected by the terminal device. The first face recognition algorithm is any algorithm capable of performing face recognition, including but not limited to a face recognition algorithm based on a color image, a face recognition algorithm based on an infrared image, a face recognition algorithm based on a depth image, and the like; correspondingly, the first face image is a face image of the target group suitable for the first face recognition algorithm, including but not limited to a color face image, an infrared face image, a depth face image, and the like. The first meta-feature matrix represents the face features of the target group obtained by the first face recognition algorithm.
By way of specific example and not limitation, the face images of the target group, serving as a standard for describing faces, should contain people of different genders, ages and skin colors as far as possible, so as to play the role of a standard. If the current target group is 384 celebrities, color face images of the 384 celebrities can be randomly acquired and processed with a face recognition algorithm based on color images, i.e., the first face recognition algorithm, to generate 384 face feature vectors of 128 dimensions each, i.e., a 128 × 384 first meta-feature matrix, representing the face features of the 384 celebrities.
Step S102, a second face image of the target group is obtained, the second face image is processed according to a preset second face recognition algorithm, and a second meta-feature matrix is generated.
In this embodiment, because there is a face image of a user to be identified for which another face recognition algorithm needs to be used for face feature extraction, the terminal device obtains the second face images of the same target group on which the first meta-feature matrix was built, and processes them according to the second face recognition algorithm to obtain multi-dimensional face feature vectors of the target group corresponding to the second face recognition algorithm; these multi-dimensional face feature vectors form the second meta-feature matrix. The second face recognition algorithm is any algorithm capable of performing face recognition, including but not limited to a face recognition algorithm based on a color image, an infrared image, or a depth image; correspondingly, the second face image is a face image of the target group suitable for the second face recognition algorithm, including but not limited to a color face image, an infrared face image, a depth face image, and the like. The second meta-feature matrix represents the face features of the target group obtained by the second face recognition algorithm.
As a specific example and not by way of limitation, because the current target group is the same 384 celebrities, infrared face images of the 384 celebrities may be randomly acquired and processed with a face recognition algorithm based on infrared images, i.e., the second face recognition algorithm, to generate 384 face feature vectors of 256 dimensions each, i.e., a 256 × 384 second meta-feature matrix, representing the face features of the 384 celebrities.
Step S103, obtaining an authorized face image, processing the authorized face image according to the first face recognition algorithm to generate an authorized face feature matrix, and mapping the first meta-feature matrix and the authorized face feature matrix to obtain a first feature matrix.
In this embodiment, authorized face images need to be entered into the database in advance, so that face recognition of the face image of a user to be identified can be performed against them, and so that whether to open or execute the corresponding function for the current user can be decided according to the recognition result. In this process, the modality of the authorized face image entered in advance may not match that of the face image currently awaiting recognition; for example, the authorized face image entered in advance may be a color face image while the face image to be detected is an infrared face image captured by an infrared camera. In this case, because images of different modalities correspond to different face recognition algorithms, even when the authorized face image and the face image to be detected belong to the same user, the face features extracted by the different face recognition algorithms have different dimensions, so the resulting feature vectors are not comparable.
Therefore, the terminal device can acquire the authorized face image, process it according to the preset first face recognition algorithm to generate the authorized face feature matrix, and use the first meta-feature matrix, obtained from the first face images of the target group under the first face recognition algorithm, to map the authorized face feature matrix. That is, the authorized face feature matrix representing the face features of the authorized face image is mapped into another space, so that comparison and recognition remain possible when the terminal device subsequently faces images to be detected of different modalities. This gives face images of different modalities comparability and improves the flexibility of deploying the face recognition system. The first face recognition algorithm is any algorithm capable of performing face recognition, including but not limited to a face recognition algorithm based on a color image, an infrared image, or a depth image.
Specifically, by way of example and not limitation, suppose the color face images of 500 employees of an enterprise are entered into the terminal device in advance, and the terminal device processes them with a face recognition algorithm based on color images, i.e., the first face recognition algorithm. This algorithm outputs a 128-dimensional face feature vector per image, so 500 authorized face feature matrices of size 128 × 1 are obtained from the 500 employees' color face images. Mapping the 128 × 384 first meta-feature matrix of the above example with each of the 500 authorized face feature matrices then yields 500 first feature matrices mapped into another space.
Optionally, when there is a face image to be detected currently awaiting face recognition, the authorized face image obtained in advance by the terminal device may be one stored in the terminal device itself, one called directly from the face recognition databases of different manufacturers, or one obtained from face recognition databases of different historical versions. Since different face recognition algorithms may be used even for face images of the same modality, the terminal device can perform the corresponding mapping when processing the authorized face images in a database, making them comparable with subsequent face images to be detected. This makes the process more intelligent and extends the terminal device's ability to perform face recognition with face images from different databases.
As a specific example and not by way of limitation, even if the current authorized face image and the face image to be detected are both color images, the first face recognition algorithm may output 128-dimensional face feature data while the second face recognition algorithm outputs 256-dimensional face feature data; because the dimensions of the output face feature vectors differ, the authorized face image and the face image to be detected are not directly comparable.
Optionally, the images of different modalities correspond to different face recognition algorithms, so that when the terminal device obtains an authorized face image, the authorized face image is determined and recognized, and when the authorized face image is determined and recognized to be suitable for a first face recognition algorithm stored in the terminal device in advance, the terminal device processes the authorized face image by using the determined first face recognition algorithm.
It can be understood that, because the human face is non-rigid, face recognition mainly comprises face image preprocessing, image feature extraction and classification recognition. A face image contains three kinds of features: color, texture and shape. Feature extraction for face images generally includes texture structure feature extraction, which can be divided into extraction in the time domain and extraction in the frequency domain. Feature extraction in the time domain mainly describes the global or local structure of the image using methods such as affine transforms, mappings, special matrices, and the relations between pixels in a neighborhood. Feature extraction in the frequency domain treats the image matrix containing the face features as a signal, processes it with signal analysis methods, and then reconstructs the image; the signal analysis methods include but are not limited to the Fourier transform, wavelet analysis, wavelet packet decomposition, and the like.
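As a hedged illustration of frequency-domain feature extraction of the kind described above: treat the face image matrix as a 2-D signal, take its Fourier transform, and keep low-frequency magnitude coefficients as a texture descriptor. The choice of an 8 × 8 central low-frequency block is an arbitrary assumption, not a value from this application.

```python
import numpy as np

# A hypothetical 64 x 64 "face image" stands in for real data.
img = np.random.default_rng(2).standard_normal((64, 64))

# 2-D Fourier transform of the image matrix, shifted so that the
# low frequencies sit at the center of the spectrum.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

# Keep the central 8 x 8 low-frequency magnitudes as the descriptor,
# normalized to unit length so descriptors are comparable.
low = spectrum[28:36, 28:36]
feature = low.ravel() / np.linalg.norm(low)
```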
Optionally, as shown in fig. 2, step S103 includes:
step S201, transposing the authorized face feature matrix to generate a first transposing matrix.
Step S202, multiplying the first transfer matrix by a preset first unitary feature matrix to obtain a first feature matrix.
In this embodiment, after obtaining the authorized face feature matrix, it needs to be mapped so that comparison with the face feature matrix to be detected can take place in the same space. The authorized face feature matrix is therefore transposed so that it can be operated on with the first meta-feature matrix, which was obtained with the same face recognition algorithm. Once the first transpose matrix is obtained, it is multiplied by the first meta-feature matrix to obtain the first feature matrix mapped into another space.
By way of specific example and not limitation, after obtaining the 128 × 1 authorized face feature matrix in the above example, it may be transposed to obtain a 1 × 128 first transpose matrix, and the 1 × 128 first transpose matrix may then be multiplied by the 128 × 384 first meta-feature matrix in the above example to obtain a 1 × 384 first feature matrix mapped into another space.
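The transpose-and-multiply mapping of steps S201 and S202 reduces to two lines of numpy. The random values here are placeholders for real feature data; only the shapes match the 128 × 384 / 128 × 1 example above.

```python
import numpy as np

rng = np.random.default_rng(1)
meta1 = rng.standard_normal((128, 384))   # first meta-feature matrix
auth = rng.standard_normal((128, 1))      # authorized face feature matrix

first_transpose = auth.T                  # step S201: 1 x 128 first transpose matrix
first_feature = first_transpose @ meta1   # step S202: 1 x 384 first feature matrix
```

Each of the 384 entries of `first_feature` is the inner product of the authorized face's 128-d feature with one group member's feature, so the result describes the authorized face relative to the reference group rather than in the algorithm's own feature space.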
And step S104, acquiring a face image to be detected, processing the face image to be detected according to the second face recognition algorithm to generate a face feature matrix to be detected, and mapping the second meta-feature matrix and the face feature matrix to be detected to obtain a second feature matrix.
In this embodiment, because the face image to be detected of the user to be identified and the authorized face image may be of different modalities, the terminal device, after acquiring the face image to be detected, processes it according to the preset second face recognition algorithm to generate the face feature matrix to be detected, and uses the second meta-feature matrix, obtained in advance from the second face images of the target group under the second face recognition algorithm, to map it. That is, the face feature matrix to be detected, which represents the face features of the face image to be detected, is mapped into the same space as the first feature matrix, so that the face image to be detected can be compared with the authorized face image, improving the flexibility of deploying the face recognition system. The second face recognition algorithm is any algorithm capable of performing face recognition, including but not limited to a face recognition algorithm based on a color image, an infrared image, or a depth image; correspondingly, the authorized face image and the face image to be detected include but are not limited to a color face image, an infrared face image, a depth face image, and the like.
As a specific example and not by way of limitation, the infrared face image of the current user to be detected is input to the terminal device, and the terminal device may process it with a face recognition algorithm based on infrared images, i.e., the second face recognition algorithm. This algorithm outputs a 256-dimensional face feature vector, so one 256 × 1 face feature matrix to be detected is obtained from the infrared face image of the current user. The 256 × 384 second meta-feature matrix of the above example and the face feature matrix to be detected are then mapped to obtain a second feature matrix mapped into the same space as the first feature matrix.
Accordingly, step S104 includes:
Step S203, performing a transposition operation on the face feature matrix to be detected to generate a second transposed matrix.
Step S204, multiplying the second transposed matrix by the second meta-feature matrix to obtain a second feature matrix.
In this embodiment, after the face feature matrix to be detected is obtained, it must be mapped into the same space as the authorized face feature matrix before the two can be compared for recognition. The face feature matrix to be detected is therefore transposed so that it can be multiplied with the second meta-feature matrix, which was obtained with the same face recognition algorithm as the face feature matrix to be detected; once the second transposed matrix is obtained, it is multiplied by the second meta-feature matrix to yield the second feature matrix, which lies in the same space as the first feature matrix.
Specifically, but not by way of limitation, after the 256 × 1 face feature matrix to be detected in the above example is obtained, it may be transposed to obtain a 1 × 256 second transposed matrix, which is then multiplied by the 256 × 384 second meta-feature matrix of the above example to obtain a 1 × 384 second feature matrix mapped into the same space as the first feature matrix.
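The mapping described above amounts to a single matrix product. A minimal NumPy sketch follows; the 256-dimensional feature vector, the 256 × 384 meta-feature matrix, and the random values are illustrative stand-ins, since the patent does not prescribe an implementation:

```python
import numpy as np

def map_to_shared_space(feature_vec, meta_matrix):
    """Map a raw face feature vector into the shared comparison space.

    feature_vec: (d, 1) column vector produced by a face recognition algorithm.
    meta_matrix: (d, n) meta-feature matrix built in advance from n
                 target-group face images with the same algorithm.
    Returns a (1, n) feature matrix in the shared space.
    """
    return feature_vec.T @ meta_matrix  # (1, d) x (d, n) -> (1, n)

# Illustrative shapes from the example: d = 256 features, n = 384 target faces.
rng = np.random.default_rng(0)
feature = rng.random((256, 1))   # face feature matrix to be detected
meta = rng.random((256, 384))    # second meta-feature matrix

second_feature = map_to_shared_space(feature, meta)
print(second_feature.shape)  # (1, 384)
```

The same operation, with the first meta-feature matrix, produces the first feature matrix for each authorized face, which is what makes the two algorithms' outputs comparable.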
Step S105, calculating the similarity between the face image to be detected and the authorized face image according to the first feature matrix and the second feature matrix; when the similarity is greater than a preset threshold, face recognition succeeds.
In this embodiment, the similarity between the face image to be detected and the authorized face image is calculated from the first and second feature matrices, which have been mapped into the same space. Because these two matrices represent the facial features of the authorized face image and of the face image to be detected respectively, and both lie in the same space, whether recognition succeeds can be judged from the similarity computed from them: when the similarity is greater than a preset threshold, face recognition succeeds; when it is less than or equal to the threshold, recognition fails, indicating that the face image to be detected does not belong to any of the authorized face images.
It can be understood that, assuming the 1 × 384 first feature matrix is A = [A1, A2, …, A384], each value A1 to A384 in the first feature matrix A represents the similarity between employee A and one of the 384 face images of the target group. For example, if employee A is most similar to the 2nd person, A2 might be 0.8; if employee A is least similar to the 100th person, A100 might be 0.0000001; and A1 + A2 + … + A384 sums to 1. Each authorized face image thus has a matrix of similarities to the 384 target-group face images.
Similarly, assuming the 1 × 384 second feature matrix is B = [B1, B2, …, B384], each value B1 to B384 in the second feature matrix B represents the similarity between the face B to be detected and one of the 384 target-group face images. For example, if the face image to be detected is most similar to the 2nd person, B2 may be 0.9; if it is least similar to the 100th person, B100 may be 0.00000001; likewise B1 + B2 + … + B384 sums to 1. Therefore, if the face image to be detected and a certain authorized face image are most similar to the same one of the 384 target-group face images and/or least similar to the same one, the two are very likely the same person with high similarity; conversely, if the most-similar and least-similar faces among the 384 randomly selected images differ between the face image to be detected and every authorized face image, the face image to be detected most likely does not belong to any of the authorized face images.
In summary, if the face B to be detected is employee C in the authorized face images, then in the first feature matrix computed by the first face recognition algorithm between employee C and the 384 face images, and in the second feature matrix computed by the second face recognition algorithm between face B and the same 384 face images, both faces are located to the same face among the 384 images; that is, there exists a column in which the values of the first and second feature matrices are very close.
Optionally, as shown in fig. 3, step S105 includes:
step S301, performing a transpose operation on the second feature matrix to generate a third transpose matrix.
Step S302, calculating a product of the third transpose matrix and the first feature matrix to obtain the similarity.
In this embodiment, after the first and second feature matrices are obtained, the second feature matrix may be transposed so that the two matrices can be multiplied. When the third transposed matrix resulting from the transposition is obtained, it is multiplied by the first feature matrix to calculate a product, and this product is the similarity between the face to be detected and the authorized face.
As a specific but non-limiting example, after the 1 × 384 second feature matrix in the above example is obtained, it may be transposed to obtain a 384 × 1 third transposed matrix, which is then multiplied by each of the 500 first feature matrices of size 1 × 384 in the above example to obtain the corresponding products. Each product is the similarity between the second feature matrix and one first feature matrix, that is, between the face image to be detected and one authorized face image. With a preset threshold of 0.5, when a product is greater than 0.5, the face image to be detected is determined to belong to that authorized face image and recognition succeeds; if the product of the third transposed matrix with every first feature matrix is less than or equal to 0.5, the face image to be detected is not one of the authorized face images and recognition fails.
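The decision rule above can be sketched as follows. The 500 enrolled matrices and the 0.5 threshold are the example's illustrative numbers, and the three-dimensional toy vectors below stand in for the 1 × 384 feature matrices:

```python
import numpy as np

def recognize(second_feature, first_features, threshold=0.5):
    """Compare one probe against all enrolled faces in the shared space.

    second_feature: (1, n) feature matrix of the face image to be detected.
    first_features: list of (1, n) feature matrices of authorized faces.
    Returns the index of the first authorized face whose similarity
    exceeds the threshold, or None if recognition fails.
    """
    third_transposed = second_feature.T               # (n, 1)
    for idx, first in enumerate(first_features):
        similarity = float(first @ third_transposed)  # (1, n) x (n, 1) -> scalar
        if similarity > threshold:
            return idx
    return None

# Toy demo: two "authorized" vectors; the probe matches the second one.
a0 = np.array([[1.0, 0.0, 0.0]])
a1 = np.array([[0.0, 0.9, 0.1]])
probe = np.array([[0.0, 0.9, 0.1]])
print(recognize(probe, [a0, a1]))  # -> 1
```

Because both vectors peak at the same target-group face, their inner product is large; unrelated faces peak at different columns, so the product stays near zero.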
Optionally, step S103 further includes:
and preprocessing the authorized face image in a preset mode, and processing the preprocessed authorized face image according to the first face recognition algorithm.
In this embodiment, since the human face is non-rigid, the face image needs to be preprocessed before recognition so that the extracted features are more accurate. The authorized face image is therefore preprocessed in a preset manner, and the preprocessed authorized face image is subsequently processed according to the first face recognition algorithm.
Optionally, the preprocessing performed on the authorized face image may likewise be applied to the face image to be detected, the first face image of the target group, and the second face image of the target group.
Optionally, as shown in fig. 4, the preprocessing the authorized face image in a preset manner includes:
step S401, obtaining a preset number of face feature coordinates, and aligning faces in the authorized face image according to the face feature coordinates.
Step S402, performing image scale transformation of a preset specification on the authorized face image subjected to face alignment.
Step S403, performing image pixel normalization processing on the authorized face image subjected to the image scale transformation.
In this embodiment, in order to improve the accuracy of the recognition result, the face image may be preprocessed in advance. Specifically, a preset number of face feature coordinates are obtained from the face image, and the faces in the authorized face image are aligned according to these coordinates. After alignment, the authorized face image is scaled to a preset specification to facilitate subsequent feature extraction, and after the scale transformation is completed, image pixel normalization is performed, so that the authorized face image input to the face recognition algorithm is in a standard state and a more accurate face recognition result can be obtained subsequently.
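A minimal sketch of this three-step pipeline follows. The bounding-box "alignment", the 112 × 112 target size, and the [-1, 1] pixel normalization are common conventions assumed here for illustration, not values specified by the patent; production systems typically align by fitting a similarity transform to the landmarks instead:

```python
import numpy as np

def preprocess(image, landmarks, out_size=(112, 112)):
    """Preset preprocessing: align by landmarks, rescale, normalize pixels.

    image: (H, W) grayscale array; landmarks: (k, 2) array of (row, col)
    face feature coordinates.
    """
    # Step 1 (S401): "align" by cropping the landmark bounding box.
    r0, c0 = landmarks.min(axis=0).astype(int)
    r1, c1 = landmarks.max(axis=0).astype(int) + 1
    face = image[r0:r1, c0:c1]

    # Step 2 (S402): image scale transformation to the preset specification
    # (nearest-neighbour resampling, for brevity).
    rows = np.linspace(0, face.shape[0] - 1, out_size[0]).astype(int)
    cols = np.linspace(0, face.shape[1] - 1, out_size[1]).astype(int)
    face = face[np.ix_(rows, cols)]

    # Step 3 (S403): pixel normalization from [0, 255] to [-1, 1].
    return face.astype(np.float32) / 127.5 - 1.0

img = (np.arange(200 * 200) % 256).reshape(200, 200).astype(np.uint8)
pts = np.array([[50, 60], [50, 140], [120, 100], [150, 70], [150, 130]])
out = preprocess(img, pts)
print(out.shape)  # (112, 112)
```

The same function can be reused unchanged for the face image to be detected and for the target-group images, matching the optional embodiment above.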
In the embodiment of the present application, an authorized face image is obtained, processed according to a preset first face recognition algorithm to generate an authorized face feature matrix, and the authorized face feature matrix is mapped to obtain a first feature matrix; a face image to be detected is acquired, processed according to a preset second face recognition algorithm to generate a face feature matrix to be detected, and the face feature matrix to be detected is mapped to obtain a second feature matrix; and the similarity between the face image to be detected and the authorized face image is calculated from the first and second feature matrices, with face recognition succeeding when the similarity is greater than a preset threshold. In the present application, authorized face images and face images to be detected of different modalities are processed by the first and second face recognition algorithms respectively to obtain the corresponding feature matrices, which are then mapped so that they can be compared at the same level to obtain a similarity, thereby improving the efficiency of face recognition across different modalities.
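The cross-modality flow can be sketched end to end as follows. The feature dimensions (128 vs. 256), the sum-to-one normalization of the shared-space vectors, and the random stand-in feature extractors are illustrative assumptions; only the matrix shapes and the comparison mechanics come from the embodiments above:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 384  # number of target-group reference faces

def to_shared(feature_vec, meta):
    """Map a (d, 1) feature into the shared N-dim space and normalize it to
    sum to 1, matching the intuition that each entry is a similarity to one
    target-group face."""
    s = feature_vec.T @ meta            # (1, N)
    return s / s.sum()

def similarity(first_feature, second_feature):
    # (1, N) x (N, 1) -> scalar product, as in steps S301-S302.
    return float(first_feature @ second_feature.T)

# Random stand-ins for the two algorithms' meta-feature matrices
# (128-dim features for the first algorithm, 256-dim for the second).
meta_a = rng.random((128, N))
meta_b = rng.random((256, N))

first = to_shared(rng.random((128, 1)), meta_a)    # enrolled, first algorithm
second = to_shared(rng.random((256, 1)), meta_b)   # probe, second algorithm
print(round(similarity(first, second), 4))          # small for unrelated faces
```

Note that although the two raw features live in spaces of different dimension, both shared-space vectors are 1 × N, which is what makes the final comparison possible.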
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the above-mentioned face recognition method, fig. 5 is a schematic structural diagram of a face recognition apparatus in an embodiment of the present application, and as shown in fig. 5, the face recognition apparatus may include:
the first unitary matrix generating module 501 is configured to obtain a first face image of a preset target group, process the first face image according to a preset first face recognition algorithm, and generate a first unitary feature matrix.
A second binary matrix generating module 502, configured to obtain a second face image of the target group, process the second face image according to a preset second face recognition algorithm, and generate a second binary feature matrix.
A first matrix generation module 503, configured to obtain an authorized face image, process the authorized face image according to the first face recognition algorithm, generate an authorized face feature matrix, and perform mapping processing on the first meta-feature matrix and the authorized face feature matrix to obtain a first feature matrix.
The second matrix generation module 504 is configured to acquire a face image to be detected, process the face image to be detected according to the second face recognition algorithm to generate a face feature matrix to be detected, and perform mapping processing on the second meta-feature matrix and the face feature matrix to be detected to obtain a second feature matrix.
And a similarity calculation module 505, configured to calculate a similarity between the to-be-detected face image and the authorized face image according to the first feature matrix and the second feature matrix, where when the similarity is greater than a preset threshold, face recognition is successful.
Optionally, the first matrix generation module 503 may include:
and the first matrix generating unit is used for performing transposition operation on the authorized face feature matrix to generate a first transposition matrix.
And the first matrix obtaining unit is used for multiplying the first transfer matrix by the first unary feature matrix to obtain a first feature matrix.
Optionally, the second matrix generation module 504 includes:
and the second matrix generation unit is used for performing transposition operation on the face feature matrix to be detected to generate a second transposition matrix.
And a second matrix obtaining unit, configured to multiply the second transposed matrix by the second binary feature matrix to obtain a second feature matrix.
Optionally, the similarity calculation module 505 may include:
and the third matrix generating unit is used for performing transposition operation on the second feature matrix to generate a third transposed matrix.
And the similarity obtaining unit is used for calculating the product of the third transposed matrix and the first feature matrix to obtain the similarity.
Optionally, the first matrix generating module 503 may include:
and the preprocessing unit is used for preprocessing the authorized face image in a preset mode and processing the preprocessed authorized face image according to the first face recognition algorithm.
Optionally, the preprocessing unit may include:
and the face alignment subunit is used for acquiring the face feature coordinates of a preset number and performing face alignment in the authorized face image according to the face feature coordinates.
And the scale transformation subunit is used for carrying out image scale transformation of preset specifications on the authorized face image subjected to face alignment.
And the pixel processing subunit is used for carrying out image pixel normalization processing on the authorized face image subjected to the image scale transformation.
In the embodiment of the present application, an authorized face image is obtained, processed according to a preset first face recognition algorithm to generate an authorized face feature matrix, and the authorized face feature matrix is mapped to obtain a first feature matrix; a face image to be detected is acquired, processed according to a preset second face recognition algorithm to generate a face feature matrix to be detected, and the face feature matrix to be detected is mapped to obtain a second feature matrix; and the similarity between the face image to be detected and the authorized face image is calculated from the first and second feature matrices, with face recognition succeeding when the similarity is greater than a preset threshold. In the present application, authorized face images and face images to be detected of different modalities are processed by the first and second face recognition algorithms respectively to obtain the corresponding feature matrices, which are then mapped so that they can be compared at the same level to obtain a similarity, thereby improving the efficiency of face recognition across different modalities.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the module described above may refer to corresponding processes in the foregoing system embodiments and method embodiments, and are not described herein again.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application. For convenience of explanation, only portions related to the embodiments of the present application are shown.
As shown in fig. 6, the terminal device 6 of this embodiment includes: at least one processor 600 (only one shown in fig. 6), a memory 601 connected to the processor 600, and a computer program 602, such as a face recognition program, stored in the memory 601 and executable on the at least one processor 600. The processor 600 executes the computer program 602 to implement the steps in the above-mentioned embodiments of the face recognition method, such as the steps S101 to S105 shown in fig. 1. Alternatively, the processor 600 executes the computer program 602 to implement the functions of the modules in the device embodiments, such as the functions of the modules 501 to 505 shown in fig. 5.
Illustratively, the computer program 602 may be partitioned into one or more modules that are stored in the memory 601 and executed by the processor 600 to accomplish the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 602 in the terminal device 6. For example, the computer program 602 may be divided into a first meta-matrix generation module 501, a second meta-matrix generation module 502, a first matrix generation module 503, a second matrix generation module 504, and a similarity calculation module 505, and each module has the following specific functions:
the first meta-matrix generation module 501 is configured to obtain a first face image of a preset target group, process the first face image according to a preset first face recognition algorithm, and generate a first meta-feature matrix;
a second meta-matrix generation module 502, configured to obtain a second face image of the target group, process the second face image according to a preset second face recognition algorithm, and generate a second meta-feature matrix;
a first matrix generation module 503, configured to obtain an authorized face image, process the authorized face image according to the first face recognition algorithm, generate an authorized face feature matrix, and perform mapping processing on the first meta-feature matrix and the authorized face feature matrix to obtain a first feature matrix;
a second matrix generation module 504, configured to obtain a face image to be detected, process the face image to be detected according to the second face recognition algorithm to generate a face feature matrix to be detected, and perform mapping processing on the second meta-feature matrix and the face feature matrix to be detected to obtain a second feature matrix;
and a similarity calculation module 505, configured to calculate a similarity between the to-be-detected face image and the authorized face image according to the first feature matrix and the second feature matrix, where when the similarity is greater than a preset threshold, face recognition is successful.
The terminal device 6 may include, but is not limited to, a processor 600 and a memory 601. Those skilled in the art will appreciate that fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation on the terminal device 6, which may include more or fewer components than those shown, combine some components, or have different components, such as an input-output device, a network access device, a bus, etc.
The processor 600 may be a Central Processing Unit (CPU); the processor 600 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 601 may in some embodiments be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. In other embodiments, the memory 601 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal device 6. Further, the memory 601 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 601 is used for storing an operating system, an application program, a Boot Loader (Boot Loader), data, and other programs, such as program codes of the computer programs. The memory 601 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A face recognition method, comprising:
acquiring a first face image of a preset target group, and processing the first face image according to a preset first face recognition algorithm to generate a first meta-feature matrix;
acquiring a second face image of the target group, and processing the second face image according to a preset second face recognition algorithm to generate a second meta-feature matrix;
obtaining an authorized face image, processing the authorized face image according to the first face recognition algorithm to generate an authorized face feature matrix, and mapping the first meta-feature matrix and the authorized face feature matrix to obtain a first feature matrix;
acquiring a face image to be detected, processing the face image to be detected according to the second face recognition algorithm to generate a face feature matrix to be detected, and mapping the second meta-feature matrix and the face feature matrix to be detected to obtain a second feature matrix;
and calculating a similarity between the face image to be detected and the authorized face image according to the first feature matrix and the second feature matrix, wherein when the similarity is greater than a preset threshold, face recognition succeeds.
2. The face recognition method of claim 1, wherein the mapping the first unary feature matrix and the authorized face feature matrix to obtain a first feature matrix comprises:
performing transposition operation on the authorized face feature matrix to generate a first transposition matrix;
and multiplying the first transposed matrix by the first meta-feature matrix to obtain the first feature matrix.
3. The face recognition method of claim 2, wherein the mapping the second meta-feature matrix and the face feature matrix to be detected to obtain a second feature matrix comprises:
performing transposition operation on the face feature matrix to be detected to generate a second transposition matrix;
and multiplying the second transposed matrix by the second meta-feature matrix to obtain the second feature matrix.
4. The face recognition method of claim 1, wherein the calculating the similarity between the face image to be detected and the authorized face image according to the first feature matrix and the second feature matrix comprises:
performing transposition operation on the second feature matrix to generate a third transposition matrix;
and calculating the product of the third transpose matrix and the first feature matrix to obtain the similarity.
5. The face recognition method according to any one of claims 1 to 4, wherein the processing the authorized face image according to the first face recognition algorithm comprises:
and preprocessing the authorized face image in a preset mode, and processing the preprocessed authorized face image according to the first face recognition algorithm.
6. The face recognition method of claim 5, wherein the preprocessing the authorized face image in a preset manner comprises:
acquiring a preset number of face feature coordinates, and performing face alignment in an authorized face image according to the face feature coordinates;
performing image scale transformation of a preset specification on the authorized face image subjected to face alignment;
and carrying out image pixel normalization processing on the authorized face image subjected to image scale transformation.
7. A face recognition apparatus, comprising:
a first meta-matrix generation module, configured to acquire a first face image of a preset target group, and process the first face image according to a preset first face recognition algorithm to generate a first meta-feature matrix;
a second meta-matrix generation module, configured to acquire a second face image of the target group, and process the second face image according to a preset second face recognition algorithm to generate a second meta-feature matrix;
the first matrix generation module is used for acquiring an authorized face image, processing the authorized face image according to the first face recognition algorithm to generate an authorized face feature matrix, and mapping the first meta-feature matrix and the authorized face feature matrix to obtain a first feature matrix;
a second matrix generation module, configured to acquire a face image to be detected, process the face image to be detected according to the second face recognition algorithm to generate a face feature matrix to be detected, and map the second meta-feature matrix and the face feature matrix to be detected to obtain a second feature matrix;
and the similarity calculation module is used for calculating the similarity between the face image to be detected and the authorized face image according to the first characteristic matrix and the second characteristic matrix, and when the similarity is greater than a preset threshold value, the face recognition is successful.
8. The face recognition apparatus of claim 7, wherein the first matrix generation module comprises:
the first matrix generation unit is used for performing transposition operation on the authorized face feature matrix to generate a first transposition matrix;
and a first matrix obtaining unit, configured to multiply the first transposed matrix by the first meta-feature matrix to obtain the first feature matrix.
9. A terminal device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements the steps of a method for face recognition according to any one of claims 1 to 6 when executing said computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of a method for face recognition according to any one of claims 1 to 6.
CN202011180098.5A 2020-10-29 2020-10-29 Face recognition method, face recognition device, terminal equipment and storage medium Pending CN112183480A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011180098.5A CN112183480A (en) 2020-10-29 2020-10-29 Face recognition method, face recognition device, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112183480A true CN112183480A (en) 2021-01-05

Family

ID=73917674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011180098.5A Pending CN112183480A (en) 2020-10-29 2020-10-29 Face recognition method, face recognition device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112183480A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090297046A1 (en) * 2008-05-29 2009-12-03 Microsoft Corporation Linear Laplacian Discrimination for Feature Extraction
CN108537187A (en) * 2017-12-04 2018-09-14 深圳奥比中光科技有限公司 Task executing method, terminal device and computer readable storage medium
CN109753875A (en) * 2018-11-28 2019-05-14 北京的卢深视科技有限公司 Face identification method, device and electronic equipment based on face character perception loss
CN109902561A (en) * 2019-01-16 2019-06-18 平安科技(深圳)有限公司 A kind of face identification method and device, robot applied to robot
CN109934198A (en) * 2019-03-22 2019-06-25 北京市商汤科技开发有限公司 Face identification method and device
CN110909582A (en) * 2018-09-18 2020-03-24 华为技术有限公司 Face recognition method and device
CN111310743A (en) * 2020-05-11 2020-06-19 腾讯科技(深圳)有限公司 Face recognition method and device, electronic equipment and readable storage medium
WO2020147257A1 (en) * 2019-01-16 2020-07-23 平安科技(深圳)有限公司 Face recognition method and apparatus
CN111599044A (en) * 2020-05-14 2020-08-28 哈尔滨学院 Access control safety management system based on multi-mode biological feature recognition
CN111797696A (en) * 2020-06-10 2020-10-20 武汉大学 Face recognition system and method for on-site autonomous learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Guoqing; WANG Zhengqun; WANG Yingjing; XU Wei: "Face recognition method based on multi-modal image perturbation", Computer Engineering and Applications, no. 07, 9 December 2011 (2011-12-09) *

Similar Documents

Publication Publication Date Title
CN110033026B (en) Target detection method, device and equipment for continuous small sample images
CN109063785B (en) Charging pile fault detection method and terminal equipment
CN109117773B (en) Image feature point detection method, terminal device and storage medium
CN113705462B (en) Face recognition method, device, electronic equipment and computer readable storage medium
CN109948397A (en) A kind of face image correcting method, system and terminal device
CN113159147A (en) Image identification method and device based on neural network and electronic equipment
CN112507922A (en) Face living body detection method and device, electronic equipment and storage medium
CN110738219A (en) Method and device for extracting lines in image, storage medium and electronic device
CN114414935A (en) Automatic positioning method and system for feeder fault area of power distribution network based on big data
CN113298152B (en) Model training method, device, terminal equipment and computer readable storage medium
CN108416343A (en) A kind of facial image recognition method and device
CN113015022A (en) Behavior recognition method and device, terminal equipment and computer readable storage medium
CN115170869A (en) Repeated vehicle damage claim identification method, device, equipment and storage medium
CN111488810A (en) Face recognition method and device, terminal equipment and computer readable medium
CN114241585A (en) Cross-age face recognition model training method, recognition method and device
CN111353514A (en) Model training method, image recognition method, device and terminal equipment
CN113191189A (en) Face living body detection method, terminal device and computer readable storage medium
US10922569B2 (en) Method and apparatus for detecting model reliability
CN112113638A (en) Water meter function self-checking device and method
CN109326324B (en) Antigen epitope detection method, system and terminal equipment
CN108629219B (en) Method and device for identifying one-dimensional code
CN112183480A (en) Face recognition method, face recognition device, terminal equipment and storage medium
CN113569070A (en) Image detection method and device, electronic equipment and storage medium
CN113705749A (en) Two-dimensional code identification method, device and equipment based on deep learning and storage medium
CN113139617A (en) Power transmission line autonomous positioning method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Obi Zhongguang Technology Group Co., Ltd

Address before: 12 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ORBBEC Co.,Ltd.

SE01 Entry into force of request for substantive examination