CN113792668A - Face recognition and access control method and device, computer equipment and storage medium


Info

Publication number
CN113792668A
Authority
CN
China
Prior art keywords
face, recognized, feature, facial, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111087459.6A
Other languages
Chinese (zh)
Inventor
胡琨
秦昊煜
于志鹏
吴一超
梁鼎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202111087459.6A priority Critical patent/CN113792668A/en
Publication of CN113792668A publication Critical patent/CN113792668A/en
Priority to PCT/CN2022/104602 priority patent/WO2023040436A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 - Individual registration on entry or exit
    • G07C9/30 - Individual registration on entry or exit not involving the use of a pass
    • G07C9/32 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present disclosure provides a face recognition and access control method and apparatus, a computer device, and a storage medium. The method includes: acquiring a face image to be recognized and extracting a first face feature of the face image to be recognized; and matching the first face feature with a plurality of second face features to obtain matching results, and obtaining a face recognition result based on the matching results; wherein each second face feature is determined based on a third face feature of a registered face image and the face features of historical face images that matched the registered face image in historical face recognition.

Description

Face recognition and access control method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of face recognition technologies, and in particular, to a face recognition method, a face recognition device, an access control method, an access control device, a computer device, and a storage medium.
Background
Face recognition is one of the most widely applied biometric technologies at present and the most common algorithm in the security field, with mature applications in many scenarios such as access control.
In the related art, the features of an acquired face image are generally compared one by one with the features of the images stored in a database to judge whether face recognition passes. To maintain recognition accuracy, a user is often required to upload a new face image to the database at intervals, which is tedious.
Disclosure of Invention
The embodiments of the present disclosure at least provide a face recognition method, a face recognition apparatus, an access control method, an access control apparatus, a computer device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a face recognition method, including:
acquiring a face image to be recognized, and extracting first face features of the face image to be recognized;
matching the first face features with a plurality of second face features respectively to obtain matching results, and obtaining face recognition results based on the matching results;
the second face features are determined based on third face features of the registered face images and face features of historical face images matched with the registered face images in historical face recognition.
In the above method, after the face image to be recognized is acquired, face recognition can be performed based on the plurality of second face features and the first face feature of the face image to be recognized. Because each second face feature is determined based on the third face feature of a registered face image and the first face features of historical face images, the second face feature reflects both the registered face image and the historical face images. Since the historical face images acquired at different times differ more or less as time passes, and this change is continuous, the second face features, which depend on the historical face images, can be updated periodically or aperiodically and thus change correspondingly. In this way, updated second face features are continuously generated for face recognition without the user actively uploading new face images. Therefore, when the face image to be recognized is recognized, the influence of changes in the user's face over time on the recognition result can be reduced, and the face recognition accuracy is improved.
In a possible implementation, before the matching the first facial features with the plurality of second facial features respectively, the method further includes:
and under the condition that no historical face image matched with the registered face image of the first target user exists, taking the third face feature of the registered face image corresponding to the first target user as the second face feature of the first target user.
In this way, when a user's face is recognized for the first time after registration, recognition can be performed based on the third face feature of the user's registered image, which improves the pass rate of face recognition while ensuring recognition accuracy.
In a possible implementation manner, the method further includes updating a second facial feature of the user to be recognized in the facial image to be recognized according to the following method:
determining a first similarity between a first face feature of the face image to be recognized and a second face feature of a second target user in the target feature group aiming at the target feature group; determining a second similarity between the first facial feature of the facial image to be recognized and a third facial feature of the second target user in the target feature group;
and under the condition that the updating condition is determined to be met based on the first similarity and the second similarity corresponding to each feature group, updating the second face feature of the user to be recognized based on the first face feature of the face image to be recognized.
Meanwhile, by comparing the similarity between the first face feature and the second face feature, as well as the similarity between the first face feature and the third face feature, it can be ensured that the first face feature used to update the second face feature of the user to be recognized indeed belongs to that user, which further improves the accuracy of the second face feature.
In one possible embodiment, in response to the face image to be recognized being acquired from a first scene, the update condition includes:
the first similarity corresponding to the feature group is greater than a first preset threshold, and the second similarity corresponding to the feature group is greater than a second preset threshold.
By setting the first preset threshold and the second preset threshold, the first face feature adopted when updating the second face feature can be ensured to have higher accuracy.
In one possible embodiment, in response to the face image to be recognized being acquired from a second scene, the updating condition further includes:
the number of feature groups of which the first similarity is greater than a first preset threshold and the second similarity is greater than a second preset threshold is less than a predetermined number.
By constraining the number of qualifying feature groups, the problem that a user can no longer pass face recognition after an erroneous feature update can be avoided.
In a possible implementation manner, before updating the second facial features of the user to be recognized based on the first facial features of the facial image to be recognized, the method further includes:
and determining a second face feature matched with the first face feature of the face image to be recognized in the matching result as a second face feature of the user to be recognized.
In a possible implementation manner, the updating the second facial feature of the user to be recognized based on the first facial feature of the facial image to be recognized includes:
determining updating weights respectively corresponding to the first face features of the face image to be recognized and the second face features of the user to be recognized based on a first similarity between the first face features of the face image to be recognized and the second face features of the user to be recognized;
and updating the second face features of the user to be identified based on the updating weight, the first face features of the face image to be identified and the second face features of the user to be identified.
Therefore, when updating the second face feature of the user to be recognized, the first face feature and the existing second face feature are combined, so that the second face feature is updated gradually and the influence of the update on recognition accuracy is reduced.
In a possible implementation manner, the determining, based on a first similarity between a first facial feature of the facial image to be recognized and a second facial feature of the user to be recognized, update weights respectively corresponding to the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized includes:
determining a first updating weight corresponding to the first facial feature of the facial image to be recognized and a second updating weight corresponding to the second facial feature of the user to be recognized based on a first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, a preset weight threshold range and a second preset threshold;
the first updating weight is in direct proportion to the first similarity, and the sum of the first updating weight and the second updating weight is a preset fixed value.
In a second aspect, an embodiment of the present disclosure further provides an access control method, including:
responding to the face recognition request, and controlling the image acquisition equipment to acquire a face image to be recognized;
based on the first aspect or the face recognition method described in any one of the possible embodiments of the first aspect, performing face recognition on the face image to be recognized, and determining a face recognition result;
and performing door lock control based on the face recognition result.
In a third aspect, an embodiment of the present disclosure further provides a face recognition apparatus, including:
the characteristic extraction module is used for acquiring a face image to be recognized and extracting first face characteristics of the face image to be recognized;
the matching module is used for respectively matching the first face features with a plurality of second face features to obtain matching results and obtaining face recognition results based on the matching results;
the second face features are determined based on third face features of the registered face images and face features of historical face images matched with the registered face images in historical face recognition.
In a possible embodiment, before the matching the first facial features with the plurality of second facial features respectively, the matching module is further configured to:
and under the condition that no historical face image matched with the registered face image of the first target user exists, taking the third face feature of the registered face image corresponding to the first target user as the second face feature of the first target user.
In a possible implementation manner, the apparatus further includes an updating module, configured to update the second facial feature of the user to be recognized in the facial image to be recognized according to the following method:
determining a first similarity between a first face feature of the face image to be recognized and a second face feature of a second target user in the target feature group aiming at the target feature group; determining a second similarity between the first facial feature of the facial image to be recognized and a third facial feature of the second target user in the target feature group;
and under the condition that the updating condition is determined to be met based on the first similarity and the second similarity corresponding to each feature group, updating the second face feature of the user to be recognized based on the first face feature of the face image to be recognized.
In one possible embodiment, in response to the face image to be recognized being acquired from a first scene, the update condition includes:
the first similarity corresponding to the feature group is greater than a first preset threshold, and the second similarity corresponding to the feature group is greater than a second preset threshold.
In one possible embodiment, in response to the face image to be recognized being acquired from a second scene, the updating condition further includes:
the number of feature groups of which the first similarity is greater than a first preset threshold and the second similarity is greater than a second preset threshold is less than a predetermined number.
In a possible implementation manner, before updating the second facial features of the user to be recognized based on the first facial features of the facial image to be recognized, the updating module is further configured to:
and determining a second face feature matched with the first face feature of the face image to be recognized in the matching result as a second face feature of the user to be recognized.
In a possible implementation manner, the updating module, when updating the second facial features of the user to be recognized based on the first facial features of the facial image to be recognized, is configured to:
determining updating weights respectively corresponding to the first face features of the face image to be recognized and the second face features of the user to be recognized based on a first similarity between the first face features of the face image to be recognized and the second face features of the user to be recognized;
and updating the second face features of the user to be identified based on the updating weight, the first face features of the face image to be identified and the second face features of the user to be identified.
In a possible implementation manner, the updating module, when determining, based on a first similarity between a first facial feature of the facial image to be recognized and a second facial feature of the user to be recognized, update weights respectively corresponding to the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, is configured to:
determining a first updating weight corresponding to the first facial feature of the facial image to be recognized and a second updating weight corresponding to the second facial feature of the user to be recognized based on a first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, a preset weight threshold range and a second preset threshold;
the first updating weight is in direct proportion to the first similarity, and the sum of the first updating weight and the second updating weight is a preset fixed value.
In a fourth aspect, an embodiment of the present disclosure further provides an access control device, including:
the acquisition module is used for responding to the face identification request and controlling the image acquisition equipment to acquire a face image to be identified;
the identification module is configured to perform face identification on the face image to be identified based on the first aspect or the face identification method according to any possible implementation manner of the first aspect, and determine a face identification result;
and the control module is used for controlling the door lock based on the face recognition result.
In a fifth aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer apparatus is run, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect, or performing the steps of the second aspect described above.
In a sixth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, performs the steps of the first aspect, or any one of the possible implementations of the first aspect, or performs the steps of the second aspect.
For the description of the effects of the face recognition apparatus, the computer device, and the computer-readable storage medium, reference is made to the description of the face recognition method, and details are not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings are incorporated into and constitute a part of this specification, show embodiments consistent with the present disclosure, and together with the description serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, since those skilled in the art can derive additional related drawings from them without creative effort.
Fig. 1 shows a flowchart of a face recognition method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a specific method for updating a second face feature of a user to be recognized in a face image to be recognized in the face recognition method provided in the embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a specific method for updating a second facial feature of the user to be recognized based on a first facial feature of the facial image to be recognized in the face recognition method provided by the embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an architecture of a face recognition apparatus provided in an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an architecture of an access control device provided in an embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making any creative effort, shall fall within the protection scope of the disclosure.
In the related art, when performing face recognition, generally, registered images of all users are obtained first, and then face features in the registered images are extracted and stored. When the identification image of the user to be identified is shot, extracting the face features in the identification image, comparing the extracted face features in the identification image with the face features in the registered image, and determining that the face identification is passed if the comparison is successful.
In order to ensure the recognition accuracy, the user is often required to upload new face images to the database at intervals, which is cumbersome to operate.
If the time interval for updating the face image in the database is long, the user's facial features may change over time, for example, the user may gain weight, lose weight, or grow a beard, which causes differences between the face features in the captured recognition image and those in the registered image and affects recognition accuracy.
In addition, the background of a registered image is usually simple (for example, an ID photo), so the background has little influence when extracting the facial features of the registered image. In actual application, however, the background of the recognition image may be cluttered, and the extracted facial features may be affected by the background noise, thereby affecting recognition accuracy.
Based on the above research, the present disclosure provides a face recognition and access control method and apparatus, a computer device, and a storage medium. After the face image to be recognized is acquired, face recognition can be performed based on a plurality of second face features and the first face feature of the face image to be recognized. Because each second face feature is determined based on the third face feature of a registered face image and the first face features of historical face images, the second face feature reflects both the registered face image and the historical face images. Since the historical face images acquired at different times differ more or less as time passes, and this change is continuous, the second face features, which depend on the historical face images, are often updated periodically or aperiodically and thus change correspondingly. In this way, updated second face features are continuously generated for face recognition without the user actively uploading new face images. Therefore, when the face image to be recognized is recognized, the influence of changes in the user's face over time on the recognition result can be reduced, and the face recognition accuracy is improved.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a face recognition method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the face recognition method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the face recognition method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a face recognition method provided in the embodiment of the present disclosure is shown, where the method includes steps 101 to 102, where:
step 101, obtaining a face image to be recognized, and extracting a first face feature of the face image to be recognized.
And 102, respectively matching the first face features with a plurality of second face features to obtain matching results, and obtaining face recognition results based on the matching results.
The second face features are determined based on third face features of the registered face images and face features of historical face images matched with the registered face images in historical face recognition.
In a possible implementation, the face image to be recognized may be a face image captured in any application scenario that involves recognizing a user's face or authenticating a user's identity, for example, a face payment scenario, a security inspection scenario, or an access control scenario.
If the execution main body corresponding to the method provided by the disclosure is the server, the image acquisition device uploads the image to the server after shooting the face image to be recognized; if the execution main body corresponding to the method provided by the disclosure is a terminal or other processing equipment, the terminal or other processing equipment may store the second face features of all registered users in advance, and then may receive the face image to be recognized, which is sent by the image acquisition device.
It should be noted that when the execution subject of the method provided by the present disclosure is a terminal or other processing device, the corresponding application scenario may be one with a large number of registered users, such as a residential community or a sales venue, or one with a small number of registered users, such as a company or an exhibition.
In a possible implementation, when extracting the first face feature of the face image to be recognized, the face image to be recognized may be input into a pre-trained neural network, and the neural network outputs the first face feature corresponding to the face image to be recognized. All facial features described in this disclosure can be represented as vectors.
The neural network can be obtained based on sample image training carrying user identification. For example, a sample image carrying a user identifier may be input into a neural network to obtain face features corresponding to the sample image, then, based on the face features corresponding to each sample image, a plurality of sample images corresponding to the same user are determined, then, based on the user identifier of the sample image and the plurality of determined sample images corresponding to the same user, the network accuracy of the neural network in the training process is determined, and the neural network is continuously trained under the condition that the accuracy does not meet a preset condition.
It should be noted that the training process of the neural network is only exemplary, and the disclosure is not limited to other methods for training the neural network.
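By way of illustration only, the feature-extraction step described above might be sketched as follows in Python; the preprocessing, the L2 normalization, and the generic `model` callable are assumptions for the sketch and are not prescribed by the present disclosure.

```python
import numpy as np

def extract_face_feature(face_image: np.ndarray, model) -> np.ndarray:
    """Sketch of extracting the first face feature from a face image.

    `model` stands for any pre-trained face-embedding network that maps an
    aligned face crop to a fixed-length feature vector (an assumption).
    """
    # Assumed preprocessing: scale pixel values to [0, 1].
    x = face_image.astype(np.float32) / 255.0
    feature = np.asarray(model(x), dtype=np.float32).reshape(-1)  # forward pass -> 1-D vector
    # L2-normalize so that similarity comparisons are scale-invariant (assumption).
    return feature / (np.linalg.norm(feature) + 1e-12)
```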
The second facial features may be pre-stored, and for a registered user, two facial features of the user are stored in the database, one is a third facial feature in a registered image corresponding to the user, and the other is the second facial feature of the user, and the second facial feature of the user is determined based on the third facial feature of the user and the first facial feature of the historical facial image in the historical facial recognition for the user.
In a possible implementation manner, before the first face features are respectively matched with the plurality of second face features, in a case that there is no historical face image matched with the registered face image of the first target user, the third face features of the registered face image corresponding to the first target user may be used as the second face features of the first target user.
Here, the first target user may refer to one or more users stored in a database that have not undergone face recognition; or, the first target user may refer to a user to be matched who does not have a corresponding historical face image when performing face feature matching.
In practical applications, when the historical face images are empty, that is, when a user's face is recognized for the first time after registration, the corresponding second face feature may be the third face feature from the user's registered image. In other words, when no face image to be recognized has been obtained for a registered user after registration (equivalent to the user's historical face images being empty), the third face feature of that user stored in the database may be regarded as the user's second face feature, or equivalently, the stored second and third face features are the same. In particular, when storage space is limited, storing only the third face feature is equivalent to storing the second and third face features separately; once the two types of features differ, the second face feature can be stored in addition to the third face feature.
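A minimal sketch of the per-user storage described above, where a feature group holds the registered (third) feature and the dynamically updated (second) feature; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class FeatureGroup:
    """Illustrative per-user feature group (names are assumptions)."""
    user_id: str
    third_feature: np.ndarray                      # feature of the registered face image
    second_feature: Optional[np.ndarray] = None    # dynamically updated feature

    def latest_second_feature(self) -> np.ndarray:
        # Before any historical face image exists, the third feature doubles
        # as the second feature, as described above.
        return self.second_feature if self.second_feature is not None else self.third_feature
```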
In a possible implementation, when matching the first face feature with the plurality of most recently updated second face features (that is, the latest second face features corresponding to different registered users), a matching degree between the first face feature and each of these second face features may be calculated, for example, a parameter such as the Euclidean distance that represents the similarity or association between the two features. The user corresponding to the second face feature with the highest matching degree, provided that this matching degree is greater than a preset matching degree, is taken as the user in the face image to be recognized, and the face recognition result is determined to be a pass.
The determining of the face recognition result based on the matching result may be understood as that, when a second face feature matching the first face feature exists in the plurality of second face features updated recently, the face recognition result is a pass recognition; and when the second face features matched with the first face features do not exist in the plurality of second face features which are updated recently, the face recognition result is that the recognition is failed.
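The matching step can be sketched as follows; cosine similarity and the 0.6 preset matching degree are illustrative assumptions (the disclosure equally allows Euclidean distance or other measures).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recognize(first_feature: np.ndarray,
              second_features: dict,              # user_id -> most recently updated second feature
              preset_matching_degree: float = 0.6):
    """Return (user_id, similarity) when recognition passes, else (None, best similarity)."""
    best_user, best_sim = None, -1.0
    for user_id, second_feature in second_features.items():
        sim = cosine_similarity(first_feature, second_feature)
        if sim > best_sim:
            best_user, best_sim = user_id, sim
    if best_sim > preset_matching_degree:          # highest match must exceed the preset matching degree
        return best_user, best_sim
    return None, best_sim
```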
In a possible implementation manner, the stored plurality of second facial features may be updated continuously, specifically, an update trigger condition may be set, and when the update trigger condition is satisfied, the stored second facial features may be updated.
For example, the update trigger condition may be a preset time interval, such as once a week; or, for a single user, the user's second face feature may be updated after every preset number of face recognitions, for example, after user A performs face recognition 10 times, user A's second face feature may be updated. It should be noted that the update trigger condition may be set according to at least one of various factors such as the requirements of different scenarios or the number of registered users, and updates may be periodic or aperiodic. For example, when a scenario has a high requirement on face recognition accuracy and/or the number of registered users is small, a short time interval may be used as the update trigger condition; similarly, when the requirement on accuracy is low and/or the number of registered users is large, a long time interval may be used. That is, the update period indicated by the update trigger condition is negatively related to the accuracy requirement of face recognition and positively or negatively related to the number of registered users. Of course, the time interval may be dynamically variable or fixed, which is not limited here.
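A sketch of such a trigger check, assuming a simple combination of the time-based and count-based examples above; the one-week interval and the count of 10 are only the example values mentioned in the text, not mandated values.

```python
import time

WEEK_SECONDS = 7 * 24 * 3600      # example interval from the text
RECOGNITION_COUNT = 10            # example per-user recognition count from the text

def update_triggered(last_update_ts: float, recognitions_since_update: int,
                     now: float = None) -> bool:
    """Return True when either the time-based or the count-based trigger fires."""
    now = time.time() if now is None else now
    return (now - last_update_ts >= WEEK_SECONDS
            or recognitions_since_update >= RECOGNITION_COUNT)
```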
In practical applications, the second face feature of a user may be updated on the premise that the user has performed face recognition based on a face image to be recognized. After a face image to be recognized is obtained and face recognition determines that it is the face image of user B, the second face feature of user B can be updated if the update trigger condition is met. Alternatively, after the user passes face recognition multiple times, the images used in those recognitions may be stored in storage space authorized by the user, such as local storage or the cloud, and the update of the second face feature is completed after a certain time has elapsed or a certain number of images have accumulated. For places where users appear frequently, such as communities and companies, this update method can save the computing resources consumed by generating the second face features. Correspondingly, for places users visit infrequently, such as exhibitions and sales venues, in order to ensure that the second face feature still meets the recognition requirements when a user visits again, the collected face images may not be stored, following the principle of protecting user privacy, and the second face feature may be updated based only on the image that passed face recognition and the stored second face feature.
In one possible implementation, the second facial feature of the user to be recognized in the face image to be recognized may be updated according to the method shown in fig. 2, which includes the following steps:
step 201, aiming at a target feature group, determining a first similarity between a first face feature of the face image to be recognized and a second face feature of a second target user in the target feature group; and determining a second similarity between the first facial features of the facial image to be recognized and the third facial features of the second target user in the target feature group.
Here, the target feature group may refer to one or more feature groups, that is, the feature groups to be matched. Each registered user has a unique corresponding feature group, in which the second face feature and the third face feature of that user are stored.
Step 202, under the condition that the first similarity and the second similarity corresponding to each feature group are determined to meet the updating condition, updating the second face feature of the user to be recognized based on the first face feature of the face image to be recognized.
In a possible implementation manner, before updating the second facial features of the user to be recognized based on the first facial features of the facial image to be recognized, the second facial features of the user to be recognized may be determined based on the matching result.
Illustratively, the second face features matched with the first face features of the face image to be recognized in the matching result are determined as the second face features of the user to be recognized.
The method for calculating the first similarity between the first facial feature and the second facial feature may be the same as the method for calculating the second similarity between the first facial feature and the third facial feature, and exemplary methods may be to calculate parameters such as euclidean distance, cosine distance, covariance and the like, which can represent the degree of similarity or association between two features.
Here, in response to the face image to be recognized being acquired from the first scene, the updating condition may include:
the first similarity corresponding to the feature group exceeds a first preset threshold and the second similarity corresponding to the feature group exceeds a second preset threshold.
Here, the first scene may refer to a scene in which a close relative appears less, such as an exhibition hall, a meeting place, a company, a sales place, and the like. The relatives may be brothers, sisters, parents, children and the like having a blood relationship, and a plurality of users belonging to the relatives usually have similar or partially identical facial features.
The above update condition may be expressed as:
similar(f_query, f_rec) > t1 and similar(f_query, f_db) > t2
where, for the same user, f_query denotes the first face feature, f_rec denotes the second face feature, f_db denotes the third face feature, similar(f_query, f_rec) denotes the first similarity, similar(f_query, f_db) denotes the second similarity, t1 is the first preset threshold, and t2 is the second preset threshold.
In this way, the second face feature of a user is updated only when the user's first face feature is close to both the second face feature and the third face feature. Small changes of the user over time can thus be reflected in the second face feature in a timely manner, ensuring the accuracy of face recognition when the user visits again.
In another possible embodiment, in response to the face image to be recognized being acquired from a second scene, the updating condition further includes:
the number of feature groups of which the first similarity exceeds a first preset threshold and the second similarity exceeds a second preset threshold is less than a predetermined number.
Here, the second scene may be understood as a scene in which a close relative appears more, such as a community, a home door lock, or the like.
The update condition in the second scene may be understood as: the above formula is satisfied, and the number n of feature groups satisfying the above formula is less than the predetermined number t3.
Here, the first preset threshold t1 is generally set to be greater than the preset matching degree used in the face recognition process. That is, the maximum matching degree between the first face feature of any user's face image to be recognized and the most recently updated second face features may exceed the preset matching degree but not exceed the first preset threshold; in this case, the update condition is considered not to be satisfied.
In this way, by setting the first preset threshold and the second preset threshold higher than the preset matching degree, the first face feature adopted when updating the second face feature can be ensured to have higher accuracy. In addition, by constraining the number of qualifying feature groups, the probability of mistakenly updating the features of two registered users with similar faces is reduced, avoiding the problem that such users can no longer pass face recognition after an erroneous update.
For example, suppose the registered users include two users with similar facial features, such as twins. When the second face feature of user A is to be updated based on the first face feature of user A's face image to be detected, the number of feature groups satisfying the above formula may be 2 (possibly the feature groups corresponding to the two twins). If the update were performed in this case, for example by directly updating the second face feature with the highest first similarity, the second face feature corresponding to the other user B might be updated by mistake. Subsequently, user B might fail face recognition, or user A might be misidentified as user B based on the second feature stored in the base library, affecting the face recognition accuracy of both user A and user B.
For another example, suppose the feature groups satisfying the above formula are feature group 1, feature group 2, and feature group 3. If the second face feature of feature group 1 is updated based on the first face feature of the face image to be detected, the user in that image may not actually be the user corresponding to feature group 1 but merely looks similar to that user at a certain angle. After such an update, the user corresponding to feature group 1 may subsequently fail face recognition.
Such a situation indicates that the registered users include multiple users whose features are close to those of the current user, for example close relatives; therefore, in order to ensure that these users can still pass face recognition, the second face feature may not be updated. A sketch of the update-condition check for both scenes is given below.
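In the sketch below, the cosine-similarity helper and the way the matched feature group and the thresholds t1, t2, t3 are passed in are assumptions for illustration only.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Same illustrative helper as in the matching sketch above.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def update_allowed(first_feature, matched_group, all_groups, t1, t2, t3=None):
    """Check the update condition for the matched user.

    `matched_group` and the entries of `all_groups` are (second_feature,
    third_feature) pairs. Pass t3=None for the first scene; for the second
    scene, t3 is the predetermined maximum number of qualifying feature groups.
    """
    s1 = cosine_similarity(first_feature, matched_group[0])   # first similarity
    s2 = cosine_similarity(first_feature, matched_group[1])   # second similarity
    if not (s1 > t1 and s2 > t2):
        return False
    if t3 is not None:
        # Second scene: count all feature groups whose similarities exceed both thresholds.
        qualifying = sum(
            1 for second_f, third_f in all_groups
            if cosine_similarity(first_feature, second_f) > t1
            and cosine_similarity(first_feature, third_f) > t2)
        if qualifying >= t3:
            return False
    return True
```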
In a possible implementation manner, when the second face features of the user to be detected are updated, the first face features in the image to be detected can be directly used as the second face features of the user to be detected, and although the updating method can improve the updating speed, the updated second face features may have lower accuracy, and further the accuracy of face recognition may be affected.
In another possible implementation, when updating the second facial features of the user to be recognized based on the first facial features of the facial image to be recognized, the method as shown in fig. 3 may include the following steps:
step 301, determining, based on a first similarity between a first face feature of the facial image to be recognized and a second face feature of the user to be recognized, update weights corresponding to the first face feature of the facial image to be recognized and the second face feature of the user to be recognized respectively.
Step 302, updating the second face features of the user to be recognized based on the updating weight, the first face features of the face image to be recognized and the second face features of the user to be recognized.
Therefore, when updating the second face feature of the user to be recognized, the first face feature and the existing second face feature are combined, so that the second face feature is updated gradually and the influence of the update on recognition accuracy is reduced.
In a possible implementation manner, when determining update weights corresponding to a first facial feature of the facial image to be recognized and a second facial feature of the user to be recognized respectively based on a first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, a first update weight corresponding to the first facial feature of the facial image to be recognized and a second update weight corresponding to the second facial feature of the user to be recognized may be determined based on a first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, a preset weight threshold range and a second preset threshold value;
the first updating weight is in direct proportion to the first similarity, and the sum of the first updating weight and the second updating weight is a preset fixed value.
For example, the preset fixed value is generally 1, and the update weight corresponding to the first facial feature of the facial image to be recognized may be determined based on a first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, a second preset threshold, and a preset weight threshold range, and then the update weight corresponding to the second facial feature of the user to be recognized may be determined based on the update weight corresponding to the first facial feature of the facial image to be recognized.
For example, the update weight corresponding to the first face feature of the face image to be recognized may be calculated according to a preset formula (presented as an image in the original publication), in which α denotes the update weight corresponding to the first face feature of the face image to be recognized, similar(f_query, f_rec) denotes the first similarity, t1 is the first preset threshold corresponding to the first similarity, α_ub denotes the preset maximum weight, and α_lb denotes the preset minimum weight.
After the update weight α corresponding to the first face feature of the face image to be recognized is determined, 1 - α is taken as the update weight corresponding to the second face feature of the user to be recognized.
For example, when the second facial feature of the user to be recognized is updated based on the update weight, the first facial feature of the facial image to be recognized, and the second facial feature of the user to be recognized, the calculation may be performed by the following formula:
f_rec_new = α * f_query + (1 - α) * f_rec
where f_rec_new denotes the updated second face feature.
As can be seen from the above formula, the update weight corresponding to the first face feature of the face image to be recognized is proportional to the first similarity; that is, the higher the similarity between the first face feature and the second face feature of the user to be recognized, the larger the proportion of the first face feature when updating the second face feature.
When the second face feature is updated, in order to reduce the occurrence of the situation that the face feature changes greatly due to the update and further influences the subsequent face recognition, the update weight threshold range can be set to control the update amplitude. Generally, the weight threshold range may be 0.08-0.15, and may also be adjusted based on the actual update requirement of the user.
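A sketch of the weighted update follows. The exact mapping from the first similarity to α is given only as a figure in the publication, so the linear, clipped mapping below is an assumption that merely preserves the stated properties (α grows with the first similarity and stays within the preset weight range [α_lb, α_ub]); the weighted combination itself follows the formula above.

```python
import numpy as np

def compute_update_weight(first_similarity: float, t1: float,
                          alpha_lb: float = 0.08, alpha_ub: float = 0.15) -> float:
    """Assumed form of the weight formula: linear in the margin above the
    threshold t1, clipped to the preset weight range [alpha_lb, alpha_ub]."""
    raw = alpha_ub * (first_similarity - t1) / (1.0 - t1)
    return float(np.clip(raw, alpha_lb, alpha_ub))

def update_second_feature(first_feature: np.ndarray,
                          second_feature: np.ndarray,
                          alpha: float) -> np.ndarray:
    """f_rec_new = alpha * f_query + (1 - alpha) * f_rec"""
    return alpha * first_feature + (1.0 - alpha) * second_feature
```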
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same concept, the embodiment of the present disclosure further provides an access control method, where the access control method may be applied to a controller corresponding to an access control or a door lock, the controller is connected to an image acquisition device, and the connection mode of the controller may be wired connection or wireless connection, and the wireless connection mode may include, for example, bluetooth connection, wireless network connection, and the like.
The method comprises the following steps:
step 1, responding to a face recognition request, and controlling an image acquisition device to acquire a face image to be recognized.
Here, the face recognition request may be sent when a user requesting to open the door is detected, and for example, an infrared detection device may be disposed at a door entrance or a door lock position, and when the infrared detection device detects the user, the face recognition request may be sent to the controller.
And 2, based on the face recognition method of the embodiment, carrying out face recognition on the face image to be recognized and determining a face recognition result.
And 3, controlling the door lock based on the face recognition result.
And if the face recognition result is that the recognition is passed, controlling the door lock to be opened, and if the face recognition result is that the recognition is failed, controlling the door lock to be closed.
By the method, the access control or the door lock can be accurately controlled, and the influence of the face recognition precision on the access control or the door lock control is reduced.
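A sketch of the access-control flow described above; the `camera`, `recognizer`, and `door_lock` objects and their method names are illustrative placeholders, not interfaces defined by the disclosure.

```python
def handle_face_recognition_request(camera, recognizer, door_lock):
    """Capture a face image, run face recognition, and control the door lock."""
    face_image = camera.capture()                  # acquire the face image to be recognized
    user_id, _similarity = recognizer.recognize(face_image)
    if user_id is not None:                        # recognition passed
        door_lock.open()
    else:                                          # recognition failed
        door_lock.keep_closed()
    return user_id
```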
Based on the same inventive concept, a face recognition device corresponding to the face recognition method is also provided in the embodiments of the present disclosure, and as the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the face recognition method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 4, there is shown a schematic diagram of an architecture of a face recognition apparatus according to an embodiment of the present disclosure, where the apparatus includes: a feature extraction module 401, a matching module 402 and an update module 403; wherein the content of the first and second substances,
the feature extraction module 401 is configured to acquire a face image to be recognized and extract a first face feature of the face image to be recognized;
a matching module 402, configured to match the first face features with a plurality of second face features respectively to obtain matching results, and obtain face recognition results based on the matching results;
the second face features are determined based on third face features of the registered face images and face features of historical face images matched with the registered face images in historical face recognition.
In a possible implementation, before the matching the first facial features with the plurality of second facial features respectively, the matching module 402 is further configured to:
and under the condition that no historical face image matched with the registered face image of the first target user exists, taking the third face feature of the registered face image corresponding to the first target user as the second face feature of the first target user.
In a possible implementation manner, the apparatus further includes an updating module 403, configured to update the second facial feature of the user to be recognized in the facial image to be recognized according to the following method:
determining a first similarity between a first face feature of the face image to be recognized and a second face feature of a second target user in the target feature group aiming at the target feature group; determining a second similarity between the first facial feature of the facial image to be recognized and a third facial feature of the second target user in the target feature group;
and under the condition that the updating condition is determined to be met based on the first similarity and the second similarity corresponding to each feature group, updating the second face feature of the user to be recognized based on the first face feature of the face image to be recognized.
In one possible embodiment, in response to the face image to be recognized being acquired from a first scene, the update condition includes:
the first similarity corresponding to the feature group is greater than a first preset threshold, and the second similarity corresponding to the feature group is greater than a second preset threshold.
In one possible embodiment, in response to the face image to be recognized being acquired from a second scene, the updating condition further includes:
the number of feature groups of which the first similarity is greater than a first preset threshold and the second similarity is greater than a second preset threshold is less than a predetermined number.
In a possible implementation manner, before updating the second facial features of the user to be recognized based on the first facial features of the facial image to be recognized, the updating module 403 is further configured to:
and determining a second face feature matched with the first face feature of the face image to be recognized in the matching result as a second face feature of the user to be recognized.
In a possible implementation manner, the updating module 403, when updating the second facial feature of the user to be recognized based on the first facial feature of the facial image to be recognized, is configured to:
determining updating weights respectively corresponding to the first face features of the face image to be recognized and the second face features of the user to be recognized based on a first similarity between the first face features of the face image to be recognized and the second face features of the user to be recognized;
and updating the second face features of the user to be identified based on the updating weight, the first face features of the face image to be identified and the second face features of the user to be identified.
In a possible implementation manner, the updating module 403, when determining, based on a first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, the update weights respectively corresponding to the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, is configured to:
determining a first updating weight corresponding to the first facial feature of the facial image to be recognized and a second updating weight corresponding to the second facial feature of the user to be recognized based on a first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, a preset weight threshold range and a second preset threshold;
the first updating weight is in direct proportion to the first similarity, and the sum of the first updating weight and the second updating weight is a preset fixed value.
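The weighted update itself may be sketched as follows, assuming the preset fixed value for the sum of the two updating weights is 1.0, that the first updating weight grows with the first similarity within a preset weight threshold range, and that the fused feature is re-normalized; the exact mapping from similarity to weight shown here is an illustrative assumption, not the embodiment's definition.

```python
import numpy as np

def compute_update_weights(first_similarity,
                           weight_range=(0.1, 0.5),
                           second_threshold=0.5):
    """Map the first similarity to a first updating weight within a preset
    weight threshold range; the two weights sum to a preset fixed value (1.0).
    Similarities near the second preset threshold map to the low end of the
    range, similarities near 1.0 map to the high end (illustrative choice).
    """
    low, high = weight_range
    span = max(1.0 - second_threshold, 1e-6)
    ratio = np.clip((first_similarity - second_threshold) / span, 0.0, 1.0)
    first_weight = low + ratio * (high - low)   # grows with the first similarity
    second_weight = 1.0 - first_weight          # weights sum to the fixed value
    return first_weight, second_weight

def update_second_feature(first_feature, second_feature, first_similarity):
    w1, w2 = compute_update_weights(first_similarity)
    fused = w1 * first_feature + w2 * second_feature
    return fused / np.linalg.norm(fused)        # keep the stored feature normalized
```

Re-normalizing the fused feature keeps subsequent similarity scores on a comparable scale across repeated updates.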
For the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the related description in the above method embodiments; details are not repeated here.
Based on the same concept, an embodiment of the present disclosure further provides an access control device corresponding to the access control method. As shown in fig. 5, which is an architectural schematic diagram of the access control device provided in the embodiments of the present disclosure, the device includes an acquisition module 501, a recognition module 502, and a control module 503. Specifically:
the acquisition module 501 is configured to, in response to a face recognition request, control an image acquisition device to acquire a face image to be recognized;
the recognition module 502 is configured to perform face recognition on the face image to be recognized based on the face recognition method described in the foregoing embodiments, and determine a face recognition result;
and the control module 503 is configured to perform door lock control based on the face recognition result.
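For illustration, the cooperation of the acquisition module 501, the recognition module 502 and the control module 503 may be sketched as follows; the camera, recognizer and lock interfaces (capture_frame, recognize, unlock, keep_locked) are hypothetical placeholders and not APIs defined by this disclosure.

```python
class AccessController:
    def __init__(self, camera, recognizer, lock, pass_threshold=0.6):
        self.camera = camera            # exposes capture_frame() -> image
        self.recognizer = recognizer    # exposes recognize(image) -> (user_id, score)
        self.lock = lock                # exposes unlock() / keep_locked()
        self.pass_threshold = pass_threshold

    def on_face_recognition_request(self):
        # Acquisition module: control the image acquisition device.
        image = self.camera.capture_frame()
        # Recognition module: run the face recognition method on the image.
        user_id, score = self.recognizer.recognize(image)
        # Control module: perform door lock control based on the result.
        if user_id is not None and score >= self.pass_threshold:
            self.lock.unlock()
        else:
            self.lock.keep_locked()
        return user_id, score
```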
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 6, which is a schematic structural diagram of a computer device 600 provided in the embodiment of the present disclosure, the computer device includes a processor 601, a memory 602, and a bus 603. The memory 602 is configured to store execution instructions and includes an internal memory 6021 and an external memory 6022. The internal memory 6021 temporarily stores operation data of the processor 601 and data exchanged with the external memory 6022, such as a hard disk; the processor 601 exchanges data with the external memory 6022 through the internal memory 6021. When the computer device 600 runs, the processor 601 communicates with the memory 602 through the bus 603, so that the processor 601 executes the following instructions:
acquiring a face image to be recognized, and extracting first face features of the face image to be recognized;
matching the first face features with a plurality of second face features respectively to obtain matching results, and obtaining face recognition results based on the matching results;
the second face features are determined based on third face features of the registered face images and face features of historical face images matched with the registered face images in historical face recognition.
In a possible implementation, the instructions executed by the processor 601 further include, before the matching of the first facial features with the plurality of second facial features respectively:
in the case that no historical face image matching the registered face image of a first target user exists, taking the third face feature of the registered face image corresponding to the first target user as the second face feature of the first target user.
In a possible implementation, the instructions executed by the processor 601 further include updating the second facial feature of the user to be recognized in the facial image to be recognized according to the following method:
for a target feature group, determining a first similarity between the first face feature of the face image to be recognized and a second face feature of a second target user in the target feature group; determining a second similarity between the first face feature of the face image to be recognized and a third face feature of the second target user in the target feature group;
and in the case that it is determined, based on the first similarity and the second similarity corresponding to each feature group, that an update condition is met, updating the second face feature of the user to be recognized based on the first face feature of the face image to be recognized.
In one possible embodiment, in the instructions executed by the processor 601, in response to the face image to be recognized being acquired from a first scene, the update condition includes:
the first similarity corresponding to the feature group is greater than a first preset threshold, and the second similarity corresponding to the feature group is greater than a second preset threshold.
In a possible implementation, in the instructions executed by the processor 601, in response to the face image to be recognized being acquired from a second scene, the update condition further includes:
the number of feature groups for which the first similarity is greater than the first preset threshold and the second similarity is greater than the second preset threshold is less than a predetermined number.
In a possible implementation, in the instructions executed by the processor 601, before updating the second face feature of the user to be recognized based on the first face feature of the face image to be recognized, the method further includes:
determining, as the second face feature of the user to be recognized, the second face feature that matches the first face feature of the face image to be recognized in the matching result.
In a possible implementation, in the instructions executed by the processor 601, updating the second facial feature of the user to be recognized based on the first facial feature of the facial image to be recognized includes:
determining updating weights respectively corresponding to the first face feature of the face image to be recognized and the second face feature of the user to be recognized, based on a first similarity between the first face feature of the face image to be recognized and the second face feature of the user to be recognized;
and updating the second face feature of the user to be recognized based on the updating weights, the first face feature of the face image to be recognized, and the second face feature of the user to be recognized.
In a possible implementation manner, in the instructions executed by the processor 601, the determining, based on the first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, the update weights corresponding to the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized respectively includes:
determining a first updating weight corresponding to the first facial feature of the facial image to be recognized and a second updating weight corresponding to the second facial feature of the user to be recognized based on a first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, a preset weight threshold range and a second preset threshold;
the first updating weight is in direct proportion to the first similarity, and the sum of the first updating weight and the second updating weight is a preset fixed value.
Alternatively, processor 601 may execute the following instructions:
responding to the face recognition request, and controlling the image acquisition equipment to acquire a face image to be recognized;
performing face recognition on the face image to be recognized based on the face recognition method of the above embodiments, and determining a face recognition result;
and performing door lock control based on the face recognition result.
The disclosed embodiments also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the face recognition method described in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product carrying program code; the instructions included in the program code may be used to execute the steps of the face recognition method in the foregoing method embodiments. For details, reference may be made to the foregoing method embodiments, which are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and the apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments; they are not described herein again. In the several embodiments provided in this disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or some of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A face recognition method, comprising:
acquiring a face image to be recognized, and extracting first face features of the face image to be recognized;
matching the first face features with a plurality of second face features respectively to obtain matching results, and obtaining face recognition results based on the matching results;
the second face features are determined based on third face features of the registered face images and face features of historical face images matched with the registered face images in historical face recognition.
2. The method according to claim 1, prior to said matching said first facial features with a plurality of second facial features respectively, further comprising:
in the case that no historical face image matching the registered face image of a first target user exists, taking the third face feature of the registered face image corresponding to the first target user as the second face feature of the first target user.
3. The method according to claim 1 or 2, characterized in that the method further comprises updating the second facial features of the user to be recognized in the facial image to be recognized according to the following method:
for a target feature group, determining a first similarity between a first face feature of the face image to be recognized and a second face feature of a second target user in the target feature group; determining a second similarity between the first face feature of the face image to be recognized and a third face feature of the second target user in the target feature group;
and in the case that it is determined, based on the first similarity and the second similarity corresponding to each feature group, that an update condition is met, updating the second face feature of the user to be recognized based on the first face feature of the face image to be recognized.
4. The method of claim 3, wherein in response to the face image to be recognized being captured from a first scene, the update condition comprises:
the first similarity corresponding to the feature group is greater than a first preset threshold, and the second similarity corresponding to the feature group is greater than a second preset threshold.
5. The method of claim 4, wherein in response to the facial image to be recognized being captured from a second scene, the updating condition further comprises:
the number of feature groups with the first similarity larger than a first preset threshold and the second similarity larger than a second preset threshold is smaller than a preset number.
6. The method according to claim 3, wherein before updating the second facial features of the user to be recognized based on the first facial features of the facial image to be recognized, the method further comprises:
determining, as the second face feature of the user to be recognized, the second face feature that matches the first face feature of the face image to be recognized in the matching result.
7. The method according to any one of claims 3 to 6, wherein the updating the second facial features of the user to be recognized based on the first facial features of the facial image to be recognized comprises:
determining updating weights respectively corresponding to the first face feature of the face image to be recognized and the second face feature of the user to be recognized based on a first similarity between the first face feature of the face image to be recognized and the second face feature of the user to be recognized;
and updating the second face features of the user to be recognized based on the updating weight, the first face features of the face image to be recognized and the second face features of the user to be recognized.
8. The method according to claim 7, wherein the determining, based on a first similarity between first facial features of the facial image to be recognized and second facial features of the user to be recognized, update weights respectively corresponding to the first facial features of the facial image to be recognized and the second facial features of the user to be recognized comprises:
determining a first updating weight corresponding to the first facial feature of the facial image to be recognized and a second updating weight corresponding to the second facial feature of the user to be recognized based on a first similarity between the first facial feature of the facial image to be recognized and the second facial feature of the user to be recognized, a preset weight threshold range and a second preset threshold;
the first updating weight is in direct proportion to the first similarity, and the sum of the first updating weight and the second updating weight is a preset fixed value.
9. An access control method, comprising:
responding to the face recognition request, and controlling the image acquisition equipment to acquire a face image to be recognized;
based on the face recognition method of any one of claims 1 to 8, carrying out face recognition on the face image to be recognized, and determining a face recognition result;
and performing door lock control based on the face recognition result.
10. A face recognition apparatus, comprising:
the feature extraction module is used for acquiring a face image to be recognized and extracting first face features of the face image to be recognized;
the matching module is used for matching the first face features with a plurality of second face features respectively to obtain matching results and obtaining face recognition results based on the matching results;
the second face features are determined based on third face features of the registered face images and face features of historical face images matched with the registered face images in historical face recognition.
11. An access control device, comprising:
the acquisition module is used for responding to the face recognition request and controlling the image acquisition equipment to acquire a face image to be recognized;
the recognition module is used for performing face recognition on the face image to be recognized based on the face recognition method of any one of claims 1 to 8, and determining a face recognition result;
and the control module is used for controlling the door lock based on the face recognition result.
12. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when a computer device is running, the machine-readable instructions when executed by the processor performing the steps of the face recognition method according to any one of claims 1 to 8 or performing the steps of the access control method according to claim 9.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, performs the steps of the face recognition method according to any one of claims 1 to 8, or performs the steps of the access control method according to claim 9.
CN202111087459.6A 2021-09-16 2021-09-16 Face recognition and access control method and device, computer equipment and storage medium Pending CN113792668A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111087459.6A CN113792668A (en) 2021-09-16 2021-09-16 Face recognition and access control method and device, computer equipment and storage medium
PCT/CN2022/104602 WO2023040436A1 (en) 2021-09-16 2022-07-08 Face recognition and door security control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111087459.6A CN113792668A (en) 2021-09-16 2021-09-16 Face recognition and access control method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113792668A true CN113792668A (en) 2021-12-14

Family

ID=78878593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111087459.6A Pending CN113792668A (en) 2021-09-16 2021-09-16 Face recognition and access control method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113792668A (en)
WO (1) WO2023040436A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9384385B2 (en) * 2014-11-06 2016-07-05 Intel Corporation Face recognition using gradient based feature analysis
CN110532991A (en) * 2019-09-04 2019-12-03 深圳市捷顺科技实业股份有限公司 A kind of face identification method, device and equipment
CN112818909A (en) * 2021-02-22 2021-05-18 Oppo广东移动通信有限公司 Image updating method and device, electronic equipment and computer readable medium
CN113792668A (en) * 2021-09-16 2021-12-14 深圳市商汤科技有限公司 Face recognition and access control method and device, computer equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023040436A1 (en) * 2021-09-16 2023-03-23 上海商汤智能科技有限公司 Face recognition and door security control
CN117011922A (en) * 2023-09-26 2023-11-07 荣耀终端有限公司 Face recognition method, device and storage medium
CN117011922B (en) * 2023-09-26 2024-03-08 荣耀终端有限公司 Face recognition method, device and storage medium

Also Published As

Publication number Publication date
WO2023040436A1 (en) 2023-03-23

Similar Documents

Publication Publication Date Title
CN109635872B (en) Identity recognition method, electronic device and computer program product
US20210286870A1 (en) Step-Up Authentication
CN107872436B (en) Account identification method, device and system
KR101938033B1 (en) Biometric authentication in connection with camera-equipped devices
JP5805040B2 (en) Person authentication dictionary update method, person authentication dictionary update apparatus, person authentication dictionary update program, and person authentication system
CN113792668A (en) Face recognition and access control method and device, computer equipment and storage medium
CN107491674B (en) Method and device for user authentication based on characteristic information
CN104424414A (en) Method for logging a user in to a mobile device
CN115862088A (en) Identity recognition method and device
TW201937392A (en) System and method for biometric authentication in connection with camera-equipped devices
Nedjah et al. Efficient fingerprint matching on smart cards for high security and privacy in smart systems
AU2011252761B2 (en) Automatic identity enrolment
CN110569913A (en) Scene classifier training method and device, scene recognition method and robot
Tao et al. Biometric authentication for a mobile personal device
AU2011252761A1 (en) Automatic identity enrolment
CN109886239B (en) Portrait clustering method, device and system
Stragapede et al. IJCB 2022 mobile behavioral biometrics competition (MobileB2C)
JP2018128736A (en) Face authentication system, face authentication method and face authentication program
Kuznetsov et al. Biometric authentication using convolutional neural networks
CN112183167A (en) Attendance checking method, authentication method, living body detection method, device and equipment
CN111400621B (en) Position information authenticity verification method and device and electronic equipment
CN110956098B (en) Image processing method and related equipment
CN112102551A (en) Device control method, device, electronic device and storage medium
Szczepanik et al. Security lock system for mobile devices based on fingerprint recognition algorithm
CN107844735B (en) Authentication method and device for biological characteristics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40056240

Country of ref document: HK