CN111460859A - User identification method and device, electronic equipment and computer readable medium - Google Patents

Info

Publication number
CN111460859A
CN111460859A
Authority
CN
China
Prior art keywords
user
photo
face
database
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910053761.6A
Other languages
Chinese (zh)
Inventor
董博
李艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910053761.6A
Publication of CN111460859A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Abstract

The disclosure relates to a user identification method, a user identification device, an electronic device and a computer readable medium. The user identification method comprises the following steps: acquiring a first photo of a user; matching the user according to the first photo, and storing the first photo to a first database after the matching is successful; acquiring a second photo of the user; and identifying the user in the first database according to the second photo. The user identification method, the user identification device, the electronic equipment and the computer readable medium can improve the accuracy, robustness and response speed of face identification.

Description

User identification method and device, electronic equipment and computer readable medium
Technical Field
The present disclosure relates to the field of face recognition, and in particular, to a user recognition method, apparatus, electronic device, and computer readable medium.
Background
As face recognition technology has matured, it has been applied in many practical scenarios, such as user identification for unmanned supermarkets, community security, office-building security, and check-in. Face recognition mainly involves a face detection algorithm and a face recognition algorithm. Existing face detection methods such as the SSD network place high demands on the environment and show poor robustness when detecting small faces. Existing face recognition algorithms have low accuracy when faces are similar.
Existing user identification methods are generally deployed on cloud computing servers, which makes the response slow and the cost high. Furthermore, prior-art user identification is typically 1:N identification, in which the photo to be identified is compared against a massive set of photos to determine the user's identity; the extremely high computational complexity reduces both the accuracy and the response speed of the identification result.
Therefore, a new user identification method, apparatus, electronic device and computer readable medium are needed.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure and therefore it may contain information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present disclosure provides a user identification method, device, electronic device and computer readable medium, which can improve accuracy, robustness and response speed of face identification.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, a user identification method is provided, which includes: acquiring a first photo of a user; matching the user according to the first photo, and storing the first photo to a first database after the matching is successful; acquiring a second photo of the user; and identifying the user in the first database according to the second photo.
In an exemplary embodiment of the present disclosure, further comprising: and detecting the first photo by using a face detection algorithm, and putting the first photo into a registered photo database after the detection is successful.
In an exemplary embodiment of the present disclosure, detecting the first photo using a face detection algorithm includes: detecting the first photo using the S3FD face detection algorithm.
In an exemplary embodiment of the present disclosure, matching the user according to the first photograph includes: acquiring identification information and a third photo of the user; acquiring a first photo of the user in the registered photo database according to the identification information; and matching the first picture and the third picture by using a face detection algorithm, a face correction algorithm and a face recognition algorithm.
In an exemplary embodiment of the present disclosure, further comprising: and determining parameters of a face detection algorithm according to the position parameters of the user when the user shoots the second picture and the third picture.
In an exemplary embodiment of the present disclosure, matching the first photo and the third photo using a face rectification algorithm and a face recognition algorithm includes: matching the first photo and the third photo using the S3FD face detection algorithm, the MTCNN correction algorithm and the ArcFace face recognition method.
In an exemplary embodiment of the present disclosure, further comprising: the face recognition algorithm is trained using face samples of multiple dimensions, including race and age.
In an exemplary embodiment of the present disclosure, storing the first photograph to a first database includes: acquiring a first face characteristic value of the first photo; and storing the first face characteristic value to a first database.
In an exemplary embodiment of the present disclosure, identifying the user from the second photograph includes: acquiring a second face characteristic value of the second photo; determining a second distance according to the second face characteristic value and a plurality of first face characteristic values in the first database; and confirming that the user identification is successful when the second distance meets a threshold condition.
In an exemplary embodiment of the present disclosure, further comprising: after the user is successfully identified, communicating with a merchandise identification and settlement system to settle the user.
In an exemplary embodiment of the present disclosure, further comprising: and deleting the first photo corresponding to the user from the first database after the user finishes settlement and leaves.
According to an aspect of the present disclosure, there is provided a user identification apparatus, including: the first shooting module is used for acquiring a first photo of a user; the first matching module is used for matching the user according to the first photo, and storing the first photo to a first database after the first photo is successfully matched; the second shooting module is used for acquiring a second picture of the user; and the second identification module is used for identifying the user in the first database according to the second photo.
According to an aspect of the present disclosure, an electronic device is provided, the electronic device including: one or more processors; storage means for storing one or more programs; when executed by one or more processors, cause the one or more processors to implement a method as above.
According to an aspect of the disclosure, a computer-readable medium is proposed, on which a computer program is stored, which program, when being executed by a processor, carries out the method as above.
According to the user identification method, the user identification device, the electronic equipment and the computer readable medium, the accuracy, the robustness and the response speed of face identification can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
Fig. 1 is a system block diagram illustrating a user identification method and apparatus according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating a method of user identification according to an example embodiment.
FIG. 3 is a flow chart illustrating a method of user identification according to an example embodiment.
FIG. 4 is a flow chart illustrating a method of user identification according to an example embodiment.
Fig. 5 is a flow chart illustrating a method of user identification according to another exemplary embodiment.
Fig. 6 is a block diagram illustrating a user identification device according to an example embodiment.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 8 is a schematic diagram illustrating a computer-readable storage medium according to an example embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first component discussed below may be termed a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It is to be understood by those skilled in the art that the drawings are merely schematic representations of exemplary embodiments, and that the blocks or processes shown in the drawings are not necessarily required to practice the present disclosure and are, therefore, not intended to limit the scope of the present disclosure.
Fig. 1 is a system block diagram illustrating a user identification method and apparatus according to an exemplary embodiment.
The server 105 may be a server providing various services, such as a background management server (by way of example only) that supports a user identification system operated by users with the terminal devices 101, 102, 103. The background management server may analyze and otherwise process received data such as a user identification request, and feed the processing result (for example, the second distance or the user identity) back to the terminal device.
The server 105 may, for example, acquire a first photo of the user; match the user according to the first photo and store the first photo in a first database after the matching succeeds; acquire a second photo of the user; and identify the user in the first database according to the second photo.
The server 105 may be a single physical server or may consist of multiple servers. For example, part of the server 105 may serve as a user identification task submission system in the present disclosure, for obtaining the task to be executed by a user identification command; another part of the server 105 may serve as the user identification system in the present disclosure, for acquiring a first photo of the user, matching the user according to the first photo and storing the first photo in a first database after the matching succeeds, acquiring a second photo of the user, and identifying the user in the first database according to the second photo.
It should be noted that the user identification method provided by the embodiments of the present disclosure may be executed by the server 105, and accordingly a user identification device may be disposed in the server 105. The client through which the user submits a user identification task and obtains the identification result is generally located in the terminal devices 101, 102, 103.
According to the user identification method and the user identification device, the accuracy, the robustness and the response speed of face identification can be improved.
FIG. 2 is a flow chart illustrating a method of user identification according to an example embodiment. The user identification method 20 includes at least steps S202 to S208. The user identification method of the embodiment can be executed in the local server to improve the identification speed and robustness and reduce the cost.
As shown in fig. 2, in S202, a first photograph of a user is acquired. When the user uses the user identification method of the present application, the user may first become a registered user, and the steps S202 to S208 are executed after uploading the first photo as the registered photo.
In one embodiment, the method further comprises: detecting the first photo using a face detection algorithm, and putting the registered photo into a registered photo database after the detection succeeds. After the first photo of the user is obtained, the face detection algorithm may detect whether a face exists in the first photo, so as to verify the validity of the first photo. The registered photo database stores the face photos of all registered users. The face detection algorithm may, for example, be the S3FD face detection algorithm. S3FD is a face detection model based on the SSD (Single Shot MultiBox Detector) network; its detection speed can reach real time, it is suitable for multi-scale face detection, and it overcomes the SSD network's low detection rate for small faces.
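The registration step above can be sketched as follows. This is an illustrative stand-in only: `detect_faces` pretends to be a detector such as S3FD, and the "registered photo database" is a plain dict; the patent does not prescribe this code.

```python
# Sketch of the registration flow: a photo is accepted into the
# registered-photo database only if a face is detected in it.

def detect_faces(photo):
    # Hypothetical detector stub: a real system would run S3FD here
    # and return bounding boxes. We pretend the photo dict tells us.
    return photo.get("faces", [])

def register_photo(photo, registered_db):
    """Store the photo as a registered photo only if a face is found."""
    boxes = detect_faces(photo)
    if not boxes:
        return False  # invalid registration photo: no face detected
    registered_db[photo["user_id"]] = photo
    return True

registered_db = {}
ok = register_photo({"user_id": "u1", "faces": [(10, 10, 80, 80)]}, registered_db)
bad = register_photo({"user_id": "u2", "faces": []}, registered_db)
```

Here `ok` is `True` and `bad` is `False`: only the photo containing a detected face reaches the registered-photo database.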
In S204, the user is matched according to the first photo, and the first photo is stored in the first database after the matching succeeds. The first database stores all first photos that were successfully matched. For example, in an unmanned supermarket the first database stores photos of the users currently in the store; in a residential community or an office building it stores photos of the users currently inside; in a railway station it stores photos of the users currently within the station.
In one embodiment, matching the user based on the first photo includes: acquiring identification information and a third photo of the user; acquiring the user's first photo from the registered photo database according to the identification information; and matching the first photo and the third photo using a face detection algorithm, a face correction algorithm and a face recognition algorithm. The third photo of the user may be taken, for example, by a camera at the entrance when the user enters an unmanned supermarket to shop, when the user enters a residential community or an office building, or when the user enters a railway station; it should be understood that the technical solution of the present disclosure is not particularly limited in this respect. The identification information of the user may be obtained at the same time the third photo is taken. It may be obtained by scanning an identification code, for example a two-dimensional code or a barcode, or via an NFC-enabled device or card, a magnetic card, an IC card, a password, and so on, without limitation. The identification information may be a unique identification code of the user, stored in the registered photo database together with the first photo at registration time; it identifies the registered photo and can be used to look up the user's registered photo.
The face correction algorithm can detect key points of the detected face and calibrate the key points; the face recognition algorithm can extract the characteristic values of the key points of the face and compare the characteristic values with the registered photos to realize the matching of the face.
In one embodiment, matching the first photo and the third photo using a face rectification algorithm and a face recognition algorithm comprises: matching the first photo and the third photo using the S3FD face detection algorithm, the MTCNN correction algorithm and the ArcFace face recognition method. The MTCNN correction algorithm locates feature points, extracts several facial key points and applies an affine transformation, with high accuracy. The ArcFace face recognition method improves the traditional objective function and cleans the training data; it can be applied to many classification networks, increases the feature distance between similar faces, and improves recognition precision.
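The three-stage matching pipeline described above (detect, then correct, then recognize and compare) can be sketched as below. All three stages are stubbed out; in the patent's scheme they would be S3FD, MTCNN alignment and ArcFace respectively, and the feature vectors, threshold and dict-based photos are illustrative assumptions.

```python
# Minimal sketch of the detect -> align -> embed -> compare pipeline.
import math

def detect(photo):   # stand-in for S3FD face detection
    return photo     # assume one face spanning the whole image

def align(face):     # stand-in for MTCNN key-point alignment
    return face

def embed(face):     # stand-in for ArcFace: returns a feature vector
    return face["vec"]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def photos_match(p1, p2, threshold=0.8):
    """Two photos match when their embeddings are close enough."""
    v1 = embed(align(detect(p1)))
    v2 = embed(align(detect(p2)))
    return euclidean(v1, v2) < threshold

same = photos_match({"vec": [0.1, 0.9]}, {"vec": [0.12, 0.88]})  # close vectors
diff = photos_match({"vec": [0.1, 0.9]}, {"vec": [0.9, 0.1]})    # far vectors
```

With these toy vectors, `same` is `True` and `diff` is `False`; a real deployment would tune the threshold on embeddings produced by the actual recognition network.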
In one embodiment, further comprising: the face recognition algorithm is trained using multi-dimensional face samples, including race and age. According to the embodiment, the accuracy and the robustness of the face recognition algorithm can be improved.
In one embodiment, storing the first photograph to the first database comprises: acquiring a first face characteristic value of the first photo; and storing the first face characteristic value to a first database. The first face characteristic value of the first photo can be obtained through a face rectification algorithm and a face recognition algorithm. The first database may further store a user identifier corresponding to the first face feature value.
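Storing the first face feature value keyed by the user identifier might look like the sketch below; the dict-based "first database" and the `extract_feature` stub are illustrative assumptions, not the patent's implementation.

```python
# Sketch of storing a user's first-photo feature value in the
# "first database" (a dict mapping user identifier -> feature vector).

def extract_feature(photo):
    # Hypothetical stub: a real system would run face correction and
    # an ArcFace-style network to produce the embedding.
    return photo["vec"]

def store_first_photo(user_id, photo, db):
    """Persist the first face feature value under the user identifier."""
    db[user_id] = extract_feature(photo)

first_db = {}
store_first_photo("u42", {"vec": [0.3, 0.7, 0.1]}, first_db)
```

Keeping only feature values (rather than raw photos) in the first database also keeps the later in-store comparisons cheap, since distances are computed directly on the stored vectors.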
In S206, a second photo of the user is obtained. The second photo may be taken, for example, by a camera at the supermarket exit (or settlement area) when the user is ready to settle after shopping in an unmanned supermarket; by a camera at the exit of a residential community or an office building when the user leaves; or by a camera at the exit of a railway station when the user exits the station.
In one embodiment, further comprising: and determining parameters of a face detection algorithm according to the position parameters of the user when the user shoots the second picture and the third picture. The position parameters are parameters such as a distance and an angle between the face of the user and the shooting camera when the user takes the second picture and the third picture, and the invention is not limited to this. Due to the different installation positions of the cameras and the different specific environments of the doorway, the position parameters of the user when taking the second picture and the third picture will not be identical. The parameters of the face detection algorithm are respectively set according to different position parameters when the user shoots the second picture and the third picture, so that the accuracy of the face detection algorithm can be improved.
In S208, the user is identified from the second photograph in the first database. The first database stores first face characteristic values of a plurality of users. The embodiment performs recognition through the second photo and the plurality of first face feature values in the first database, and improves response speed and accuracy compared with recognition through a mass registration photo in the registration photo database.
In one embodiment, identifying the user from the second photo includes: acquiring a second face feature value of the second photo; determining a second distance according to the second face feature value and the plurality of first face feature values in the first database; and confirming that the user identification succeeds when the second distance satisfies a threshold condition. The distance represents the similarity between different faces: the smaller the distance, the more similar the faces. A plurality of first distances between the second face feature value and the plurality of first face feature values are calculated, and the smallest first distance is selected as the second distance. The distance may be, for example, a Euclidean distance.
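The nearest-neighbor-plus-threshold decision just described can be sketched as follows; the feature vectors, threshold value and dict-based first database are illustrative assumptions.

```python
# Sketch of identification: find the smallest Euclidean distance
# (the "second distance") between the second photo's feature value
# and every first feature value in the first database, and accept
# the match only if it is under a preset threshold.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(second_feature, first_db, threshold=0.5):
    # first_db: {user_id: feature_vector}
    distances = {uid: euclidean(second_feature, vec) for uid, vec in first_db.items()}
    uid, second_distance = min(distances.items(), key=lambda kv: kv[1])
    if second_distance < threshold:
        return uid, second_distance       # identification succeeds
    return None, second_distance          # no match under threshold

first_db = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}
user, dist = identify([0.15, 0.85], first_db)
```

Here the query vector is closest to "alice" and the distance is well under the threshold, so `user` is `"alice"`. Because the search runs only over the users currently in the first database rather than the full registered-photo database, it reflects the response-speed advantage the disclosure claims.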
In one embodiment, further comprising: after the user is successfully identified, the system communicates with a commodity identification and settlement system to settle the user. When the user identification method is applied to an unmanned supermarket, when the user identification is successful, the identification corresponding to the user can be obtained in the first database so as to obtain the corresponding information of the user, such as balance, whether to be a member or not and the like; and settlement is performed for the user through the commodity identification and settlement system.
In one embodiment, the method further comprises: deleting the first photo corresponding to the user from the first database after the user finishes settlement and leaves. When the user identification method is applied to an unmanned supermarket, the user's first photo in the first database can be deleted after settlement is finished; when applied to a residential community or an office building, it can be deleted when the user leaves the community or checks out and leaves; when applied to a railway station, it can be deleted after the user exits the station or boards and departs. Further, the first database stores the first face feature value of the first photo together with the corresponding user's identification information; in this embodiment, both the first face feature value and the identification information corresponding to the user are deleted after the user leaves.
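The point of this deletion step is to keep the in-store search set small; a minimal sketch of that lifecycle, again using an illustrative dict as the first database, might look like:

```python
# Sketch of the first-database lifecycle: the entry is removed once
# the user has settled and left, so later in-store searches stay fast.

first_db = {"u1": [0.1, 0.9], "u2": [0.8, 0.2]}

def on_user_left(user_id, db):
    # Delete the feature value (and, in a full system, the associated
    # identification information) for the departed user.
    db.pop(user_id, None)

on_user_left("u1", first_db)
```

After the call, only users still inside the store remain as identification candidates.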
In one embodiment, the first, second and third photos of the user may be taken in a closed environment. Face detection, correction and recognition algorithms are sensitive to the environment, and this embodiment can reduce interference with user identification from external factors such as illumination and occlusions.
According to the user identification method disclosed by the invention, the accuracy, the robustness and the response speed of face identification can be improved by acquiring the first photo of the user, storing the first photo in the first database and identifying the user in the first database according to the second photo.
FIG. 3 is a flow chart illustrating a method of user identification according to an example embodiment. The user identification method 30 includes at least steps S302 to S306.
As shown in fig. 3, in S302, the identification information of the user and the third photograph are acquired. The identification information of the user and the third photo have been introduced above, and are not described herein again.
In S304, a first photo of the user is obtained in the registered photo database according to the identification information. In the registered photo database, the registered photos of the user correspond to the identification information one by one, so that the first photo of a certain specified user can be accurately acquired.
In S306, the first photo and the third photo are matched using a face rectification algorithm and a face recognition algorithm. Feature values are calculated for the first photo and the third photo respectively, the feature distance between them is calculated, and the matching succeeds when the feature distance satisfies a threshold condition.
FIG. 4 is a flow chart illustrating a method of user identification according to an example embodiment. The user identification method 40 includes at least steps S402 to S406.
In S402, a second face feature value of the second photograph is acquired. The method for calculating the second face feature value is the same as the method for calculating the first face feature value, and the above description has been given for details, which is not repeated herein.
In S404, a second distance is determined according to the second face feature value and the plurality of first face feature values in the first database. The second distance is the feature distance between the second face feature value and the first face feature values in the first database, and may be, for example, a Euclidean distance. A plurality of first distances between the second face feature value and the plurality of first face feature values in the first database are calculated, and the smallest of these first distances is taken as the second distance.
In S406, when the second distance satisfies the threshold condition, it is confirmed that the user identification is successful. A threshold value may be preset, and the specific value of the threshold value may be obtained through experience or actual tests. And when the second distance is smaller than the threshold value, confirming that the user identification is successful, otherwise, confirming that the identification is failed.
According to the user identification method disclosed by the invention, the first photo of the user is obtained, the first photo is stored in the first database, and the user is identified in the first database according to the second photo, so that the accuracy, robustness and response speed of face identification can be improved.
According to the user identification method disclosed by the invention, the registered photo of the user is obtained according to the identification information of the user, and then the first photo and the registered photo are matched, so that the matching calculation amount can be reduced, and the response speed is improved.
According to the user identification method disclosed by the invention, the second photo is identified with the plurality of first face characteristic values in the first database, so that the identification speed and accuracy can be improved.
It should be clearly understood that this disclosure describes how to make and use particular examples, but the principles of this disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
Fig. 5 is a flow chart illustrating a method of user identification according to another exemplary embodiment. The user identification method 50 shown in Fig. 5 is applied to an unmanned supermarket scenario. The user identification method 50 includes steps S501 to S521.
As shown in fig. 5, in S501, the process starts.
In S502, the user registers. The user may generate an identification of the user at registration, which may include its identity information.
In S503, the user requests to upload his photograph as the first photograph after registering.
In S504, the S3FD face detection algorithm is used to detect the registered photo uploaded by the user.
In S505, it is determined whether a face can be detected, and if so, the first photo is uploaded to a background database, and the next step is executed, and if not, the process returns to S503. The background database stores first photos of all registered users.
In S506, when the user enters the unmanned supermarket, the third photo is taken and the user scans a code. The identifier of the current user is obtained by scanning the code; the identifier may also be obtained in other ways, such as via a card with an NFC function, which is not limited here. In one embodiment, the registered photo corresponding to the user in the background database is obtained through the code-scanning information.
In S507, it is determined whether the user passes through the entry gate, if so, the next step is performed, and if not, the process returns to S506.
In S508, the face detected in the first photo is aligned using the MTCNN rectification algorithm. The S3FD face detection algorithm may be used to perform face detection on the first photo. Further, the S3FD face detection algorithm and the MTCNN rectification algorithm are used to perform face detection and rectification on the third photo.
In S509, a first face feature value of the first photo is calculated using the ArcFace face recognition algorithm and is matched against the third photo.
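The gate-entry match in S506 through S510 is a 1:1 comparison: the scanned user identifier selects a single registered feature value, which is compared only with the feature extracted from the third photo. The following is an illustrative, non-limiting sketch of that comparison; the short feature vectors and the threshold value are placeholders standing in for real ArcFace embeddings, and all names are hypothetical:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify_at_gate(user_id, gate_feature, registered_db, threshold=1.0):
    """1:1 match: look up the one registered feature selected by the
    scanned user id and compare only that pair, instead of searching
    the whole background database."""
    registered = registered_db.get(user_id)
    if registered is None:
        return False
    return euclidean(gate_feature, registered) < threshold

# Usage: user "u1" registered with a (toy) feature [1.0, 0.0]
registered_db = {"u1": [1.0, 0.0]}
ok = verify_at_gate("u1", [0.9, 0.1], registered_db)  # distance ~0.141 < 1.0
```

Because the scanned identifier narrows the search to one entry, the cost of this step is independent of the number of registered users, which is consistent with the stated reduction in matching computation.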
In S510, the first face feature value is added to the in-store database after the matching succeeds; otherwise, the process returns to S507.
In S511, the user enters the closed settlement area after shopping and closes the door to take a second picture.
In S512, the S3FD face detection algorithm is used to detect a face in the second photo.
In S513, if no face is detected, the process returns to S512.
In S514, the face detected in the second photo is rectified using the MTCNN face rectification algorithm.
In S515, a second face feature value is extracted from the rectified second photo using the ArcFace face recognition algorithm.
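ArcFace-style feature values are commonly L2-normalised before comparison; on unit vectors, the squared Euclidean distance used in S516 equals 2(1 − cosine similarity), so the two measures rank candidates identically. A minimal sketch of that convention (the normalisation assumption and the short two-dimensional vectors are illustrative stand-ins for real high-dimensional embeddings):

```python
import math

def l2_normalize(vec):
    """Scale a feature vector to unit length."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine(a, b):
    # Dot product equals cosine similarity only for unit vectors.
    return sum(x * y for x, y in zip(a, b))

a = l2_normalize([3.0, 4.0])
b = l2_normalize([4.0, 3.0])
d2 = euclidean(a, b) ** 2        # squared Euclidean distance
identity = 2 * (1 - cosine(a, b))  # should equal d2 for unit vectors
```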
In S516, the Euclidean distances between the second face feature value and all the first face feature values in the in-store database are calculated, and the smallest Euclidean distance is selected by comparison.
In S517, the minimum Euclidean distance is compared with a preset threshold. If it is smaller than the threshold, the corresponding user information is found and the next step is performed; if it is larger than the threshold, the process returns to S512.
In S518, after the user's information is determined by the face recognition algorithm, communication with the merchandise identification and settlement system is performed for the subsequent settlement operation.
In S519, the exit gate opens the door after the settlement communication is completed.
In S520, after the user leaves the store, the first face feature value corresponding to the user is deleted from the in-store database.
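The in-store database grows by one entry per successful gate match (S510) and shrinks when a user leaves (S520), so the search in S516 only ever covers users currently inside. A minimal sketch of that lifecycle (all names are hypothetical):

```python
class InStoreDatabase:
    """Holds first face feature values only for users currently inside."""

    def __init__(self):
        self._features = {}

    def add(self, user_id, feature):
        """S510: store the feature after a successful entry match."""
        self._features[user_id] = feature

    def remove(self, user_id):
        """S520: drop the feature after the user leaves the store."""
        self._features.pop(user_id, None)

    def features(self):
        """Snapshot of the current match set for the S516 search."""
        return dict(self._features)

    def __len__(self):
        return len(self._features)

# Usage: two users enter, one leaves
db = InStoreDatabase()
db.add("u1", [1.0, 0.0])
db.add("u2", [0.0, 1.0])
db.remove("u1")  # u1 has left; only u2 remains searchable
```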
In S521, the process ends.
According to the user identification method of the present disclosure, the S3FD face detection algorithm is used to detect faces, which can improve detection accuracy for small faces.
According to the user identification method of the present disclosure, the MTCNN rectification algorithm is used to rectify detected faces, which can improve recognition of faces at multiple angles; the ArcFace face recognition method is used to recognize faces, which can increase the feature-value distance between similar but different faces and improve recognition accuracy.
According to the user identification method of the present disclosure, faces are compared against the in-store database rather than the background database, which can reduce computational complexity and increase response speed.
According to the user identification method of the present disclosure, the face recognition algorithm model is trained using multi-dimensional face samples, which can improve the accuracy and robustness of user identification.
According to the user identification method of the present disclosure, acquiring the first photo and the second photo in a closed environment can improve the accuracy and robustness of face detection and face recognition.
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments may be implemented as a computer program executed by a CPU. When the computer program is executed by the CPU, it performs the functions defined by the above-described methods provided by the present disclosure. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 6 is a block diagram illustrating a user identification device according to an exemplary embodiment. The user identification device 60 at least includes: a first photographing module 602, a first matching module 604, a second photographing module 606, and a second identification module 608.
The first photographing module 602 is configured to obtain a first photo of the user. When using the user identification device of the present application, the user may first become a registered user and upload the first photo as a registered photo.
In one embodiment, when the user uploads the registered photo, the first photo can be detected by using a face detection algorithm, and the first photo is placed in the registered photo database after the detection is successful.
The first matching module 604 is configured to match the user according to the first photo, and store the first photo in a first database after the matching is successful. The first database is used for storing all the first photos successfully matched.
In one embodiment, the first matching module 604 is configured to obtain identification information of the user and the third photo; obtain the first photo of the user in the registered photo database according to the identification information; and match the first photo and the third photo using a face rectification algorithm and a face recognition algorithm. For example, when the user enters an unmanned mall for shopping, a third photo of the user may be taken by a camera at the supermarket entrance; when the user enters a residential community or an office building, a third photo of the user is taken by a camera at the entrance of the community or the office building; when the user enters a railway station, a third photo of the user is taken by a camera at the entrance. It should be understood that the technical solution of the present disclosure is not particularly limited in this respect.
In one embodiment, the first matching module 604 is configured to match the first photo and the third photo using the MTCNN rectification algorithm and the ArcFace face recognition method.
In one embodiment, the first matching module 604 is configured to train the face recognition algorithm using face samples of multiple dimensions, including race and age.
In one embodiment, the first matching module 604 is configured to obtain a first face feature value of the first photo; and storing the first face characteristic value to a first database.
The second photographing module 606 is configured to obtain a second photo of the user, where the second photo of the user may be taken in a closed area. For example, when the user is ready for settlement after shopping in an unmanned mall, a second photo of the user may be taken by a camera at the supermarket exit (or settlement area); when the user leaves a residential community or an office building, a second photo of the user is taken by a camera at the exit of the community or the office building; when the user leaves a railway station, a second photo of the user is taken by a camera at the exit.
In one embodiment, the second photographing module 606 is configured to determine the parameters of the face detection algorithm according to the location parameters of the user when the first photograph and the second photograph are taken.
The second identification module 608 is configured to identify the user in the first database according to the second photo. The first database stores first face characteristic values of a plurality of users.
In one embodiment, the second recognition module 608 is configured to obtain a second face feature value of the second photo; determining a second distance according to the second face characteristic value and a plurality of first face characteristic values in the first database; and confirming that the user identification is successful when the second distance meets a threshold condition.
In one embodiment, the second identification module 608 is further configured to communicate with an article identification and settlement system to settle the user after successful identification of the user.
In one embodiment, the second identification module 608 is further configured to delete the first photo corresponding to the user from the first database after the user completes the settlement and leaves.
According to the user identification device of the present disclosure, a first photo of the user is obtained, the first photo is stored in a first database, and the user is identified in the first database according to a second photo, which can improve the accuracy, robustness, and response speed of face recognition.
According to the user identification device of the present disclosure, the registered photo of the user is obtained according to the identification information of the user, and the first photo is then matched against the registered photo, which can reduce the amount of matching computation and improve response speed.
According to the user identification device of the present disclosure, the second photo is matched against the plurality of first face feature values in the first database, which can improve recognition speed and accuracy.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
An electronic device 200 according to this embodiment of the present disclosure is described below with reference to fig. 7. The electronic device 200 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 200 is embodied in the form of a general purpose computing device. The components of the electronic device 200 may include, but are not limited to: at least one processing unit 210, at least one memory unit 220, a bus 230 connecting different system components (including the memory unit 220 and the processing unit 210), a display unit 240, and the like.
The storage unit stores program code executable by the processing unit 210 to cause the processing unit 210 to perform the steps according to various exemplary embodiments of the present disclosure described in the above user identification method sections of this specification. For example, the processing unit 210 may perform the steps shown in figs. 2, 3, 4, and 5.
The storage unit 220 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 2201 and/or a cache memory unit 2202, and may further include a read-only memory unit (ROM) 2203.
The storage unit 220 may also include a program/utility 2204 having a set (at least one) of program modules 2205, such program modules 2205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 230 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
Electronic device 200 may also communicate with one or more external devices 300 (e.g., keyboard, pointing device, Bluetooth device, etc.), and may also communicate with one or more devices that enable a user to interact with electronic device 200, and/or with any devices (e.g., router, modem, etc.) that enable electronic device 200 to communicate with one or more other computing devices.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, or a network device, etc.) to execute the above method according to the embodiments of the present disclosure.
Fig. 8 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the disclosure.
Referring to fig. 8, a program product 400 for implementing the above method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
The computer readable medium carries one or more programs which, when executed by a device, cause the computer readable medium to perform the functions of: acquiring a first photo of a user; matching the user according to the first photo, and storing the first photo to a first database after the matching is successful; acquiring a second photo of the user; and identifying the user in the first database according to the second photo.
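The four functions listed above (acquire a first photo, match and store it, acquire a second photo, identify the user) can be sketched end-to-end as follows. This is an illustrative, non-limiting sketch: a trivial feature extractor and short vectors stand in for real images and the detection, rectification, and recognition algorithms, and all names are hypothetical:

```python
import math

def extract_feature(photo):
    """Placeholder for detection + rectification + ArcFace extraction:
    here a 'photo' is just a short list used directly as its feature."""
    return list(photo)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

first_database = {}  # the "first database" of stored feature values

def match_and_store(user_id, first_photo, registered_db, threshold=1.0):
    """Match the user according to the first photo; on success, store
    the first face feature value in the first database."""
    feature = extract_feature(first_photo)
    registered = registered_db.get(user_id)
    if registered is None or euclidean(feature, registered) >= threshold:
        return False
    first_database[user_id] = feature
    return True

def identify_user(second_photo, threshold=1.0):
    """Identify the user in the first database from the second photo."""
    feature = extract_feature(second_photo)
    best = min(first_database,
               key=lambda u: euclidean(feature, first_database[u]),
               default=None)
    if best is None or euclidean(feature, first_database[best]) >= threshold:
        return None
    return best

# Usage: a registered photo database keyed by user id (toy features)
registered_photos = {"alice": [1.0, 0.0]}
stored = match_and_store("alice", [0.9, 0.1], registered_photos)
who = identify_user([0.95, 0.05])  # nearest stored feature is alice's
```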
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus according to the description of the embodiments, or may be arranged, with corresponding changes, in one or more apparatuses different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that the present disclosure is not limited to the precise arrangements, instrumentalities, or instrumentalities described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (14)

1. A method for identifying a user, comprising:
acquiring a first photo of a user;
matching the user according to the first photo, and storing the first photo to a first database after the matching is successful;
acquiring a second photo of the user; and
and identifying the user in the first database according to the second photo.
2. The method of claim 1, further comprising:
and detecting the first photo by using a face detection algorithm, and putting the first photo into a registered photo database after the detection is successful.
3. The method of claim 2, wherein detecting the first photograph using a face detection algorithm comprises:
detecting the first photo using the S3FD face detection algorithm.
4. The method of claim 2, wherein matching the user according to the first photograph comprises:
acquiring identification information and a third photo of the user;
acquiring a first photo corresponding to the user in the registered photo database according to the identification information; and
and matching the first picture and the third picture by using a face detection algorithm, a face correction algorithm and a face recognition algorithm.
5. The method of claim 4, further comprising:
and determining parameters of a face detection algorithm according to the position parameters of the user when the user shoots the second picture and the third picture.
6. The method of claim 4, wherein matching the first photograph and the third photograph using a face detection algorithm, a face rectification algorithm, and a face recognition algorithm comprises:
matching the first photo and the third photo using the S3FD face detection algorithm, the MTCNN rectification algorithm, and the ArcFace face recognition method.
7. The method of claim 4, further comprising:
the face recognition algorithm is trained using face samples of multiple dimensions, including race and age.
8. The method of claim 1, wherein storing the first photograph to a first database comprises:
acquiring a first face characteristic value of the first photo; and
and storing the first face characteristic value to a first database.
9. The method of claim 8, wherein identifying the user from the second photograph comprises:
acquiring a second face characteristic value of the second photo;
determining a second distance according to the second face characteristic value and a plurality of first face characteristic values in the first database; and
and when the second distance meets a threshold condition, confirming that the user identification is successful.
10. The method of claim 1, further comprising:
after the user is successfully identified, communicating with a merchandise identification and settlement system to settle the user.
11. The method of claim 10, further comprising:
and deleting the first photo corresponding to the user from the first database after the user finishes settlement and leaves.
12. A user identification device, comprising:
the first shooting module is used for acquiring a first photo of a user;
the first matching module is used for matching the user according to the first photo, and storing the first photo to a first database after the first photo is successfully matched;
the second shooting module is used for acquiring a second picture of the user; and
and the second identification module is used for identifying the user in the first database according to the second photo.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.
14. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-11.
CN201910053761.6A 2019-01-21 2019-01-21 User identification method and device, electronic equipment and computer readable medium Pending CN111460859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910053761.6A CN111460859A (en) 2019-01-21 2019-01-21 User identification method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910053761.6A CN111460859A (en) 2019-01-21 2019-01-21 User identification method and device, electronic equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN111460859A true CN111460859A (en) 2020-07-28

Family

ID=71684101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910053761.6A Pending CN111460859A (en) 2019-01-21 2019-01-21 User identification method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111460859A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480994A (en) * 2017-06-23 2017-12-15 阿里巴巴集团控股有限公司 A kind of settlement method, access control method and device
CN108198315A (en) * 2018-01-31 2018-06-22 深圳正品创想科技有限公司 A kind of auth method and authentication means
CN108765651A (en) * 2018-05-15 2018-11-06 青岛海信智能商用系统股份有限公司 A kind of unmanned shop purchase method, device and computer equipment
US20180367727A1 (en) * 2016-02-26 2018-12-20 Alibaba Group Holding Limited Photographed Object Recognition Method, Apparatus, Mobile Terminal and Camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180367727A1 (en) * 2016-02-26 2018-12-20 Alibaba Group Holding Limited Photographed Object Recognition Method, Apparatus, Mobile Terminal and Camera
CN107480994A (en) * 2017-06-23 2017-12-15 阿里巴巴集团控股有限公司 A kind of settlement method, access control method and device
CN108198315A (en) * 2018-01-31 2018-06-22 深圳正品创想科技有限公司 A kind of auth method and authentication means
CN108765651A (en) * 2018-05-15 2018-11-06 青岛海信智能商用系统股份有限公司 A kind of unmanned shop purchase method, device and computer equipment

Similar Documents

Publication Publication Date Title
JP7196893B2 (en) Face matching system, face matching method, and program
US11244435B2 (en) Method and apparatus for generating vehicle damage information
CN109325964B (en) Face tracking method and device and terminal
CN109034069B (en) Method and apparatus for generating information
CN108038176B (en) Method and device for establishing passerby library, electronic equipment and medium
CN109858333B (en) Image processing method, image processing device, electronic equipment and computer readable medium
US11126827B2 (en) Method and system for image identification
WO2021017303A1 (en) Person re-identification method and apparatus, computer device and storage medium
US20200387744A1 (en) Method and apparatus for generating information
CN109086834B (en) Character recognition method, character recognition device, electronic equipment and storage medium
CN108460365B (en) Identity authentication method and device
CN110555428B (en) Pedestrian re-identification method, device, server and storage medium
JP6265499B2 (en) Feature amount extraction device and location estimation device
CN110795714A (en) Identity authentication method and device, computer equipment and storage medium
CN108615006B (en) Method and apparatus for outputting information
TW202232367A (en) Face recognition method and apparatus, device, and storage medium
CN105303449A (en) Social network user identification method based on camera fingerprint features and system thereof
CN113420690A (en) Vein identification method, device and equipment based on region of interest and storage medium
CN110717407A (en) Human face recognition method, device and storage medium based on lip language password
KR101743169B1 (en) System and Method for Searching Missing Family Using Facial Information and Storage Medium of Executing The Program
CN114120454A (en) Training method and device of living body detection model, electronic equipment and storage medium
CN112385180A (en) System and method for matching identity and readily available personal identifier information based on transaction time stamp
CN111460859A (en) User identification method and device, electronic equipment and computer readable medium
CN110956098B (en) Image processing method and related equipment
CN112183167B (en) Attendance checking method, authentication method, living body detection method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210305

Address after: Room a1905, 19 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Beijing Jingdong Qianshi Technology Co.,Ltd.

Address before: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant before: Beijing Jingbangda Trading Co.,Ltd.

Effective date of registration: 20210305

Address after: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant after: Beijing Jingbangda Trading Co.,Ltd.

Address before: 100086 8th Floor, 76 Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: BEIJING JINGDONG CENTURY TRADING Co.,Ltd.

TA01 Transfer of patent application right