CN112926491A - User identification method and device, electronic equipment and storage medium


Info

Publication number
CN112926491A
Authority
CN
China
Prior art keywords
gait, face, user, feature, user identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110288293.8A
Other languages
Chinese (zh)
Inventor
邓泳
张锦元
林晓锐
沈超建
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202110288293.8A
Publication of CN112926491A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Abstract

The present disclosure provides a user identification method, relating to the field of artificial intelligence. The user identification method comprises: collecting a first face image and a first gait image of a user to be identified; obtaining a face recognition result based on the first face image, the face recognition result comprising a first user identifier; obtaining a gait recognition result and a first gait feature based on the first gait image, which comprises obtaining the first gait feature from the first gait image and obtaining the gait recognition result from the first gait feature, the gait recognition result comprising a second user identifier; and, when acquisition of the second user identifier fails, binding the first gait feature to the first user identifier. The disclosure also provides a user identification apparatus, an electronic device, and a storage medium.

Description

User identification method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and more particularly, to a user identification method, a user identification apparatus, an electronic device, and a storage medium.
Background
With the development of technology, face recognition has become more and more widely applied. For example, after a user completes face registration, the user's face information is available and face recognition can subsequently be performed on the user. In practice, the recognition result can be unsatisfactory due to objective factors such as the limited working distance of face recognition, the camera failing to capture a frontal view of the face, or the face being occluded by a mask. Gait recognition is not affected by these factors and can therefore be applied to user identification scenarios.
In the course of implementing the disclosed concept, the inventors found at least the following problem in the prior art:
for the many users who have already completed face registration, additionally performing gait registration is costly and degrades the user experience.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a user identification method capable of automatically performing gait registration for a user, together with a user identification apparatus, an electronic device, and a storage medium.
One aspect of the disclosed embodiments provides a user identification method. The user identification method comprises: collecting a first face image and a first gait image of a user to be identified; obtaining a face recognition result based on the first face image, the face recognition result comprising a first user identifier; obtaining a gait recognition result and a first gait feature based on the first gait image, which comprises obtaining the first gait feature from the first gait image and obtaining the gait recognition result from the first gait feature, the gait recognition result comprising a second user identifier; and, when acquisition of the second user identifier fails, binding the first gait feature to the first user identifier.
According to an embodiment of the present disclosure, the face recognition result further includes a first face confidence score corresponding to the first user identifier, and the gait recognition result further includes a first gait confidence score corresponding to the second user identifier. The method further comprises: when the second user identifier is successfully obtained and is the same as the first user identifier, deriving a third decision score from the first face confidence score, the first gait confidence score, and their respective weight parameters, and determining that the user to be identified corresponds to the first user identifier when the third decision score falls within a predetermined range.
According to an embodiment of the present disclosure, deriving the third decision score from the first face confidence score, the first gait confidence score, and their respective weight parameters includes: obtaining a first decision score from a face recognition weight and the first face confidence score, the weight parameter of the first face confidence score comprising the face recognition weight; obtaining a second decision score from a gait recognition weight and the first gait confidence score, the weight parameter of the first gait confidence score comprising the gait recognition weight; and obtaining the third decision score from the first decision score and the second decision score.
According to an embodiment of the present disclosure, before the third decision score is derived from the first face confidence score, the first gait confidence score, and their respective weight parameters, the method further includes: collecting a second face image and a second gait image of each of N known users, each known user having a third user identifier, N being an integer greater than or equal to 1; obtaining, based on the second face image and the second gait image of each known user, a second face confidence score and a second gait confidence score corresponding to each third user identifier; obtaining a fourth decision score corresponding to each third user identifier based on the second face confidence score and the second gait confidence score of each known user; and fitting the N second face confidence scores, the N second gait confidence scores, and the N fourth decision scores with a decision model to obtain the face recognition weight and the gait recognition weight.
According to an embodiment of the present disclosure, the decision model comprises a linear decision function, and the method further comprises: performing a linear fit of the N second face confidence scores, the N second gait confidence scores, and the N fourth decision scores based on the linear decision function.
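As an illustration of such a linear fit, the face recognition weight and the gait recognition weight can be recovered from the N known users' confidence scores by ordinary least squares. This is only a sketch under the assumption that the linear decision function has the form w_face·face + w_gait·gait; the patent does not prescribe a particular solver, and all names and values below are illustrative.

```python
import numpy as np

def fit_recognition_weights(face_scores, gait_scores, decision_scores):
    """Linearly fit the face/gait recognition weights from N known users.

    face_scores, gait_scores: length-N arrays of second face / second gait
    confidence scores; decision_scores: length-N fourth decision scores.
    Returns (face_weight, gait_weight) minimizing the least-squares error
    of w_face * face + w_gait * gait ~= decision.
    """
    X = np.column_stack([face_scores, gait_scores])  # N x 2 design matrix
    weights, *_ = np.linalg.lstsq(X, np.asarray(decision_scores), rcond=None)
    return float(weights[0]), float(weights[1])

# Illustrative data: decision scores generated with weights 0.6 (face), 0.4 (gait)
face = np.array([0.9, 0.8, 0.95, 0.7])
gait = np.array([0.85, 0.9, 0.8, 0.75])
decision = 0.6 * face + 0.4 * gait
w_face, w_gait = fit_recognition_weights(face, gait, decision)
```

Since the synthetic decision scores here are an exact linear combination, the fit recovers the generating weights.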
According to an embodiment of the present disclosure, binding the first gait feature to the first user identifier when acquisition of the second user identifier fails includes: generating, after the first gait feature is successfully acquired, a gait recognition mark indicating that the acquisition succeeded; and binding the first gait feature to the first user identifier when acquisition of the second user identifier fails and the gait recognition mark is present.
According to an embodiment of the present disclosure, obtaining the gait recognition result based on the first gait feature includes: searching a gait database for a target gait feature matching the first gait feature, the gait database comprising a plurality of second gait features, each bound to a corresponding second user identifier, and the target gait feature belonging to the plurality of second gait features; and, when the target gait feature is found, obtaining the second user identifier bound to it.
According to an embodiment of the present disclosure, searching the gait database for the target gait feature matching the first gait feature comprises: comparing a first gait feature vector with a plurality of second gait feature vectors one by one, the first gait feature comprising the first gait feature vector and each second gait feature comprising a second gait feature vector; and, when the similarity between a second gait feature vector and the first gait feature vector satisfies a preset condition, determining that second gait feature vector to be the target gait feature vector, the target gait feature comprising the target gait feature vector.
Another aspect of the disclosed embodiments provides a user identification apparatus comprising an image acquisition module, a face recognition module, a gait recognition module, and a gait registration module. The image acquisition module collects a first face image and a first gait image of a user to be identified. The face recognition module obtains a face recognition result based on the first face image, the face recognition result comprising a first user identifier. The gait recognition module obtains a gait recognition result and a first gait feature based on the first gait image, which comprises obtaining the first gait feature from the first gait image and obtaining the gait recognition result from the first gait feature, the gait recognition result comprising a second user identifier. The gait registration module binds the first gait feature to the first user identifier when acquisition of the second user identifier fails.
Another aspect of the disclosed embodiments provides an electronic device. The electronic device includes one or more memories, and one or more processors. The memory stores executable instructions. The processor executes the executable instructions to implement the method as described above.
Another aspect of the embodiments of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Yet another aspect of an embodiment of the present disclosure provides a computer program product comprising computer programs/instructions which, when executed by a processor, implement the method as described above.
One or more of the above embodiments may provide the following advantages or benefits: the problem of gait registration can be at least partially solved. A face recognition result is obtained from the first face image, and when acquisition of the second user identifier from the first gait image fails, the first gait feature is bound to the first user identifier, thereby completing gait registration for the user. In addition, when the second user identifier is successfully acquired, gait recognition and face recognition can be combined: a third decision score is derived from the first face confidence score, the first gait confidence score, and their respective weight parameters, and the identity of the user to be identified is then determined from the third decision score.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which a user identification method may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flowchart of a user identification method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flowchart of binding a first gait feature to a first user identifier according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flowchart of acquiring a gait recognition result based on a first gait feature according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a flowchart of searching for a target gait feature according to an embodiment of the present disclosure;
FIG. 6 schematically shows a flowchart of obtaining a third decision score according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a flowchart of a user identification method according to yet another embodiment of the present disclosure;
FIG. 8 schematically shows a block diagram of a user identification apparatus according to an embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of a face recognition module according to an embodiment of the present disclosure;
FIG. 10 schematically illustrates a block diagram of a gait recognition module according to an embodiment of the present disclosure;
FIG. 11 schematically illustrates a flowchart of the operation of a user identification apparatus according to an embodiment of the present disclosure;
FIG. 12 schematically illustrates a flowchart of the operation of a user identification apparatus according to another embodiment of the present disclosure; and
FIG. 13 schematically illustrates a block diagram of a computer system suitable for implementing the user identification method and apparatus according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
For a clearer description of the embodiments of the present disclosure, the following terms used in the present disclosure are explained:
The "face database" refers to a database storing the face features of known users who have completed face registration, wherein the face features of a known user are bound to a user identifier.
The "gait database" refers to a database storing the gait features of known users who have completed gait registration, wherein the gait features of a known user are bound to a user identifier.
The "first user identifier" refers to the user identifier obtained from the face database when face recognition is performed on the user to be identified.
The "second user identifier" refers to the user identifier obtained from the gait database when gait recognition is performed on the user to be identified.
The "third user identifier" refers to the user identifiers obtained from the face database when face recognition is performed on the N known users, and from the gait database when gait recognition is performed on them; the data of these N known users support the fitting based on the decision model.
The "first face confidence score" refers to the confidence score returned when face recognition is performed on the user to be identified; it corresponds to the first user identifier.
The "first gait confidence score" refers to the confidence score returned when gait recognition is performed on the user to be identified; it corresponds to the second user identifier.
The "second face confidence score" refers to the confidence score returned when face recognition is performed on a known user; it corresponds to a third user identifier and serves as fitting data when the face recognition weight and the gait recognition weight are solved by fitting the decision model.
The "second gait confidence score" refers to the confidence score returned when gait recognition is performed on a known user; it likewise corresponds to a third user identifier and serves as fitting data for the decision-model fit.
The "first decision score" refers to the decision score derived from the first face confidence score and the face recognition weight in the decision model.
The "second decision score" refers to the decision score derived from the first gait confidence score and the gait recognition weight in the decision model.
The "third decision score" is calculated from the first decision score and the second decision score.
The "fourth decision score" is derived from the second face confidence score and the second gait confidence score and serves as fitting data when the face recognition weight and the gait recognition weight are solved by fitting the decision model.
The embodiments of the present disclosure provide a user identification method and apparatus. The user identification method comprises: collecting a first face image and a first gait image of a user to be identified; obtaining a face recognition result based on the first face image, the face recognition result comprising a first user identifier; obtaining a gait recognition result and a first gait feature based on the first gait image, which comprises obtaining the first gait feature from the first gait image and obtaining the gait recognition result from the first gait feature, the gait recognition result comprising a second user identifier; and, when acquisition of the second user identifier fails, binding the first gait feature to the first user identifier.
With the user identification method of the embodiments of the present disclosure, a face recognition result can be obtained from the first face image, and when acquisition of the second user identifier from the first gait image fails, the first gait feature is bound to the first user identifier, thereby automatically completing gait registration for the user.
It should be noted that the user identification method and apparatus of the present disclosure may be used in the financial field (e.g., customer identification at a bank outlet), and also in any field outside it (e.g., employee check-in, fugitive tracking, or residential access control systems).
Fig. 1 schematically illustrates an exemplary system architecture 100 to which a user identification method may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include a camera 101, a network 102, and a server 103. Network 102 is the medium used to provide a communication link between camera 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
First, the camera 101 may automatically capture real-time video of the user 104 and upload it to the server 103 via the network 102. The server 103 may then add a calibration rectangular box around the user's whole-body image in the video. Next, the server 103 may extract the face image of the user 104 from the calibration rectangle; after preprocessing such as face alignment and image noise reduction, the face image is input to the face recognition system in the server 103. Meanwhile, the whole-body image of the user 104 can be extracted from the calibration rectangle and, after preprocessing such as image binarization, contour segmentation, and image noise reduction, passed to the gait recognition system in the server 103. It should be noted that a plurality of cameras 101 may also be provided, with some used to acquire face images directly and others to acquire gait images directly; the present disclosure does not limit the manner in which the images are acquired. In addition, the face recognition system and the gait recognition system may implement their recognition functions through corresponding neural network models.
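As a rough sketch of the gait-branch preprocessing mentioned above (image binarization and locating the calibration rectangle around the subject), the NumPy-only snippet below illustrates the idea; a production system would more likely use OpenCV and a proper person detector, and the threshold value is an illustrative assumption.

```python
import numpy as np

def binarize_frame(gray_frame, threshold=127):
    """Binarize a grayscale frame: foreground (subject) -> 1, background -> 0."""
    return (gray_frame > threshold).astype(np.uint8)

def bounding_box(binary_frame):
    """Calibration rectangle around the subject: (top, bottom, left, right)
    indices of the nonzero region, a crude stand-in for person detection."""
    rows = np.any(binary_frame, axis=1)
    cols = np.any(binary_frame, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return top, bottom, left, right

# Toy 6x6 frame with a bright "subject" occupying rows 2-4, columns 1-3
frame = np.zeros((6, 6), dtype=np.uint8)
frame[2:5, 1:4] = 200
mask = binarize_frame(frame)
box = bounding_box(mask)
```

The silhouette mask cropped to this box would then feed contour segmentation and the gait recognition model.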
The camera 101 may be a monitoring camera arranged independently, or may be a camera unit on an electronic device such as a smart phone, a camera device, a tablet computer, a laptop computer, or a desktop computer.
The server 103 may be a server providing various services, for example (only) a background management server that supports adding a calibration rectangle around the user 104 in the video, or one that supports user identification by the face recognition system and the gait recognition system. The background management server can analyze the received video data, extract a face image or a gait image, and process it further.
It should be noted that the user identification method provided by the embodiment of the present disclosure may be generally executed by the server 103. Accordingly, the user identification device provided by the embodiment of the present disclosure may be generally disposed in the server 103. The user identification method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 103 and is capable of communicating with the camera 101 and/or the server 103. Accordingly, the user identification device provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 103 and capable of communicating with the camera 101 and/or the server 103.
It should be understood that the number of cameras, networks, and servers in fig. 1 is merely illustrative. There may be any number of cameras, networks, and servers, as desired for implementation.
The following describes in detail the working flow of the user identification method of the present disclosure, taking a scenario in which a bank outlet performs user identification as an example.
Referring to FIG. 1, for example, user 104 is going to a banking outlet to transact business. The camera 101 may be disposed outside the bank outlet, and may ensure that the image of the user 104 is obtained in time. For example, after the camera 101 acquires a real-time video of the user 104, the real-time video is transmitted to the server 103 of the bank outlet, and the video data is processed. And can identify the user using a face recognition system and a gait recognition system in the server 103.
Fig. 2 schematically shows a flow chart of a user identification method according to an embodiment of the present disclosure.
As shown in fig. 2, the method may include operations S210 to S270.
In operation S210, a first face image and a first gait image of a user to be identified are collected.
In operation S220, a face recognition result is obtained based on the first face image, the face recognition result including a first user identifier.
According to an embodiment of the present disclosure, a face feature vector may first be obtained from the first face image (by way of example only). A target feature vector matching the face feature vector is then searched for in a face database, which stores a plurality of face feature vectors, each bound to a corresponding first user identifier.
For example, the user 104 has registered a face at a bank outlet: the face image and its face feature vector are stored in the face database, and the user 104 was assigned a number representing his or her identity (i.e., the first user identifier) at storage time. When the user 104 approaches the bank outlet again, the camera 101 can capture the first face image of the user 104, and the face database is searched using the face feature vector extracted from it.
Specifically, the cosine similarity between the face feature vector of the user 104 and each face feature vector in the face database may be computed; for example, when the cosine similarity exceeds a preset value, the target feature vector is determined and the user identifier bound to it is obtained.
In other embodiments of the present disclosure, the Euclidean distance between the face feature vector of the user 104 and the face feature vectors in the face database may be calculated instead. When the Euclidean distance is smaller than a preset value, the vector similarity satisfies the preset condition and the target feature vector can be determined.
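The cosine-similarity variant of this matching step can be sketched as follows. The database layout, the function name, and the similarity threshold of 0.8 are illustrative assumptions, not values from the patent.

```python
import numpy as np

def find_target_identifier(query, database, threshold=0.8):
    """Search a face database for the target vector matching the query.

    database: dict mapping user identifier -> stored face feature vector.
    Returns the identifier whose vector has the highest cosine similarity
    with the query, provided it exceeds the preset threshold; else None.
    """
    best_id, best_sim = None, threshold
    q = query / np.linalg.norm(query)
    for user_id, vec in database.items():
        sim = float(np.dot(q, vec / np.linalg.norm(vec)))  # cosine similarity
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id

db = {
    "user_001": np.array([1.0, 0.0, 0.2]),
    "user_002": np.array([0.1, 1.0, 0.9]),
}
match = find_target_identifier(np.array([0.9, 0.1, 0.2]), db)
```

The Euclidean-distance variant differs only in computing `np.linalg.norm(query - vec)` and accepting the smallest distance below a preset value.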
In operation S230, a gait recognition result and a first gait feature are acquired based on the first gait image. This includes: obtaining the first gait feature from the first gait image, and obtaining the gait recognition result from the first gait feature, the gait recognition result comprising a second user identifier.
In operation S240, it is determined whether the second user identifier is successfully obtained. If acquisition fails, operation S250 is performed; if the second user identifier is successfully obtained and is the same as the first user identifier, operation S260 is performed.
In operation S250, when acquisition of the second user identifier fails, the first gait feature is bound to the first user identifier.
According to embodiments of the present disclosure, the user 104 may not have completed gait registration at the bank outlet, so the gait recognition system cannot obtain a second user identifier from the first gait feature of the user 104. In this case, the first gait feature can be bound to the first user identifier obtained by face recognition and stored in the gait database.
For example, a bank outlet has a huge user base in which every user has completed face registration, while applying gait recognition requires each user to register gait information. Having staff notify every user to return to the bank outlet for gait registration would incur great labor cost, waste users' time, and might even cause dissatisfaction. With the user identification method of the embodiments of the present disclosure, when a user visits a bank outlet to transact business, the face recognition result, the gait recognition result, and the first gait feature of the user are obtained automatically; when acquisition of the second user identifier fails, gait registration is completed automatically using the first user identifier obtained from face recognition, without taking up the user's time.
Although the operations of the method are described above in a particular order, embodiments of the present disclosure are not limited thereto, and the operations may be performed in other orders as needed. For example, operation S220 may be performed after operation S230, or the two may be performed simultaneously.
In operation S260, when the second user identifier is successfully obtained and is the same as the first user identifier, a third decision score is derived from the first face confidence score, the first gait confidence score, and their respective weight parameters. Here the face recognition result further includes a first face confidence score corresponding to the first user identifier, and the gait recognition result further includes a first gait confidence score corresponding to the second user identifier.
According to an embodiment of the present disclosure, referring to fig. 1, performing face recognition on the user 104 yields, for example, a first user identifier and a first face confidence score, while performing gait recognition on the user 104 yields a second user identifier and a first gait confidence score. In some embodiments of the present disclosure, the face recognition result may include a plurality of first user identifiers with corresponding first face confidence scores, and the gait recognition result may include a plurality of second user identifiers with corresponding first gait confidence scores. Identical first and second user identifiers (i.e., those representing the same user) are matched first, and a third decision score is then derived from the corresponding first face confidence score, first gait confidence score, and their respective weight parameters.
In operation S270, when the third decision score falls within a predetermined range, it is determined that the user to be identified corresponds to the first user identifier.
According to an embodiment of the present disclosure, for example, the Euclidean distance between the face feature vector of the user 104 and the target face feature vector in the face database is computed and normalized to the range [0, 1], yielding the first face confidence score. Likewise, the Euclidean distance between the gait feature vector of the user 104 and the target gait feature vector in the gait database is computed and normalized to [0, 1], yielding the first gait confidence score. The predetermined range of the third decision score can then be set according to the weight parameters, the first face confidence score, and the first gait confidence score; when the third decision score falls within this range, the identity of the user 104 is considered identified and the relevant data can be retrieved by that identity. Face recognition and gait recognition are thus combined reasonably, improving the accuracy of user identification.
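The fusion described above can be sketched as follows: each Euclidean distance is normalized to a confidence score in [0, 1], and the third decision score is the weighted sum of the two confidences. The normalization form 1/(1 + d), the example weights, and the acceptance threshold are all illustrative assumptions.

```python
def distance_to_confidence(distance):
    """Map a Euclidean feature distance to a confidence score in [0, 1];
    one common choice (assumed here) is 1 / (1 + d)."""
    return 1.0 / (1.0 + distance)

def third_decision_score(face_conf, gait_conf, w_face=0.6, w_gait=0.4):
    """Weighted fusion of the face and gait confidence scores."""
    return w_face * face_conf + w_gait * gait_conf

face_conf = distance_to_confidence(0.1)    # small distance -> high confidence
gait_conf = distance_to_confidence(0.25)
score = third_decision_score(face_conf, gait_conf)
accepted = score >= 0.7  # illustrative predetermined range [0.7, 1.0]
```

With these values the fused score exceeds the threshold, so the user would be identified as the holder of the first user identifier.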
FIG. 3 schematically illustrates a flow diagram for binding a first gait feature with a first user identifier according to an embodiment of the present disclosure.
As shown in fig. 3, binding the first gait feature with the first user identifier in operation S250 may further include operations S310 to S320.
In operation S310, after the first gait feature is successfully acquired, a gait recognition flag indicating that acquisition of the first gait feature succeeded may be generated.
In operation S320, when acquisition of the second user identifier fails and the gait recognition flag is present, the first gait feature is bound to the first user identifier.
According to the embodiment of the disclosure, for example, when face recognition and gait recognition are performed, if acquisition of the second user identifier fails, the binding operation can be triggered simply by checking whether the gait recognition flag exists, which improves execution efficiency. Specifically, the gait recognition model may extract the first gait feature from the first gait image; the first gait feature may take the form of a feature value, a feature vector, or a matrix, for example. After the first gait feature is obtained, the gait recognition model may assign a value such as "1" or "TRUE" to a variable, and the binding operation is performed upon reading that the variable equals "1" or "TRUE". It should be noted that "generating the gait recognition flag" may be the operation of assigning the variable, or of both creating and assigning it, which is not limited in the present disclosure.
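As a sketch of the flag mechanism just described: the extractor sets a Boolean flag on success, and the registration step checks the flag instead of re-validating the feature. All names and the dict-backed gait database here are illustrative assumptions, not the disclosure's implementation.

```python
class GaitFeatureExtractor:
    """Wraps a gait recognition model and records whether the most
    recent extraction succeeded (the "gait recognition flag")."""

    def __init__(self, model):
        self.model = model
        self.gait_flag = False

    def extract(self, gait_image):
        feature = self.model(gait_image)
        if feature is not None:
            self.gait_flag = True   # assign TRUE on successful extraction
        return feature

def register_if_unrecognized(extractor, first_user_id, second_user_id,
                             feature, gait_db):
    """Bind the first gait feature to the face-derived user identifier
    only when the gait lookup failed but the flag confirms extraction."""
    if second_user_id is None and extractor.gait_flag:
        gait_db[first_user_id] = feature
```

Checking a single Boolean avoids re-inspecting the feature itself before every binding attempt, which is the efficiency gain the text refers to.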
Fig. 4 schematically illustrates a flow chart for acquiring a gait recognition result based on the first gait feature according to an embodiment of the disclosure.
As shown in fig. 4, acquiring the gait recognition result based on the first gait feature in operation S230 may include operations S410 to S420.
In operation S410, a gait database is searched for a target gait feature matching the first gait feature, where the gait database includes a plurality of second gait features, each second gait feature is bound to a corresponding second user identifier, and the target gait feature belongs to the plurality of second gait features.
In operation S420, when the target gait feature is searched, a second user identifier bound to the target gait feature is acquired.
According to an embodiment of the present disclosure, referring to fig. 1, for example, the user 104 has already performed gait registration at a bank outlet, so the user's gait feature is stored in the gait database. When the gait feature was stored, the user 104 was assigned a user number (i.e., the second user identifier) representing the user's identity, and this user number is the same as the identity number of the user 104 in the face database.
First, the gait image of the user 104 may be input to a gait recognition neural network for processing to obtain the first gait feature. The gait feature stored when the user 104 registered (i.e., the target gait feature) may then be searched for in the gait database based on the first gait feature, and its user identifier obtained.
Figure 5 schematically illustrates a flow chart for searching for a target gait feature in an embodiment according to the present disclosure.
As shown in fig. 5, searching the gait database for the target gait feature matching the first gait feature in operation S410 may include operations S510 to S520.
In operation S510, a first gait feature vector is compared with a plurality of second gait feature vectors, wherein the first gait feature includes the first gait feature vector, and the second gait features include the second gait feature vectors.
In operation S520, when the similarity between a second gait feature vector and the first gait feature vector meets a predetermined condition, that second gait feature vector is determined to be the target gait feature vector, wherein the target gait feature includes the target gait feature vector.
According to the embodiment of the disclosure, before the user 104 enters a bank outlet, the first gait image of the user 104 can be acquired through the camera 101, and the gait database is searched based on the first gait feature vector extracted from it. Specifically, the cosine similarity between the first gait feature vector and each of the second gait feature vectors stored in the gait database can be calculated one by one. When the cosine similarity between a second gait feature vector and the first gait feature vector is greater than a preset value, that second gait feature vector is determined to be the target gait feature vector, and the identity number bound to it is acquired.
In other embodiments of the present disclosure, the Euclidean distance between the first gait feature vector and a second gait feature vector may instead be calculated; when the Euclidean distance is smaller than a preset value, the vector similarity satisfies the predetermined condition, and the target gait feature vector can thus be determined.
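The cosine-similarity search over the gait database can be sketched as below. The dict layout (user identifier → feature vector) and the threshold value are assumptions for illustration; the disclosure leaves the preset value open.

```python
import numpy as np

def search_target_gait(query, gait_db, sim_threshold=0.9):
    """Return the user identifier whose stored gait feature vector is most
    similar to ``query``, or None if no similarity exceeds the threshold."""
    q = np.asarray(query, float)
    best_id, best_sim = None, sim_threshold
    for user_id, feat in gait_db.items():
        f = np.asarray(feat, float)
        sim = q @ f / (np.linalg.norm(q) * np.linalg.norm(f))  # cosine similarity
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```

Swapping the comparison for a Euclidean-distance test with a "smaller than" condition gives the alternative embodiment described above.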
Fig. 6 schematically shows a flow chart for obtaining a third decision score according to an embodiment of the present disclosure.
As shown in fig. 6, obtaining the third decision score based on the first face confidence score, the first gait confidence score and their respective weight parameters in operation S260 may include operations S610 to S630.
In operation S610, a first decision score is obtained based on the face recognition weight and the first face confidence score, wherein the weight parameter of the first face confidence score includes the face recognition weight.
In operation S620, a second decision score is obtained based on the gait recognition weight and the first gait confidence score, wherein the weight parameter of the first gait confidence score includes the gait recognition weight.
In operation S630, a third decision score is obtained based on the first decision score and the second decision score.
In some embodiments of the present disclosure, the decision model may be as follows:
S = λf × Sf + λg × Sg (Equation 1)
where S represents the third decision score, λf represents the face recognition weight, λg represents the gait recognition weight, Sf represents the face confidence score, and Sg represents the gait confidence score.
For example, first, the face confidence score Sf of the user 104 is acquired; the face recognition weight λf is then multiplied by Sf to obtain the first decision score. Meanwhile, the gait confidence score Sg of the user 104 is acquired; the gait recognition weight λg is then multiplied by Sg to obtain the second decision score. Finally, the first decision score and the second decision score are added to obtain the third decision score.
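The weighted fusion of Equation 1 is a one-line computation; the default weight values below are placeholders only (the disclosure obtains the actual weights by fitting, as described later).

```python
def decision_score(face_conf, gait_conf, w_face=0.6, w_gait=0.4):
    """Equation 1: S = λf·Sf + λg·Sg, the third decision score."""
    return w_face * face_conf + w_gait * gait_conf
```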
Fig. 7 schematically shows a flow chart of a user identification method according to a further embodiment of the present disclosure.
As shown in fig. 7, the user identification method can obtain the face recognition weight and the gait recognition weight through operations S710 to S740.
In operation S710, a second face image and a second gait image of each of N known users are acquired, where each known user has a third user identifier, and N is an integer greater than or equal to 1.
In operation S720, a second face confidence score and a second gait confidence score corresponding to each third user identifier are respectively obtained based on the second face image and the second gait image of each known user.
According to an embodiment of the present disclosure, the N known users have performed face registration and gait registration and have been assigned corresponding third user identifiers. Therefore, after the second face image and the second gait image of each known user are acquired, face information and gait information can be extracted, and the second face confidence score and the second gait confidence score are obtained based on the data in the face database and the gait database, respectively.
In operation S730, a fourth decision score corresponding to each third user identifier is obtained based on the second face confidence score and the second gait confidence score of each known user.
According to an embodiment of the present disclosure, Equation 1 may be referenced, where the second face confidence score Sf and the second gait confidence score Sg are known. Based on Equation 1, the fourth decision score S may be determined according to relevant experience or obtained through artificial intelligence or similar means, and may be associated with the third user identifier.
In operation S740, the N second face confidence scores, the N second gait confidence scores, and the N fourth decision scores are fitted based on the decision model to obtain the face recognition weight and the gait recognition weight.
According to an embodiment of the present disclosure, referring to Equation 1 again, the decision model includes a linear decision function, and linear fitting may be performed on the N second face confidence scores, the N second gait confidence scores, and the N fourth decision scores based on this linear decision function.
Specifically, for example, each known user has a corresponding group of data comprising a second face confidence score, a second gait confidence score, and a fourth decision score. Substituting the N groups of data into Equation 1 yields a system of linear equations, which can be solved on the least-squares principle to obtain the face recognition weight λf and the gait recognition weight λg. In some embodiments of the present disclosure, a fitting function in a programming language such as Matlab or Python may be called to obtain λf and λg by fitting.
According to the embodiment of the disclosure, the decision model may also be a nonlinear decision function; for example, the N groups of data are substituted into the nonlinear decision function, and the face recognition weight λf and the gait recognition weight λg are obtained by exponential fitting, Gaussian fitting, piecewise-function fitting, or the like.
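For the linear case, the least-squares solution can be sketched with NumPy; the function name and the synthetic scores in the test are illustrative.

```python
import numpy as np

def fit_recognition_weights(face_scores, gait_scores, decision_scores):
    """Least-squares fit of Equation 1 over N known users:
    solve [Sf | Sg] . (lf, lg)^T ~ S for the two weights."""
    A = np.column_stack([face_scores, gait_scores])   # N x 2 design matrix
    weights, *_ = np.linalg.lstsq(A, np.asarray(decision_scores, float),
                                  rcond=None)
    return float(weights[0]), float(weights[1])       # (lambda_f, lambda_g)
```

With N ≥ 2 independent data groups the system is determined; additional groups are reconciled in the least-squares sense, matching the fitting described above.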
Fig. 8 schematically shows a block diagram of a user identification device according to an embodiment of the present disclosure.
As shown in fig. 8, the user recognition device 800 may include an image capture module 810, a face recognition module 820, a gait recognition module 830 and a gait registration module 840. The user identification apparatus 800 may be used to implement the user identification method described with reference to fig. 2 to 7.
The image capture module 810 may perform, for example, operation S210 for capturing a first face image and a first gait image of the user to be identified.
The face recognition module 820 may perform, for example, operation S220 to obtain a face recognition result based on the first face image, the face recognition result including a first user identifier.
Fig. 9 schematically illustrates a block diagram of a face recognition module 820 according to an embodiment of the present disclosure.
As shown in fig. 9 and referring to fig. 1, the face recognition module 820 may include a face image extraction unit 921, a face image preprocessing unit 922, a face recognition unit 923, and a face recognition result returning unit 924.
The face image extraction unit 921 may be configured to extract a face image of the user 104 from a calibration rectangular box in a video (captured by the camera 101).
The face image preprocessing unit 922 can be used to preprocess the face image of the user 104, for example, including face alignment, image denoising, and the like.
The face recognition unit 923 may be configured to transmit the preprocessed image to a face neural network for recognition and complete a 1:N search (e.g., compare the face feature vector of the user 104 one by one with the N face feature vectors in the face database).
The face recognition result returning unit 924 may be configured to return the face confidence score and the customer number (i.e., the first user identification).
The gait recognition module 830 may perform, for example, operation S230 to obtain a gait recognition result and the first gait feature based on the first gait image, including: acquiring the first gait feature based on the first gait image, and acquiring the gait recognition result based on the first gait feature, wherein the gait recognition result includes a second user identifier.
Fig. 10 schematically illustrates a block diagram of a gait recognition module 830 according to an embodiment of the disclosure.
As shown in fig. 10 and referring to fig. 1, the gait recognition module 830 may include a torso image extraction unit 1031, a torso image preprocessing unit 1032, a gait recognition unit 1033, and a gait recognition result returning unit 1034.
The torso image extraction unit 1031 may be used to extract a whole body torso image of the user 104 from a calibration rectangular box in the video (captured by the camera 101).
The torso image preprocessing unit 1032 may be used to preprocess the whole body torso image of the user 104, including image binarization, contour segmentation, image noise reduction processing, and the like.
The gait recognition unit 1033 may be configured to transmit the preprocessed image to a gait neural network for recognition and complete a 1:N search; for details, reference may be made to operations S310 to S320 and operations S410 to S420, which are not repeated here.
The gait recognition result returning unit 1034 may be configured to return a gait confidence score, a customer number (i.e., the second user identifier), a gait recognition flag, and the first gait feature.
The gait registration module 840 may, for example, perform operation S250 to bind the first gait feature with the first user identifier when acquisition of the second user identifier fails.
According to an embodiment of the present disclosure, after the gait recognition unit 1033 successfully obtains the first gait feature, a gait recognition flag representing that the first gait feature was successfully obtained may be generated and returned to the gait registration module 840. The gait registration module 840 then binds the first gait feature with the first user identifier when acquisition of the second user identifier fails and the gait recognition flag is determined to be present. This spares the gait registration module 840 the step of judging whether the first gait feature is usable; executing the binding operation merely by checking for the gait recognition flag improves execution efficiency.
According to an embodiment of the present disclosure, the user identification apparatus 800 may further include an identity matching module. The identity matching module may be configured to, for example, when the second user identifier is successfully obtained and the first user identifier is the same as the second user identifier: obtain a third decision score based on the first face confidence score, the first gait confidence score and their respective weight parameters, and determine that the user to be recognized corresponds to the first user identifier when the third decision score is within a predetermined range.
According to an embodiment of the present disclosure, the identity matching module may further perform operations S610 to S630 with reference to Equation 1: obtain a first decision score based on the face recognition weight and the first face confidence score, obtain a second decision score based on the gait recognition weight and the first gait confidence score, and obtain a third decision score based on the first decision score and the second decision score.
According to an embodiment of the present disclosure, the user identification apparatus 800 may further include a fitting module configured to: acquire a second face image and a second gait image of each of N known users, where each known user has a third user identifier, and N is an integer greater than or equal to 1; respectively acquire a second face confidence score and a second gait confidence score corresponding to each third user identifier based on the second face image and the second gait image of each known user; obtain a fourth decision score corresponding to each third user identifier based on the second face confidence score and the second gait confidence score of each known user; and fit the N second face confidence scores, the N second gait confidence scores and the N fourth decision scores based on the decision model to obtain the face recognition weight and the gait recognition weight.
Taking a scenario of user identification performed by a bank outlet as an example, referring to fig. 1, an operation flow of the user identification device according to the embodiment of the present disclosure is described in detail with reference to fig. 11 to 12.
Fig. 11 schematically shows a flow chart of the operation of a user identification device according to an embodiment of the present disclosure.
As shown in fig. 11, in operation S1110, it may be detected that the user 104 enters a banking site.
In operation S1120, when the user 104 enters the shooting range of the camera 101, the camera 101 may be used to automatically capture a real-time video of the user 104, and transmit the video to the face recognition system and the gait recognition system in real time.
In operation S1130, the face recognition system (which may include the face recognition module 820, for example) in the server 103 performs face information extraction through the face image in the video, and returns a face recognition result and a first face confidence score.
In some embodiments of the present disclosure, a plurality of users have performed face registration at the bank outlet. In practice, when face recognition technology is used at a bank outlet, the face feature vector can only be successfully acquired for recognition when the effective distance between the camera 101 and the user 104 is about 1 to 5 meters. By the time the staff of the bank outlet acquire the identity number of the user 104, the user 104 has already entered the outlet, leaving insufficient preparation time for the staff. Therefore, a gait recognition system can be adopted to compensate for this shortcoming of the face recognition system.
In operation S1140, a gait recognition system (which may include the gait recognition module 830, for example) in the server 103 performs gait information extraction from the gait image in the video and returns a gait recognition result, a first gait confidence score, a gait recognition flag and the first gait feature.
According to the embodiment of the disclosure, the effective identification distance of the gait identification system can reach 50 meters, and after the identity of a user is identified, longer preparation time can be provided for workers. The gait recognition system can also solve the problem that the face is shielded by the mask, and the requirement on the camera is reduced.
According to the embodiment of the disclosure, a gait recognition result, a gait recognition flag and a first gait feature are returned each time gait recognition is performed on a user. This allows the identity matching system to conveniently determine in real time, according to the gait recognition result, whether to carry out an identity matching operation or a gait registration operation.
In operation S1150, the identity matching system (which may include, for example, the gait registration module 840 and the identity matching module) receives the face recognition result (including, for example, the first user identifier and the first face confidence score), the gait recognition result (including, for example, the second user identifier and the first gait confidence score), the gait recognition flag and the first gait feature. When the identity matching system reads that the gait recognition result is null (i.e., does not contain the second user identifier), the first gait confidence score may correspondingly be null as well. The gait recognition flag may then be read; for example, if the flag is TRUE, indicating that the gait recognition system successfully acquired the first gait feature, the first user identifier in the face recognition result may be bound to the first gait feature and stored in the gait recognition database. When the identity matching system does obtain the second user identifier, the first face confidence score and the first gait confidence score may be input into the decision model and the user identifier of the user 104 output.
In operation S1160, the staff member at the bank outlet learns the user identifier of the user 104 through a display system (e.g., a smart watch, a tablet, or a mobile phone) and can thereby obtain information such as the name and assets of the user 104 and whether the user has reserved a service. In some embodiments of the present disclosure, a data pushing system may further be provided, configured to push the processing result of the identity matching system to the display system for staff to view. Staff checking the information of the user 104 in the display system makes it convenient for the outlet to perform customer identification, precise marketing and intelligent operation.
Fig. 12 schematically shows a flowchart of the operation of a user identification device according to another embodiment of the present disclosure.
As shown in fig. 12, the operation flow of the user identification device may include operations S1110 to S1150, which are not repeated here. The workflow of the identity matching system in operation S1150 may further include operations S1251 to S1254.
In operation S1251, the identity matching system receives the face recognition result, the gait recognition result, the gait recognition flag, and the first gait feature, and then evaluates the recognition results. The specific cases are as follows:
1) When the face recognition result and the gait recognition result are both null, the collected first face image and first gait image of the user to be recognized may not meet the standard, and no effective features were extracted. Accordingly, operation S1120 is re-executed;
2) When the face recognition result includes the first user identifier and the first face confidence score but the gait recognition result is null, whether the gait recognition flag is present is determined; for example, reading that the gait recognition flag is TRUE indicates that gait feature extraction has been completed for the same user, and operation S1252 is executed;
3) When the face recognition result and the gait recognition result both meet the requirements, operation S1253 is performed. In some embodiments of the present disclosure, if the face recognition result is null and the gait recognition result includes the second user identifier and the first gait confidence score, operation S1252 may also be performed. In this case, the user has already performed gait registration; face recognition is easily affected by distance, angle or lighting, and the user may even enter the bank outlet wearing a mask, so the stability of the face recognition result is weak and the user can instead be recognized based on the gait recognition result.
In operation S1252, the first user identifier in the face recognition result is bound to the first gait feature and stored in the gait recognition database to complete gait registration.
In operation S1253, referring to Equation 1, the first face confidence score and the first gait confidence score are input into the decision model, and whether the user to be recognized corresponds to the first user identifier (the first user identifier being the same as the second user identifier) is determined according to the decision score. Reference may be made to operations S610 to S630, which are not repeated here.
In operation S1254, when the decision score satisfies the predetermined condition, the first user identifier is output.
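The case analysis of operations S1251 to S1254 can be sketched as follows. The result layouts (None or a `(user_id, confidence)` tuple), the return tokens, and the 0.8 acceptance threshold are illustrative assumptions, not values from the disclosure.

```python
def match_identity(face_result, gait_result, gait_flag, decide,
                   accept_threshold=0.8):
    """Branching of S1251: route to recapture, gait registration,
    or decision-model matching; ``decide`` is the Equation-1 model."""
    if face_result is None and gait_result is None:
        return ("RECAPTURE", None)              # case 1: re-run S1120
    if gait_result is None and face_result is not None and gait_flag:
        return ("REGISTER", face_result[0])     # case 2: gait registration, S1252
    if face_result is not None and gait_result is not None:
        (uid_f, s_face), (uid_g, s_gait) = face_result, gait_result
        if uid_f == uid_g and decide(s_face, s_gait) >= accept_threshold:
            return ("MATCHED", uid_f)           # case 3: output identifier, S1254
    return ("UNDECIDED", None)
```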
The user identification device of the embodiment of the disclosure adds gait recognition on top of user identification by face recognition. When a user enters an outlet, the camera automatically captures the user's face information and gait information and feeds them to the face recognition system and the gait recognition system for identity recognition; the results of face recognition and gait recognition are then sent to the identity matching system to obtain the user's identity information. Finally, the identity information is transmitted through the data push system to the display system for staff to check, which improves the accuracy of the user identification result and reserves sufficient time for the staff.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the image capturing module 810, the face recognition module 820, the gait recognition module 830 and the gait registration module 840 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the capture image module 810, the face recognition module 820, the gait recognition module 830 and the gait registration module 840 can be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or by any other reasonable manner of integrating or packaging a circuit, such as hardware or firmware, or by any one of three implementations of software, hardware and firmware, or by any suitable combination of any of them. Alternatively, at least one of the capture image module 810, the face recognition module 820, the gait recognition module 830 and the gait registration module 840 can be at least partially implemented as a computer program module, which when executed can perform corresponding functions.
FIG. 13 schematically illustrates a block diagram of a computer system suitable for implementing the user identification method and apparatus according to an embodiment of the present disclosure. The computer system illustrated in FIG. 13 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 13, a computer system 1300 according to an embodiment of the present disclosure includes a processor 1301 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)1302 or a program loaded from a storage section 1308 into a Random Access Memory (RAM) 1303. The processor 1301 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 1301 may also include onboard memory for caching purposes. Processor 1301 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 1303, various programs and data necessary for the operation of the system 1300 are stored. The processor 1301, the ROM 1302, and the RAM 1303 are connected to each other via a bus 1304. The processor 1301 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 1302 and/or the RAM 1303. Note that the programs may also be stored in one or more memories other than the ROM 1302 and RAM 1303. The processor 1301 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
In accordance with an embodiment of the present disclosure, system 1300 may also include an input/output (I/O) interface 1305, which is also connected to bus 1304. The system 1300 may also include one or more of the following components connected to the I/O interface 1305: an input portion 1306 including a keyboard, a mouse, and the like; an output section 1307 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 1308 including a hard disk and the like; and a communication section 1309 including a network interface card such as a LAN card, a modem, or the like. The communication section 1309 performs communication processing via a network such as the internet. A drive 1310 is also connected to the I/O interface 1305 as needed. A removable medium 1311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1310 as necessary, so that a computer program read out therefrom is mounted into the storage portion 1308 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications component 1309 and/or installed from removable media 1311. The computer program, when executed by the processor 1301, performs the functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, but is not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 602 and/or RAM 603 described above and/or one or more memories other than the ROM 602 and RAM 603.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method provided by the embodiments of the present disclosure. When the computer program product is run on an electronic device, the program code causes the electronic device to carry out the user identification method provided by the embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal on a network medium, downloaded and installed via the communication section 1309, and/or installed from the removable medium 1311. The program code contained in the computer program may be transmitted using any suitable network medium, including but not limited to: wireless, wired, and the like, or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, the program code for the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. These languages include, but are not limited to, Java, C++, Python, and the C language. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to fall within the scope of the present disclosure.

Claims (11)

1. A user identification method, comprising:
acquiring a first face image and a first gait image of a user to be identified;
acquiring a face recognition result based on the first face image, wherein the face recognition result comprises a first user identifier; and
acquiring a gait recognition result and a first gait feature based on the first gait image, comprising:
acquiring the first gait feature based on the first gait image; and
acquiring the gait recognition result based on the first gait feature, wherein the gait recognition result comprises a second user identifier;
wherein, when acquisition of the second user identifier fails, the first gait feature is bound with the first user identifier.
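The fallback in claim 1 — when gait recognition fails to return an identifier but face recognition succeeds, the extracted gait feature is bound (auto-enrolled) under the face-derived identifier — can be sketched as follows. All function, class, and variable names here are illustrative assumptions, not part of the claim:

```python
def extract_gait_feature(gait_image):
    # Stand-in for a real gait feature extractor (e.g. a silhouette or
    # pose-sequence model); here it simply freezes the input into a key.
    return tuple(gait_image)

class FeatureDB:
    """Minimal feature-to-identifier store standing in for a biometric DB."""
    def __init__(self):
        self.entries = {}                      # feature -> user identifier

    def recognize(self, feature):
        return self.entries.get(feature)       # None when recognition fails

    def bind(self, feature, user_id):
        self.entries[feature] = user_id

def identify_user(face_image, gait_image, face_db, gait_db):
    # Face branch: acquire the first user identifier.
    first_user_id = face_db.recognize(tuple(face_image))
    # Gait branch: acquire the first gait feature, then the second identifier.
    first_gait_feature = extract_gait_feature(gait_image)
    second_user_id = gait_db.recognize(first_gait_feature)
    if second_user_id is None and first_user_id is not None:
        # Claimed fallback: bind the gait feature to the face-derived ID.
        gait_db.bind(first_gait_feature, first_user_id)
        return first_user_id
    return second_user_id
```

With this sketch, a user whose gait is not yet enrolled is still identified by face, and the next gait-only encounter can succeed against the newly bound feature.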
2. The user identification method of claim 1, wherein the face recognition result further comprises a first face confidence score corresponding to the first user identifier, and the gait recognition result further comprises a first gait confidence score corresponding to the second user identifier, the method further comprising:
when the second user identifier is successfully acquired and the first user identifier is the same as the second user identifier:
obtaining a third decision score based on the first face confidence score, the first gait confidence score, and their respective weight parameters; and
when the third decision score is within a predetermined range, determining that the user to be identified corresponds to the first user identifier.
3. The user identification method of claim 2, wherein the obtaining a third decision score based on the first face confidence score, the first gait confidence score, and their respective weight parameters comprises:
obtaining a first decision score based on a face recognition weight and the first face confidence score, wherein a weight parameter of the first face confidence score comprises the face recognition weight;
obtaining a second decision score based on a gait recognition weight and the first gait confidence score, wherein a weight parameter of the first gait confidence score comprises the gait recognition weight; and
obtaining the third decision score based on the first decision score and the second decision score.
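The fusion in claims 2–3 amounts to a weighted combination of the two confidence scores, with the combined score checked against a predetermined range. A minimal sketch, where the weight values and the threshold are assumptions for illustration (the claims leave them unspecified):

```python
def fuse_scores(face_conf, gait_conf, w_face, w_gait):
    """Combine face and gait confidence scores into a third decision score."""
    first_decision = w_face * face_conf    # face-weighted decision score
    second_decision = w_gait * gait_conf   # gait-weighted decision score
    return first_decision + second_decision

def accept(face_conf, gait_conf, w_face=0.6, w_gait=0.4, threshold=0.8):
    # Accept the user when the fused score lies in the predetermined range
    # (modelled here as a single lower bound).
    return fuse_scores(face_conf, gait_conf, w_face, w_gait) >= threshold
```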
4. The user identification method of claim 3, further comprising, before obtaining the third decision score based on the first face confidence score, the first gait confidence score, and their respective weight parameters:
acquiring a second face image and a second gait image of each of N known users, wherein each known user has a third user identifier and N is an integer greater than or equal to 1;
acquiring a second face confidence score and a second gait confidence score corresponding to each third user identifier based on the second face image and the second gait image of each known user;
obtaining a fourth decision score corresponding to each third user identifier based on the second face confidence score and the second gait confidence score of each known user; and
fitting the N second face confidence scores, the N second gait confidence scores, and the N fourth decision scores based on a decision model to obtain the face recognition weight and the gait recognition weight.
5. The user identification method of claim 4, wherein the decision model comprises a linear decision function, the method further comprising:
performing linear fitting on the N second face confidence scores, the N second gait confidence scores, and the N fourth decision scores based on the linear decision function.
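The linear fitting of claims 4–5 can be read as an ordinary least-squares fit of d ≈ w_face·f + w_gait·g over the N known users, solved here via the 2×2 normal equations. The function name and the absence of a bias term are assumptions; the claims only state that a linear decision function is fitted:

```python
def fit_weights(face_scores, gait_scores, decision_scores):
    """Least-squares fit of d ~ w_face*f + w_gait*g over N known users."""
    # Accumulate the entries of the normal equations.
    Sff = sum(f * f for f in face_scores)
    Sgg = sum(g * g for g in gait_scores)
    Sfg = sum(f * g for f, g in zip(face_scores, gait_scores))
    Sfd = sum(f * d for f, d in zip(face_scores, decision_scores))
    Sgd = sum(g * d for g, d in zip(gait_scores, decision_scores))
    det = Sff * Sgg - Sfg * Sfg
    if det == 0:
        raise ValueError("degenerate samples: cannot fit weights")
    # Solve the 2x2 system for the two weights.
    w_face = (Sfd * Sgg - Sgd * Sfg) / det
    w_gait = (Sgd * Sff - Sfd * Sfg) / det
    return w_face, w_gait
```

Given scores generated by a true linear rule, the fit recovers the underlying face and gait weights exactly.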
6. The user identification method of claim 1, wherein, when acquisition of the second user identifier fails, the binding the first gait feature with the first user identifier comprises:
after the first gait feature is successfully acquired, generating a gait recognition mark indicating the successful acquisition of the first gait feature; and
when acquisition of the second user identifier fails and the gait recognition mark exists, binding the first gait feature with the first user identifier.
7. The user identification method of claim 1, wherein the acquiring the gait recognition result based on the first gait feature comprises:
searching a gait database for a target gait feature matching the first gait feature, wherein the gait database comprises a plurality of second gait features, each second gait feature is bound with a corresponding second user identifier, and the target gait feature belongs to the plurality of second gait features; and
when the target gait feature is found, acquiring the second user identifier bound with the target gait feature.
8. The user identification method of claim 7, wherein the searching a gait database for a target gait feature matching the first gait feature comprises:
comparing a first gait feature vector with a plurality of second gait feature vectors one by one, wherein the first gait feature comprises the first gait feature vector and each second gait feature comprises a second gait feature vector; and
when the similarity between a second gait feature vector and the first gait feature vector meets a preset condition, determining the second gait feature vector as a target gait feature vector, wherein the target gait feature comprises the target gait feature vector.
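The one-by-one comparison in claim 8 can be sketched as a linear scan of the gait database with a similarity test. Cosine similarity and the 0.9 threshold are assumptions used for illustration; the claim only requires that the similarity "meets a preset condition":

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors; 0.0 for zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def search_gait_db(first_vec, gait_db, threshold=0.9):
    """Scan (user_id, vector) pairs; return the best match above threshold,
    or None when no second gait feature vector meets the preset condition."""
    best_id, best_vec, best_sim = None, None, threshold
    for user_id, second_vec in gait_db:
        sim = cosine_similarity(first_vec, second_vec)
        if sim >= best_sim:
            best_id, best_vec, best_sim = user_id, second_vec, sim
    return (best_id, best_vec) if best_id is not None else None
```

A production system would typically replace the linear scan with an approximate nearest-neighbour index once the database grows, but the claimed matching logic is the same.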
9. A user identification device, comprising:
an image acquisition module configured to acquire a first face image and a first gait image of a user to be identified;
a face recognition module configured to acquire a face recognition result based on the first face image, wherein the face recognition result comprises a first user identifier;
a gait recognition module configured to acquire a gait recognition result and a first gait feature based on the first gait image, by:
acquiring the first gait feature based on the first gait image; and
acquiring the gait recognition result based on the first gait feature, wherein the gait recognition result comprises a second user identifier; and
a gait registration module configured to bind the first gait feature with the first user identifier when acquisition of the second user identifier fails.
10. An electronic device, comprising:
one or more memories storing executable instructions; and
one or more processors executing the executable instructions to implement the method of any one of claims 1-8.
11. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 8.
CN202110288293.8A 2021-03-17 2021-03-17 User identification method and device, electronic equipment and storage medium Pending CN112926491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110288293.8A CN112926491A (en) 2021-03-17 2021-03-17 User identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110288293.8A CN112926491A (en) 2021-03-17 2021-03-17 User identification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112926491A true CN112926491A (en) 2021-06-08

Family

ID=76175796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110288293.8A Pending CN112926491A (en) 2021-03-17 2021-03-17 User identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112926491A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963411A (en) * 2021-10-26 2022-01-21 电科智动(深圳)科技有限公司 Training method and device of identity recognition model, police scooter and storage medium
WO2023107065A1 (en) * 2021-12-06 2023-06-15 Bartin Üni̇versi̇tesi̇ Intelligent system that detects suspects with gait analysis and facial recognition hybrid model

Similar Documents

Publication Publication Date Title
CN108776787B (en) Image processing method and device, electronic device and storage medium
US11392792B2 (en) Method and apparatus for generating vehicle damage information
CN108229419B (en) Method and apparatus for clustering images
CN112184508B (en) Student model training method and device for image processing
WO2022037541A1 (en) Image processing model training method and apparatus, device, and storage medium
WO2021253510A1 (en) Bidirectional interactive network-based pedestrian search method and system, and device
US11048917B2 (en) Method, electronic device, and computer readable medium for image identification
CN109086834B (en) Character recognition method, character recognition device, electronic equipment and storage medium
CN110751675B (en) Urban pet activity track monitoring method based on image recognition and related equipment
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
CN113158909B (en) Behavior recognition light-weight method, system and equipment based on multi-target tracking
CN111160202A (en) AR equipment-based identity verification method, AR equipment-based identity verification device, AR equipment-based identity verification equipment and storage medium
CN112926491A (en) User identification method and device, electronic equipment and storage medium
CN109902644A (en) Face identification method, device, equipment and computer-readable medium
CN110245554B (en) Pedestrian movement trend early warning method, system platform and storage medium
CN110780965A (en) Vision-based process automation method, device and readable storage medium
CN111738199A (en) Image information verification method, image information verification device, image information verification computing device and medium
CN114663871A (en) Image recognition method, training method, device, system and storage medium
Podorozhniak et al. Usage of Mask R-CNN for automatic license plate recognition
CN111062374A (en) Identification method, device, system, equipment and readable medium of identity card information
Liu et al. Brand marketing decision support system based on computer vision and parallel computing
CN112333182B (en) File processing method, device, server and storage medium
CN113344064A (en) Event processing method and device
CN111860066B (en) Face recognition method and device
CN116778534B (en) Image processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination