CN112232424A - Identity recognition method and device, electronic equipment and storage medium - Google Patents

Identity recognition method and device, electronic equipment and storage medium

Info

Publication number
CN112232424A
Authority
CN
China
Prior art keywords
users
target user
user
information
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011132611.3A
Other languages
Chinese (zh)
Inventor
郑莞蓉
蒋小可
鲍纪奎
李启琛
季聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sensetime Technology Co Ltd
Original Assignee
Chengdu Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sensetime Technology Co Ltd filed Critical Chengdu Sensetime Technology Co Ltd
Priority to CN202011132611.3A
Publication of CN112232424A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Abstract

The present disclosure relates to an identity recognition method and apparatus, an electronic device, and a storage medium. The method includes: determining spatio-temporal similarities between a plurality of first users and a target user based on historical spatio-temporal distribution information of users among a plurality of locations in a target area, entry information of the first users who have entered the target area, and exit information of the target user, where the historical spatio-temporal distribution information represents the correspondence between the duration a user spends between a pair of entry and exit locations and the probability of that duration, the pair consisting of an entry location and an exit location; determining visual similarities between the first users and the target user according to the entry information of the first users and the exit information of the target user; and identifying the target user from the first users according to the spatio-temporal similarities and the visual similarities. Embodiments of the present disclosure can improve the accuracy of identity recognition.

Description

Identity recognition method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to an identity recognition method and apparatus, an electronic device, and a storage medium.
Background
Urban public facilities are developing rapidly and bring great convenience to people's daily life, commuting, and tourism. For example, in rail transit scenarios, the subway has become a primary means of transport for passengers. To cope better with peak passenger flow under enormous passenger volumes, certain technologies can be combined with traditional identity verification methods to improve both the speed and the quality of verification.
In recent years, face recognition technology has developed rapidly across many industries, and some cities combine it with traditional ticketing and ticket checking to verify passenger identity. Compared with traditional ticketing and checking, this approach raises throughput and reduces congestion at peak times. However, in face-recognition-based ticketing and checking, once passenger face data reach the millions, misjudgments easily occur for passengers with similar facial features.
Disclosure of Invention
The present disclosure provides a technical solution for identity recognition.
According to an aspect of the present disclosure, there is provided an identity recognition method, including:
determining spatio-temporal similarities between a plurality of first users and a target user, respectively, based on historical spatio-temporal distribution information of users among a plurality of locations in a target area, entry information of the plurality of first users who have currently entered the target area, and exit information of the target user, where the historical spatio-temporal distribution information represents the correspondence between the duration a user spends between a pair of entry and exit locations and the probability of that duration, the pair of entry and exit locations includes an entry location and an exit location, the entry information includes an entry location, an entry time, and an entry image, and the exit information includes an exit location, an exit time, and an exit image; determining visual similarities between the plurality of first users and the target user, respectively, according to the entry information of the plurality of first users and the exit information of the target user; and identifying the target user from the plurality of first users according to the spatio-temporal similarities and the visual similarities.
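For concreteness, the information elements enumerated above can be modeled as plain records. Below is a minimal Python sketch; the field names and the dictionary layout for the historical distribution are assumptions of this illustration, not structures specified by the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

import numpy as np


@dataclass
class EntryInfo:
    location: str      # where the user entered the target area
    time: float        # entry timestamp, seconds since epoch
    image: np.ndarray  # face image captured on entry


@dataclass
class ExitInfo:
    location: str      # where the user left the target area
    time: float        # exit timestamp
    image: np.ndarray  # face image captured on exit


# Historical spatio-temporal distribution: for each ordered pair of
# (entry location, exit location), a vector of probabilities indexed by
# quantized travel-duration interval (e.g., 10-minute bins).
HistSpatioTemporal = Dict[Tuple[str, str], np.ndarray]
```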
In one or more possible implementations, the determining spatiotemporal similarities of the plurality of first users and the target user based on historical spatiotemporal distribution information of users between a plurality of locations of the target area, entry information of a plurality of first users that have currently entered the target area, and exit information of the target user, respectively, includes: for each first user, determining an accessible position of the target user and a target duration experienced at the accessible position according to the leaving information of the target user and the entering information of the first user; and determining the probability corresponding to the target duration in the historical space-time distribution information of the accessible positions, and taking the probability corresponding to the target duration as the space-time similarity.
In one or more possible implementation manners, the determining, according to the entry information of the plurality of first users and the exit information of the target user, the visual similarity between the plurality of first users and the target user includes: and aiming at each first user, matching the entering image of the first user with the leaving image of the target user to obtain the visual similarity between each first user and the target user.
In one or more possible implementations, the identifying the target user from the plurality of first users according to the spatio-temporal similarity and the visual similarity includes: determining fusion similarity of the plurality of first users and the target user according to the space-time similarity and the visual similarity; and identifying the target user from the plurality of first users according to the fusion similarity.
In one or more possible implementations, the determining the fused similarity between the plurality of first users and the target user according to the spatio-temporal similarity and the visual similarity includes: and performing weighted fusion on the space-time similarity and the visual similarity to obtain the fusion similarity between the plurality of first users and the target user.
In one or more possible implementations, the performing weighted fusion on the spatio-temporal similarity and the visual similarity to obtain the fused similarity between the first users and the target user includes: screening out at least one first user with the space-time similarity larger than a first threshold and/or with the visual similarity larger than a second threshold from the plurality of first users; and performing weighted fusion on the spatial-temporal similarity and the visual similarity of the screened at least one first user and the target user, and determining the fusion similarity of the at least one first user and the target user.
In one or more possible implementations, the method further includes: counting the number of historical samples in each time interval aiming at a pair of access positions in the target area, wherein one historical sample represents one access record of a historical user; and generating historical spatiotemporal distribution information of the access positions according to the number of historical samples in each time interval and the total number of the historical samples of the access positions.
In one or more possible implementations, the method further includes: and smoothing the historical space-time distribution information to obtain the smoothed historical space-time distribution information of the access position.
In one or more possible implementations, after the identifying the target user from the plurality of first users, the method further includes: and storing the entering information and the leaving information of the target user as a one-time entering and exiting record, and/or generating the consumption information of the target user according to the entering information and the leaving information of the target user.
According to an aspect of the present disclosure, there is provided an identification apparatus including:
a first determination module, configured to determine spatiotemporal similarities between a plurality of first users and a target user respectively based on historical spatiotemporal distribution information of the users among a plurality of locations within the target area, entry information of the plurality of first users who have currently entered the target area, and exit information of the target user, where the historical spatiotemporal distribution information represents a correspondence between durations and probabilities experienced by the users at a pair of entry and exit locations, the entry and exit locations include an entry location and an exit location, the entry information includes an entry location, an entry time, and an entry image, and the exit information includes an exit location, an exit time, and an exit image;
a second determining module, configured to determine, according to entry information of the multiple first users and exit information of the target user, visual similarities between the multiple first users and the target user respectively;
and the identification module is used for identifying the target user from the plurality of first users according to the space-time similarity and the visual similarity.
In one or more possible implementations, the first determining module is configured to determine, for each of the first users, an accessible location of the target user and a target duration experienced at the accessible location according to departure information of the target user and entry information of the first user; and determining the probability corresponding to the target duration in the historical space-time distribution information of the accessible positions, and taking the probability corresponding to the target duration as the space-time similarity.
In one or more possible implementation manners, the second determining module is configured to, for each first user, match an incoming image of the first user with an outgoing image of the target user, and obtain a visual similarity between each first user and the target user.
In one or more possible implementations, the identifying module is configured to determine a fusion similarity between the plurality of first users and the target user according to the spatio-temporal similarity and the visual similarity; and identifying the target user from the plurality of first users according to the fusion similarity.
In one or more possible implementation manners, the identifying module is configured to perform weighted fusion on the spatio-temporal similarity and the visual similarity to obtain a fused similarity between the plurality of first users and the target user.
In one or more possible implementations, the identifying module is configured to screen out, among the plurality of first users, at least one first user whose spatio-temporal similarity is greater than a first threshold and/or whose visual similarity is greater than a second threshold; and performing weighted fusion on the spatial-temporal similarity and the visual similarity of the screened at least one first user and the target user, and determining the fusion similarity of the at least one first user and the target user.
In one or more possible implementations, the apparatus further includes: the counting module is used for counting the number of historical samples in each time interval aiming at a pair of access positions in the target area, wherein one historical sample represents one access record of a historical user; and generating historical spatiotemporal distribution information of the access positions according to the number of historical samples in each time interval and the total number of the historical samples of the access positions.
In one or more possible implementation manners, the statistical module is further configured to smooth the historical spatiotemporal distribution information to obtain the smoothed historical spatiotemporal distribution information of the entry and exit positions.
In one or more possible implementations, the method further includes: and the generating module is used for storing the entering information and the leaving information of the target user as a one-time entering and exiting record and/or generating the consumption information of the target user according to the entering information and the leaving information of the target user.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, spatio-temporal similarities between a plurality of first users and a target user are determined based on historical spatio-temporal distribution information of users among a plurality of locations in a target area, the entry information of the first users who have currently entered the target area, and the exit information of the target user; visual similarities between the first users and the target user are then determined according to the entry information of the first users and the exit information of the target user, so that the target user can be identified from the first users according to the spatio-temporal similarities and the visual similarities. In this way, the spatio-temporal similarity and the visual similarity between each first user and the target user are combined for user identity recognition, which improves recognition accuracy and reduces the false detections that arise when identity is determined from face images alone.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of an identification method according to an embodiment of the present disclosure.
FIG. 2 illustrates a distribution diagram of historical spatiotemporal distribution information in accordance with an embodiment of the present disclosure.
Fig. 3 shows a flowchart of an example of an identification method according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an identification device according to an embodiment of the disclosure.
FIG. 5 shows a block diagram of an example of an electronic device in accordance with an embodiment of the present disclosure.
FIG. 6 shows a block diagram of an example of an electronic device in accordance with an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The identity recognition scheme provided by the embodiments of the present disclosure can be applied to indoor and outdoor scenarios such as rail transit, urban subways, scenic areas, and exhibition halls. For example, in large and medium-sized urban rail transit with heavy passenger flow, the scheme can verify the identity of users entering and leaving stations: in addition to verifying users through conventional face recognition, historical spatio-temporal distribution information of users entering and leaving stations can be fused into the face recognition process, and this information helps infer from which stations the passengers exiting at the current station are more likely to have entered. For another example, in a scenic area with multiple attractions, the scheme can verify the identity of users entering and leaving the area, and the historical spatio-temporal distribution information helps infer through which entrances the visitors leaving at the current attraction are likely to have entered. This largely reduces the false detections that occur when face images alone are matched under growing user flow, thereby improving the accuracy of identity recognition.
The identity recognition method provided by the embodiments of the present disclosure may be executed by a terminal device, a server, or another type of electronic device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the identity recognition method may be implemented by a processor invoking computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server. The identity recognition method according to the embodiments of the present disclosure is described below with an electronic device as the execution subject.
Fig. 1 shows a flowchart of an identification method according to an embodiment of the present disclosure, as shown in fig. 1, the identification method includes:
step S11, determining spatiotemporal similarities between a plurality of first users and a target user based on historical spatiotemporal distribution information of users among a plurality of locations within the target area, entry information of the plurality of first users that have currently entered the target area, and exit information of the target user, respectively.
In the embodiments of the present disclosure, the electronic device may acquire historical spatio-temporal distribution information of users entering and leaving among a plurality of locations within a target area, acquire entry information of a plurality of first users who have currently entered the target area, and acquire exit information of a target user who is currently leaving the target area. A first user may be a user who has entered the target area at any location and for whom no exit information has been acquired; for each first user, the entry information recorded when that user entered the target area is available. The target user may be a user leaving the target area at any location; for the target user, the exit information recorded as the user leaves is available, while the location and time at which the target user entered the target area are unknown, i.e., the target user's entry information is undetermined. Among the plurality of first users who have entered the target area, one or more may currently be leaving; that is, the target user is one of the plurality of first users. The target area may include a plurality of locations; for example, it may be a subway network, a rail transit station, or a scenic area, and a user may enter the target area at any location and leave it at any location, e.g., enter at any station and exit at any station. The historical spatio-temporal distribution information may represent, for a pair of entry and exit locations (an entry location and an exit location), the correspondence between the duration a user spends between them and the probability of that duration. Thus, based on the historical spatio-temporal distribution information, the entry information of the plurality of first users, and the exit information of the target user, the likelihood that each first user is the target user, i.e., the spatio-temporal similarity between that first user and the target user, may be determined.
Here, the historical spatio-temporal distribution information may be generated from the entry and exit records of historical users. In effect, the historical spatio-temporal distribution of users entering and leaving the target area guides expectations about current users: all users may be assumed to follow the pattern it describes, i.e., the correspondence between the duration any user spends between a pair of entry and exit locations and the probability of that duration, so the first users and the target user may also be assumed to satisfy it. For example, the historical spatio-temporal distribution information may represent the probability that a user who enters the target area at location A and leaves at location B spends a given duration in between. The travel time from location A to location B usually falls within a certain interval, so durations within that interval are more probable and durations in other intervals less probable; this correspondence between duration and probability for a pair of entry and exit locations is what the historical spatio-temporal distribution information captures.
Here, each pair of entry and exit locations may correspond to one piece of historical spatio-temporal distribution information. A pair of entry and exit locations may be order-sensitive: entering the target area at location 1 and leaving at location 2 may be treated as a different pair from entering at location 2 and leaving at location 1. For example, in a subway scenario, differences in station layout, train speed in each direction, and similar factors may cause the time spent travelling from station 1 to station 2 to differ from the time spent travelling from station 2 to station 1. In some implementations, however, a pair of entry and exit locations may be understood as entering at location 1 and leaving at location 2, or entering at location 2 and leaving at location 1; that is, the pair may be independent of the order of the entry and exit locations.
Step S12, determining visual similarities between the first users and the target user according to the entry information of the first users and the exit information of the target user.
In the embodiments of the present disclosure, the entry information of the plurality of first users may be matched against the exit information of the target user to determine the visual similarity between each first user and the target user. Here, the entry information is information related to a user's entry into the target area and may be understood as the information acquired when the user arrives at or enters the target area; it may include an entry location, an entry time, and an entry image. The exit information is information related to a user's departure from the target area and may be understood as the information acquired when the user leaves; it may include an exit location, an exit time, and an exit image. When a user enters or leaves the target area, the location and time of entry or departure may be recorded, yielding the entry location, entry time, exit location, and exit time, respectively. In addition, an image of the user may be captured on entry and on departure, yielding the user's entry image and exit image.
In some implementations, for each of the plurality of first users, the entry image of the first user may be matched against the exit image of the target user to obtain the visual similarity between that first user and the target user. Image acquisition is usually performed on the user's face, so the entry image and exit image can each include a face image; matching the entry image of a first user against the exit image of the target user then amounts to matching the two face images, yielding the visual similarity between the first user and the target user.
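In practice, matching two face images typically means comparing feature embeddings. A minimal sketch, where `face_embedding` stands in for any face recognition backbone (an assumption of this illustration; the disclosure does not prescribe a specific matching algorithm or network):

```python
import numpy as np


def face_embedding(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face recognition model that maps a face image to a
    fixed-length feature vector; any suitable backbone could be used."""
    raise NotImplementedError


def visual_similarity(entry_image: np.ndarray, exit_image: np.ndarray) -> float:
    """Cosine similarity between entry and exit embeddings, rescaled to [0, 1]."""
    a = face_embedding(entry_image)
    b = face_embedding(exit_image)
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 0.5 * (cos + 1.0)
```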
In some implementations, the incoming image of the first user may also be matched with images in a user database, a historical image in the user database that matches the incoming image of the first user is determined, and then the outgoing image of the target user may be matched with the historical image of the first user, and the visual similarity of the target user to the first user is determined. The historical images of a first user may include one or more images, which may include historically acquired facial images of the user. The leaving image of the target user is matched with the historical image of the first user, so that the success rate of image matching can be improved, and the false detection condition caused by the large difference between the entering image and the leaving image can be reduced.
Here, some face matching algorithms or neural networks may be used to match the face image of the first user with the face image of the target user, and the disclosure does not limit the specific matching manner. By matching the incoming image of the first user with the outgoing image of the target user, the identity of the target user can be determined within a certain range of the first user, thereby providing a basis for the identity recognition of the subsequent target user.
It should be noted that, the present disclosure does not limit the sequence of step S11 and step S12, step S12 may be executed after step S11 is executed, step S11 may be executed after step S12 is executed, and step S11 and step S12 may be executed at the same time.
In some implementations, to reduce the amount of image matching, some first users may first be screened from the plurality of first users according to spatio-temporal similarity. That is, after step S11, the first users may be filtered by the spatio-temporal similarity obtained in step S11, for example keeping those whose spatio-temporal similarity to the target user exceeds a similarity threshold; the entry images of only the screened first users are then matched against the exit image of the target user to obtain visual similarities. This reduces the amount of image matching and improves the efficiency of identity recognition.
In some implementations, to reduce the computation of spatio-temporal similarity, some first users may first be screened from the plurality of first users according to visual similarity. That is, after step S12, the first users may be filtered by the visual similarity obtained in step S12, for example keeping those whose visual similarity to the target user exceeds a similarity threshold; spatio-temporal similarities are then computed only between the screened first users and the target user. This reduces the computation of spatio-temporal similarity and improves the efficiency of identity recognition.
Step S13, identifying the target user from the plurality of first users according to the spatio-temporal similarity and the visual similarity.
In the embodiments of the present disclosure, the spatio-temporal similarity and the visual similarity between each first user and the target user can be combined to identify the target user from the plurality of first users. For example, a score may be generated from the spatio-temporal similarity and the visual similarity, with a higher score indicating a higher likelihood that the corresponding first user is the target user. The scores of the plurality of first users may then be sorted in descending order, and the first user with the highest score judged to be the target user.
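As a sketch of this ranking step, assuming both similarities are already on comparable scales and that `score_fn` is some combination rule such as the weighted sum described later:

```python
def identify_target(candidates, st_sim, vis_sim, score_fn):
    """Return the candidate first user with the highest combined score.

    candidates: iterable of user ids; st_sim / vis_sim: dicts mapping a
    user id to its spatio-temporal / visual similarity with the target
    user; score_fn: combines the two similarities into one score.
    """
    return max(candidates, key=lambda u: score_fn(st_sim[u], vis_sim[u]))
```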
The embodiments of the present disclosure combine the spatio-temporal similarity and the visual similarity between the first users and the target user, improving the accuracy of identity recognition and reducing the false detections that occur when identity is determined from face images alone. In scenarios such as rail transit and scenic areas, with heavy passenger flow and huge numbers of face images, finding the most similar face among millions of images using face recognition alone is no small challenge; assisting the single face recognition task with users' historical spatio-temporal distribution information therefore improves the accuracy of identity verification.
After the target user is identified from the plurality of first users, the entering information and the leaving information of the target user can be stored as one-time entering and exiting record, so that the entering information and the leaving information of the target user can be used as a historical sample for updating subsequent historical space-time distribution information, and subsequent information quantitative statistics and calculation are facilitated.
In some application scenarios, for example riding a subway, entering or leaving a high-speed rail station, or visiting a scenic area, after the target user is identified from the plurality of first users, consumption information of the target user may be generated from the target user's entry information and exit information, so that fare deduction can be performed automatically. This improves ticketing and ticket-checking efficiency as well as the safety and accuracy of automatic ticketing and fare collection.
In step S11, the spatio-temporal similarities between the first users and the target user may be determined based on the historical spatio-temporal distribution information of the users entering and leaving the location among the locations, the entering information of the first users, and the leaving information of the target user, so that the spatio-temporal similarities may be used as the basis for the user identification. The process of determining spatiotemporal similarity is described below in one or more implementations.
In some implementations, for each of the plurality of first users, historical spatiotemporal distribution information may be searched for according to departure information of the target user and entry information of the first user, resulting in spatiotemporal similarity of the first user and the target user.
Here, the first user in the current target area may be understood as a user who currently has the entrance information and does not have the exit information. For a first user of the plurality of first users, the known information of the first user may be the incoming information of the first user. For the target user, the known information of the target user may be the leaving information of the target user, so that historical spatio-temporal distribution information may be searched according to the leaving information of the target user and the entering information of the first user, and the possibility that the first user is the same person as the target user may be judged, that is, the spatio-temporal similarity between the first user and the target user is determined. Historical spatiotemporal distribution information can provide user flow direction distribution between one position and another position in different time periods, so that the possibility that the first user leaves a target area at the leaving position of a target user can be inferred by searching the historical spatiotemporal distribution information, and an effective basis is provided for user identification.
In some examples, for each first user, the accessible location of the target user and the target duration experienced at that accessible location may be determined from the exit information of the target user and the entry information of the first user: the accessible location is determined by the entry location of the first user and the exit location of the target user, and the target duration by the first user's entry time and the target user's exit time. An accessible location may be understood as a pair of entry and exit locations through which the target user may have entered and left the target area. The probability corresponding to the target duration is then looked up in the historical spatio-temporal distribution information of that accessible location, and the looked-up probability is taken as the spatio-temporal similarity between the first user and the target user, so the spatio-temporal similarity can be determined quickly.
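A sketch of this lookup, reusing the `HistSpatioTemporal` layout sketched earlier and assuming 10-minute quantization (an illustrative value; the disclosure does not fix the interval):

```python
BIN_SECONDS = 600  # assumed 10-minute quantization interval


def spatiotemporal_similarity(entry: "EntryInfo", exit_info: "ExitInfo",
                              hist: "HistSpatioTemporal") -> float:
    """Probability, read from the historical distribution of the pair
    (entry location, exit location), that a trip takes as long as the
    target duration implied by this entry/exit combination."""
    duration = exit_info.time - entry.time
    if duration < 0:
        return 0.0  # the exit precedes the entry; impossible match
    probs = hist.get((entry.location, exit_info.location))
    if probs is None:
        return 0.0  # no historical record for this pair of locations
    bin_idx = int(duration // BIN_SECONDS)
    return float(probs[bin_idx]) if bin_idx < len(probs) else 0.0
```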
Here, the historical spatio-temporal distribution information may be represented by a distribution curve, a table, or the like; the present disclosure does not limit the representation. FIG. 2 shows a distribution curve of historical spatio-temporal distribution information according to an embodiment of the present disclosure, whose abscissa represents the duration a user spends between a pair of entry and exit locations and whose ordinate represents probability. To reduce the number of ticks on the abscissa, the duration may be quantized in fixed time intervals, for example with a 10-minute interval as the quantization unit of the time axis; that is, the abscissa value multiplied by 10 minutes equals the actual duration.
For example, in a subway entry-and-exit scenario, if the historical distribution indicates that a user who entered at a given station is very likely to exit after n time intervals, then for the target user's current exit, the probability is high that the target user is the same person as a first user who entered that station n time intervals earlier. This regularity is captured by the historical spatio-temporal distribution information: the probability that the target user is a given first user can be queried from the distribution curve, and the queried probability used as the spatio-temporal similarity between the target user and that first user. Here, n may be an integer.
In the embodiment of the disclosure, the space-time similarity between the first user and the target user can be determined according to the historical space-time distribution information of a pair of in-out positions, so that effective reference information can be provided for user identification. The historical spatiotemporal distribution information may be generated based on the entry and exit records of the historical user entering the target area and leaving the target area, the entry and exit records of the historical user may provide a reference for current user identification, and the process of obtaining the historical spatiotemporal distribution information is described below through one or more implementation manners.
In some implementations, the historical number of samples in each time interval may be counted for a pair of in-out locations, i.e., for a pair of in-location entering the target area and out-location leaving the target area, and then historical spatiotemporal distribution information for the pair of in-out locations may be generated according to the historical number of samples in each time interval and the total number of historical samples for the pair of in-out locations.
In this implementation, one historical sample represents one entry-exit record of a historical user entering and leaving the target area. To make the historical spatio-temporal distribution information trustworthy, historical samples from a sufficiently long recent period (no less than some minimum span) may be used, for example the most recent two weeks. A historical sample may include the historical user's entry location, exit location, and the duration spent between that pair of entry and exit locations. To distinguish locations, each entry location and exit location may be assigned a unique identifying number. The duration spent between a pair of entry and exit locations may be obtained by subtracting the entry time from the exit time and, for ease of statistics, quantized in a fixed time interval, for example ten minutes. The number of historical samples for each pair of entry and exit locations in each time interval may then be counted to obtain a statistical model (e.g., a statistical table or matrix) indexed by entry location, exit location, and time interval; an element of the model is the number of historical samples whose duration from entry location x to exit location y falls in the z-th time interval, where z is an integer. Dividing the number of historical samples in each time interval by the total number of historical samples from entry location x to exit location y yields the historical spatio-temporal distribution information from entry location x to exit location y.
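A sketch of this counting and normalization, assuming each historical sample is an (entry_location, exit_location, duration_seconds) tuple and the same 10-minute quantization (illustrative choices):

```python
from collections import defaultdict

import numpy as np


def build_hist(samples, num_bins: int, bin_seconds: int = 600):
    """Count samples per (entry location, exit location, time interval),
    then divide each pair's counts by its total number of samples."""
    counts = defaultdict(lambda: np.zeros(num_bins))
    for entry_loc, exit_loc, duration in samples:
        b = int(duration // bin_seconds)
        if 0 <= b < num_bins:
            counts[(entry_loc, exit_loc)][b] += 1
    hist = {}
    for pair, c in counts.items():
        total = c.sum()
        if total > 0:
            hist[pair] = c / total  # probability per time interval
    return hist
```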
Here, the statistical model may be obtained by modeling data and simulating a user flow direction between the respective locations, and historical spatiotemporal distribution information of the user entering and exiting the target area between the respective locations may be obtained by the entry information and the exit information of the historical user, that is, the entry information and the exit information of the historical user may be used to predict a behavior of the current user entering or exiting the target area.
To further improve the reliability of the historical spatio-temporal distribution information, it may be smoothed to obtain smoothed historical spatio-temporal distribution information for each pair of entry and exit locations; for example, Gaussian smoothing may be applied. The distribution curve of the historical spatio-temporal distribution information shown in fig. 2 is a smoothed curve; it can be seen that the smoothed probability curve over the n time intervals from location x to location y is relatively smooth, so the historical spatio-temporal distribution information it provides better fits the requirements of actual scenarios.
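Smoothing each pair's probability vector can be done with a standard one-dimensional Gaussian filter; a sketch, with the kernel width as an assumed example and a re-normalization step so each vector remains a probability distribution:

```python
from scipy.ndimage import gaussian_filter1d


def smooth_hist(hist, sigma: float = 1.0):
    """Gaussian-smooth each location pair's probability vector and
    re-normalize it to sum to 1."""
    smoothed = {}
    for pair, probs in hist.items():
        s = gaussian_filter1d(probs, sigma=sigma)
        smoothed[pair] = s / s.sum()
    return smoothed
```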
In step S13, the target user may be identified from the plurality of first users according to the spatio-temporal similarity and the visual similarity, combining the two to improve the accuracy of identity recognition. In some possible implementations, a fusion similarity between each first user and the target user may be determined from the spatio-temporal similarity and the visual similarity; for example, for one of the first users, the two similarities may be added to obtain that user's fusion similarity with the target user. The target user may then be identified from the plurality of first users according to the fusion similarities: for example, a first user whose fusion similarity exceeds a similarity threshold may be judged to be the target user, or the fusion similarities may be sorted in descending order and the first user with the largest fusion similarity judged to be the target user. In this way, the target user is identified from the many users who have entered the target area by a fusion similarity that combines spatio-temporal and visual evidence.
In some examples, in the case of determining the fusion similarity between the plurality of first users and the target user, the temporal-spatial similarity and the visual similarity between each first user and the target user may be weighted and fused, for example, the temporal-spatial similarity and the visual similarity may be weighted and summed according to a preset weight, so as to obtain the fusion similarity between the plurality of first users and the target user respectively. Here, the weight of the spatiotemporal similarity and the weight of the visual similarity may be set according to actual requirements, and in some examples, the weight of the spatiotemporal similarity may be smaller than the weight of the visual similarity, so that the historical spatiotemporal distribution information may be used as auxiliary judgment information for face recognition, and the historical spatiotemporal distribution information is used to correct misrecognition caused by judging the identity of the user with a single face image, thereby improving the accuracy of user identity recognition.
To reduce the processing required to identify the target user, at least one first user whose spatio-temporal similarity exceeds a first threshold and/or whose visual similarity exceeds a second threshold may be screened from the plurality of first users; the spatio-temporal and visual similarities of the screened first users are then weighted and fused to determine their fusion similarities with the target user, and the target user is identified from the screened first users according to those fusion similarities. Concretely, the spatio-temporal similarities of the first users may be compared with the first threshold to keep those exceeding it; or the visual similarities may be compared with the second threshold to keep those exceeding it; or both comparisons may be applied, keeping the first users that exceed both thresholds. The fusion similarities of the screened first users are then computed, and the screened first user with the largest fusion similarity is judged to be the target user. This reduces the processing required for identity recognition and improves its efficiency.
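A sketch combining the screening and the weighted fusion; the weights and thresholds are illustrative values, with the spatio-temporal weight smaller than the visual weight as suggested above, and the screening shown here keeps users passing either threshold (the "and" variant is a one-word change):

```python
def fuse_and_rank(candidates, st_sim, vis_sim,
                  w_st: float = 0.3, w_vis: float = 0.7,
                  t_st: float = 0.1, t_vis: float = 0.5):
    """Screen first users by the two thresholds, weight-fuse their
    similarities, and return (user id, fusion score) pairs sorted
    in descending order of fusion score."""
    kept = [u for u in candidates
            if st_sim[u] > t_st or vis_sim[u] > t_vis]
    fused = {u: w_st * st_sim[u] + w_vis * vis_sim[u] for u in kept}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```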
Here, the first threshold and the second threshold may be set according to actual application requirements, and the first threshold and the second threshold may be the same or different.
The identity recognition method provided by the embodiments of the present disclosure is illustrated below with an example. Fig. 3 shows a flowchart of an example of the identity recognition method according to an embodiment of the present disclosure. In this example, the target area may be a subway network, the entry location an inbound station, the entry information the inbound information (including inbound time, inbound station, and inbound image), the exit location an outbound station, the exit information the outbound information (including outbound time, outbound station, and outbound image), the target user the user currently exiting, and the first users the users who have entered. As shown in fig. 3, the identification process of this example may include the following steps, with a code sketch after the list:
Step S301, obtaining the outbound information of a target user;
Step S302, searching the historical spatio-temporal distribution information of users entering and exiting stations according to the inbound information of a plurality of first users who have entered and the outbound information of the target user, and calculating the spatio-temporal similarity between each of these first users and the target user;
Step S303, matching the inbound image of each first user with the outbound image of the target user, and calculating the visual similarity between each first user and the target user;
Step S304, performing weighted fusion on the spatio-temporal similarity and the visual similarity of each first user with the target user to obtain the fusion similarity between each first user and the target user;
Step S305, sorting the fusion similarities of the inbound first users with the target user in descending order, obtaining the first user with the largest fusion similarity, and judging that first user to be the target user;
Step S306, generating consumption information of the target user according to the target user's outbound station and inbound station.
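Tying steps S301-S306 together with the helper functions sketched earlier (all of these names, and `compute_fare`, are assumptions of this rewrite rather than interfaces defined by the disclosure):

```python
def compute_fare(entry_station: str, exit_station: str) -> float:
    """Assumed fare rule; any tariff table could back this."""
    return 3.0  # flat example fare


def identify_on_exit(exit_info, entered_users, hist):
    """entered_users: dict mapping user id -> EntryInfo for users currently
    inside the network; hist: smoothed historical distribution."""
    ids = list(entered_users)
    st = {u: spatiotemporal_similarity(entered_users[u], exit_info, hist)
          for u in ids}                                        # step S302
    vis = {u: visual_similarity(entered_users[u].image, exit_info.image)
           for u in ids}                                       # step S303
    ranked = fuse_and_rank(ids, st, vis)                       # steps S304-S305
    if not ranked:
        return None, None  # no candidate passed the screening
    target_id, _ = ranked[0]
    entry = entered_users.pop(target_id)    # close out this entry-exit record
    fare = compute_fare(entry.location, exit_info.location)    # step S306
    return target_id, fare
```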
In the identity recognition scheme provided by the present disclosure, the spatio-temporal information of a user's entry into the target area (entry time and entry location) is fused with the face image during identity recognition, so the user's identity is judged more accurately. Compared with identification by face recognition alone, the scheme is better suited to rail transit scenarios with large-scale data and effectively reduces the false alarms among similar-looking users that grow with the number of collected images. Furthermore, as historical data accumulate over time, the recognition accuracy provided by the present disclosure can be iteratively improved. Meanwhile, for the false detections that single-image face recognition may produce when database images differ substantially from on-site images, fusing historical entry-exit information improves the robustness of identity recognition.
It can be understood that the above method embodiments of the present disclosure may be combined with one another to form combined embodiments without departing from the principles and logic; details are omitted here due to space limitations. Those skilled in the art can understand that, in the above methods of the specific embodiments, the specific execution order of the steps should be determined by their functions and possible internal logic.
In addition, the present disclosure further provides an identity recognition apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any identity recognition method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the method section, which is not repeated here.
Fig. 4 shows a block diagram of an identification apparatus according to an embodiment of the present disclosure, as shown in fig. 4, the apparatus including:
a first determining module 41, configured to determine spatiotemporal similarities between a plurality of first users and a target user respectively based on historical spatiotemporal distribution information of the users among a plurality of locations within the target area, entry information of the plurality of first users who have currently entered the target area, and exit information of the target user, where the historical spatiotemporal distribution information represents a correspondence between a duration and a probability experienced by the users at a pair of entry and exit locations, the entry and exit locations include an entry location and an exit location, the entry information includes an entry location, an entry time, and an entry image, and the exit information includes an exit location, an exit time, and an exit image;
a second determining module 42, configured to determine, according to the entry information of the multiple first users and the exit information of the target user, visual similarities between the multiple first users and the target user respectively;
an identifying module 43, configured to identify the target user from the plurality of first users according to the spatio-temporal similarity and the visual similarity.
In one or more possible implementations, the first determining module 41 is configured to determine, for each of the first users, an accessible location of the target user and a target duration experienced at the accessible location according to the leaving information of the target user and the entering information of the first user; and determining the probability corresponding to the target duration in the historical space-time distribution information of the accessible positions, and taking the probability corresponding to the target duration as the space-time similarity.
In one or more possible implementations, the second determining module 42 is configured to, for each first user, match the incoming image of the first user with the outgoing image of the target user to obtain a visual similarity between each first user and the target user.
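A minimal sketch of such a matcher, assuming the entry and exit face images have already been converted to fixed-length feature vectors by some face recognition model (the embedding step itself, and the cosine metric, are assumptions for illustration):

```python
import numpy as np

def visual_similarity(entry_embedding: np.ndarray, exit_embedding: np.ndarray) -> float:
    """Cosine similarity between the face feature of a first user's entry image
    and the face feature of the target user's exit image, rescaled to [0, 1] so
    it is comparable with the probability-valued spatio-temporal similarity."""
    a = entry_embedding / np.linalg.norm(entry_embedding)
    b = exit_embedding / np.linalg.norm(exit_embedding)
    return float((a @ b + 1.0) / 2.0)
```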
In one or more possible implementations, the identifying module 43 is configured to determine a fusion similarity between the plurality of first users and the target user according to the spatio-temporal similarity and the visual similarity; and identifying the target user from the plurality of first users according to the fusion similarity.
In one or more possible implementations, the identifying module 43 is configured to perform weighted fusion on the spatio-temporal similarity and the visual similarity to obtain a fused similarity between the plurality of first users and the target user.
In one or more possible implementations, the identifying module 43 is configured to screen out, among the plurality of first users, at least one first user whose spatio-temporal similarity is greater than a first threshold and/or whose visual similarity is greater than a second threshold; and performing weighted fusion on the spatial-temporal similarity and the visual similarity of the screened at least one first user and the target user, and determining the fusion similarity of the at least one first user and the target user.
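Putting the two preceding scores together, the following sketch screens candidates per modality and then ranks the survivors by a weighted fusion; the weights, thresholds, and record layout are illustrative assumptions left open by the disclosure:

```python
def identify_target(first_users, w_st=0.5, w_vis=0.5,
                    st_threshold=0.1, vis_threshold=0.6):
    """first_users: list of dicts, each carrying precomputed "st_sim" and
    "vis_sim" scores against the exiting target user. Returns the best
    candidate, or None if nobody passes either screening threshold."""
    candidates = [u for u in first_users
                  if u["st_sim"] > st_threshold or u["vis_sim"] > vis_threshold]
    if not candidates:
        return None
    # Weighted fusion of the two similarities; the highest fused score wins.
    return max(candidates,
               key=lambda u: w_st * u["st_sim"] + w_vis * u["vis_sim"])

users = [{"id": 1, "st_sim": 0.28, "vis_sim": 0.91},
         {"id": 2, "st_sim": 0.02, "vis_sim": 0.55}]
print(identify_target(users)["id"])  # 1
```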
In one or more possible implementations, the apparatus further includes: a statistics module, configured to count, for a pair of entry and exit positions in the target area, the number of historical samples in each time interval, where one historical sample represents one entry-and-exit record of a historical user; and generate the historical spatiotemporal distribution information of the pair of entry and exit positions according to the number of historical samples in each time interval and the total number of historical samples for the pair of entry and exit positions.
In one or more possible implementations, the statistics module is further configured to smooth the historical spatiotemporal distribution information to obtain smoothed historical spatiotemporal distribution information of the entry and exit positions.
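For illustration, the histogram construction and smoothing might look like the following sketch; one-minute bins and a moving-average filter are assumptions, since the disclosure fixes neither the interval width nor the smoothing method:

```python
import numpy as np

def build_distribution(durations_minutes, num_bins=120, smooth_window=5):
    """Build the historical spatio-temporal distribution for one pair of entry
    and exit positions: count historical samples per time interval, normalize
    by the total number of samples, then smooth with a moving average."""
    counts = np.zeros(num_bins)
    for d in durations_minutes:      # each value comes from one entry/exit record
        if 0 <= d < num_bins:
            counts[d] += 1
    probabilities = counts / max(counts.sum(), 1.0)
    kernel = np.ones(smooth_window) / smooth_window
    # Smoothing spreads probability mass into sparsely sampled neighboring bins.
    return np.convolve(probabilities, kernel, mode="same")
```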
In one or more possible implementations, the apparatus further includes: a generating module, configured to store the entry information and the exit information of the target user as one entry-and-exit record, and/or generate consumption information of the target user according to the entry information and the exit information of the target user, as sketched below.
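A trivial sketch of this persistence step; the record layout and the fare-table lookup are hypothetical, since the disclosure does not specify how consumption information is computed:

```python
def close_trip(entry_info, exit_info, records, fare_table):
    """Store one completed entry-and-exit record for the identified target
    user and derive consumption information (here, a fare) from the record."""
    record = {"entry": entry_info, "exit": exit_info}
    records.append(record)
    fare = fare_table.get((entry_info["position"], exit_info["position"]), 0.0)
    return {"user_record": record, "consumption": fare}
```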
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments; for specific implementations, refer to the descriptions of the above method embodiments, which, for brevity, are not repeated here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the identity recognition method provided in any of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer-readable instructions which, when executed, cause a computer to perform the operations of the identity recognition method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 5, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 6 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 6, the electronic device 1900 includes a processing component 1922 that further includes one or more processors, and memory resources, represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may further include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system from Apple Inc. (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source-code Unix-like operating system (Linux™), the open-source-code Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is exemplary, is not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. An identity recognition method, comprising:
respectively determining the space-time similarity of a plurality of first users and a target user based on historical space-time distribution information of the users among a plurality of positions in the target area, entry information of the plurality of first users which have currently entered the target area and exit information of the target user, wherein the historical space-time distribution information represents the correspondence between the duration experienced by the users at a pair of entry and exit positions and the probability of that duration, the entry and exit positions comprise an entry position and an exit position, the entry information comprises an entry position, an entry time and an entry image, and the exit information comprises an exit position, an exit time and an exit image;
respectively determining the visual similarity of the plurality of first users and the target user according to the entry information of the plurality of first users and the exit information of the target user;
and identifying the target user from the plurality of first users according to the space-time similarity and the visual similarity.
2. The method of claim 1, wherein the determining spatiotemporal similarities of the plurality of first users and the target user based on historical spatiotemporal distribution information of users between the plurality of locations of the target area, entry information of the plurality of first users that have currently entered the target area, and exit information of the target user, respectively, comprises:
for each first user, determining an accessible position of the target user and a target duration experienced at the accessible position according to the leaving information of the target user and the entering information of the first user;
and determining the probability corresponding to the target duration in the historical space-time distribution information of the accessible positions, and taking the probability corresponding to the target duration as the space-time similarity.
3. The method according to claim 1 or 2, wherein the determining the visual similarity between the plurality of first users and the target user according to the entry information of the plurality of first users and the exit information of the target user respectively comprises:
and aiming at each first user, matching the entering image of the first user with the leaving image of the target user to obtain the visual similarity between each first user and the target user.
4. The method of any one of claims 1 to 3, wherein the identifying the target user from the plurality of first users according to the spatio-temporal similarity and the visual similarity comprises:
determining fusion similarity of the plurality of first users and the target user according to the space-time similarity and the visual similarity;
and identifying the target user from the plurality of first users according to the fusion similarity.
5. The method of claim 4, wherein determining the fused similarity of the plurality of first users to the target user based on the spatiotemporal similarity and the visual similarity comprises:
and performing weighted fusion on the space-time similarity and the visual similarity to obtain the fusion similarity between the plurality of first users and the target user.
6. The method of claim 5, wherein the weighted fusion of the spatiotemporal similarity and the visual similarity to obtain the fused similarity between the first users and the target user comprises:
screening out at least one first user with the space-time similarity larger than a first threshold and/or with the visual similarity larger than a second threshold from the plurality of first users;
and performing weighted fusion on the spatial-temporal similarity and the visual similarity of the screened at least one first user and the target user, and determining the fusion similarity of the at least one first user and the target user.
7. The method of any one of claims 1 to 6, further comprising:
counting the number of historical samples in each time interval aiming at a pair of access positions in the target area, wherein one historical sample represents one access record of a historical user;
and generating historical spatiotemporal distribution information of the access positions according to the number of historical samples in each time interval and the total number of the historical samples of the access positions.
8. The method of claim 7, further comprising:
and smoothing the historical space-time distribution information to obtain the smoothed historical space-time distribution information of the access position.
9. The method of any one of claims 1 to 8, wherein, after the target user is identified from the plurality of first users, the method further comprises:
and storing the entering information and the leaving information of the target user as a one-time entering and exiting record, and/or generating the consumption information of the target user according to the entering information and the leaving information of the target user.
10. An identification device, comprising:
a first determination module, configured to determine spatiotemporal similarities between a plurality of first users and a target user respectively based on historical spatiotemporal distribution information of users among a plurality of locations within the target area, entry information of the plurality of first users who have currently entered the target area, and exit information of the target user, where the historical spatiotemporal distribution information represents the correspondence between the duration experienced by users at a pair of entry and exit locations and the probability of that duration, the entry and exit locations include an entry location and an exit location, the entry information includes an entry location, an entry time, and an entry image, and the exit information includes an exit location, an exit time, and an exit image;
a second determining module, configured to determine, according to entry information of the multiple first users and exit information of the target user, visual similarities between the multiple first users and the target user respectively;
and the identification module is used for identifying the target user from the plurality of first users according to the space-time similarity and the visual similarity.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 9.
12. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 9.
CN202011132611.3A 2020-10-21 2020-10-21 Identity recognition method and device, electronic equipment and storage medium Pending CN112232424A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011132611.3A CN112232424A (en) 2020-10-21 2020-10-21 Identity recognition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011132611.3A CN112232424A (en) 2020-10-21 2020-10-21 Identity recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112232424A true CN112232424A (en) 2021-01-15

Family

ID=74108916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011132611.3A Pending CN112232424A (en) 2020-10-21 2020-10-21 Identity recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112232424A (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680046A (en) * 2013-11-29 2015-06-03 华为技术有限公司 User activity recognition method and device
US20170286845A1 (en) * 2016-04-01 2017-10-05 International Business Machines Corporation Automatic extraction of user mobility behaviors and interaction preferences using spatio-temporal data
US20180060684A1 (en) * 2016-08-31 2018-03-01 Beijing University Of Posts And Telecommunications Progressive vehicle searching method and device
CN108427911A (en) * 2018-01-30 2018-08-21 阿里巴巴集团控股有限公司 A kind of auth method, system, device and equipment
CN109740653A (en) * 2018-12-25 2019-05-10 北京航空航天大学 A kind of vehicle recognition methods again for merging visual appearance and space-time restriction
CN110110598A (en) * 2019-04-01 2019-08-09 桂林电子科技大学 The pedestrian of a kind of view-based access control model feature and space-time restriction recognition methods and system again
CN110263697A (en) * 2019-06-17 2019-09-20 哈尔滨工业大学(深圳) Pedestrian based on unsupervised learning recognition methods, device and medium again
CN110348379A (en) * 2019-07-10 2019-10-18 北京旷视科技有限公司 Target object determines method, apparatus, system and storage medium in a kind of public transport
CN110428448A (en) * 2019-07-31 2019-11-08 腾讯科技(深圳)有限公司 Target detection tracking method, device, equipment and storage medium
CN110619268A (en) * 2019-08-07 2019-12-27 北京市新技术应用研究所 Pedestrian re-identification method and device based on space-time analysis and depth features
CN110706377A (en) * 2019-08-19 2020-01-17 厦门路桥信息股份有限公司 Passing method, medium and system based on face recognition and riding code intercommunication
CN110765863A (en) * 2019-09-17 2020-02-07 清华大学 Target clustering method and system based on space-time constraint
CN110942036A (en) * 2019-11-29 2020-03-31 深圳市商汤科技有限公司 Person identification method and device, electronic equipment and storage medium
CN111369727A (en) * 2020-02-20 2020-07-03 上海商汤智能科技有限公司 Traffic control method and device
CN111522995A (en) * 2020-04-26 2020-08-11 重庆紫光华山智安科技有限公司 Target object analysis method and device and electronic equipment
CN111738221A (en) * 2020-07-28 2020-10-02 腾讯科技(深圳)有限公司 Face clustering method, face clustering device and computer-readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269129A (en) * 2021-06-11 2021-08-17 成都商汤科技有限公司 Identity recognition method and device, electronic equipment and storage medium
WO2022257306A1 (en) * 2021-06-11 2022-12-15 成都商汤科技有限公司 Identity identification method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN110009090B (en) Neural network training and image processing method and device
CN113228124B (en) Image processing method and device, electronic equipment and storage medium
US20210166040A1 (en) Method and system for detecting companions, electronic device and storage medium
CN110942036B (en) Person identification method and device, electronic equipment and storage medium
US20220084056A1 (en) Methods and apparatuses for managing visitor information, electronic devices and storage media
CN109543537B (en) Re-recognition model increment training method and device, electronic equipment and storage medium
CN111274426A (en) Category labeling method and device, electronic equipment and storage medium
WO2022134388A1 (en) Method and device for rider fare evasion detection, electronic device, storage medium, and computer program product
CN113688729B (en) Behavior recognition method and device, electronic equipment and storage medium
CN109344703B (en) Object detection method and device, electronic equipment and storage medium
CN112837454A (en) Passage detection method and device, electronic equipment and storage medium
CN113792622A (en) Frame rate adjusting method and device, electronic equipment and storage medium
US20170034092A1 (en) Method and device for processing short message
CN111435422B (en) Action recognition method, control method and device, electronic equipment and storage medium
WO2022227562A1 (en) Identity recognition method and apparatus, and electronic device, storage medium and computer program product
CN114648796A (en) User identification method, device, storage medium and electronic equipment
CN111753611A (en) Image detection method, device and system, electronic equipment and storage medium
CN113762169A (en) People flow statistical method and device, electronic equipment and storage medium
CN112232424A (en) Identity recognition method and device, electronic equipment and storage medium
CN111523599B (en) Target detection method and device, electronic equipment and storage medium
CN110781975B (en) Image processing method and device, electronic device and storage medium
WO2022257306A1 (en) Identity identification method and apparatus, electronic device, and storage medium
CN110543928B (en) Method and device for detecting number of people on trackless rubber-tyred vehicle
CN109740557A (en) Method for checking object and device, electronic equipment and storage medium
CN114004868A (en) Video processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination