CN111968152A - Dynamic identity recognition method and device - Google Patents

Dynamic identity recognition method and device

Info

Publication number
CN111968152A
CN111968152A (application CN202010680092.8A; granted as CN111968152B)
Authority
CN
China
Prior art keywords
pedestrian
target
face
images
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010680092.8A
Other languages
Chinese (zh)
Other versions
CN111968152B (en)
Inventor
蔡晓东
黄玳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin Topintelligent Communication Technology Co ltd
Original Assignee
Guilin Topintelligent Communication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin Topintelligent Communication Technology Co ltd filed Critical Guilin Topintelligent Communication Technology Co ltd
Priority to CN202010680092.8A priority Critical patent/CN111968152B/en
Publication of CN111968152A publication Critical patent/CN111968152A/en
Application granted granted Critical
Publication of CN111968152B publication Critical patent/CN111968152B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F 18/22 — Matching criteria, e.g. proximity measures
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/751 — Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/168 — Feature extraction; Face representation

Abstract

The invention provides a dynamic identity recognition method and device. The method comprises the following steps: obtaining a plurality of face images to be processed and a plurality of pedestrian images to be processed from a plurality of monitoring cameras; marking the face images and pedestrian images to be processed to obtain a plurality of face images and a plurality of pedestrian images; performing image matching on the face images and pedestrian images to obtain a plurality of target pairing groups; analyzing the matching result of each target pairing group to obtain a plurality of matching results; and making a recognition decision over the matching results to obtain a target identity recognition result. The invention improves the accuracy and reliability of recognition, obtains richer target identity information, truly achieves non-cooperative dynamic identity recognition and more accurate target identification, and keeps equipment costs low.

Description

Dynamic identity recognition method and device
Technical Field
The invention relates to the technical field of target recognition, and in particular to a dynamic identity recognition method and device.
Background
Face recognition methods are now fairly mature, but most work focuses on static face recognition in cooperative scenarios. In a surveillance scenario, recognition accuracy based on facial features drops sharply under interference from factors such as camera mounting position and angle, capture distance, target movement, and lighting changes.
Identity recognition algorithms based on deep learning typically extract high-dimensional features and then compute feature similarity with the cosine distance to obtain a feature matching result. Because a target's biometric features in a surveillance scenario are disturbed by many real-world factors, recognition accuracy based on single-modality features (such as facial features alone) struggles to meet practical requirements.
Disclosure of Invention
The invention aims to address the above shortcomings of the prior art by providing a dynamic identity recognition method and device.
The technical solution of the invention for solving the above technical problem is as follows: a dynamic identity recognition method comprising the following steps:
respectively obtaining a plurality of face images to be processed and a plurality of pedestrian images to be processed from a plurality of monitoring cameras;
respectively marking the plurality of face images to be processed and the plurality of pedestrian images to be processed to obtain a plurality of face images and a plurality of pedestrian images;
carrying out image matching processing on the plurality of face images and the plurality of pedestrian images to obtain a plurality of target matching groups;
respectively analyzing the matching results of the target matching groups to obtain a plurality of matching results;
and carrying out identification judgment on the plurality of matching results to obtain a target identity identification result.
Another technical solution of the present invention for solving the above technical problem is as follows: a dynamic identity recognition device comprising:
the image acquisition module is used for respectively acquiring a plurality of face images to be processed and a plurality of pedestrian images to be processed from a plurality of monitoring cameras;
the marking processing module is used for respectively marking the plurality of face images to be processed and the plurality of pedestrian images to be processed to obtain a plurality of face images and a plurality of pedestrian images;
the image matching module is used for performing image matching processing on the plurality of face images and the plurality of pedestrian images to obtain a plurality of target matching groups;
the matching result analysis module is used for respectively analyzing the matching results of the target matching groups to obtain a plurality of matching results;
and the identification result obtaining module is used for carrying out identification judgment on the plurality of matching results to obtain a target identity identification result.
The invention has the following beneficial effects: marking the face images and pedestrian images to be processed yields the plurality of face images and pedestrian images and enables real-time tracking of the recognition target; pairing the face images with the pedestrian images yields the plurality of target pairing groups; analyzing the matching result of each target pairing group yields the plurality of matching results and richer target identity information, truly achieving non-cooperative dynamic identity recognition; and the recognition decision over the matching results yields the target identity recognition result, improving the accuracy and reliability of recognition and achieving more accurate target identification at low equipment cost.
Drawings
Fig. 1 is a schematic flow chart of a dynamic identity recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a matching result screening method for dynamic identity recognition according to an embodiment of the present invention;
fig. 3 is a block diagram of a dynamic identity recognition apparatus according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a schematic flow chart of a dynamic identity recognition method according to an embodiment of the present invention.
As shown in fig. 1, a dynamic identity recognition method includes the following steps:
respectively obtaining a plurality of face images to be processed and a plurality of pedestrian images to be processed from a plurality of monitoring cameras;
respectively marking the plurality of face images to be processed and the plurality of pedestrian images to be processed to obtain a plurality of face images and a plurality of pedestrian images;
carrying out image matching processing on the plurality of face images and the plurality of pedestrian images to obtain a plurality of target matching groups;
respectively analyzing the matching results of the target matching groups to obtain a plurality of matching results;
and carrying out identification judgment on the plurality of matching results to obtain a target identity identification result.
It should be understood that high-performance surveillance cameras are installed at the main pedestrian passages and key intersections of a community, and a computer is used to configure the camera parameters so that each camera captures a reasonably clear pedestrian image to be processed at the same time as a high-quality face image to be processed; current high-performance surveillance cameras can do both simultaneously. The purpose is to reduce equipment and labor costs while obtaining multiple kinds of identity information about the target at once. Capturing the face image to be processed and the pedestrian image to be processed with the same camera guarantees that they belong to the same target. An advanced object detection algorithm is then applied to the pedestrian image to be processed to obtain the pedestrian image.
In the above embodiment, marking the face images and pedestrian images to be processed yields the plurality of face images and pedestrian images and enables real-time tracking of the recognition target; pairing them yields the plurality of target pairing groups; analyzing each group's matching result yields the plurality of matching results and richer target identity information, truly achieving non-cooperative dynamic identity recognition; and the recognition decision over the matching results yields the target identity recognition result, improving the accuracy and reliability of recognition at low equipment cost.
Optionally, as an embodiment of the present invention, the process of respectively performing a labeling process on the multiple face images to be processed and the multiple pedestrian images to be processed to obtain multiple face images and multiple pedestrian images includes:
marking the plurality of face images to be processed with a combination of camera ID and snapshot time to obtain the plurality of face images;
and using the YOLO2 algorithm to detect the pedestrian bounding box in each of the plurality of pedestrian images to be processed, and marking each detected pedestrian image with a combination of camera ID and snapshot time to obtain the plurality of pedestrian images, wherein the camera ID identifies the installation position of each camera.
It should be understood that the pedestrian bounding box is detected using the YOLO2 algorithm, yielding the pedestrian image.
In the above embodiment, the plurality of face images are obtained by marking the face images to be processed with a combination of camera ID and snapshot time, and the plurality of pedestrian images are obtained by detecting pedestrian bounding boxes in the pedestrian images to be processed with the YOLO2 algorithm and marking the detected images in the same way. This enables real-time tracking of the recognition target, reduces equipment and labor costs, obtains multiple kinds of target identity information at once, and improves the accuracy and reliability of recognition.
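A minimal sketch of the marking scheme described above. The `"<camera_id>_<YYYYmmddHHMMSS>"` string layout is an illustrative assumption; the patent only specifies that the mark combines the camera ID and the snapshot time, not a concrete format.

```python
from datetime import datetime

def make_mark(camera_id: str, capture_time: datetime) -> str:
    # Combine the camera ID and the snapshot time into one mark, as the
    # embodiment above describes. The layout here is an assumption.
    return f"{camera_id}_{capture_time.strftime('%Y%m%d%H%M%S')}"

def parse_mark(mark: str):
    # Recover the camera ID (hence the camera's installation position)
    # and the capture time, which the later screening steps rely on.
    camera_id, stamp = mark.rsplit("_", 1)
    return camera_id, datetime.strptime(stamp, "%Y%m%d%H%M%S")

mark = make_mark("CAM03", datetime(2020, 7, 15, 9, 30, 0))
```

Any format works as long as both fields can be recovered unambiguously, since the preliminary screening step reads them back.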
Optionally, as an embodiment of the present invention, the process of performing image screening and pairing processing on the plurality of face images and the plurality of pedestrian images to obtain a plurality of target pairing groups includes:
performing primary screening processing on the face images according to the camera ID to obtain a plurality of target paired face images, and performing primary screening processing on the pedestrian images to obtain a plurality of target paired pedestrian images;
and carrying out one-to-one pairing processing on the target paired face images and the target paired pedestrian images to obtain a plurality of target paired groups.
In the above embodiment, the plurality of face images are preliminarily screened according to the camera ID to obtain a plurality of target paired face images, and the plurality of pedestrian images are preliminarily screened to obtain a plurality of target paired pedestrian images; the target paired face images and target paired pedestrian images are then paired one to one to obtain the plurality of target pairing groups, so that target images are captured and tracked synchronously and efficiency is improved.
Optionally, as an embodiment of the present invention, the process of performing preliminary screening processing on a plurality of face images according to the camera ID to obtain a plurality of target paired face images includes:
obtaining a plurality of cameras adjacent to each other according to the camera ID;
obtaining a plurality of face images adjacent to each other through a plurality of cameras adjacent to each other;
respectively obtaining a plurality of adjacent face snapshot times from the marks corresponding to each two adjacent face images;
respectively carrying out time interval calculation on the adjacent face snapshot time to obtain a plurality of face image time intervals;
carrying out distance calculation on the plurality of cameras which are adjacent to each other to obtain a plurality of camera intervals;
respectively performing a speed calculation for each corresponding face image time interval according to the camera spacings to obtain a plurality of face moving speeds;
comparing each face moving speed with a preset target moving speed; if the face moving speed is smaller than the preset target moving speed, a target paired face image is obtained; when all face moving speeds have been compared, a plurality of target paired face images are obtained;
the process of primarily screening the plurality of pedestrian images to obtain a plurality of target paired pedestrian images comprises the following steps:
obtaining a plurality of pedestrian images adjacent to each other through the plurality of cameras adjacent to each other;
respectively obtaining a plurality of adjacent pedestrian snapshot times from the marks corresponding to each two adjacent pedestrian images;
respectively carrying out time interval calculation on the snap-shot time of the adjacent pedestrians to obtain a plurality of pedestrian image time intervals;
respectively carrying out speed calculation on each corresponding pedestrian image time interval according to the distance between the cameras to obtain a plurality of pedestrian moving speeds;
and comparing each pedestrian moving speed with the preset target moving speed; if the pedestrian moving speed is smaller than the preset target moving speed, a target paired pedestrian image is obtained; when all pedestrian moving speeds have been compared, a plurality of target paired pedestrian images are obtained.
Specifically, the camera positions are fixed, so there is a fixed distance between any two adjacent cameras, and a time interval separates the capture of two face images or two pedestrian images. Dividing the fixed distance by the time interval gives the face or pedestrian moving speed, and pairs are then screened by whether that speed is smaller than the preset target moving speed, yielding the target paired face images or target paired pedestrian images.
In this embodiment, the target paired face images and target paired pedestrian images are obtained by judging the camera spacing against the face image and pedestrian image time intervals, which reduces the number of pairings in the next step and improves efficiency.
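The speed-based preliminary screening above can be sketched as follows; the camera spacing, capture times, and the 3 m/s stand-in for the preset target moving speed are illustrative assumptions, not values from the patent.

```python
def passes_screening(camera_distance_m, capture_t1_s, capture_t2_s, max_speed_m_s):
    # Fixed camera spacing divided by the capture time interval gives the
    # implied moving speed; a pair of captures survives screening only
    # when that speed is below the preset target moving speed.
    interval = abs(capture_t2_s - capture_t1_s)
    if interval == 0:
        return False  # same instant at two sites cannot be one walker
    return camera_distance_m / interval < max_speed_m_s

# 50 m spacing crossed in 40 s implies 1.25 m/s: a plausible walking pace.
kept = passes_screening(50.0, 0.0, 40.0, max_speed_m_s=3.0)
# 50 m spacing "crossed" in 5 s implies 10 m/s: rejected as a false pair.
rejected = passes_screening(50.0, 0.0, 5.0, max_speed_m_s=3.0)
```

Pairs that fail this physical-plausibility check never reach the feature-matching stage, which is what cuts the pairing workload.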
Optionally, as an embodiment of the present invention, the step of performing one-to-one pairing processing on the multiple target paired face images and the multiple target paired pedestrian images to obtain multiple target paired groups includes:
training a face recognition model, and extracting face features from the plurality of target paired face images with the trained face recognition model to obtain a plurality of face features to be compared;
computing, with the cosine distance formula, the similarity between each face feature to be compared and the face features in a preset feature library to obtain a plurality of face similarities to be compared;
training a pedestrian re-recognition model, and extracting pedestrian features from the plurality of target paired pedestrian images with the trained pedestrian re-recognition model to obtain a plurality of pedestrian features to be compared;
computing, with the cosine distance formula, the similarity between each pedestrian feature to be compared and the pedestrian features in the preset feature library to obtain a plurality of pedestrian similarities to be compared;
comparing each face similarity to be compared with a preset similarity threshold; if it is greater than the preset similarity, the corresponding target paired face feature is obtained; when all face similarities to be compared have been compared, a plurality of corresponding target paired face features are obtained;
comparing each pedestrian similarity to be compared with the threshold; if it is greater than the preset similarity, the corresponding target paired pedestrian feature is obtained; when all pedestrian similarities to be compared have been compared, a plurality of corresponding target paired pedestrian features are obtained;
and carrying out one-to-one pairing processing on the target paired face features and the target paired pedestrian features to obtain a plurality of target paired groups, wherein each target paired group comprises a target paired face feature and a target paired pedestrian feature.
It should be understood that the face features to be compared and the pedestrian features to be compared are extracted with the trained face recognition model and the trained pedestrian re-recognition model respectively; they are then compared pairwise and the cosine similarity between features is computed; when the face similarity or pedestrian similarity to be compared exceeds the set threshold, the target paired face image corresponding to the target paired face feature and the target paired pedestrian image corresponding to the target paired pedestrian feature are placed in the same group.
Specifically, cosine similarity is an existing mainstream feature matching method, calculated with the cosine distance formula, that is, the sixth formula:

cos(A, B) = (A · B) / (‖A‖ ‖B‖),

wherein A is the target feature to be recognized and B is a feature in the comparison library.
In the above embodiment, target pairing groups are formed for each pair of face features and pedestrian features to be compared whose face similarity or pedestrian similarity exceeds the preset similarity, so that capture and tracking of target images proceed synchronously and efficiency is improved.
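A minimal sketch of the similarity comparison used for grouping, following the sixth formula above. The threshold value of 0.8 is an illustrative stand-in for the preset similarity, which the text does not fix.

```python
import math

def cosine_similarity(a, b):
    # The sixth formula: cos(A, B) = (A . B) / (|A| * |B|).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

SIMILARITY_THRESHOLD = 0.8  # illustrative stand-in for the preset similarity

def same_target(feature_a, feature_b):
    # Two captures are placed in the same group when their feature
    # similarity exceeds the preset threshold, as described above.
    return cosine_similarity(feature_a, feature_b) > SIMILARITY_THRESHOLD

match = same_target([0.6, 0.8, 0.0], [0.6, 0.8, 0.1])
```

The same function serves both the face similarities and the pedestrian similarities to be compared, since both steps use the cosine distance formula.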
Optionally, as an embodiment of the present invention, the process of obtaining a plurality of matching results includes:
respectively performing a series (concatenation) calculation on each target paired face feature and its paired target paired pedestrian feature through a first formula to obtain a plurality of target face-pedestrian fusion features, wherein the first formula is:

F_m = f_m ⊙ p_m,

wherein F_m is the m-th target face-pedestrian fusion feature, f_m is the m-th target paired face feature, p_m is the m-th target paired pedestrian feature, and ⊙ denotes the feature concatenation operation;
respectively performing a series (concatenation) calculation on each face feature in the feature library and the corresponding pedestrian feature in the feature library through a second formula to obtain a plurality of feature-library face-pedestrian fusion features, wherein the second formula is:

F_{j,l} = f_{j,l} ⊙ p_{j,l},

wherein F_{j,l} is the j-th face-pedestrian fusion feature in the feature library, f_{j,l} is the j-th face feature in the feature library, p_{j,l} is the j-th pedestrian feature in the feature library, and ⊙ denotes the feature concatenation operation;
respectively performing a matching calculation between each target face-pedestrian fusion feature and every feature-library face-pedestrian fusion feature through a third formula to obtain a plurality of matching results, wherein the third formula is:

Score_m = [cos(F_m, F_{1,l}), cos(F_m, F_{2,l}), …, cos(F_m, F_{N,l})],

wherein Score_m is the m-th matching result, cos(F_m, F_{j,l}) is the cosine similarity between the m-th target face-pedestrian fusion feature and the j-th feature-library face-pedestrian fusion feature, and N is the number of feature-library face-pedestrian fusion features.
It should be understood that each target paired face feature and its paired target paired pedestrian feature are spliced end to end to obtain the target face-pedestrian fusion feature; the cosine distance is used to compute the similarity between this fusion feature and each face-pedestrian fusion feature in the feature library; and the similarities are ordered by index sequence in the library to form a similarity array, called the feature matching result or simply the matching result.
Specifically, concatenating the face features and pedestrian features yields fusion features that carry richer target identity information. Compared with mainstream multi-modal fusion recognition methods, such as iris-plus-face or fingerprint-plus-iris fusion recognition, this method requires no cooperation from the target to collect biometric features and thus truly achieves non-cooperative dynamic identity recognition.
In the above embodiment, matching each target face-pedestrian fusion feature against every feature-library face-pedestrian fusion feature yields the plurality of matching results, providing richer target identity information, improving the accuracy and reliability of dynamic identity recognition, and truly achieving non-cooperative dynamic identity recognition without requiring the target's cooperation to collect biometric features.
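The first, second, and third formulas can be sketched together as follows; the toy two-dimensional features and two-entry library are illustrative only.

```python
import math

def cosine(a, b):
    # cos(F_m, F_{j,l}) per the sixth formula.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def fuse(face_feature, pedestrian_feature):
    # First/second formulas: F = f concatenated in series with p.
    return list(face_feature) + list(pedestrian_feature)

def matching_result(target_fused, library_fused):
    # Third formula: Score_m is the array of cosine similarities against
    # every fused feature in the library, in library index order.
    return [cosine(target_fused, lib) for lib in library_fused]

library = [fuse([1.0, 0.0], [0.0, 1.0]),
           fuse([0.0, 1.0], [1.0, 0.0])]
target = fuse([1.0, 0.0], [0.0, 1.0])
score = matching_result(target, library)
```

Note that concatenation preserves both modalities' information in the fused vector, which is why the matching result reflects face and pedestrian evidence jointly.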
Optionally, as an embodiment of the present invention, the process of performing identification determination on a plurality of matching results to obtain a target identity identification result includes:
counting the number of the matching results to obtain the number of the matching results;
when the number of matching results is greater than or equal to 3, screening the matching results using the Euclidean distance to obtain 3 final matching results, calculating the 3 final matching results to obtain a first target identity recognition result, and taking it as the target identity recognition result;
when the number of matching results equals 2, taking them as 2 final matching results, calculating them to obtain a second target identity recognition result, and taking it as the target identity recognition result;
and when the number of matching results equals 1, obtaining a third target identity recognition result from the index of the maximum of that matching result, and taking it as the target identity recognition result.
Specifically, the Euclidean distance is used to select the 3 matching results with the highest confidence and discard those with lower confidence. As shown in fig. 2, the screening proceeds in four steps: first, the M matching results are divided, in a bubbling fashion, into K small groups of 3 matching results each; next, the matching results within each group are combined pairwise, giving 3 combinations; then the deviation of each combination is computed with the Euclidean distance and the 3 deviations are summed; finally, the deviation sums of the K groups are sorted, the group with the smallest deviation sum is selected as the 3 final matching results, and these are calculated to obtain the first target identity recognition result.
It should be understood that although fusing face and pedestrian features yields richer identity information, some target identity information is still lost in the fusion features, so a single group of fusion features still carries a considerable probability of misjudging the target identity.
In the embodiment, the number of the matching results is judged to obtain the target identity recognition result, so that the reliability and the recognition accuracy of the method are improved, and the misjudgment probability of the target identity is reduced.
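The four-step screening above can be sketched as follows. Grouping here uses simple consecutive slicing as a stand-in for the "bubbling" scheme, whose exact form the text does not fully specify; the matching-result arrays are illustrative.

```python
import math
from itertools import combinations

def euclidean(a, b):
    # Euclidean distance between two matching-result arrays.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_final_results(results):
    # Split the M matching results into groups of 3, sum the pairwise
    # Euclidean deviations within each group, and keep the group whose
    # deviation sum is smallest, i.e. the 3 most mutually consistent
    # (highest-confidence) matching results.
    groups = [results[i:i + 3] for i in range(0, len(results) - 2, 3)]
    return min(groups,
               key=lambda g: sum(euclidean(a, b)
                                 for a, b in combinations(g, 2)))

results = [
    [0.90, 0.10], [0.88, 0.12], [0.91, 0.09],  # mutually consistent
    [0.20, 0.80], [0.90, 0.10], [0.50, 0.50],  # scattered, low confidence
]
final3 = select_final_results(results)
```

Mutually consistent score arrays point at the same library entry, so minimizing the intra-group deviation sum is a proxy for picking the most trustworthy triplet.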
Optionally, as an embodiment of the present invention, the step of calculating the 3 final matching results to obtain the first target identification result includes:
calculating the 3 final matching results through a fourth formula to obtain the first target identity recognition result, wherein the fourth formula is:

ID_index = ArgMax(Score_1st + Score_2nd + Score_3rd),

wherein ID_index is the first target identity recognition result, Score_1st, Score_2nd and Score_3rd are the first, second and third final matching results, and ArgMax outputs the index of the maximum.
In the above embodiment, the first result of the target identity recognition is obtained by calculating the 3 final matching results according to the fourth formula, so that the reliability and the recognition accuracy of the method are improved, and the misjudgment probability of the target identity is reduced.
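A sketch of the fourth formula's decision step; the score arrays and the 4-entry library are illustrative. The same function covers the fifth formula when given only 2 final matching results.

```python
def decide_identity(final_scores):
    # Fourth formula: ID_index = ArgMax(Score_1st + Score_2nd + Score_3rd).
    # The final matching-result arrays are summed element-wise and the
    # index of the maximum summed similarity names the library entry.
    totals = [sum(vals) for vals in zip(*final_scores)]
    return max(range(len(totals)), key=totals.__getitem__)

# Three final matching results over a 4-entry library; entry 2 has the
# largest summed score, so it is output as the recognized identity.
id_index = decide_identity([[0.2, 0.5, 0.9, 0.1],
                            [0.3, 0.4, 0.8, 0.2],
                            [0.1, 0.6, 0.7, 0.3]])
```

Summing before taking the ArgMax lets an entry that consistently scores well beat one that spikes in a single matching result, which is the point of the fusion decision.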
Optionally, as an embodiment of the present invention, the process of calculating the 2 final matching results to obtain the second target identity recognition result includes:
calculating the 2 final matching results through a fifth formula to obtain the second target identity recognition result, wherein the fifth formula is:

ID_index = ArgMax(Score_1st + Score_2nd),

wherein ID_index is the second target identity recognition result, Score_1st and Score_2nd are the first and second final matching results, and ArgMax outputs the index of the maximum.
In the above embodiment, the second result of the target identity recognition is obtained by calculating the 2 final matching results according to the fifth formula, so that the reliability and the recognition accuracy of the method are improved, and the misjudgment probability of the target identity is reduced.
Optionally, as another embodiment of the present invention: first, a face recognition model and a pedestrian re-recognition model are trained, and the face features and pedestrian features to be paired are taken from the layer before the fully connected layer. Second, the spatio-temporal information of target activity is combined with the face and pedestrian features to be paired: when the activity pattern of the targets in two groups of captured surveillance images is consistent with reality and the cosine similarity of the face features or pedestrian features to be paired exceeds a preset threshold, the two captured images are judged to show the same target, and multiple groups of target paired face features and target paired pedestrian features of the same target are thereby screened out. Then, each group's target paired face feature and target paired pedestrian feature are concatenated in series to obtain a target face-pedestrian fusion feature, and the cosine distance is used to compute its similarity against the comparison library features, yielding a matching result. Finally, a fusion decision is made over the matching results of the multiple groups of fusion features, and the target identity result is output.
Specifically, a relatively advanced face recognition model and a relatively advanced pedestrian re-recognition model only need to be trained in advance on public face and pedestrian datasets, respectively, and are then used to extract the face features to be paired and the pedestrian features to be paired. Next, using the spatio-temporal information of target activity together with the face features to be paired and the pedestrian features to be paired, whether pedestrians belong to the same target is judged according to the pedestrian activity pattern and a feature-similarity threshold, realizing target tracking. Finally, the target identity result is output through a fusion decision over the matching results of multiple groups of fusion features. The invention only needs to train the face recognition model and the pedestrian re-recognition model once in advance, and no additional models need to be trained during operation; in a surveillance scenario, it can perform 24-hour target tracking and identity recognition for a community, is direct and easy to implement, and has low equipment cost.
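The feature fusion and library matching steps described above (concatenation followed by cosine-similarity comparison, matching the first and third formulas of claim 6) might look like the sketch below; the feature dimensions and the library contents are invented for illustration.

```python
import numpy as np

def fuse(face_feat, ped_feat):
    # Feature series (concatenation) operation: F_m = f_m ⊙ p_m
    return np.concatenate([face_feat, ped_feat])

def match_scores(fused, library):
    # Score_m = [cos(F_m, F_{1,l}), ..., cos(F_m, F_{N,l})]
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return [cos(fused, entry) for entry in library]

# Toy 2-D face and pedestrian features, and a 2-identity comparison library.
F = fuse(np.array([1.0, 0.0]), np.array([0.0, 1.0]))            # → [1, 0, 0, 1]
library = [fuse(np.array([1.0, 0.1]), np.array([0.1, 1.0])),    # identity 0
           fuse(np.array([0.0, 1.0]), np.array([1.0, 0.0]))]    # identity 1
scores = match_scores(F, library)  # identity 0 scores near 1, identity 1 near 0
```

Concatenating before matching means one comparison covers both modalities at once, so a target whose face is partly occluded can still be separated from the library by its pedestrian half of the fused feature.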
Fig. 3 is a block diagram of a dynamic identity recognition apparatus according to an embodiment of the present invention.
Optionally, as another embodiment of the present invention, as shown in fig. 3, a dynamic identity recognition apparatus includes:
the image acquisition module is used for respectively acquiring a plurality of face images to be processed and a plurality of pedestrian images to be processed from a plurality of monitoring cameras;
the marking processing module is used for respectively marking the plurality of face images to be processed and the plurality of pedestrian images to be processed to obtain a plurality of face images and a plurality of pedestrian images;
the image matching module is used for performing image matching processing on the plurality of face images and the plurality of pedestrian images to obtain a plurality of target matching groups;
the matching result analysis module is used for respectively analyzing the matching results of the target matching groups to obtain a plurality of matching results;
and the identification result obtaining module is used for carrying out identification judgment on the plurality of matching results to obtain a target identity identification result.
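The five modules listed above map onto a simple pipeline. The sketch below uses stub data structures and hypothetical names purely to show the data flow between the first two modules; the remaining three stages are only noted in comments.

```python
def acquire_images(cameras):
    """Image acquisition module: collect raw face and pedestrian snapshots
    from a list of per-camera records (structure assumed for illustration)."""
    faces = [shot for cam in cameras for shot in cam.get("faces", [])]
    peds = [shot for cam in cameras for shot in cam.get("pedestrians", [])]
    return faces, peds

def label(shots):
    """Marking module: tag each snapshot with camera ID + snapshot time."""
    return [{**s, "tag": f'{s["camera_id"]}_{s["time"]}'} for s in shots]

def pipeline(cameras):
    faces, peds = acquire_images(cameras)
    faces, peds = label(faces), label(peds)
    # Image matching, matching-result analysis, and the recognition decision
    # would follow here; they are detailed in claims 3-9 and omitted.
    return faces, peds

cams = [{"faces": [{"camera_id": "C1", "time": 100}],
         "pedestrians": [{"camera_id": "C1", "time": 101}]}]
faces, peds = pipeline(cams)
print(faces[0]["tag"])  # → C1_100
```

The camera-ID-plus-time tag is what the later screening stage uses to recover both the arrangement position of each camera and the snapshot interval.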
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A dynamic identity recognition method is characterized by comprising the following steps:
respectively obtaining a plurality of face images to be processed and a plurality of pedestrian images to be processed from a plurality of monitoring cameras;
respectively marking the plurality of face images to be processed and the plurality of pedestrian images to be processed to obtain a plurality of face images and a plurality of pedestrian images;
carrying out image matching processing on the plurality of face images and the plurality of pedestrian images to obtain a plurality of target matching groups;
respectively analyzing the matching results of the target matching groups to obtain a plurality of matching results;
and carrying out identification judgment on the plurality of matching results to obtain a target identity identification result.
2. The dynamic identity recognition method according to claim 1, wherein the process of labeling the plurality of face images to be processed and the plurality of pedestrian images to be processed respectively to obtain the plurality of face images and the plurality of pedestrian images comprises:
marking the plurality of face images to be processed in a mode of combining camera ID and snapshot time to obtain a plurality of face images;
and performing pedestrian bounding-box detection on the plurality of pedestrian images to be processed respectively by using the YOLO2 algorithm, and marking the detected pedestrian images in a mode of combining camera ID and snapshot time to obtain a plurality of pedestrian images, wherein the camera ID is used for determining the arrangement position of each camera.
3. The dynamic identity recognition method according to claim 2, wherein the process of performing image matching processing on the plurality of face images and the plurality of pedestrian images to obtain a plurality of target matching groups comprises:
performing primary screening processing on the face images according to the camera ID to obtain a plurality of target paired face images, and performing primary screening processing on the pedestrian images to obtain a plurality of target paired pedestrian images;
and carrying out one-to-one pairing processing on the target paired face images and the target paired pedestrian images to obtain a plurality of target paired groups.
4. The dynamic identity recognition method of claim 3, wherein the process of primarily screening the plurality of face images according to the camera ID to obtain a plurality of target paired face images comprises:
obtaining a plurality of cameras adjacent to each other according to the camera ID;
obtaining a plurality of face images adjacent to each other through a plurality of cameras adjacent to each other;
respectively obtaining a plurality of adjacent face snapshot times from the marks corresponding to each two adjacent face images;
respectively carrying out time interval calculation on the adjacent face snapshot time to obtain a plurality of face image time intervals;
carrying out distance calculation on the plurality of cameras which are adjacent to each other to obtain a plurality of camera intervals;
respectively carrying out speed calculation on each corresponding human face image time interval according to the intervals of the plurality of cameras to obtain a plurality of human face moving speeds;
comparing each face moving speed with a preset target moving speed; if a face moving speed is smaller than the preset target moving speed, taking the corresponding images as target paired face images, until all face moving speeds are compared, thereby obtaining a plurality of target paired face images;
the process of primarily screening the plurality of pedestrian images to obtain a plurality of target paired pedestrian images comprises the following steps:
obtaining a plurality of pedestrian images adjacent to each other through the plurality of cameras adjacent to each other;
respectively obtaining a plurality of adjacent pedestrian snapshot times from the marks corresponding to each two adjacent pedestrian images;
respectively carrying out time interval calculation on the snap-shot time of the adjacent pedestrians to obtain a plurality of pedestrian image time intervals;
respectively carrying out speed calculation on each corresponding pedestrian image time interval according to the distance between the cameras to obtain a plurality of pedestrian moving speeds;
and comparing each pedestrian moving speed with the preset target moving speed; if a pedestrian moving speed is smaller than the preset target moving speed, taking the corresponding images as target paired pedestrian images, until all pedestrian moving speeds are compared, thereby obtaining a plurality of target paired pedestrian images.
5. The dynamic identity recognition method of claim 4, wherein the process of performing one-to-one pairing on the target paired face images and the target paired pedestrian images to obtain a plurality of target paired groups comprises:
training a face recognition model, and respectively extracting face features of the plurality of target paired face images according to the trained face recognition model to obtain a plurality of face features to be compared;
calculating the face features to be compared and the face features in a preset feature library by utilizing a cosine distance formula to obtain a plurality of face similarity degrees to be compared;
training a pedestrian re-recognition model, and respectively extracting pedestrian features of the plurality of target paired pedestrian images according to the trained pedestrian re-recognition model to obtain a plurality of pedestrian features to be compared;
calculating the pedestrian characteristics to be compared and the pedestrian characteristics in a preset characteristic library by utilizing a cosine distance formula to obtain the similarity of the pedestrians to be compared;
comparing each face similarity to be compared with a preset similarity threshold; if a face similarity is greater than the preset similarity threshold, obtaining the corresponding target paired face feature, until all face similarities to be compared are compared, thereby obtaining a plurality of corresponding target paired face features;
comparing each pedestrian similarity to be compared with the preset similarity threshold; if a pedestrian similarity is greater than the preset similarity threshold, obtaining the corresponding target paired pedestrian feature, until all pedestrian similarities to be compared are compared, thereby obtaining a plurality of corresponding target paired pedestrian features;
and carrying out one-to-one pairing processing on the target paired face features and the target paired pedestrian features to obtain a plurality of target paired groups, wherein each target paired group comprises a target paired face feature and a target paired pedestrian feature.
6. The dynamic identity recognition method of claim 5, wherein the process of respectively analyzing the matching results of the plurality of target matching groups to obtain a plurality of matching results comprises:
respectively carrying out series calculation on each target paired face feature and the paired target paired pedestrian feature through a first formula to obtain a plurality of target face pedestrian fusion features, wherein the first formula is as follows:
F_m = f_m ⊙ p_m,
wherein F_m is the mth target face-pedestrian fusion feature, f_m is the mth target paired face feature, p_m is the mth target paired pedestrian feature, and ⊙ denotes the feature series (concatenation) operation;
respectively performing series calculation on each face feature in the feature library and its corresponding pedestrian feature in the feature library through a second formula to obtain a plurality of feature-library face-pedestrian fusion features, wherein the second formula is as follows:
F_{j,l} = f_{j,l} ⊙ p_{j,l},
wherein F_{j,l} is the jth feature-library face-pedestrian fusion feature, f_{j,l} is the jth face feature in the feature library, p_{j,l} is the jth pedestrian feature in the feature library, and ⊙ denotes the feature series (concatenation) operation;
respectively carrying out matching calculation on each target face pedestrian fusion feature and each feature library face pedestrian fusion feature through a third formula to obtain a plurality of matching results, wherein the third formula is as follows:
Score_m = [cos(F_m, F_{1,l}), cos(F_m, F_{2,l}), …, cos(F_m, F_{N,l})],
wherein Score_m is the mth matching result, cos(F_m, F_{j,l}) is the cosine similarity between the mth target face-pedestrian fusion feature and the jth feature-library face-pedestrian fusion feature, and N is the number of feature-library face-pedestrian fusion features.
7. The dynamic identity recognition method according to any one of claims 1 to 6, wherein the process of performing recognition decision on the plurality of matching results to obtain the target identity recognition result comprises:
counting the matching results to obtain the number of matching results;
when the number of matching results is greater than or equal to 3, screening the matching results by using the Euclidean distance to obtain 3 final matching results, calculating the 3 final matching results to obtain a first target identity recognition result, and taking the first target identity recognition result as the target identity recognition result;
when the number of matching results is equal to 2, taking them as 2 final matching results, calculating the 2 final matching results to obtain a second target identity recognition result, and taking the second target identity recognition result as the target identity recognition result;
and when the number of matching results is equal to 1, obtaining a third target identity recognition result according to the index of the maximum element of the matching result, and taking the third target identity recognition result as the target identity recognition result.
8. The dynamic identity recognition method of claim 7, wherein the step of calculating the 3 final matching results to obtain the first target identity recognition result comprises:
calculating the 3 final matching results by a fourth formula to obtain a first target identity recognition result, wherein the fourth formula is as follows:
ID_index = ArgMax(Score_1st + Score_2nd + Score_3rd),
wherein ID_index is the first target identity recognition result, Score_1st is the first final matching result, Score_2nd is the second final matching result, Score_3rd is the third final matching result, and ArgMax outputs the index of the maximum element.
9. The dynamic identity recognition method of claim 7, wherein the step of calculating the 2 final matching results to obtain the second target identity recognition result comprises:
calculating the 2 final matching results through a fifth formula to obtain a second target identity recognition result, wherein the fifth formula is as follows:
ID_index = ArgMax(Score_1st + Score_2nd),
wherein ID_index is the second target identity recognition result, Score_1st is the first final matching result, Score_2nd is the second final matching result, and ArgMax outputs the index of the maximum element.
10. A dynamic identity recognition apparatus, characterized by comprising:
the image acquisition module is used for respectively acquiring a plurality of face images to be processed and a plurality of pedestrian images to be processed from a plurality of monitoring cameras;
the marking processing module is used for respectively marking the plurality of face images to be processed and the plurality of pedestrian images to be processed to obtain a plurality of face images and a plurality of pedestrian images;
the image matching module is used for performing image matching processing on the plurality of face images and the plurality of pedestrian images to obtain a plurality of target matching groups;
the matching result analysis module is used for respectively analyzing the matching results of the target matching groups to obtain a plurality of matching results;
and the identification result obtaining module is used for carrying out identification judgment on the plurality of matching results to obtain a target identity identification result.
CN202010680092.8A 2020-07-15 2020-07-15 Dynamic identity recognition method and device Active CN111968152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010680092.8A CN111968152B (en) 2020-07-15 2020-07-15 Dynamic identity recognition method and device

Publications (2)

Publication Number Publication Date
CN111968152A true CN111968152A (en) 2020-11-20
CN111968152B CN111968152B (en) 2023-10-17

Family

ID=73360850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010680092.8A Active CN111968152B (en) 2020-07-15 2020-07-15 Dynamic identity recognition method and device

Country Status (1)

Country Link
CN (1) CN111968152B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364825A (en) * 2020-11-30 2021-02-12 支付宝(杭州)信息技术有限公司 Method, apparatus and computer-readable storage medium for face recognition
CN112560621A (en) * 2020-12-08 2021-03-26 北京大学 Identification method, device, terminal and medium based on animal image
CN112597850A (en) * 2020-12-15 2021-04-02 浙江大华技术股份有限公司 Identity recognition method and device
CN112699843A (en) * 2021-01-13 2021-04-23 上海云思智慧信息技术有限公司 Identity recognition method and system
CN113436229A (en) * 2021-08-26 2021-09-24 深圳市金大智能创新科技有限公司 Multi-target cross-camera pedestrian trajectory path generation method

Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1696393A2 (en) * 2005-02-28 2006-08-30 Kabushiki Kaisha Toshiba Face authenticating apparatus and entrance and exit management apparatus
CN102332093A (en) * 2011-09-19 2012-01-25 汉王科技股份有限公司 Identity authentication method and device adopting palmprint and human face fusion recognition
WO2013131407A1 (en) * 2012-03-08 2013-09-12 无锡中科奥森科技有限公司 Double verification face anti-counterfeiting method and device
CN105912912A (en) * 2016-05-11 2016-08-31 青岛海信电器股份有限公司 Method and system for user to log in terminal by virtue of identity information
JP2016218873A (en) * 2015-05-25 2016-12-22 マツダ株式会社 Vehicle-purposed pedestrian image acquisition device
CN106503687A (en) * 2016-11-09 2017-03-15 合肥工业大学 The monitor video system for identifying figures of fusion face multi-angle feature and its method
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
CN107644204A (en) * 2017-09-12 2018-01-30 南京凌深信息科技有限公司 A kind of human bioequivalence and tracking for safety-protection system
WO2018040099A1 (en) * 2016-08-31 2018-03-08 深圳市唯特视科技有限公司 Three-dimensional face reconstruction method based on grayscale and depth information
US20180096595A1 (en) * 2016-10-04 2018-04-05 Street Simplified, LLC Traffic Control Systems and Methods
WO2018133666A1 (en) * 2017-01-17 2018-07-26 腾讯科技(深圳)有限公司 Method and apparatus for tracking video target
US20180330526A1 (en) * 2017-05-10 2018-11-15 Fotonation Limited Multi-camera vehicle vision system and method
CN108960209A (en) * 2018-08-09 2018-12-07 腾讯科技(深圳)有限公司 Personal identification method, device and computer readable storage medium
CN109344787A (en) * 2018-10-15 2019-02-15 浙江工业大学 A kind of specific objective tracking identified again based on recognition of face and pedestrian
CN109409297A (en) * 2018-10-30 2019-03-01 咪付(广西)网络技术有限公司 A kind of personal identification method based on binary channels convolutional neural networks
WO2019056988A1 (en) * 2017-09-25 2019-03-28 杭州海康威视数字技术股份有限公司 Face recognition method and apparatus, and computer device
CN109815874A (en) * 2019-01-17 2019-05-28 苏州科达科技股份有限公司 A kind of personnel identity recognition methods, device, equipment and readable storage medium storing program for executing
WO2019114580A1 (en) * 2017-12-13 2019-06-20 深圳励飞科技有限公司 Living body detection method, computer apparatus and computer-readable storage medium
CN109919093A (en) * 2019-03-07 2019-06-21 苏州科达科技股份有限公司 A kind of face identification method, device, equipment and readable storage medium storing program for executing
CN109934176A (en) * 2019-03-15 2019-06-25 艾特城信息科技有限公司 Pedestrian's identifying system, recognition methods and computer readable storage medium
WO2019119505A1 (en) * 2017-12-18 2019-06-27 深圳云天励飞技术有限公司 Face recognition method and device, computer device and storage medium
CN109961031A (en) * 2019-01-25 2019-07-02 深圳市星火电子工程公司 Face fusion identifies identification, target person information display method, early warning supervision method and system
CN110188658A (en) * 2019-05-27 2019-08-30 Oppo广东移动通信有限公司 Personal identification method, device, electronic equipment and storage medium
CN110263697A (en) * 2019-06-17 2019-09-20 哈尔滨工业大学(深圳) Pedestrian based on unsupervised learning recognition methods, device and medium again
CN110298249A (en) * 2019-05-29 2019-10-01 平安科技(深圳)有限公司 Face identification method, device, terminal and storage medium
WO2019205369A1 (en) * 2018-04-28 2019-10-31 平安科技(深圳)有限公司 Electronic device, identity recognition method based on human face image and voiceprint information, and storage medium
WO2019228317A1 (en) * 2018-05-28 2019-12-05 华为技术有限公司 Face recognition method and device, and computer readable medium
WO2020029921A1 (en) * 2018-08-07 2020-02-13 华为技术有限公司 Monitoring method and device
CN110796072A (en) * 2019-10-28 2020-02-14 桂林电子科技大学 Target tracking and identity recognition method based on double-task learning
WO2020037937A1 (en) * 2018-08-20 2020-02-27 深圳壹账通智能科技有限公司 Facial recognition method and apparatus, terminal, and computer readable storage medium
CN111160175A (en) * 2019-12-19 2020-05-15 中科寒武纪科技股份有限公司 Intelligent pedestrian violation behavior management method and related product
CN111242097A (en) * 2020-02-27 2020-06-05 腾讯科技(深圳)有限公司 Face recognition method and device, computer readable medium and electronic equipment
CN111368608A (en) * 2018-12-26 2020-07-03 杭州海康威视数字技术股份有限公司 Face recognition method, device and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAN, HUA等: "KISS+ for rapid and accurate pedestrian re-identification", 《IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS》, vol. 22, no. 1, pages 394 - 403, XP011828535, DOI: 10.1109/TITS.2019.2958741 *
ZHOU QIAN: "Research on Gait Recognition Algorithm Based on Feature Fusion", China Master's Theses Full-text Database, Information Science and Technology, no. 9, pages 138-1040 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant