CN114038048A - Identity type recognition system - Google Patents
- Publication number
- CN114038048A CN114038048A CN202111455798.5A CN202111455798A CN114038048A CN 114038048 A CN114038048 A CN 114038048A CN 202111455798 A CN202111455798 A CN 202111455798A CN 114038048 A CN114038048 A CN 114038048A
- Authority
- CN
- China
- Prior art keywords
- feature
- identity
- user
- unit
- behavior
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The application provides an identity type recognition system comprising an image acquisition device, an image analysis unit, a feature determination unit and an identity prediction unit. The image acquisition device acquires image information to be processed; the image analysis unit performs feature extraction on the image information to obtain the user features of a target user; the feature determination unit determines identity features from the user features; and the identity prediction unit determines the identity type of the target user from the identity features and the user features. The system therefore recognizes a user's identity type automatically, without the manual identification required in the prior art, avoiding the subjectivity, one-sidedness and limitations of evaluation based on personal experience as well as the identification errors caused by human negligence, which improves the efficiency of identity type recognition and thereby the user experience.
Description
Technical Field
The application relates to the field of artificial intelligence, in particular to an identity type identification system.
Background
In social governance scenarios there is no good method for identifying the identity type of people appearing in a district: when a person appears, existing systems can detect the person and check whether he or she is already registered in the base library, but for a stranger they cannot tell whether the person is a takeaway courier, a district resident, or an employee of a local enterprise or public institution.
The existing approach relies mainly on manual identification and inference from experience, and cannot label identities automatically. Manually identifying a stranger's identity is therefore time-consuming, labor-intensive and inefficient, and identification errors caused by human negligence easily occur during manual identification. A solution for identifying identity types is therefore needed.
Disclosure of Invention
The application provides an identity type recognition system that improves the efficiency of user identity type recognition and thereby improves the user experience.
The application provides an identity type recognition system, the system comprising: an image acquisition device, an image analysis unit, a feature determination unit and an identity prediction unit;
the image acquisition equipment is used for acquiring image information to be processed;
the image analysis unit is used for extracting the characteristics of the image information to be processed to obtain the user characteristics of the target user;
the characteristic determining unit is used for determining the identity characteristic of the target user according to the user characteristic;
and the identity prediction unit is used for determining the identity type of the target user according to the identity characteristics and the user characteristics.
Optionally, the image information to be processed includes an image and/or a video, and an acquisition position and an acquisition time corresponding to the image and/or the video.
Optionally, the image analysis unit includes a trained user feature recognition model;
and the image analysis unit is used for inputting the image information to be processed into the user characteristic identification model to obtain the user characteristics of the image information to be processed.
Optionally, the user characteristics include a face image, a face characteristic, a face attribute, a body image, a body characteristic, a body attribute, a behavior characteristic, and a position attribute.
Optionally, the identity features include target behaviors, action tracks, human faces and human body feature sets; the characteristic determining unit comprises a behavior recognizing unit, a clustering unit and a track determining unit;
the behavior identification unit is used for carrying out behavior identification according to the behavior action characteristics to obtain a target behavior;
the clustering unit is used for carrying out clustering processing according to the human face characteristics and the human body characteristics to obtain a human face and human body characteristic set corresponding to a target user;
and the track determining unit is used for determining the action track of the target user according to the human face and human body characteristic set corresponding to the target user and the position attribute, the acquisition position and the acquisition time respectively corresponding to each human face characteristic and each human body characteristic in the human face and human body characteristic set.
Optionally, the behavior identification unit is configured to determine a target behavior corresponding to the behavior action feature according to the behavior action feature and the position attribute, acquisition position and acquisition time corresponding to the behavior action feature; and/or,
the behavior recognition unit comprises a trained behavior recognition model; and the behavior recognition unit is used for inputting the behavior characteristics into the behavior recognition model to obtain the target behavior corresponding to the behavior action characteristics.
Optionally, the clustering unit is configured to determine similarity between the human face features and the human body features and features in each historical feature set respectively; and determining a human face feature set corresponding to the target user according to the similarity between the human face feature and the human body feature and the features in the historical feature sets.
Optionally, the clustering unit is specifically configured to determine fusion features corresponding to the at least two historical feature sets if similarities between the human face features and/or the human body features and features in the at least two historical feature sets are greater than a preset threshold; performing gain correction on the fusion features corresponding to the at least two historical feature sets respectively to obtain corrected fusion features corresponding to the at least two historical feature sets respectively; and determining the similarity between the human face features and/or the human body features and each corrected fusion feature, and taking the historical feature set corresponding to the corrected fusion feature with the maximum similarity as the human face feature set corresponding to the target user.
Optionally, the identity prediction unit includes a random forest model, where the random forest model includes a plurality of decision tree models, and the corresponding dimension weights of the decision tree models are different;
the identity prediction unit is used for respectively inputting the identity characteristics and the user characteristics into each decision tree model to obtain the identity prediction type of the target user output by each decision tree model;
and determining the identity type of the target user according to the identity prediction type of the target user output by each decision tree model.
Optionally, the system further includes a storage unit, configured to store the to-be-processed image information, the user characteristic, the identity characteristic, and the identity type of the target user.
It can be seen from the above technical solutions that the present application provides an identity type recognition system comprising an image acquisition device, an image analysis unit, a feature determination unit and an identity prediction unit. The image acquisition device acquires image information to be processed; the image analysis unit performs feature extraction on that image information to obtain the user features of a target user; the feature determination unit determines identity features from the user features; and the identity prediction unit determines the identity type of the target user from the identity features and the user features. Thus, in this embodiment, after the image acquisition device acquires the image information to be processed, the image analysis unit, feature determination unit and identity prediction unit can process it to obtain the identity type of the target user appearing in it. The identity type of a user can therefore be recognized automatically by the identity type recognition system, without the manual identification required in the prior art, avoiding the subjectivity, one-sidedness and limitations of evaluation based on personal experience as well as the identification errors caused by human negligence, which improves the efficiency of identity type recognition and thereby the user experience.
Further effects of the above optional implementations are described below in conjunction with specific embodiments.
Drawings
In order to illustrate the embodiments of the present application or the prior art solutions more clearly, the drawings needed to describe the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an identity type recognition system according to an embodiment of the present application;
fig. 2 is a schematic diagram of a fusion process of a file fusion subunit according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following embodiments and accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The inventor has found that in social governance scenarios there is no good method for identifying the identity type of people appearing in a district: when a person appears, existing systems can detect the person and check whether he or she is registered in the base library, but for a stranger they cannot tell whether the person is a takeaway courier, a district resident, or an employee of a local enterprise or public institution. The existing approach relies mainly on manual identification and inference from experience, and cannot label identities automatically. Manually identifying a stranger's identity is therefore time-consuming, labor-intensive and inefficient, and identification errors caused by human negligence easily occur during manual identification. A solution for identifying identity types is therefore needed.
Accordingly, the present application provides an identity type recognition system comprising an image acquisition device, an image analysis unit, a feature determination unit and an identity prediction unit. The image acquisition device acquires image information to be processed; the image analysis unit performs feature extraction on that image information to obtain the user features of a target user; the feature determination unit determines identity features from the user features; and the identity prediction unit determines the identity type of the target user from the identity features and the user features. In this way, after the image acquisition device acquires the image information to be processed, the image analysis unit, feature determination unit and identity prediction unit can process it to obtain the identity type of the target user appearing in it, so the identity type of a user can be recognized automatically, without the manual identification required in the prior art, avoiding the subjectivity, one-sidedness and limitations of evaluation based on personal experience as well as the identification errors caused by human negligence, which improves the efficiency of identity type recognition and thereby the user experience.
Various non-limiting embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present application provides an identity type recognition system that includes an image acquisition device, an image analysis unit, a feature determination unit and an identity prediction unit. The image acquisition device is connected to the image analysis unit, the image analysis unit is connected to the feature determination unit and to the identity prediction unit, and the feature determination unit is connected to the identity prediction unit. It should be noted that the image acquisition device, image analysis unit, feature determination unit and identity prediction unit may all be deployed on a single terminal, may each be a separate terminal, or some of them may be deployed on one terminal.
In this embodiment, the image acquisition device may be configured to acquire the image information to be processed. The device may capture only images, only video, or both, i.e., the image information to be processed may include images and/or video together with the corresponding acquisition position and acquisition time (the position and time at which the images and/or video were captured). In one implementation, the image acquisition devices may be the cameras and snapshot devices deployed at various points in the jurisdiction, which supply video and images of faces, bodies and behaviors as input.
In this embodiment, the image analysis unit may be configured to perform feature extraction on the to-be-processed image information to obtain a user feature of a target user.
In one implementation, the image analysis unit may include a trained user feature recognition model. The image analysis unit then inputs the image information to be processed into the user feature recognition model to obtain the user features of that image information. It should be emphasized that the user feature recognition model may be trained on training sample images and the user features of the users in those images. The user features may include the face image, face features, face attributes, body image, body features, body attributes, behavior action features and position attributes.
Specifically, after the image analysis unit acquires the image information to be processed, it may first obtain the face image, the body image and the panoramic image (for example, the background area) of the target user. It can then determine the face features and face attributes of the target user from the face image, determine the body features, body attributes and behavior action features from the body image, and determine the corresponding position attributes of the target user from the panoramic image.
The face image can be understood as a face image area of a user in the image information to be processed.
Face features characterize the face in the face image. They are usually numerical vectors of 128 to 512 dimensions produced by a face recognition deep learning model at inference time, and they distinguish one person's face from other faces: the cosine similarity between face features of several images of the same person is significantly higher than that between different people. A threshold is usually set, and face features whose cosine similarity exceeds the threshold are regarded as belonging to the same person.
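As an illustration of this thresholding, the sketch below compares face feature vectors by cosine similarity. The 512-dimensional vectors and the 0.6 threshold are assumptions for the example, not values specified in the application.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(feat_a: np.ndarray, feat_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Treat two face features as the same person when their cosine
    similarity exceeds the threshold (0.6 is an assumed value)."""
    return cosine_similarity(feat_a, feat_b) > threshold

# Example with random 512-dimensional features standing in for model output.
rng = np.random.default_rng(0)
f1 = rng.standard_normal(512)
f2 = f1 + 0.1 * rng.standard_normal(512)   # a noisy copy, likely "same person"
f3 = rng.standard_normal(512)              # an unrelated feature
print(same_person(f1, f2), same_person(f1, f3))
```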
The face attributes can be understood as face information of the target user corresponding to the face image, and may include at least one of age, gender, attractiveness, whether glasses are worn, whether sunglasses are worn, whether the person is smiling, whether a mask is worn, ethnicity, whether the eyes are open, whether the mouth is open, whether the person has a beard, hairstyle, and the like. It should be emphasized that an exact age value is difficult to estimate accurately because of differences in lighting, angle and expression, so the age can instead be divided empirically into intervals, for example 15 and below, 16-25, 26-35, 36-45, 46-55, and 56 and above, corresponding to age groups from minors through young and middle-aged adults to the elderly. The age-group segmentation can use the output of the face attribute recognition model directly, without manual sorting, which increases the randomness of the model samples and reduces overfitting of the final identity recognition model.
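A minimal sketch of this age-interval segmentation, using the boundaries 15/25/35/45/55 given above; the interval labels are purely illustrative.

```python
def age_group(age: int) -> str:
    """Map an estimated age to one of the empirical intervals from the text."""
    bounds = [(15, "<=15"), (25, "16-25"), (35, "26-35"),
              (45, "36-45"), (55, "46-55")]
    for upper, label in bounds:
        if age <= upper:
            return label
    return ">=56"

print([age_group(a) for a in (12, 19, 40, 70)])  # ['<=15', '16-25', '36-45', '>=56']
```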
The human body image can be understood as the body image region of a user in the image information to be processed (i.e., the region of the limbs and torso below the head).
Human body features characterize the body in the body image. They are used for identifying a person over a short time span and are also known as ReID (cross-camera person re-identification) features. They are usually feature vectors of 512 to 1024 dimensions whose matching degree is used to match person images, so that within a continuous period of time the front and back views of the same person can be matched by the feature vector; long-term matching across clothing changes and the like is not supported.
The human body attributes can be understood as body information of the target user corresponding to the human body image. They generally include jacket type, jacket color, trousers type, trousers color, whether the person carries a shoulder bag, a backpack or a handbag, whether the person is holding an open umbrella or carrying an umbrella, and so on.
A behavior action feature can be understood as an action or behavioral state of the target user, for example sitting, standing, lying, squatting, walking or running.
The image analysis unit can process multiple video streams and images, each carrying camera position information (the acquisition position) and a corresponding time (the acquisition time), and can attach a spatio-temporal identifier to every captured face, i.e., determine the captured face information with the spatio-temporal identifier corresponding to the target user. Specifically, one snapshot face record for the target user may contain the face image, the panoramic image, the camera ID, the camera name, the longitude and latitude, the face features, the time the face appeared, the cluster archive ID, and so on, together forming a record with a spatio-temporal identifier.
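Such a record could be represented by a simple data structure. The field names below are hypothetical and chosen only to mirror the items listed above; they are not a schema from the application.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class FaceSnapshot:
    """One captured face record carrying its spatio-temporal identifier."""
    face_image: bytes                 # cropped face image
    panorama_image: bytes             # full-scene image
    camera_id: str                    # acquisition device ID
    camera_name: str
    longitude: float                  # acquisition position
    latitude: float
    face_feature: List[float]         # e.g. a 128- to 512-dimensional vector
    captured_at: datetime             # acquisition time
    archive_id: Optional[str] = None  # cluster archive ID, filled in after clustering
```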
In this embodiment, the face features and attributes can be extracted with an existing face recognition convolutional neural network (CNN). The system is not limited to a particular network and can use face recognition models common in the industry, for example ResNeXt.
It should be emphasized that, in this embodiment, the body attributes may further include identification of takeaway and express-delivery uniforms and helmets, such as the uniforms and helmets of JD.com, Meituan and Ele.me. The behavior action features can also identify whether each human body target detected in a picture (i.e., the body of a target user) is sitting, standing, lying, walking, squatting or running, using a pose recognition algorithm. For pose recognition, the image analysis unit requires, in the training stage, that the key joint points of the human body be annotated and that deep learning models for human body detection and joint point localization be trained; when recognizing the pose in an image, the human body is first detected and the coordinates of the key joint points are then regressed to obtain the skeleton of the target object, from which its posture is judged. A simple geometric sketch of this last step appears below.
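As an illustration only, the following sketch judges a coarse posture from regressed joint-point coordinates. It assumes a COCO-style 17-keypoint layout and simple geometric rules; the keypoint indices and thresholds are assumptions, not part of the application.

```python
import numpy as np

# COCO-style keypoint indices (an assumption; the application does not define a layout).
SHOULDERS, HIPS, KNEES = (5, 6), (11, 12), (13, 14)

def posture(kpts: np.ndarray) -> str:
    """Coarse posture from a (17, 2) array of (x, y) joint coordinates in image space."""
    width = kpts[:, 0].ptp()                   # horizontal extent of the skeleton
    height = kpts[:, 1].ptp()                  # vertical extent of the skeleton
    if width > 1.3 * height:
        return "lying"                         # body extends mostly horizontally
    shoulder_y = kpts[list(SHOULDERS), 1].mean()
    hip_y = kpts[list(HIPS), 1].mean()
    knee_y = kpts[list(KNEES), 1].mean()
    torso = hip_y - shoulder_y                 # vertical torso length (image y grows downward)
    if (knee_y - hip_y) < 0.5 * torso:
        return "sitting_or_squatting"          # thighs nearly horizontal
    return "standing_or_walking"
```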
In this embodiment, the feature determining unit may be configured to determine the identity feature of the target user according to the user feature. In this embodiment, the identity feature of the user may be understood as a feature that can be used to reflect the identity type of the user, for example, the identity feature may include a target behavior, an action track, a human face, and a human body feature set.
In an implementation manner of this embodiment, the feature determination unit may include a behavior recognition unit, a clustering unit, and a trajectory determination unit.
The behavior recognition unit may be configured to perform behavior recognition on the behavior action features to obtain a target behavior. The target behavior can be understood as a continuous behavior performed by the target user over a period of time, such as leaving the camera's field of view, travelling in a straight line, travelling in a serpentine path, standing still, loitering, or travelling at a particular speed.
In an implementation of this embodiment, the behavior recognition unit may determine the target behavior corresponding to a behavior action feature from that feature together with its position attribute, acquisition position and acquisition time. That is, the behavior recognition unit can perform time-dependent behavior recognition from the motion characteristics of people in the video, for example recognizing loitering, slow walking, fast walking, standing, lying or sitting from the elapsed time and the distance travelled. The behavior recognition unit can also track a target user recognized in scene snapshots from the same camera and perform behavior recognition from the target's local movement track (determined from the position attribute, acquisition position and acquisition time), movement direction and movement speed; it only needs to judge whether the target user moves in a serpentine or straight path, moves back and forth, loiters or stands. For performance reasons in this application scenario, fine-grained gesture and action recognition is not required; only the direction and speed need to be analyzed. A rule-based sketch of such coarse track classification is shown below.
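The following is a minimal, hypothetical sketch of this coarse rule-based classification: it derives a behavior label from a short local track using elapsed time and distance travelled. The speed and displacement thresholds (in meters and seconds) are assumptions for illustration, not values given in the application.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float, float]  # (x, y, timestamp in seconds), positions in meters

def classify_track(track: List[Point]) -> str:
    """Coarse behavior label for a short local track of one target."""
    if len(track) < 2:
        return "unknown"
    path = sum(math.dist(a[:2], b[:2]) for a, b in zip(track, track[1:]))
    displacement = math.dist(track[0][:2], track[-1][:2])
    duration = track[-1][2] - track[0][2]
    speed = path / duration if duration > 0 else 0.0

    if speed < 0.2:                        # barely moving over the whole interval
        return "standing"
    if displacement < 0.3 * path:          # much walking but little net progress
        return "loitering_or_back_and_forth"
    if speed < 1.0:
        return "slow_walking"
    if speed < 2.5:
        return "fast_walking"
    return "running"

track = [(0, 0, 0.0), (2, 0, 2.0), (0, 0, 4.0), (2, 0, 6.0), (0.3, 0, 8.0)]
print(classify_track(track))  # loitering_or_back_and_forth
```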
In one implementation of this embodiment, the behavior recognition unit may include a trained behavior recognition model. The behavior recognition unit inputs the behavior action features into the behavior recognition model to obtain the corresponding target behavior. The trained behavior recognition model may be trained on action feature training samples and the behaviors corresponding to those samples.
In this embodiment, the clustering unit performs clustering on the face features and human body features to obtain the face and body feature set corresponding to the target user. It should be noted that, to better distinguish the identity type of each user, data relating to the same user should as far as possible be placed in the virtual archive corresponding to that user; for convenience of description, this virtual archive is referred to as the face and body feature set. The face and body feature set may contain face features and body features directly, or it may consist of a face feature set (containing the face features) and a body feature set (containing the body features).
The clustering unit mainly performs face feature clustering and human body feature clustering. Face feature clustering realizes long-term face feature matching: a series of photos of the target user is given a unique label, and this label can be used to match the target user's snapshot records across all locations. Human body feature clustering data are time-limited, so long-term data cannot be processed together; over the long term it is preferable to allow one person to have several archives rather than to allow one archive to contain several people.
Specifically, the clustering unit may determine the similarity between the face features and body features and the features in each historical feature set, and determine the face and body feature set corresponding to the target user from those similarities.
As an example, the clustering unit comprises two parts: a virtual archive feature library storage subunit and an archive fusion subunit. The virtual archive feature library stores, for each historical feature set, its label, features, last update time, creation time, number of fused features and other information.
The following takes the clustering of face features as an example; human body feature clustering follows the same procedure. The processing flow of the archive fusion subunit is shown in fig. 2. The virtual archive feature library is initially empty, and incoming face feature data are stored in turn according to the following rules:
and if the library is empty or has no similar features, warehousing, generating a new label (namely a historical feature library), and recording the fused feature number as 1. And using the newly generated historical feature library as a human face and human body feature set (or a human face feature set) corresponding to the target user.
If the similarity between the newly recorded face feature and a feature in exactly one historical feature set in the library exceeds the preset threshold, the new feature takes the label of the matched feature, the fused feature count is increased by 1, and the library feature and the new feature are fused, i.e., the face feature is fused with the feature whose similarity exceeds the threshold to obtain a fusion feature. The historical feature set corresponding to that feature is used as the face and body feature set (or face feature set) corresponding to the target user.
If the similarity between the face feature and features in at least two historical feature sets exceeds the preset threshold, the fusion features corresponding to those historical feature sets are determined; gain correction is applied to each of these fusion features to obtain corrected fusion features; the similarity between the face feature and each corrected fusion feature is then determined, and the historical feature set corresponding to the corrected fusion feature with the largest similarity is used as the face and body feature set (or face feature set) corresponding to the target user.
Specifically, if the new snapshot record matches existing features in the library with similarity above the threshold, and more than one record matches, the following algorithm is applied:
Assume that the new feature matches the features f1, f2, …, fn in the library with similarities s1, s2, …, sn, each exceeding the threshold, and that the corresponding archive IDs are vid1, vid2, …, vidn.
Each virtual archive is formed from a series of face snapshot records whose face features are fused into a single fusion feature representing the archive; the fusion feature also records how many original features have been merged into it. Let the fused feature count of vidk (k = 1, 2, …, n) be Ck, and let Sum = Σ Ck (k = 1, 2, …, n). For each Ck the softmax value is calculated: P_k = exp(Ck) / Σ exp(Ci), with i ranging from 1 to n; P_k represents the probability of class k. For example, if the fused feature counts of four virtual archives are [1, 2, 3, 4], the softmax values are [0.0321, 0.0871, 0.2369, 0.6439]; they sum to 1 and the differences between them are amplified. For each label vidk the similarity sk is then gain-corrected to obtain a corrected value Vk, which reduces the excessive influence on the similarity of incidental sampling differences between images:
Vk = sk + Delta_k, where Delta_k is a feature gain that may be positive or negative.
The feature gain Delta_k is calculated as:
Delta_k = (1 - sk) * (Ck / Sum) + P_k * (-ln(Ik / 5)), where Ik is the rank of the label's most recent fusion time from newest to oldest: the most recently fused label has Ik = 1, the next has Ik = 2, and so on.
The term Ck / Sum is a positive incentive proportional to the share of fused features: the larger the share, the larger the gain.
The term -ln(Ik / 5) gives more weight to labels fused more recently; for Ik = 1, 2, 3, 4, 5, 6 its values are 1.61, 0.92, 0.51, 0.22, 0 and -0.18. The most recently fused label receives a positive boost, the boost decreases from the second position onward, and from the sixth position in time onward the term weakens the feature similarity. The factor P_k applied to -ln(Ik / 5) suppresses archives with a smaller number of clustered features.
In this way, the similarity between the face feature in the new record and the fusion feature of each virtual archive is corrected, with a gain applied to archives that were updated recently and contain many fused features. Incidental sampling differences are suppressed, and the cases in which one person is split across multiple archives are reduced.
The maximum Vk is selected, its label is assigned to the new feature, the fusion count of that label is increased by 1, and the last update time is refreshed.
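The selection step can be summarized in a short sketch that follows the formulas above, Vk = sk + Delta_k with Delta_k = (1 - sk) * (Ck / Sum) + P_k * (-ln(Ik / 5)); the numeric inputs are illustrative only.

```python
import numpy as np

def select_archive(sims, counts, recency_ranks):
    """Pick the archive a new feature should be fused into.

    sims          -- s_k: similarity of the new feature to each candidate archive
    counts        -- C_k: number of features already fused into each archive
    recency_ranks -- I_k: rank of each archive's last fusion time (1 = most recent)
    Returns the index of the archive with the largest corrected similarity V_k.
    """
    s = np.asarray(sims, dtype=float)
    c = np.asarray(counts, dtype=float)
    i = np.asarray(recency_ranks, dtype=float)

    p = np.exp(c) / np.exp(c).sum()            # softmax over fused-feature counts
    delta = (1 - s) * (c / c.sum()) + p * (-np.log(i / 5.0))
    v = s + delta                              # corrected similarities V_k
    return int(np.argmax(v)), v

# Example: four candidate archives matched above the threshold.
idx, v = select_archive(sims=[0.82, 0.80, 0.79, 0.81],
                        counts=[1, 2, 3, 4],
                        recency_ranks=[4, 3, 2, 1])
print(idx, np.round(v, 3))
```

In this example the archive with the largest fused-feature count and the most recent update receives the largest corrected similarity, which matches the behavior described above.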
In summary, the way the clustering unit selects the class controls the influence of fusion count and fusion time on class merging, promotes merging into recently updated classes, and keeps those classes stable.
It should be noted that a key step in face recognition algorithms is face feature extraction: one feature identifies one face image. In the training stage, the loss function has evolved in the industry from the earlier softmax to CosFace and ArcFace in recent years; in essence these losses impose a maximum classification margin in cosine and angle space. Under this mechanism the cosine distance (1 minus the cosine similarity) is used to measure similarity. During training, the extracted features are required to have cosine distances as small as possible between multiple photos of the same person and as large as possible between different people. As a result, for the same person the cosine distance between face features extracted by the trained face recognition algorithm is close to 0, i.e., the cosine similarity is close to 1. Human body features have similar properties.
For human body features, if the library is empty or contains no similar feature, the feature is stored, a new label (i.e., a new historical feature set) is generated, and the fused feature count is recorded as 1. The newly generated historical feature set is used as the face and body feature set (or body feature set) corresponding to the target user.
If the similarity between the newly recorded body feature and a feature in exactly one historical feature set in the library exceeds the preset threshold, the new feature takes the label of the matched feature, the fused feature count is increased by 1, and the library feature and the new feature are fused, i.e., the body feature is fused with the feature whose similarity exceeds the threshold to obtain a fusion feature. The historical feature set corresponding to that feature is used as the face and body feature set (or body feature set) corresponding to the target user.
If the similarity between the body feature and features in at least two historical feature sets exceeds the preset threshold, the fusion features corresponding to those historical feature sets are determined; gain correction is applied to each of these fusion features to obtain corrected fusion features; the similarity between the body feature and each corrected fusion feature is then determined, and the historical feature set corresponding to the corrected fusion feature with the largest similarity is used as the face and body feature set (or body feature set) corresponding to the target user.
The track determination unit determines the action track of the target user from the face and body feature set corresponding to the target user and from the position attribute, acquisition position and acquisition time corresponding to each face feature and body feature in that set. The track determination unit can thus associate records of one person at different places and times, so that the person's track over a certain period can be described and travel patterns can be characterized, for example from which points the person starts each day, which points he or she passes through, and where he or she stays.
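As a small illustration of this track description, the sketch below orders a person's archived snapshot records by acquisition time to obtain an ordered route; the record layout and place names are invented for the example.

```python
from datetime import datetime
from typing import List, Tuple

Record = Tuple[datetime, float, float, str]  # (acquisition time, longitude, latitude, camera name)

def action_track(records: List[Record]) -> List[Tuple[datetime, str]]:
    """Order one person's archived records by time to describe their route."""
    return [(t, name) for t, _, _, name in sorted(records, key=lambda r: r[0])]

records = [
    (datetime(2021, 12, 2, 18, 40), 120.01, 30.28, "East gate"),
    (datetime(2021, 12, 2, 8, 5),  120.00, 30.27, "South gate"),
    (datetime(2021, 12, 2, 12, 30), 120.02, 30.29, "Office lobby"),
]
for t, place in action_track(records):
    print(t.strftime("%H:%M"), place)  # 08:05 South gate, 12:30 Office lobby, 18:40 East gate
```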
The identity prediction unit determines the identity type of the target user from the identity features and the user features. The identity type can be understood as the category of the target user, for example day-shift worker, night-shift worker, street vendor, takeaway courier, scavenger, on-duty staff, unknown, or leisure resident. In the social governance field, for instance, a people-centered overview of the jurisdiction is needed: the approximate occupation category, age group and so on of pedestrians on the street must be obtained, with occupation categories divided into day-shift workers, night-shift workers, street vendors, takeaway couriers, scavengers, on-duty staff, unknown, leisure residents and the like. The day-shift and night-shift categories have low priority and apply only to people who leave in the morning and return in the evening, or go out during the day and return at night; a target behaving as a vendor at night is classified as a street vendor. Suppose the same person appears at night, barely changes position for a long time, is standing, has pedestrians standing around and wears an apron; the identity prediction unit can then determine from the user's identity features and user features that the identity type of the target user is street vendor.
Specifically, the identity prediction unit includes a random forest model comprising a plurality of decision tree models whose corresponding dimension weights differ. In this embodiment the random forest model can be pruned and trained in advance on the collected training data, and the number of trees and the depth parameter can be determined from the classification performance; for example, the number of decision trees in the random forest is finally set to 12 and the tree depth to 7.
The identity prediction unit may input the identity features and user features into each decision tree model to obtain the identity prediction type output by each decision tree, and then determine the identity type of the target user from these predictions; for example, the identity prediction type that occurs most frequently among the outputs of all decision trees may be taken as the identity type of the target user.
For example, the identity prediction unit obtains user features of the target user, such as face attributes (age group with automatic segmentation supported, gender, beard, hairstyle, glasses, mask, ethnicity and so on), body attributes (shoulder bag, backpack, handbag, jacket color, jacket type, trousers color, trousers type, helmet color, apron, takeaway or express uniform: unknown, JD.com, Meituan or Ele.me), and the camera area attribute or position attribute or acquisition position (business district, office building, residential district, trunk road, secondary or branch road and so on); and obtains identity features of the target user, such as target behaviors (loitering, staying, slow walking, fast walking, lying, sitting) and travel patterns or action tracks (sporadic appearances, multiple appearances in one day, fixed-time commuting, frequent entries and exits, leaving in the morning and returning in the evening, going out during the day and returning at night). The identity prediction unit inputs the user features and identity features into each decision tree model to obtain the identity prediction type output by each decision tree, and takes the most frequent prediction among all decision trees as the identity type of the target user. A sketch of this voting procedure is given below.
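A minimal sketch of this majority-vote prediction, assuming a standard scikit-learn random forest with the 12 trees and depth 7 mentioned above. The per-tree dimension weighting described in the application is not modeled here, and the feature encoding, labels and training data are invented for illustration.

```python
import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

IDENTITY_TYPES = ["day_shift", "night_shift", "vendor", "takeaway",
                  "scavenger", "on_duty", "leisure_resident", "unknown"]

# Toy training data: each row numerically encodes user and identity features
# (age group, uniform flag, area type, behavior code, travel-pattern code, ...).
rng = np.random.default_rng(7)
X_train = rng.integers(0, 6, size=(400, 8))
y_train = rng.integers(0, len(IDENTITY_TYPES), size=400)

forest = RandomForestClassifier(n_estimators=12, max_depth=7, random_state=0)
forest.fit(X_train, y_train)

def predict_identity(features: np.ndarray) -> str:
    """Ask every decision tree for a prediction and take the most frequent one."""
    votes = [int(tree.predict(features.reshape(1, -1))[0]) for tree in forest.estimators_]
    winner = Counter(votes).most_common(1)[0][0]      # most frequent encoded class
    return IDENTITY_TYPES[int(forest.classes_[winner])]

print(predict_identity(rng.integers(0, 6, size=8)))
```

scikit-learn's own predict() averages class probabilities across trees rather than counting hard votes, so the per-tree vote is written out explicitly above to mirror the description in the text.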
It should be noted that the identity type prediction algorithm of the identity prediction unit is not limited to the random forest algorithm; the GBDT principle may also be used to train a model and predict on newly acquired data. As the amount of collected data grows, the feature range can be expanded, and a neural network can be used to select features and automatically extract behavior features; this may serve as an alternative implementation. In general, machine learning or deep learning methods can be applied by training a model on a large amount of training data (identity features, user features and the corresponding identity types). One implementation is statistical learning with a random forest, but the system is not limited to it.
It should be noted that the system may further include a storage unit. The storage unit can store the image information to be processed, the user features, the identity features and the identity type of the target user, so that other units and devices in the identity type recognition system can read or write data in the storage unit at any time.
It can be seen from the above technical solutions that the present application provides an identity type recognition system comprising an image acquisition device, an image analysis unit, a feature determination unit and an identity prediction unit. The image acquisition device acquires image information to be processed; the image analysis unit performs feature extraction on that image information to obtain the user features of a target user; the feature determination unit determines identity features from the user features; and the identity prediction unit determines the identity type of the target user from the identity features and the user features. Thus, in this embodiment, after the image acquisition device acquires the image information to be processed, the image analysis unit, feature determination unit and identity prediction unit can process it to obtain the identity type of the target user appearing in it. The identity type of a user can therefore be recognized automatically by the identity type recognition system, without the manual identification required in the prior art, avoiding the subjectivity, one-sidedness and limitations of evaluation based on personal experience as well as the identification errors caused by human negligence, which improves the efficiency of identity type recognition and thereby the user experience.
That is to say, the identity type recognition system in this embodiment includes an image acquisition device, an image analysis unit, a feature determination unit and an identity prediction unit, where the feature determination unit includes a behavior recognition unit, a clustering unit and a track determination unit; the clustering unit, the image analysis unit and the identity prediction unit are the core modules. The image analysis unit analyzes the attributes and features of faces and bodies in videos and images, including age, gender, clothing, backpacks and the like, and adds recognition of takeaway and express-delivery uniforms and helmets. The clustering unit assigns a unique virtual label to each person so that the person's multiple appearances can be associated for track description, and it provides an algorithm that gradually stabilizes and normalizes the multiple labels of one person, greatly reducing the problem that one person cannot be uniquely characterized. The identity prediction unit uses the extracted multi-dimensional attributes together with the long-term attributes of travel patterns and the short-term attributes of behavior features to predict the occupation of currently active people, providing data support for a people-centered overall portrait of the district.
It should be noted that, in the present specification, all the embodiments are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. The above-described apparatus and system embodiments are merely illustrative, in that elements described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only for the preferred embodiment, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. An identification type recognition system, the system comprising: the system comprises image acquisition equipment, an image analysis unit, a feature determination unit and an identity prediction unit;
the image acquisition equipment is used for acquiring image information to be processed;
the image analysis unit is used for extracting the characteristics of the image information to be processed to obtain the user characteristics of the target user;
the characteristic determining unit is used for determining the identity characteristic of the target user according to the user characteristic;
and the identity prediction unit is used for determining the identity type of the target user according to the identity characteristics and the user characteristics.
2. The system according to claim 1, wherein the image information to be processed comprises images and/or videos and corresponding acquisition positions and acquisition times of the images and/or videos.
3. The system of claim 1, wherein the image analysis unit comprises a trained user feature recognition model;
and the image analysis unit is used for inputting the image information to be processed into the user characteristic identification model to obtain the user characteristics of the image information to be processed.
4. The system according to any one of claims 1-3, wherein the user characteristics include face images, face characteristics, face attributes, body images, body characteristics, body attributes, behavior and motion characteristics, and location attributes.
5. The system of claim 4, wherein the identity features include target behaviors, action tracks, and human faces, human body feature sets; the characteristic determining unit comprises a behavior recognizing unit, a clustering unit and a track determining unit;
the behavior identification unit is used for carrying out behavior identification according to the behavior action characteristics to obtain a target behavior;
the clustering unit is used for carrying out clustering processing according to the human face characteristics and the human body characteristics to obtain a human face and human body characteristic set corresponding to a target user;
and the track determining unit is used for determining the action track of the target user according to the human face and human body characteristic set corresponding to the target user and the position attribute, the acquisition position and the acquisition time respectively corresponding to each human face characteristic and each human body characteristic in the human face and human body characteristic set.
6. The system according to claim 5, wherein the behavior identification unit is configured to determine a target behavior corresponding to the behavior action feature according to the behavior action feature and a position attribute, a collection position, and a collection time corresponding to the behavior action feature; and/or the behavior recognition unit comprises a trained behavior recognition model; and the behavior recognition unit is used for inputting the behavior characteristics into the behavior recognition model to obtain the target behavior corresponding to the behavior action characteristics.
7. The system according to claim 5, wherein the clustering unit is configured to determine similarity between the human face features and the human body features and features in each historical feature set respectively; and determining a human face feature set corresponding to the target user according to the similarity between the human face feature and the human body feature and the features in the historical feature sets.
8. The system according to claim 7, wherein the clustering unit is specifically configured to determine a fusion feature corresponding to each of the at least two historical feature sets if the similarity between the human face feature and/or the human body feature and a feature in each of the at least two historical feature sets is greater than a preset threshold; performing gain correction on the fusion features corresponding to the at least two historical feature sets respectively to obtain corrected fusion features corresponding to the at least two historical feature sets respectively; and determining the similarity between the human face features and/or the human body features and each corrected fusion feature, and taking the historical feature set corresponding to the corrected fusion feature with the maximum similarity as the human face feature set corresponding to the target user.
9. The system of claim 5, wherein the identity prediction unit comprises a random forest model, wherein the random forest model comprises a plurality of decision tree models, and the corresponding dimensional weights of each decision tree model are different;
the identity prediction unit is used for respectively inputting the identity characteristics and the user characteristics into each decision tree model to obtain the identity prediction type of the target user output by each decision tree model;
and determining the identity type of the target user according to the identity prediction type of the target user output by each decision tree model.
10. The system according to any one of claims 1-3, further comprising a storage unit for storing the image information to be processed, the user characteristics, the identity characteristics, and the identity type of the target user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111455798.5A CN114038048A (en) | 2021-12-02 | 2021-12-02 | Identity type recognition system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111455798.5A CN114038048A (en) | 2021-12-02 | 2021-12-02 | Identity type recognition system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114038048A true CN114038048A (en) | 2022-02-11 |
Family
ID=80139564
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111455798.5A Pending CN114038048A (en) | 2021-12-02 | 2021-12-02 | Identity type recognition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114038048A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114971485A (en) * | 2022-06-09 | 2022-08-30 | 长安大学 | Traffic safety management method, system and storable medium |
CN115346170A (en) * | 2022-08-11 | 2022-11-15 | 北京市燃气集团有限责任公司 | Intelligent monitoring method and device for gas facility area |
CN118097720A (en) * | 2024-04-18 | 2024-05-28 | 浙江深象智能科技有限公司 | Staff identification method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||