CN110717428A - Identity recognition method, device, system, medium and equipment fusing multiple features - Google Patents

Identity recognition method, device, system, medium and equipment fusing multiple features

Info

Publication number
CN110717428A
CN110717428A · Application CN201910923395.5A
Authority
CN
China
Prior art keywords
feature
features
person
database
identified
Prior art date
Legal status
Pending
Application number
CN201910923395.5A
Other languages
Chinese (zh)
Inventor
杨俊
陶云峰
李志广
Current Assignee
Shanghai Yitu Network Science and Technology Co Ltd
Original Assignee
Shanghai Yitu Network Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yitu Network Science and Technology Co Ltd
Priority to CN201910923395.5A
Publication of CN110717428A
Legal status: Pending

Classifications

    • G06V 40/166: human faces; detection, localisation, normalisation using acquisition arrangements
    • G06F 18/22: pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/253: pattern recognition; fusion techniques of extracted features
    • G06V 20/584: scene context exterior to a vehicle; recognition of vehicle lights or traffic lights
    • G06V 20/597: scene context inside a vehicle; recognising the driver's state or behaviour
    • G06V 40/168: human faces; feature extraction, face representation
    • G06V 40/172: human faces; classification, e.g. identification
    • G06V 40/179: human faces; metadata-assisted face recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to an identity recognition method fusing multiple features, comprising the following steps: constructing a feature database, wherein the facial features and other features of a plurality of persons are dynamically acquired and, for each acquisition of each of the plurality of persons, the person's facial features are associated with each of the other features and fused into the feature database; acquiring the facial features and other features of a person to be identified, and associating the facial features of the person to be identified with each of the other features; for each person in the feature database, comparing all the features contained in the feature sets of the person to be identified and that person, and calculating a comprehensive confidence equal to the sum of the products of each feature's confidence and its weight coefficient; and identifying the person to be identified as the person in the feature database having the greatest comprehensive confidence with the person to be identified. The invention also relates to an identity recognition device, medium, equipment and system fusing multiple features.

Description

Identity recognition method, device, system, medium and equipment fusing multiple features
Technical Field
The invention belongs to the technical field of identity recognition, and particularly relates to an identity recognition method, device, system, medium, and equipment that fuse multiple features.
Background
Current security products for person tracking, stranger detection, and the like generally rely on a single means of identification, mainly face detection. Face detection is limited by the deployment density, position, angle, and ambient lighting of the cameras, and strangers may also deliberately avoid the camera (e.g., by intentionally blocking their faces).
Disclosure of Invention
In order to solve all or part of the problems, the invention provides an identity recognition method fusing a plurality of characteristics, which comprises the following steps:
constructing a feature database, wherein the face features and other features of a plurality of persons are dynamically acquired, and for each acquisition of each person in the plurality of persons, the face features of the person are respectively associated with each feature in the other features and then fused into the feature database;
collecting the face features and the other features of the person to be recognized, and respectively associating the face features of the person to be recognized with each feature in the other features;
for each person in the feature database, comparing each feature contained in the feature sets of the person to be identified and that person, and calculating a comprehensive confidence, wherein the comprehensive confidence is equal to the sum of the products of each feature's confidence and its weight coefficient;
identifying the person to be identified as the person in the feature database having the greatest combined confidence with the person to be identified.
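For illustration only, the scoring and identification steps above can be sketched in a few lines of Python. All names and numeric values here (the function names, feature keys, and confidences) are assumptions made for the sketch, not part of the claimed method:

    # Minimal sketch of the comprehensive-confidence rule described above.
    def composite_confidence(confidences: dict, weights: dict) -> float:
        """Sum over all features of (feature confidence x weight coefficient)."""
        return sum(confidences[f] * weights[f] for f in confidences)

    def identify(scores: dict) -> str:
        """Identify the query as the database person with the greatest score."""
        return max(scores, key=scores.get)

    weights = {"face": 0.4, "vehicle": 0.3, "device": 0.3}   # normalized weights
    scores = {
        "person_1": composite_confidence({"face": 0.9, "vehicle": 0.8, "device": 0.7}, weights),
        "person_2": composite_confidence({"face": 0.3, "vehicle": 0.2, "device": 0.1}, weights),
    }
    print(identify(scores))  # -> person_1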
In one possible implementation, the number and types of features included in the other features may differ between acquisitions, both across the acquisitions of each of the plurality of persons and for the acquisition of the person to be identified.
In one possible implementation, the other features may include one or more of a vehicle feature, a personal electronic device feature, a body type feature, and a voiceprint feature, wherein the vehicle feature, the body type feature, and the voiceprint feature may be captured by a camera, the personal electronic device feature may be captured by a wireless signal detection unit, and the wireless signal detection unit is disposed near the camera.
In one possible implementation, if the camera captures a person's vehicle and the face of the person driving it, the facial features and the vehicle features of the person are extracted from the video frames captured by the camera and directly associated.
In a possible implementation manner, if the wireless signal detection unit detects a wireless signal of a personal electronic device, the personal electronic device feature in the detected wireless signal is extracted, and the personal electronic device feature is associated with the facial feature of a person extracted from a video frame captured by the camera.
In one possible implementation, the personal electronic device features are associated with the facial features of a person extracted from the following video frames:
a video frame at a time that is later, by a first time threshold, than the time at which the wireless signal is detected to appear; and/or
a video frame at a time that is earlier, by a second time threshold, than the time at which the wireless signal is detected to disappear.
In a possible implementation manner, the first time threshold and the second time threshold are related to a detection range of the wireless signal detection unit.
In one possible implementation, the wireless signal is a Bluetooth signal, a WiFi signal, or a mobile network signal.
In one possible implementation, the method further includes:
and fusing the human face features and the other features of the person to be recognized into the feature database.
In one possible implementation, fusing into the feature database the facial features and the other features of either a person acquired when constructing the feature database or the person to be recognized includes:
comparing the facial features of the collected person or the person to be recognized with the facial features of the persons in the feature database and calculating face similarity;
fusing the facial features and the other features of the acquired person or the person to be recognized into a feature set of a person having the largest face similarity in the feature database, and determining the weight coefficients of the respective features in the feature set of the person having the largest face similarity after the fusing.
In one possible implementation, after the facial features and the other features of the acquired person or the person to be identified are fused into the feature database, if a feature in the feature set of the person with the largest face similarity gains a new feature value, the weight coefficient of that feature is reduced.
In one possible implementation, if a feature associated with a person in the feature database has a plurality of feature values, a plurality of confidences between the plurality of feature values of the feature of the person of the feature database and feature values of the feature of the person to be identified are respectively calculated, and the confidence between the feature of the person of the feature database and the feature of the person to be identified is equal to the maximum confidence among the plurality of confidences.
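A minimal sketch of this maximum-confidence rule for multi-valued features, assuming a placeholder similarity() metric (the real system's comparison metric is not specified here):

    # Sketch only: a feature with several stored values contributes the best
    # match between any stored value and the query value.
    def similarity(a: str, b: str) -> float:
        return 1.0 if a == b else 0.0   # toy exact-match metric, an assumption

    def feature_confidence(stored_values: list, query_value: str) -> float:
        return max(similarity(v, query_value) for v in stored_values)

    # Person 1 has two license plates on record; the query matches the second.
    print(feature_confidence(["B1", "B1-prime"], "B1-prime"))  # -> 1.0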
The invention also provides an identity recognition device fusing a plurality of characteristics, which comprises:
a feature acquisition module, used to dynamically acquire the facial features and other features of a plurality of persons when constructing a feature database, and also used to acquire the facial features and other features of the person to be identified;
a feature extraction module, configured to, for each acquisition of each person in the plurality of persons when constructing a feature database, extract the facial features and the other features of the person from the features acquired by the feature acquisition module, and also extract the facial features and the other features of the person to be recognized from the features acquired by the feature acquisition module;
the feature association module is used for associating the facial features of the persons extracted by the feature extraction module with the other features of the persons respectively for each acquisition of each person in the plurality of persons when a feature database is constructed, and is also used for associating the facial features of the persons to be identified extracted by the feature extraction module with the other features of the persons to be identified respectively;
a feature fusion module for fusing the facial features and the other features of the person associated with the feature association module into a feature database for each acquisition of each of the plurality of persons when constructing the feature database;
and a feature comparison module, used, for each person in the feature database, to compare each feature contained in the feature sets of the person to be identified and that person, to calculate a comprehensive confidence, and to identify the person to be identified as the person in the feature database having the greatest comprehensive confidence with the person to be identified, wherein the comprehensive confidence is equal to the sum of the products of each feature's confidence and its weight coefficient.
In a possible implementation manner, for each acquisition of each of the plurality of persons by the feature acquisition module when constructing the feature database, and for the acquisition of the person to be identified, the number and types of features contained in the other features may differ.
In one possible implementation, the other features may include one or more of a vehicle feature, a personal electronic device feature, a body shape feature, and a voiceprint feature, wherein the vehicle feature, the body shape feature, and the voiceprint feature may be captured by a camera in the feature capture module, the personal electronic device feature may be captured by a wireless signal detection unit in the feature capture module, and the wireless signal detection unit is disposed near the camera.
In one possible implementation, if the camera captures a person's vehicle and the face of the person driving it, the feature association module directly associates the facial features and the vehicle features of the person extracted by the feature extraction module from the video frames captured by the camera.
In a possible implementation manner, if the wireless signal detection unit detects a wireless signal of a personal electronic device, the feature extraction module extracts the personal electronic device feature in the detected wireless signal, and the feature association module associates the personal electronic device feature with the face feature of a person extracted from a video frame captured by the camera.
In one possible implementation, the feature association module associates the personal electronic device features with the facial features of the person extracted from the following video frames:
a video frame at a time that is later, by a first time threshold, than the time at which the wireless signal is detected to appear; and/or
a video frame at a time that is earlier, by a second time threshold, than the time at which the wireless signal is detected to disappear.
In a possible implementation manner, the first time threshold and the second time threshold are related to a detection range of the wireless signal detection unit.
In one possible implementation, the wireless signal is a Bluetooth signal, a WiFi signal, or a mobile network signal.
In one possible implementation, the feature fusion module is further configured to:
and fusing the human face features and the other features of the person to be recognized into the feature database.
In a possible implementation manner, the feature comparison module is further configured to compare the facial features of a person acquired when constructing the feature database, or of the person to be recognized, with the facial features of each person in the feature database and to calculate face similarity, so that the feature fusion module fuses the facial features and the other features of that person into the feature set of the person with the largest face similarity in the feature database and determines the weight coefficients of the features in that feature set after fusion.
In one possible implementation, after the facial features and the other features of the acquired person or the person to be identified are fused into the feature database, if a feature in the feature set of the person with the largest face similarity gains a new feature value, the weight coefficient of that feature is reduced.
In one possible implementation, if a feature associated with a person in the feature database has a plurality of feature values, the feature comparison module calculates a plurality of confidences between the plurality of feature values of the feature of the person of the feature database and feature values of the feature of the person to be identified, respectively, and makes a confidence between the feature of the person of the feature database and the feature of the person to be identified equal to a maximum confidence among the plurality of confidences.
The present invention also provides a nonvolatile storage medium on which an identification program fusing a plurality of features is stored, the program being executed by a computer to implement any one of the above-described identification methods fusing a plurality of features.
The invention also provides an identity recognition device integrating a plurality of characteristics, which comprises:
a memory storing an identification program fusing a plurality of features, the identification program being executable by a computer; and
a processor connected to the memory and configured to execute the multi-feature fused identity program to implement any of the multi-feature fused identity methods described above.
The invention also provides an identity recognition system fusing a plurality of characteristics, which comprises any identity recognition device fusing a plurality of characteristics.
In the invention, facial features and other features (such as vehicle features, electronic device features, and the like) are fused to comprehensively identify the person to be identified; even if the person deliberately blocks his or her face, identification can still be performed through the other features, so recognition accuracy is improved compared with identification by face detection alone. In addition, in the process of constructing the feature database, dynamically acquiring the features of persons at different times and dynamically adjusting the weight coefficient of each feature improves identification accuracy compared with the static databases used in the prior art.
Drawings
FIG. 1 illustrates an example of an identification apparatus fusing multiple features in accordance with an embodiment of the present invention;
FIG. 2 illustrates an example of an identification method that fuses multiple features in accordance with an embodiment of the present invention;
FIG. 3 illustrates an example of weighting coefficients for individual features of person 1 and person 2 in a feature database prior to feature fusion, in accordance with an embodiment of the present invention;
FIG. 4 illustrates an example of the weight coefficients of the respective features of person 1 and person 2 in the feature database after feature fusion, in accordance with an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided for illustrative purposes, and other advantages and effects of the present invention will become apparent to those skilled in the art from the present disclosure. While the invention will be described in conjunction with the preferred embodiments, it is not intended that the features of the invention be limited to these embodiments. On the contrary, the invention is described in connection with the embodiments for the purpose of covering alternatives or modifications that may be extended based on the claims of the present invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The invention may, however, be practiced without these particulars. Moreover, some specific details are omitted from the description in order to avoid confusing or obscuring the focus of the present invention. It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
It should be noted that in this specification, like reference numerals and letters refer to like items in the following drawings, and thus, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 shows an example of an identification apparatus 1000 fusing a plurality of features according to an embodiment of the present invention, and as shown in the figure, the identification apparatus 1000 includes a feature acquisition module 1100, a feature extraction module 1200, a feature association module 1300, a feature fusion module 1400, a feature comparison module 1500, a feature database 1600, and a communication module 1700. Fig. 2 shows an example of an identification method fusing a plurality of features according to an embodiment of the present invention, and the following describes aspects of identification fusing a plurality of features and functions of respective modules of the identification apparatus 1000 shown in fig. 1 in an identification process with reference to fig. 2.
Block 201: the method comprises the steps of constructing a feature database, dynamically acquiring the face features and other features of a plurality of persons, associating the face features of the persons with the features in the other features respectively for each acquisition of each person in the plurality of persons, and then fusing the face features and the features into the feature database.
According to an embodiment of the present invention, before the identification of the person to be identified, a feature database 1600 including feature sets of a plurality of persons is constructed as a comparison reference. Unlike the prior art, which uses a static database of facial features (e.g., identification card photographs of multiple people obtained through a public security system), the feature database 1600 according to embodiments of the present invention changes dynamically: the facial features and other features of different people are acquired dynamically (over a period of time or continuously) and each acquisition is fused into the feature database. For example, if a plurality of features of person 1 are acquired at time t, each feature acquired at time t is fused into the feature set of person 1 in the feature database; and if a plurality of features of person 1 are acquired at a time t' different from t, each feature acquired at time t' is fused into the feature set of person 1 again.
In the construction of the feature database, the feature acquisition module 1100 is configured to acquire the facial features and other features of different people. FIG. 1 shows an example of the feature acquisition module 1100, which comprises a plurality of cameras 1101a to 1101n and a plurality of wireless signal detection units 1102a to 1102n. The cameras 1101 may be configured to acquire facial features and vehicle features, where the vehicle features may indicate license plate information, type information, appearance information, and the like of a vehicle. The wireless signal detection units 1102 may be configured to detect wireless signals of personal electronic devices, such as Bluetooth signals, WiFi signals, and mobile network signals; the personal electronic device feature may indicate the device ID carried in the wireless signal, such as the device ID of a mobile phone and/or a wearable device in a Bluetooth, WiFi, or mobile network signal. The cameras 1101a to 1101n are disposed at different positions according to the deployment arrangement and density, and each of the wireless signal detection units 1102a to 1102n is disposed near a corresponding camera; for example, the wireless signal detection unit 1102a is disposed near the camera 1101a, and the wireless signal detection unit 1102n is disposed near the camera 1101n.
According to an embodiment of the present invention, the features other than the human face features may include one or more of body type features, voiceprint features, and the like, and the camera 1101 shown in fig. 1 may be used to capture one or more of body type features, voiceprint features, and the like. According to the embodiment of the present invention, the feature acquisition module 1100 is not limited to the form shown in fig. 1, and the feature acquisition module 1100 may further include other types of feature acquisition units depending on the feature type of the acquiring person; and depending on the particular deployment arrangement and density, the number of feature acquisition units of each type may be one or more, with each feature acquisition unit deployed near the camera 1101.
It should be noted that, for one person, the types and the number of the features acquired by the feature acquisition module 1100 at each time may be different (but all include facial features), for example, at time t, the facial features and the vehicle features of the person 1 are acquired by the feature acquisition module 1100, and at time t', the facial features, the vehicle features and the personal electronic device features of the person 1 are acquired by the feature acquisition module 1100.
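As a sketch of how such variable acquisitions might be represented, the record type and field names below are assumptions for illustration, not from the patent:

    # Hypothetical record for one acquisition: the facial feature is always
    # present; the other features vary from acquisition to acquisition.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Acquisition:
        face: bytes                      # facial feature vector from a video frame
        vehicle: Optional[str] = None    # e.g. license plate, if a vehicle was seen
        device_id: Optional[str] = None  # e.g. Bluetooth device ID, if detected
        timestamp: float = 0.0

    # Person 1 at time t (face + vehicle) and at time t' (face + vehicle + device):
    at_t = Acquisition(face=b"\x01\x02", vehicle="B1", timestamp=1.0)
    at_t_prime = Acquisition(face=b"\x01\x02", vehicle="B1", device_id="C1", timestamp=2.0)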
In constructing the feature database, for one time of information acquisition of a certain person by the feature acquisition module 1100, the feature extraction module 1200 is configured to extract facial features and other features from the information acquired by the feature acquisition module 1100, for example, for the feature acquisition module 1100 shown in fig. 1, the feature extraction module 1200 may extract facial features and vehicle features from each video frame of a video stream captured by the camera 1101, and extract personal electronic device features from a wireless signal of the personal electronic device detected by the wireless signal detection unit 1102. The feature extraction module 1200 may also extract feature information from information collected by other types of feature collection units.
In constructing the feature database, for one time of information acquisition of a certain person by the feature acquisition module 1100, the feature association module 1300 is configured to associate the facial features extracted by the feature extraction module 1200 with respective features of other features, for example, for the feature acquisition module 1100 shown in fig. 1, the feature association module 1300 is configured to associate the facial features extracted by the feature extraction module 1200 with the extracted vehicle features and personal electronic device features, respectively.
In one example, if the feature extraction module 1200 extracts the facial features of the person driving the vehicle and the vehicle features of the driven vehicle from one video frame of the video stream captured by the camera 1101, the feature association module 1300 directly associates the extracted facial features and the vehicle features together.
In one example, if the feature extraction module 1200 extracts a personal electronic device feature from a wireless signal that the wireless signal detection unit 1102 detects appearing at time t1, the feature association module 1300 associates the extracted personal electronic device feature with the facial features extracted by the feature extraction module 1200 from the video frame captured by the camera 1101 at time t1+Δt1, where Δt1 is related to the detection range of the wireless signal detection unit 1102.
In another example, if the feature extraction module 1200 extracts a personal electronic device feature from a wireless signal that the wireless signal detection unit 1102 detects disappearing at time t2, the feature association module 1300 associates the extracted personal electronic device feature with the facial features extracted by the feature extraction module 1200 from the video frame captured by the camera 1101 at time t2-Δt2, where Δt2 is related to the detection range of the wireless signal detection unit 1102.
In another example, if the feature extraction module 1200 extracts a personal electronic device feature from a wireless signal detected to appear at time t1, and extracts the same personal electronic device feature from the wireless signal detected to disappear at time t2, the feature association module 1300 associates the extracted personal electronic device feature with the facial features that the feature extraction module 1200 extracts from both the video frame at time t1+Δt1 and the video frame at time t2-Δt2. This requires the feature comparison module 1500 to compare the facial features extracted from the frame at time t1+Δt1 with those extracted from the frame at time t2-Δt2, and to determine that the two facial features with the greatest similarity, where that similarity exceeds a threshold, belong to the same person.
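The timing logic of these examples can be summarized in a short sketch; dt1 and dt2 stand for the first and second time thresholds, and all numeric values are illustrative assumptions:

    # Sketch of the time-window association: a device signal appearing at
    # t_appear and disappearing at t_disappear is tied to the faces in the
    # frames at t_appear + dt1 and t_disappear - dt2.
    def frames_to_associate(t_appear: float, t_disappear: float,
                            dt1: float, dt2: float) -> tuple:
        return (t_appear + dt1, t_disappear - dt2)

    t_a, t_b = frames_to_associate(t_appear=10.0, t_disappear=25.0, dt1=2.0, dt2=2.0)
    print(t_a, t_b)  # 12.0 23.0 -- the faces from these two frames are then
                     # compared, and if their similarity exceeds a threshold
                     # they are taken to belong to the same person.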
The feature association module 1300 may also associate facial features with other features such as body type features and voiceprint features.
When the feature database is constructed, for each acquisition of information about a certain person by the feature acquisition module 1100, the feature fusion module 1400 fuses the facial features and other features of the acquired person, as associated by the feature association module 1300, into the feature database 1600, and determines the weight coefficients of each feature, which will later be used to calculate the comprehensive confidence when identifying a person to be identified.
In an example, through the feature comparison module 1500, the face features of the acquired person are compared with the face features of the persons existing in the feature database, and the face similarity is calculated, so as to determine the person meeting the fusion condition in the feature database, that is, the person whose face feature has the greatest similarity with the face feature of the acquired person and whose similarity exceeds the threshold. Then, the face features and other features of the collected person are fused with all the features of the person meeting the fusion conditions in the feature database by the feature fusion module 1400, and the fused features are used as the feature set of the person.
In one example, each feature has a normalized weight coefficient in a feature set of one person of the feature database. Fig. 3 and 4 show examples of the weight coefficients of the respective features of the person 1 and the person 2 in the feature database before and after feature fusion by the feature fusion module 1400, respectively. The magnitude of the weighting factor for a feature depends on the confidence with which the identity is determined from the feature, for example, as shown in fig. 3 and 4, a human face feature may have a weighting factor greater than that of a vehicle feature or a personal electronic device feature.
The more feature values a feature has, the smaller its weight coefficient; if a feature gains a new feature value after the feature fusion module 1400 completes a fusion, the weight coefficient of that feature is decreased. For example, as shown in FIG. 3, before feature fusion by the feature fusion module 1400, person 1 in the feature database has face feature A1, license plate number B1 (vehicle feature), and Bluetooth device number C1 (personal electronic device feature), with corresponding weight coefficients of 0.4, 0.3, and 0.3; person 2 in the feature database has face feature A2, license plate number B2 (vehicle feature), and Bluetooth device number C2 (personal electronic device feature), with corresponding weight coefficients of 0.4, 0.3, and 0.3. At a certain time, camera 1101 captures the face of person 1 and the vehicle driven by person 1; the feature extraction module 1200 extracts face feature A1 and license plate number B1', and the feature association module 1300 associates face feature A1 and license plate number B1' together. The feature comparison module 1500 determines that the person in the feature database meeting the fusion condition is person 1, and the feature fusion module fuses the newly acquired face feature A1 and license plate number B1' into the feature set of person 1 in the feature database. As shown in FIG. 4, the fused person 1 has face feature A1, license plate numbers B1 and B1' (vehicle feature), and Bluetooth device number C1 (personal electronic device feature); since the fused vehicle feature has gained a new feature value B1' for the license plate number, the feature fusion module 1400 decreases the weight coefficient of the vehicle feature to 0.2 and correspondingly increases the weight coefficient of the face feature to 0.45 and that of the personal electronic device feature to 0.35.
Similarly, if the camera 1101 captures the face of person 2 and the wireless signal detection unit detects the Bluetooth signal of the personal electronic device of person 2, the feature extraction module 1200 extracts face feature A2 and Bluetooth device number C2, the feature association module 1300 associates face feature A2 with Bluetooth device number C2, the feature comparison module 1500 determines that the person in the feature database meeting the fusion condition is person 2, and the feature fusion module fuses the newly acquired face feature A2 and Bluetooth device number C2 into the feature set of person 2 in the feature database. As shown in FIG. 4, the fused feature set of person 2 is unchanged: the personal electronic device feature still has only one feature value C2 for the Bluetooth device number, so the feature fusion module 1400 does not change the weight coefficients of the features.
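A sketch reproducing the weight update of FIGS. 3 and 4 follows. The patent only gives the before and after values, so the redistribution rule below (subtract a penalty from the feature that gained a new value and share it equally among the rest) is an assumption that happens to match those values:

    # Assumed reweighting rule, for illustration only.
    def reweight(weights: dict, grown_feature: str, penalty: float) -> dict:
        others = [f for f in weights if f != grown_feature]
        updated = dict(weights)
        updated[grown_feature] -= penalty
        for f in others:
            updated[f] += penalty / len(others)
        return {f: round(v, 2) for f, v in updated.items()}

    before = {"face": 0.4, "vehicle": 0.3, "device": 0.3}   # FIG. 3, person 1
    after = reweight(before, grown_feature="vehicle", penalty=0.1)
    print(after)  # {'face': 0.45, 'vehicle': 0.2, 'device': 0.35} -- FIG. 4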
It should be noted that the assignment of the weight coefficients by the feature fusion module 1400 is not limited to the form shown in fig. 3 and 4.
When the feature database is constructed, as described above, the feature comparison module 1500 is configured to compare the face features of the acquired person with the face features of the persons existing in the feature database, calculate face similarity, and determine the person meeting the fusion condition in the feature database, that is, the person whose face feature has the greatest similarity with the face feature of the acquired person and whose similarity exceeds the threshold.
The communication module 1700 is communicatively coupled to communication modules of other identification devices for sharing the feature database 1600 of the identification device 1000 to the other identification devices and for receiving other feature databases from the other identification devices. After the communication module 1700 receives the other feature databases, the feature fusion module 1400 may fuse the data of the other feature databases with the data of the feature database 1600, which is similar to the above fusion process and is not described herein again.
In another embodiment, before the feature database is constructed according to the above, the feature database may have pre-stored therein the facial features and other features of a plurality of persons collected by the public security system, for example, the public security system may collect the facial features of a person through a resident identification card, and collect and associate the vehicle features, the personal electronic device features, and the like of the corresponding person with the facial features of the corresponding person according to the identification card information. When the feature database is constructed as described above, the human face features and other features of the person collected by the feature collection module 1100 may be fused with the human face features and other features of the person pre-stored in the feature database by the feature fusion module 1400, as described above.
Block 202: the method comprises the steps of collecting the face features and other features of a person to be recognized, and respectively associating the face features of the person to be recognized with each feature in the other features.
Upon the occurrence of a specific event, for example when a public security system needs to place a criminal suspect under surveillance, the feature acquisition modules of the identity recognition device 1000 and of other identity recognition devices deployed in different areas may acquire the facial features and other features of people located within their acquisition ranges, and the acquired people will be treated as persons to be identified and subjected to identity recognition.
Similar to what is described above for the person added to the feature database, the facial features and the vehicle features of the person to be recognized may be captured by the camera 1101, the personal electronic device features of the person to be recognized may be detected by the wireless signal detection unit 1102, and other features of the person to be recognized may be captured by the camera 1101 and/or other types of feature capture units.
For the information acquisition of the feature acquisition module 1100 on a person to be identified, the feature extraction module 1200 extracts human face features and other features from the information acquired by the feature acquisition module 1100, for example, for the feature acquisition module 1100 shown in fig. 1, the feature extraction module 1200 may extract human face features and vehicle features from each video frame of a video stream captured by the camera 1101, and extract personal electronic device features from the wireless signal of the personal electronic device detected by the wireless signal detection unit 1102. The feature extraction module 1200 may also extract feature information from information collected by other types of feature collection units.
For the information collection of one person to be identified by the feature collection module 1100, the feature association module 1300 associates the facial features extracted by the feature extraction module 1200 with respective features of other features, for example, for the feature collection module 1100 shown in fig. 1, the feature association module 1300 is configured to associate the facial features extracted by the feature extraction module 1200 with the extracted vehicle features and the personal electronic device features, respectively.
Block 203: for each person in the feature database, the features contained in the feature sets of the person to be identified and that person are respectively compared through the feature comparison module 1500, and a comprehensive confidence is calculated, wherein the comprehensive confidence is equal to the sum of the products of each feature's confidence and its weight coefficient.
Wherein, if a feature of the person of the feature database has a plurality of feature values, the confidence between the feature of the person of the feature database and the feature of the person to be identified is equal to the greatest confidence among the confidences between the plurality of feature values of the feature of the person of the feature database and the feature value of the feature of the person to be identified.
For example, assume that the feature set of the person to be identified includes face feature A3, license plate number B3 (vehicle feature), and Bluetooth device number C3 (personal electronic device feature). For person 1 of the feature database shown in FIG. 4, the following are calculated: the confidence C_A1 between face feature A1 and face feature A3 of the person to be identified; the confidence C_B1 between license plate number B1 and license plate number B3 of the person to be identified; the confidence C_B1' between license plate number B1' and license plate number B3 of the person to be identified; and the confidence C_C1 between Bluetooth device number C1 and Bluetooth device number C3 of the person to be identified. Since the vehicle feature of person 1 of the feature database includes two license plate numbers, B1 and B1', the confidence between the vehicle feature of person 1 and the vehicle feature of the person to be identified equals the greater of C_B1 and C_B1', i.e., C_B1 (assuming C_B1 is greater than C_B1'). The comprehensive confidence between person 1 of the feature database and the person to be identified is then Con = 0.45*C_A1 + 0.2*C_B1 + 0.35*C_C1. Similarly, a comprehensive confidence may be calculated between the person to be identified and every other person in the feature database (which may include persons from feature databases received from other identification devices).
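Putting block 203 together with the numbers above, a minimal sketch follows; the individual confidences C_A1, C_B1, C_B1', and C_C1 are assumed values, whereas in practice they would come from comparing the extracted features:

    # Sketch of the comprehensive confidence for person 1 of FIG. 4.
    c_a1, c_b1, c_b1p, c_c1 = 0.9, 0.8, 0.6, 0.7   # assumed; c_b1 > c_b1p as in the text

    person_1 = {
        "face":    ([c_a1],        0.45),   # (confidences per stored value, weight)
        "vehicle": ([c_b1, c_b1p], 0.20),   # two plates on record: B1 and B1'
        "device":  ([c_c1],        0.35),
    }

    # A multi-valued feature contributes its best-matching value (max rule).
    con = sum(max(value_confs) * weight for value_confs, weight in person_1.values())
    print(round(con, 3))  # 0.45*0.9 + 0.2*0.8 + 0.35*0.7 = 0.81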
Block 204: through the feature comparison module 1500, the person to be identified is identified as the person in the feature database having the greatest comprehensive confidence with the person to be identified.
After the identity recognition of the person to be recognized is completed, the feature fusion module can fuse the acquired face features and other features of the person to be recognized into the feature database, and the specific fusion process is similar to the above process and is not repeated.
In the embodiments of the invention, facial features and other features (such as vehicle features, electronic device features, and the like) are fused to comprehensively identify the person to be identified; even if the person deliberately blocks his or her face, identification can still be performed through the other features, so recognition accuracy is improved compared with identification by face detection alone. In addition, in the process of constructing the feature database, dynamically acquiring the features of persons at different times and dynamically adjusting the weight coefficient of each feature improves identification accuracy compared with the static databases used in the prior art.
Embodiments of the present invention also provide a nonvolatile storage medium on which an identification program fusing a plurality of features is stored, the program being executed by a computer to implement the above-described identification method fusing a plurality of features.
The embodiment of the invention also provides an identity recognition device fusing a plurality of characteristics, which comprises:
a memory storing an identification program fusing a plurality of features, the identification program being executable by a computer; and
a processor connected to the memory and configured to execute an identification program fusing the plurality of features to implement the above-described identification method fusing the plurality of features.
Embodiments of the present invention also provide a system including the above-described identification apparatus that fuses multiple features.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute some of the steps of the methods according to various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that the foregoing detailed description, taken in conjunction with specific embodiments, is illustrative and is not intended to limit the invention. Various changes in form and detail, including simple deductions or substitutions, may be made by those skilled in the art without departing from the spirit and scope of the invention.

Claims (27)

1. An identity recognition method fusing a plurality of features, comprising:
constructing a feature database, wherein the face features and other features of a plurality of persons are dynamically acquired, and for each acquisition of each person in the plurality of persons, the face features of the person are respectively associated with each feature in the other features and then fused into the feature database;
collecting the face features and the other features of the person to be recognized, and respectively associating the face features of the person to be recognized with each feature in the other features;
for each person in the feature database, comparing each feature contained in the feature sets of the person to be identified and that person, and calculating a comprehensive confidence, wherein the comprehensive confidence is equal to the sum of the products of each feature's confidence and its weight coefficient;
identifying the person to be identified as the person in the feature database having the greatest combined confidence with the person to be identified.
2. The method of claim 1, wherein the number and types of features comprised in the other features may differ between each acquisition of each of the plurality of persons and the acquisition of the person to be identified.
3. The method of claim 2, wherein the other features include one or more of vehicle features, personal electronic device features, body conformation features, and voiceprint features, wherein the vehicle features, body conformation features, and voiceprint features can be captured by a camera, wherein the personal electronic device features can be captured by a wireless signal detection unit, and wherein the wireless signal detection unit is disposed proximate to the camera.
4. A method according to claim 3, characterized in that if the camera captures the vehicle of a person and the face of a person driving the vehicle, the face features and the vehicle features of the person are extracted from the video frames captured by the camera and directly associated.
5. The method of claim 3, wherein if the wireless signal detection unit detects a wireless signal of a personal electronic device, extracting the personal electronic device feature from the detected wireless signal and associating the personal electronic device feature to the facial feature of the person extracted from the video frame captured by the camera.
6. The method of claim 5, wherein the personal electronic device features are associated with the facial features of the person extracted from the following video frames:
a video frame at a time that is later, by a first time threshold, than the time at which the wireless signal is detected to appear; and/or
a video frame at a time that is earlier, by a second time threshold, than the time at which the wireless signal is detected to disappear.
7. The method of claim 6, wherein the first time threshold and the second time threshold are related to a detection range of the wireless signal detection unit.
8. The method of claim 5, wherein the wireless signal is a Bluetooth signal, a WiFi signal, or a mobile network signal.
9. The method of claim 1, further comprising:
and fusing the human face features and the other features of the person to be recognized into the feature database.
10. The method according to claim 1 or 9, wherein fusing the facial features and the other features of the person to be identified or the person collected when constructing the feature database into the feature database comprises:
comparing the facial features of the collected person or the person to be recognized with the facial features of the persons in the feature database and calculating face similarity;
fusing the facial features and the other features of the acquired person or the person to be recognized into a feature set of a person having the largest face similarity in the feature database, and determining the weight coefficients of the respective features in the feature set of the person having the largest face similarity after the fusing.
11. The method according to claim 10, wherein, after fusing the facial features and the other features of the acquired person or the person to be identified into the feature database, if a feature in the feature set of the person with the largest face similarity gains a new feature value, the weight coefficient of that feature is reduced.
12. The method according to claim 1, characterized in that if one feature associated with one person in the feature database has a plurality of feature values, a plurality of confidences between the plurality of feature values of the feature of the person of the feature database and feature values of the feature of the person to be identified are respectively calculated, and the confidence between the feature of the person of the feature database and the feature of the person to be identified is equal to the maximum confidence among the plurality of confidences.
13. An identification device that incorporates multiple features, comprising:
the system comprises a feature acquisition module, a feature recognition module and a feature recognition module, wherein the feature acquisition module is used for dynamically acquiring the face features and other features of a plurality of persons when a feature database is constructed, and is also used for acquiring the face features and other features of the persons to be recognized;
a feature extraction module, configured to, for each acquisition of each person in the plurality of persons when constructing a feature database, extract the facial features and the other features of the person from the features acquired by the feature acquisition module, and also extract the facial features and the other features of the person to be recognized from the features acquired by the feature acquisition module;
the feature association module is used for associating the facial features of the persons extracted by the feature extraction module with the other features of the persons respectively for each acquisition of each person in the plurality of persons when a feature database is constructed, and is also used for associating the facial features of the persons to be identified extracted by the feature extraction module with the other features of the persons to be identified respectively;
a feature fusion module for fusing the facial features and the other features of the person associated with the feature association module into a feature database for each acquisition of each of the plurality of persons when constructing the feature database;
and a feature comparison module, used, for each person in the feature database, to compare each feature contained in the feature sets of the person to be identified and that person, to calculate a comprehensive confidence, and to identify the person to be identified as the person in the feature database having the greatest comprehensive confidence with the person to be identified, wherein the comprehensive confidence is equal to the sum of the products of each feature's confidence and its weight coefficient.
14. The apparatus of claim 13, wherein, for each acquisition of each of the plurality of persons by the feature collection module when building the feature database, and for the acquisition of the person to be identified, the number and types of features included in the other features may differ.
15. The apparatus of claim 14, wherein the other features comprise one or more of a vehicle feature, a personal electronic device feature, a body shape feature, and a voiceprint feature, wherein the vehicle feature, the body shape feature, and the voiceprint feature can be captured by a camera in the feature acquisition module, wherein the personal electronic device feature can be captured by a wireless signal detection unit in the feature acquisition module, and wherein the wireless signal detection unit is disposed proximate to the camera.
16. The apparatus of claim 15, wherein, if the camera captures both a person's vehicle and the face of the person driving that vehicle, the feature association module directly associates the facial features and the vehicle features extracted by the feature extraction module from the video frames captured by the camera.
17. The apparatus of claim 15, wherein, if the wireless signal detection unit detects a wireless signal of a personal electronic device, the feature extraction module extracts the personal electronic device features from the detected wireless signal, and the feature association module associates the personal electronic device features with the facial features of persons extracted from the video frames captured by the camera.
18. The apparatus of claim 17, wherein the feature association module associates the personal electronic device features with the facial features of persons extracted from the following video frames:
video frames whose timestamps are later, by a first time threshold, than the time at which the wireless signal is detected to appear; and/or
video frames whose timestamps are earlier, by a second time threshold, than the time at which the wireless signal is detected to disappear.
19. The apparatus of claim 18, wherein the first time threshold and the second time threshold are related to a detection range of the wireless signal detection unit.
20. The apparatus of claim 17, wherein the wireless signal is a bluetooth signal, a WiFi signal, or a mobile network signal.
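To make the association window of claims 18 and 19 concrete: because the wireless detection range is typically wider than the camera's field of view, the device holder tends to enter the frame somewhat after the signal appears and to leave it somewhat before the signal disappears. The sketch below selects frames under that reading; the threshold values, the single combined window, and all names are illustrative assumptions rather than the claimed implementation.

# Sketch of claims 18-19: select the video frames whose extracted faces get
# associated with a detected personal electronic device.
def frames_to_associate(frames, signal_appear_t, signal_disappear_t,
                        t1=2.0, t2=2.0):
    """frames: iterable of (timestamp, frame) pairs.
    t1, t2: first/second time thresholds, tuned to the detector's range."""
    return [(ts, f) for ts, f in frames
            if signal_appear_t + t1 <= ts <= signal_disappear_t - t2]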
21. The apparatus of claim 13, wherein the feature fusion module is further configured to fuse the facial features and the other features of the person to be identified into the feature database.
22. The apparatus according to claim 13 or 21, wherein the feature comparison module is further configured to compare the facial features of the person being collected during construction of the feature database, or of the person to be identified, with the facial features of the respective persons in the feature database and to calculate face similarities, so that the feature fusion module fuses the facial features and the other features of that person into the feature set of the person having the largest face similarity in the feature database, and determines the weight coefficients of the respective features in that feature set after the fusion.
23. The apparatus according to claim 22, wherein, after the facial features and the other features of the person being collected or the person to be identified are fused into the feature database, if a feature in the feature set of the person with the largest face similarity obtains a new feature value, the weight coefficient of that feature is reduced.
24. The apparatus of claim 13, wherein, if a feature associated with a person in the feature database has a plurality of feature values, the feature comparison module calculates a confidence between each of those feature values and the feature value of the corresponding feature of the person to be identified, and sets the confidence between that feature of the person in the feature database and the feature of the person to be identified to the maximum of the calculated confidences.
25. A non-volatile storage medium, wherein an identity recognition program fusing multiple features is stored on the storage medium, the program, when executed by a computer, implementing the identity recognition method fusing multiple features of any one of claims 1 to 12.
26. Identity recognition equipment fusing multiple features, comprising:
a memory storing a computer-executable identity recognition program fusing multiple features; and
a processor connected to the memory and configured to execute the program to implement the identity recognition method fusing multiple features of any one of claims 1 to 12.
27. An identity recognition system fusing multiple features, comprising the identity recognition device fusing multiple features of any one of claims 13 to 24.
CN201910923395.5A 2019-09-27 2019-09-27 Identity recognition method, device, system, medium and equipment fusing multiple features Pending CN110717428A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910923395.5A CN110717428A (en) 2019-09-27 2019-09-27 Identity recognition method, device, system, medium and equipment fusing multiple features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910923395.5A CN110717428A (en) 2019-09-27 2019-09-27 Identity recognition method, device, system, medium and equipment fusing multiple features

Publications (1)

Publication Number Publication Date
CN110717428A true CN110717428A (en) 2020-01-21

Family

ID=69211985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910923395.5A Pending CN110717428A (en) 2019-09-27 2019-09-27 Identity recognition method, device, system, medium and equipment fusing multiple features

Country Status (1)

Country Link
CN (1) CN110717428A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598012A (en) * 2013-10-30 2015-05-06 中国艺术科技研究所 Interactive advertising equipment and working method thereof
CN103902655A (en) * 2014-02-28 2014-07-02 小米科技有限责任公司 Clustering method and device and terminal device
CN106557783A (en) * 2016-11-21 2017-04-05 厦门优莱柏网络科技有限公司 Automatic extraction system and method for main characters in comics
CN106778524A (en) * 2016-11-25 2017-05-31 努比亚技术有限公司 Facial attractiveness evaluation device and method based on dual-camera distance measurement
CN109376603A (en) * 2018-09-25 2019-02-22 北京周同科技有限公司 Video recognition method, device, computer equipment and storage medium
CN110008813A (en) * 2019-01-24 2019-07-12 阿里巴巴集团控股有限公司 Face recognition method and system based on liveness detection technology
CN110246292A (en) * 2019-04-26 2019-09-17 平安科技(深圳)有限公司 Home video monitoring method, device and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274432A (en) * 2020-02-06 2020-06-12 浙江大华技术股份有限公司 Control distribution processing method and device
CN111274432B (en) * 2020-02-06 2023-05-09 浙江大华技术股份有限公司 Control distribution processing method and device
CN113449596A (en) * 2021-05-26 2021-09-28 科大讯飞股份有限公司 Object re-recognition method, electronic device and storage device
CN114792451A (en) * 2022-06-22 2022-07-26 深圳市海清视讯科技有限公司 Information processing method, device, and storage medium

Similar Documents

Publication Publication Date Title
US9875392B2 (en) System and method for face capture and matching
US8340366B2 (en) Face recognition system
WO2019023606A1 (en) System and method for identifying re-photographed images
US20150086076A1 (en) Face Recognition Performance Using Additional Image Features
US8422746B2 (en) Face authentication system and authentication method thereof
CN108256459A Face recognition and automatic face database construction algorithm for security gates based on multi-camera fusion
US20160210513A1 (en) Object recognition method and apparatus
CN102945366A (en) Method and device for face recognition
WO2010001311A1 (en) Networked face recognition system
CN110717428A (en) Identity recognition method, device, system, medium and equipment fusing multiple features
CN103714631A (en) ATM intelligent monitoring system based on human face recognition
CN105279496A (en) Human face recognition method and apparatus
KR101957677B1 (en) System for learning based real time guidance through face recognition and the method thereof
CN108108711A (en) Face supervision method, electronic equipment and storage medium
CN112528706A (en) Personnel identification system and method thereof
CN109492509A (en) Personal identification method, device, computer-readable medium and system
CN110991231B (en) Living body detection method and device, server and face recognition equipment
CN111091047B (en) Living body detection method and device, server and face recognition equipment
CN109889773A Method, apparatus, device and medium for monitoring personnel in a bid evaluation room
JP2002208011A (en) Image collation processing system and its method
WO2022134916A1 (en) Identity feature generation method and device, and storage medium
CN109213889A Method and device for merging customer information
Liashenko et al. Investigation of the influence of image quality on the work of biometric authentication methods
CN112183202B (en) Identity authentication method and device based on tooth structural features
WO2023279783A1 (en) Facial recognition method, device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200121