CN113313029A - Integrated identity authentication method based on human and object feature fusion - Google Patents

Integrated identity authentication method based on human and object feature fusion Download PDF

Info

Publication number
CN113313029A
CN113313029A (application CN202110597332.2A)
Authority
CN
China
Prior art keywords
fingerprint
fusion
human
feature
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110597332.2A
Other languages
Chinese (zh)
Inventor
吴克河
高雪
赵彤
肖卓
程相鑫
李为
姜媛
樊祺
王皓民
韩嘉佳
孙歆
李沁园
邵志鹏
李尼格
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
North China Electric Power University
Global Energy Interconnection Research Institute
Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
North China Electric Power University
Global Energy Interconnection Research Institute
Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, North China Electric Power University, Global Energy Interconnection Research Institute, Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202110597332.2A priority Critical patent/CN113313029A/en
Publication of CN113313029A publication Critical patent/CN113313029A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction

Abstract

The invention discloses an integrated identity authentication method based on human and object feature fusion, comprising the following steps: 1) human feature acquisition; 2) device fingerprint feature acquisition; 3) human and object feature fusion. The method adopts a human-object feature fusion technique that fuses the identity features of the user with those of the terminal, binding the two identities together. This reduces the number of authentications the user must perform and prevents the same terminal from being used by multiple users, improving usability while also meeting the "dedicated person, dedicated machine" supervision requirement in application scenarios with higher security levels.

Description

Integrated identity authentication method based on human and object feature fusion
Technical Field
The invention relates to an integrated identity authentication method based on human and object feature fusion, and belongs to the technical field of identity authentication.
Background
With the rapid development of mobile communication and mobile terminal technology, PDA terminals are widely used in industrial application scenarios such as mobile operation and mobile inspection. As a hardware resource, a PDA terminal requires identity authentication when a user operates it, so that only a legitimate user can use it; password authentication, fingerprint recognition, and similar techniques are generally used to verify the validity of the user's identity. The various APP applications deployed on the PDA terminal are software resources, and identity authentication is likewise required when a user accesses them; username/password, fingerprint recognition, face recognition, and similar techniques are generally used. The existing approach has the following defects:
1. The two identity authentication results are mutually independent, so the user must authenticate repeatedly, which greatly reduces usability;
2. The APP verifies only the identity of the user, not that of the terminal, so the "dedicated person, dedicated machine" security prevention and control requirement cannot be met, and the security risk of one machine being used by multiple people remains.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an integrated identity authentication method based on human and object feature fusion.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
an integrated identity authentication method based on "human and object" feature fusion comprises the following steps:
1) human feature acquisition: collecting fingerprint features;
2) device fingerprint feature acquisition;
3) "human and object" feature fusion using a feature fusion algorithm based on Bayesian theory.
The method solves the problems of repeated identity authentication and of one machine being used by multiple people when a user uses a PDA terminal; it first proposes a method for fusing human and object features and, on that basis, realizes integrated authentication of human and object identities.
In step 1), the fingerprint is unique (differing between people and between fingers) and stable (essentially unchanged over a lifetime), making it an effective means of personal identification. Verification uses two types of fingerprint features: overall features and local features. When local features are considered, a match of 13 feature points can confirm that two fingerprints are the same.
Overall features are those directly observable by the human eye. Local features are the features of nodes on the fingerprint ridges; nodes with certain characteristics are called feature points. Two fingerprints will often share the same overall features, but the feature points of their local features cannot be completely identical; that is, these feature points provide the confirmation of fingerprint uniqueness. The fingerprint feature collection process is shown in fig. 3 and specifically includes:
in the step 1), the fingerprint feature acquisition method comprises the following steps:
(1) fingerprint image capture and image enhancement;
the commonly used fingerprint image capturing methods include the following methods: optical equipment for imaging, crystal sensor for imaging and ultrasonic equipment for imaging. In order to ensure the performance of the detail feature algorithm, the fingerprint image enhancement is required. Generally, image enhancement is performed by using digital image processing methods such as smoothing, filtering, binarization, and thinning. In practical implementation, the fingerprint image enhancement generally adopts the following links: normalization, directional diagram estimation, frequency diagram estimation, template generation and filtering. A fingerprint image is normalized so that the mean and variance of the image can be controlled within a given range for subsequent processing. The purpose of normalizing the fingerprint image is to reduce the variance of the gray scale map.
(2) Extracting a fingerprint characteristic value;
matching two fingerprint images largely relies on comparing the point patterns of the two images; the points used for matching are called feature points. A typical reliable minutiae extraction algorithm includes orientation estimation, segmentation, ridge extraction, minutiae extraction, and post-processing. The extraction results are usually saved as a feature template containing, for each minutia, its type (ridge ending or bifurcation), its location coordinates, and its direction. A fingerprint image generally yields 10-100 features, and at least 12 feature points must match.
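The feature template described above can be modeled minimally as follows (field names are illustrative assumptions, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Minutia:
    """One entry of a feature template: type (ridge ending or
    bifurcation), location coordinates, and ridge direction."""
    kind: str      # "ending" or "bifurcation"
    x: int
    y: int
    theta: float   # ridge direction in radians

def is_plausible_template(minutiae):
    """Per the text, a fingerprint image typically yields 10-100 features."""
    return 10 <= len(minutiae) <= 100
```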
(3) Matching fingerprint images;
fingerprint image matching is to judge whether the feature sets (templates) of two input fingerprints belong to the same fingerprint. Fingerprint matching has two modes: a. one-to-one fingerprint verification: searching out a user fingerprint to be compared from a fingerprint library according to the user ID, and comparing the user fingerprint with a newly scanned fingerprint; b. one-to-many fingerprint verification: the newly scanned fingerprints are compared with the fingerprints in the fingerprint database one by one.
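The two matching modes above can be sketched as follows (a toy matcher counting shared feature points stands in for a real minutiae matcher; the 12-point threshold follows the text):

```python
def match_score(template_a, template_b):
    """Toy stand-in for minutiae matching: count shared feature points
    (templates are modeled as frozensets of hashable minutiae)."""
    return len(template_a & template_b)

def verify_one_to_one(library, user_id, scanned, threshold=12):
    """Mode a: fetch the enrolled fingerprint by user ID, compare once."""
    enrolled = library.get(user_id)
    return enrolled is not None and match_score(enrolled, scanned) >= threshold

def identify_one_to_many(library, scanned, threshold=12):
    """Mode b: compare the new scan against every fingerprint on file."""
    return [uid for uid, tpl in library.items()
            if match_score(tpl, scanned) >= threshold]
```

One-to-one verification is a single lookup plus one comparison; one-to-many identification scans the whole library, which is why it is the costlier mode.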
In step 2), device fingerprint feature acquisition: just as human fingerprints in biology can be used for identity authentication, devices connected to the Internet of Things also have unique "fingerprints" of their own, which can serve purposes such as access control and terminal identification or tracking. The form of the fingerprint varies across different devices and different software. Traditionally, network devices are identified by MAC address, IP address, and the like, but these features are easily disguised and tampered with. After comprehensively comparing the advantages and disadvantages of the various fingerprint technologies, the internal network can only be protected, in keeping with human-object integration, by adopting multiple forms of device fingerprint identity authentication.
In terms of technical implementation, device fingerprint generation falls into three main modes:
(1) active mode
The client actively acquires multiple pieces of device information, such as the UA, MAC address, and device IMEI number, and generates a unique device_id on the client. Because of the strong dependence on client-side code, fingerprints generated in this way are less robust in anti-fraud scenarios.
(2) Passive form
In the process of communication between a terminal device (such as a PDA) and a server, passive device fingerprint technology extracts, from the layers of the OSI seven-layer model carried in the data messages, a feature set related to the terminal's OS, protocol stack, and network state, and combines it with machine learning algorithms to identify and track the specific terminal device.
(3) Hybrid type
This mode comprises an active acquisition part and a server-side algorithm generation part. When the PDA device connects to the network, it actively acquires the collection elements, interacts with the intranet firewall, and generates a unique device fingerprint ID after algorithmic obfuscation and encryption.
Considering the association between the control program of the control end used by the user and the PDA terminal equipment, the hybrid equipment fingerprint technology overcomes the inherent defects of the active equipment fingerprint technology and the passive equipment fingerprint technology, accurately identifies the equipment and simultaneously expands the application range of the equipment fingerprint technology.
Preferably, in step 2), device fingerprint feature acquisition uses hybrid extraction. Hybrid extraction comprises an active acquisition part and a server-side algorithm generation part: when the PDA device connects to the network, it actively acquires the elements, interacts with the intranet firewall, and generates a unique device fingerprint ID after algorithmic obfuscation and encryption.
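A minimal sketch of hybrid device-fingerprint generation, assuming the actively collected elements arrive as a key-value mapping (UA, MAC, IMEI are illustrative keys) and using a salted SHA-256 hash as a stand-in for the unspecified obfuscation/encryption algorithm:

```python
import hashlib

def device_fingerprint_id(elements, server_salt):
    """Hybrid-mode sketch: 'elements' holds actively collected fields,
    and the server side derives a stable ID from them. A salted SHA-256
    over a canonical serialization stands in for the patent's
    unspecified obfuscation and encryption step."""
    canonical = "|".join(f"{k}={elements[k]}" for k in sorted(elements))
    return hashlib.sha256((server_salt + canonical).encode()).hexdigest()
```

Sorting the keys makes the ID independent of collection order, so the same device always yields the same fingerprint ID.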
In step 3), human and object feature fusion: the OID (Object Identifier, also called the Internet of Things domain name) is an identification mechanism jointly proposed in the 1980s by the ISO/IEC and ITU-T international standardization organizations. It uses a hierarchical tree structure to give a globally unambiguous, unique name to any type of object (entity objects, virtual objects, composite objects, etc.), and has the advantages of flexible layering, strong extensibility, and applicability across heterogeneous systems. The applicant's approach fuses the fingerprint collected in step 1) with the device fingerprint feature acquired in step 2), assigns the fused data a globally unique OID to obtain a unique identifier, and finally uses this identifier to realize the uniqueness of the human-object feature fusion, i.e., "human and object" integration; the OID application is performed after the biometric and device features are fused.
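A hedged sketch of assigning the fused human-object data a single OID-style identifier: the 2.999 arc is the ISO/ITU-T example arc reserved for documentation, and the digest-folding scheme is purely illustrative, not the patent's allocation method:

```python
import hashlib

def fused_oid(user_fp_digest, device_fp_id, arc="2.999.1"):
    """Fuse the user's fingerprint digest with the device fingerprint
    ID, then fold the result into a numeric node under an OID arc.
    Both the arc and the folding scheme are illustrative assumptions."""
    fused = hashlib.sha256((user_fp_digest + device_fp_id).encode()).hexdigest()
    node = int(fused[:8], 16)  # first 32 bits of the digest as a node number
    return f"{arc}.{node}"
```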
The application relates to a multi-source feature fusion technology:
feature fusion, also known as data fusion, is a process of processing data or features from a single or multiple sources by correlating, interconnecting, and combining the data or features to obtain an accurate position estimate and identity estimate. The multi-source feature fusion refers to the fusion of different knowledge sources and data acquired by sensors so as to realize better understanding of observation phenomena.
The multi-source feature fusion of the application comprises the following steps:
(1) data acquisition: extracting original features of an object, acquiring original data from a plurality of data sources, and simultaneously including related data and data in some previous knowledge bases;
(2) data preprocessing: during acquisition, noise is inevitably mixed into the source data by various factors. Meanwhile, to improve the efficiency of subsequent algorithmic processing, the source data should be screened and filtered. This improves both the running efficiency and the computational accuracy of the feature fusion algorithm;
(3) feature fusion: the selected fusion algorithm fuses the preprocessed data to obtain a more accurate result.
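The three steps above can be sketched as a tiny pipeline (data sources are modeled as callables and a per-source mean stands in for the selected fusion algorithm; both are illustrative assumptions):

```python
import statistics

def acquire(sources):
    """Step 1: pull raw feature vectors from each data source.
    Each source is modeled as a zero-argument callable."""
    return [list(src()) for src in sources]

def preprocess(raw, noise_limit=1000.0):
    """Step 2: screen and filter the source data, dropping readings
    outside a plausible range before fusion."""
    return [[v for v in vec if abs(v) <= noise_limit] for vec in raw]

def fuse(vectors):
    """Step 3: apply a selected fusion algorithm; a simple per-source
    mean stands in here for a real fusion rule."""
    return [statistics.mean(vec) for vec in vectors if vec]
```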
In the multi-source feature fusion process, redundancy among the data greatly enhances the reliability of the system, because data from one source is supplemented more completely by data from the other sources, so the features of the observed object can be represented more comprehensively. In fact, multi-source feature fusion enlarges the spatio-temporal search range; the complementarity between data makes detection of the target more complete and the feature description of the observed object more definite, and the reliability of the obtained information is higher. Therefore, a multi-source feature fusion system has the following advantages:
(1) improving the reliability and robustness of the system
(2) Extending the observation range in time and space
(3) Enhancing trustworthiness of data
(4) Enhancing the resolving power of the system.
From the above analysis, multi-source feature fusion generally performs considerably better than single-source processing and is easier to extend. A system built on multi-source feature fusion therefore has stronger fault tolerance and adaptability, that is, greater decision-making capability.
Hierarchy for feature fusion:
in building a multi-source feature fusion system, feature acquisition is not necessarily concentrated in a single stage; features may be acquired in several stages. These stages can thus be viewed as levels of the system. Because different feature information is acquired at different levels, fusion is performed at the level where the information arrives, and higher-level feature information may to some extent influence the fusion of lower-level feature information. The feature information fusion system is divided into three levels: data-layer fusion, feature-layer fusion, and decision-layer fusion.
(1) Data layer fusion
Data-layer fusion is the lowest level of fusion and requires that the sensors used to acquire the data be homogeneous. The collected multi-source raw data are analyzed and fused directly, without any preprocessing. Clearly, data-layer fusion preserves the original feature information of the object to the maximum extent, with no loss of feature information, though this of course includes the noisy data as well. It can therefore provide the finest-grained feature information relative to the other layers. At the same time, its disadvantages are also apparent.
First, retaining such a large amount of original feature information makes the algorithm very inefficient, with high time and space complexity;
second, the large amount of feature information imposes a heavy data-communication burden in practical applications, including long transmission times and poor interference resistance, so real-time performance suffers;
third, owing to various factors, the raw data from each source inevitably contains noise, and since it is not preprocessed, the system must have strong error-correction capability. Data-layer fusion is mostly used for multi-source image composition and for image analysis and understanding.
(2) Feature layer fusion
The feature layer fusion belongs to the fusion of the middle layer, after multi-source original data are obtained, the original data collected by each source are preprocessed, namely feature extraction is carried out, and then the processed data are analyzed and fused. Thus, the advantages over data layer fusion are apparent. Because data preprocessing is carried out, noise and useless data are deleted, and only data which are meaningful for feature fusion are reserved, the efficiency of the algorithm is greatly improved, the time complexity and the space complexity are reduced, the real-time performance of the system is enhanced, and the requirement on the error correction capability of the system is also reduced.
Preprocessing of the multi-source raw data gives feature-layer fusion these advantages, but it also introduces new problems. Preprocessing methods are numerous and differ by data type, so selecting a preprocessing algorithm requires extensive experimentation and considerable time; the preprocessing itself also adds time before fusion can begin; and, most importantly, preprocessing inevitably loses some feature information from the original data, which affects to a certain extent the comprehensiveness of the description of the observed object.
(3) Decision level fusion
Decision-layer fusion is the highest level of fusion. That is, once there are multiple decisions about the same observed object, those decisions are fused according to a specified criterion to obtain the final decision. Because already-formed decisions are being fused, the cost of decision-layer fusion is very small, many methods are available, and processing is flexible. Moreover, even if some small fraction of the individual decisions is wrong or has large error, the correct result can still be obtained through decision-layer fusion. However, the accuracy of decision-layer fusion rests on the accuracy of the preceding decisions: when most of them are correct, fusion can only improve the accuracy of the final decision, but when most of them are wrong, decision-layer fusion cannot recover the correct final decision. In short, the accuracy of decision-layer fusion is built on the accuracy of the two lower layers.
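Decision-layer fusion as described here can be illustrated with a majority vote, one common "specified criterion" (an example, not the patent's algorithm):

```python
from collections import Counter

def decision_level_fusion(decisions):
    """Fuse per-classifier final decisions by majority vote: a small
    fraction of wrong individual decisions is tolerated as long as
    most decisions are correct."""
    return Counter(decisions).most_common(1)[0][0]
```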
The invention uses a feature fusion algorithm based on Bayesian theory to fuse the collected "human and object" feature information. The pattern space Ω is known to contain c classes (c is the number of pattern classes), written Ω = {ω_1, …, ω_c}, and an unknown sample x consists of N-dimensional real-valued features, written x = [x_1, x_2, …, x_N]. According to the minimum-error-rate Bayesian decision theory, a sample is assigned to the j-th class when that class has the maximum posterior probability given the known sample x. This decision process can be expressed as:

x → ω_j if P(ω_j|x) = max_k P(ω_k|x)

where P(ω_k|x) represents the posterior probability of the k-th class, k ∈ {1, 2, …, c};

taking x to be the output of a classifier yields a classifier fusion algorithm under Bayesian theory: assume M classifiers (M is the number of classifiers), each outputting a result y_i, so that the feature at this point is y = [y_1, …, y_M]; then for an unknown sample y the decision process can be expressed as:

y → ω_j if P(ω_j|y_1, …, y_M) = max_k P(ω_k|y_1, …, y_M)

where P(ω_k|y_1, …, y_M) represents the posterior probability of the k-th class given the M classifier outputs, k ∈ {1, 2, …, c}. On this basis a classifier independence assumption is introduced:

p(y_1, …, y_M|ω_k) = ∏_{i=1}^{M} p(y_i|ω_k)

from which the classifier-fusion multiplication (product) rule can be obtained:

y → ω_j if P(ω_j)^{−(M−1)} ∏_{i=1}^{M} P(ω_j|y_i) = max_k P(ω_k)^{−(M−1)} ∏_{i=1}^{M} P(ω_k|y_i)

where k ∈ {1, 2, …, c}. Because the product in the above formula collapses to zero whenever some P(ω_k|y_i) = 0, the assumption that each posterior deviates only slightly from the prior is introduced on the basis of the multiplication rule:

P(ω_k|y_i) = P(ω_k)(1 + δ_ki)

where δ_ki is a very small value. From this the classifier-fusion addition (sum) rule can finally be derived:

y → ω_j if (1 − M)P(ω_j) + ∑_{i=1}^{M} P(ω_j|y_i) = max_k [(1 − M)P(ω_k) + ∑_{i=1}^{M} P(ω_k|y_i)]
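The multiplication and addition rules can be exercised numerically; this sketch assumes each classifier's per-class posterior estimates are given as rows of an array (an illustration, not the patent's implementation):

```python
import numpy as np

def product_rule(posteriors, priors):
    """Classifier-fusion product rule: pick the class maximizing
    P(w_k)^-(M-1) * prod_i P(w_k | y_i).
    posteriors: (M, c) array, row i holding classifier i's posteriors."""
    M = posteriors.shape[0]
    scores = priors ** float(-(M - 1)) * np.prod(posteriors, axis=0)
    return int(np.argmax(scores))

def sum_rule(posteriors, priors):
    """Classifier-fusion sum rule: pick the class maximizing
    (1 - M) * P(w_k) + sum_i P(w_k | y_i)."""
    M = posteriors.shape[0]
    scores = (1 - M) * priors + posteriors.sum(axis=0)
    return int(np.argmax(scores))
```

With two classifiers that both favor class 0 the rules agree; the sum rule is the more robust choice when some posterior estimates may be near zero, which is exactly the motivation for deriving it from the product rule.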
the prior art is referred to in the art for techniques not mentioned in the present invention.
The integrated identity authentication method based on human-object feature fusion adopts a human-object feature fusion technique that fuses the identity features of the user with those of the terminal, binding the two identities together. This reduces the number of authentications the user must perform and prevents the same terminal from being used by multiple users, improving usability while also meeting the "dedicated person, dedicated machine" supervision requirement in application scenarios with higher security levels.
Drawings
FIG. 1 is a schematic diagram of multi-source feature fusion;
FIG. 2 is a schematic view of a feature fusion hierarchy;
FIG. 3 is a flow chart of fingerprint feature acquisition;
fig. 4 is a flowchart of the OID application.
Detailed Description
In order to better understand the present invention, the following examples are further provided to illustrate the present invention, but the present invention is not limited to the following examples.
An integrated identity authentication method based on human and object feature fusion comprises the following steps:
1) human feature acquisition: collecting fingerprint features;
The fingerprint is unique (differing between people and between fingers) and stable (essentially unchanged over a lifetime), making it an effective means of personal identification. Verification uses two types of fingerprint features: overall features and local features. When local features are considered, a match of 13 feature points can confirm that two fingerprints are the same.
Overall features are those directly observable by the human eye. Local features are the features of nodes on the fingerprint ridges; nodes with certain characteristics are called feature points. Two fingerprints will often share the same overall features, but the feature points of their local features cannot be completely identical; that is, these feature points provide the confirmation of fingerprint uniqueness. The fingerprint feature collection process is shown in fig. 3 and specifically includes:
(1) fingerprint image capture and image enhancement;
(2) extracting a fingerprint characteristic value;
(3) matching fingerprint images;
fingerprint image matching is to judge whether the feature sets (templates) of two input fingerprints belong to the same fingerprint. Fingerprint matching has two modes: a. one-to-one fingerprint verification: searching out a user fingerprint to be compared from a fingerprint library according to the user ID, and comparing the user fingerprint with a newly scanned fingerprint; b. one-to-many fingerprint verification: the newly scanned fingerprints are compared with the fingerprints in the fingerprint database one by one.
2) Device fingerprint feature acquisition: hybrid extraction is used, comprising an active acquisition part and a server-side algorithm generation part; when the PDA device connects to the network, it actively acquires the collection elements, interacts with the intranet firewall, and generates a unique device fingerprint ID after algorithmic obfuscation and encryption;
3) "Human and object" feature fusion based on the Bayesian feature fusion algorithm: the fingerprint collected in step 1) and the device fingerprint feature acquired in step 2) are fused; the fused data are assigned a globally unique OID identifier to obtain a unique identifier; this identifier finally realizes the uniqueness of the human-object feature fusion, i.e., "human and object" integration, and the OID application is performed after the biometric and device features are fused.
The collected "human and object" feature information is fused using the feature fusion algorithm based on Bayesian theory: the known pattern space Ω contains c classes, written Ω = {ω_1, …, ω_c}, and an unknown sample x consists of N-dimensional real-valued features, written x = [x_1, x_2, …, x_N]. According to the minimum-error-rate Bayesian decision theory, if a sample is assigned to the j-th class, that class has the maximum posterior probability given the known sample x, and the decision process can be expressed as:

x → ω_j if P(ω_j|x) = max_k P(ω_k|x)

where P(ω_k|x) represents the posterior probability of the k-th class, k ∈ {1, 2, …, c};

taking x as the output of a classifier yields the Bayesian classifier fusion algorithm: assuming there are M classifiers, each outputs a result y_i, so the feature at this point is y = [y_1, …, y_M]; then for an unknown sample y, the decision process can be expressed as:

y → ω_j if P(ω_j|y_1, …, y_M) = max_k P(ω_k|y_1, …, y_M)

where P(ω_k|y_1, …, y_M) is the posterior probability of the k-th class given the M classifier outputs, k ∈ {1, 2, …, c}; on this basis the classifier independence assumption is introduced:

p(y_1, …, y_M|ω_k) = ∏_{i=1}^{M} p(y_i|ω_k)

giving the classifier-fusion multiplication (product) rule:

y → ω_j if P(ω_j)^{−(M−1)} ∏_{i=1}^{M} P(ω_j|y_i) = max_k P(ω_k)^{−(M−1)} ∏_{i=1}^{M} P(ω_k|y_i)

where k ∈ {1, 2, …, c}. Because the product collapses to zero whenever some P(ω_k|y_i) = 0, the assumption that each posterior deviates only slightly from the prior is introduced on the basis of the multiplication rule:

P(ω_k|y_i) = P(ω_k)(1 + δ_ki)

where δ_ki is a very small value. Finally, the classifier-fusion addition (sum) rule can be derived:

y → ω_j if (1 − M)P(ω_j) + ∑_{i=1}^{M} P(ω_j|y_i) = max_k [(1 − M)P(ω_k) + ∑_{i=1}^{M} P(ω_k|y_i)]

Claims (7)

1. An integrated identity authentication method based on human and object feature fusion, characterized by comprising the following steps:
1) human feature acquisition: collecting fingerprint features;
2) device fingerprint feature acquisition;
3) "human and object" feature fusion using a feature fusion algorithm based on Bayesian theory.
2. The integrated identity authentication method based on human and object feature fusion as claimed in claim 1, characterized in that: in step 1), the fingerprint feature acquisition method comprises the following steps:
(1) fingerprint image capture and image enhancement;
(2) extracting a fingerprint characteristic value;
(3) and matching fingerprint images.
3. The integrated identity authentication method based on human and object feature fusion as claimed in claim 2, characterized in that: in step (3), fingerprint matching has two modes: a. one-to-one fingerprint verification: the user fingerprint to be compared is looked up in the fingerprint library by user ID and compared with the newly scanned fingerprint; b. one-to-many fingerprint verification: the newly scanned fingerprint is compared one by one with the fingerprints in the fingerprint library.
4. The integrated identity authentication method based on the human-object feature fusion of any one of claims 1 to 3, characterized in that: in the step 2), the equipment fingerprint feature acquisition uses hybrid extraction.
5. The integrated identity authentication method based on the fusion of human and object characteristics as claimed in claim 4, characterized in that: in step 2), hybrid extraction means that an active acquisition part and a server-side algorithm generation part are provided; when the PDA device has a network connection, elements are actively acquired and exchanged with the intranet firewall, and a unique device fingerprint ID is generated after algorithmic obfuscation and encryption.
6. The integrated identity authentication method based on the human-object feature fusion as claimed in any one of claims 1 to 3, characterized in that: in step 3), information fusion is performed on the human fingerprint features collected in step 1) and the device fingerprint features collected in step 2); a globally unique OID identifier is reassigned to the fused data, so that a unique identifier is obtained; finally, this identifier is used to guarantee the uniqueness of the human-object feature fusion.
7. The integrated identity authentication method based on the human-object feature fusion as claimed in claim 6, characterized in that: the collected "human and object" feature information is fused by using a feature fusion algorithm based on Bayesian theory: the known mode space Ω contains c mode classes, denoted as Ω = {ω1, …, ωc}, where ω is a mode class; the unknown sample x consists of N-dimensional real-valued features, denoted as x = [x1, x2, …, xN]; according to the Bayesian decision theory of minimum error rate, a sample is assigned to the j-th class if that class has the maximum posterior probability given the sample x, and the decision process can be expressed as:
x → ωj
if P(ωj|x) = max_k P(ωk|x)
wherein: P(ωk|x) represents the posterior probability of the k-th class, P is a probability, k is the class index, and k ∈ {1,2,…,c};
taking the classifier outputs as the input of the fusion, a classifier fusion algorithm based on Bayesian theory can be obtained; assuming that there are M classifiers and each classifier outputs a result yi, the feature at this time is y = [y1, …, yM]; then for an unknown sample y, the decision process can be expressed as:
y → ωj
if P(ωj|y1,…,yM) = max_k P(ωk|y1,…,yM)
wherein: P(ωk|y1,…,yM) represents the posterior probability of the k-th class given the M classifier outputs, P is a probability, yi is the result output by the i-th classifier, and k ∈ {1,2,…,c}; introducing the classifier independence assumption on the basis of the above formula:
P(y1,…,yM|ωk) = ∏_{i=1}^{M} P(yi|ωk)
the multiplication rule of classifier fusion can be obtained:
y → ωj
if P^{-(M-1)}(ωj) ∏_{i=1}^{M} P(ωj|yi) = max_k P^{-(M-1)}(ωk) ∏_{i=1}^{M} P(ωk|yi)
wherein: k ∈ {1,2,…,c}; in the above formula, when some P(ωk|yi) equals 0, the assumption that the prior probability and the posterior probability are approximately equal is introduced on the basis of the multiplication rule:
P(ωk|yi) = P(ωk)(1 + δki)
wherein: δki is a very small value; finally, the addition rule of classifier fusion can be deduced:
y → ωj
if (1 − M)P(ωj) + ∑_{i=1}^{M} P(ωj|yi) = max_k [(1 − M)P(ωk) + ∑_{i=1}^{M} P(ωk|yi)].
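The device-fingerprint generation of claim 5 and the identifier fusion of claim 6 can be illustrated with a minimal sketch. This is a hypothetical example, not the patent's actual scheme: the HMAC stands in for the unspecified "algorithm obfuscation and encryption", and the dotted OID-style formatting (including the `1.3.6.1.4.1.` prefix) is an assumed illustration.

```python
import hashlib
import hmac

def device_fingerprint_id(elements: dict, key: bytes) -> str:
    """Derive a unique device fingerprint ID from actively collected
    device elements (claim 5). An HMAC over a canonical serialization
    stands in for the patent's obfuscation + encryption step."""
    # Sort keys so the ID does not depend on collection order
    canonical = "|".join(f"{k}={elements[k]}" for k in sorted(elements))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def fused_identifier(user_fp_template_hash: str, device_fp_id: str) -> str:
    """Fuse the human fingerprint feature and the device fingerprint into
    one globally unique, OID-style identifier (claim 6; format assumed)."""
    digest = hashlib.sha256((user_fp_template_hash + device_fp_id).encode()).hexdigest()
    # Render the first 16 hex chars as four dotted decimal arcs (illustrative only)
    return "1.3.6.1.4.1." + ".".join(str(int(digest[i:i + 4], 16)) for i in range(0, 16, 4))
```

Because the serialization is canonical, the same device elements always yield the same device fingerprint ID, and any change in an element (or in the user's fingerprint template) changes the fused identifier — the determinism and uniqueness properties the claims rely on.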
CN202110597332.2A 2021-05-31 2021-05-31 Integrated identity authentication method based on human and object feature fusion Pending CN113313029A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110597332.2A CN113313029A (en) 2021-05-31 2021-05-31 Integrated identity authentication method based on human and object feature fusion


Publications (1)

Publication Number Publication Date
CN113313029A true CN113313029A (en) 2021-08-27

Family

ID=77376136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110597332.2A Pending CN113313029A (en) 2021-05-31 2021-05-31 Integrated identity authentication method based on human and object feature fusion

Country Status (1)

Country Link
CN (1) CN113313029A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310142A (en) * 2013-05-22 2013-09-18 复旦大学 Man-machine fusion security authentication method based on wearable equipment
CN105279416A (en) * 2015-10-27 2016-01-27 上海川织金融信息服务有限公司 Identity recognition method and system based on multi-biometric feature in combination with device fingerprint
CN106488452A (en) * 2016-11-18 2017-03-08 国网江苏省电力公司南京供电公司 A kind of mobile terminal safety access authentication method of combination fingerprint
US20180165508A1 (en) * 2016-12-08 2018-06-14 Veridium Ip Limited Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
CN110443014A (en) * 2019-07-31 2019-11-12 成都商汤科技有限公司 Auth method, the electronic equipment for authentication and server, system
CN111325053A (en) * 2018-12-13 2020-06-23 北京计算机技术及应用研究所 Identity authentication method based on multiple biological characteristics and non-biological characteristics


Non-Patent Citations (3)

Title
LIU WEIBIN et al.: "Feature Fusion Methods in Pattern Classification", Journal of Beijing University of Posts and Telecommunications, vol. 40, no. 4, pages 3 *
XU XINPING et al.: "Design of a Multimodal Biometric Recognition System Based on RFID Technology", Chinese Journal of Stereology and Image Analysis, vol. 13, no. 3, pages 2 *
XINBAOMEN INTELLIGENCE: "Introduction to the Fingerprint Matching System of Fingerprint Locks", pages 1 - 4, Retrieved from the Internet <URL:https://www.sohu.com/a/289807655_100259758> *

Similar Documents

Publication Publication Date Title
CN111133433B (en) Automatic authentication for access control using face recognition
US11837017B2 (en) System and method for face recognition based on dynamic updating of facial features
CN101478401B (en) Authentication method and system based on key stroke characteristic recognition
Schmid et al. Performance analysis of iris-based identification system at the matching score level
JP2008533606A (en) How to perform face recognition
EP3779775B1 (en) Media processing method and related apparatus
CN112801054B (en) Face recognition model processing method, face recognition method and device
Krish et al. Pre‐registration of latent fingerprints based on orientation field
Mane et al. Review of multimodal biometrics: applications, challenges and research areas
WO2017173640A1 (en) Method and apparatus for recognizing individuals based on multi-mode biological recognition information
US11803662B2 (en) Systems, methods, and non-transitory computer-readable media for secure individual identification
Hassan et al. Fusion of face and fingerprint for robust personal verification system
Vijayalakshmi et al. Finger and palm print based multibiometric authentication system with GUI interface
CN113129338A (en) Image processing method, device, equipment and medium based on multi-target tracking algorithm
CN113313029A (en) Integrated identity authentication method based on human and object feature fusion
Akulwar et al. Secured multi modal biometric system: a review
Kennedy Okokpujie et al. A secured automated bimodal biometric electronic voting system
Jadhav et al. Review on Multimodal Biometric Recognition System Using Machine Learning
CN111209551B (en) Identity authentication method and device
JP5857715B2 (en) Identification device and identification method
WO2005048172A1 (en) 2d face anthentication system
Pathak et al. Performance of multimodal biometric system based on level and method of fusion
US20230119918A1 (en) Deep learning based fingerprint minutiae extraction
CN111316266B (en) Method for improving user authentication performed by a communication device
Kabuya et al. Metric Based Technique in Multi-factor Authentication System with Artificial Intelligence Technologies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination