CN114519898A - Biometric multi-modal fusion recognition method and device, storage medium and equipment - Google Patents

Biometric multi-modal fusion recognition method and device, storage medium and equipment

Info

Publication number
CN114519898A
Authority
CN
China
Prior art keywords
modality
score
comparison
modal
comparison score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011202718.0A
Other languages
Chinese (zh)
Inventor
郭秀花
杨春林
丁松
江武明
王洋
孙飞
周军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eyes Intelligent Technology Co ltd
Beijing Eyecool Technology Co Ltd
Original Assignee
Beijing Eyes Intelligent Technology Co ltd
Beijing Eyecool Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eyes Intelligent Technology Co ltd, Beijing Eyecool Technology Co Ltd
Priority to CN202011202718.0A
Publication of CN114519898A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a biometric multi-modal fusion recognition method, device, storage medium and equipment. During multi-modal recognition, the comparison threshold of the second modality is adjusted according to the quality score of the second-modality image. When the comparison score of the second modality is not less than the adjusted comparison threshold and the quality score of the second-modality biometric image is not less than the decision threshold of the second modality, recognition is judged to pass. Otherwise, the comparison scores of the two modalities are fused, and recognition passes if the fusion score exceeds a fusion score threshold. For score fusion, the comparison scores of the two modalities are normalized and formed into a score pair, the score pair is mapped to high-dimensional data by a polynomial kernel, and the high-dimensional data is input into a trained logistic regression model to obtain the fused comparison score. The invention improves the security and reliability of biometric identity recognition.

Description

Biometric multi-modal fusion recognition method and device, storage medium and equipment
Technical Field
The invention relates to the field of biometric recognition, and in particular to a method, device, storage medium and equipment for multi-modal fusion recognition of biometric features.
Background
Biometric recognition technology combines computers with optics, acoustics, biosensors, biostatistics and other high-tech means, and uses inherent physiological characteristics of the human body, such as fingerprints, faces, irises and finger veins, to identify an individual's identity.
As requirements on public security and on the accuracy and reliability of identity authentication keep rising, the limitations of single-biometric recognition in accuracy and reliability become increasingly prominent and fall far short of what product and technology development demand. Multi-modal biometric recognition is currently considered one of the most promising research directions: it can exploit the independence between different modalities to further improve the recognition rate, so that the accuracy of the fused result can exceed that of any single modality.
One prior-art method of multi-modal biometric fusion works as follows. Let P1, P2 and P3 be several biometric images and R1, R2 and R3 the biometric templates corresponding to P1, P2 and P3 respectively. During recognition, P1 is compared with R1 to obtain a comparison score S1. If S1 is greater than a threshold T1, the comparison threshold corresponding to P2 is adjusted downward to obtain an adjusted comparison threshold T2; otherwise, if S1 is smaller than T1, the original comparison threshold T2 is adjusted upward.
After the comparison threshold of P2 has been adjusted to T2 according to the comparison score of P1, P2 is compared with R2 to obtain a comparison score S2, and the original comparison threshold T3 of P3 is adjusted according to S2: if S2 is greater than T2, T3 is decreased; otherwise, if S2 is smaller than T2, T3 is increased. P3 is then compared with R3 to obtain a comparison score S3. If S3 is greater than the adjusted threshold T3, the comparison is considered to pass and identity recognition succeeds; otherwise the comparison fails and identity recognition fails.
In other words, the prior art adjusts the comparison threshold of a later biometric feature according to the comparison result of an earlier one. Although this makes the later comparison easier and raises the pass rate, determining a threshold from an earlier comparison result is inherently uncertain, and the magnitude by which the threshold should be raised or lowered is hard to control, which degrades the security and reliability of biometric recognition. In addition, the score-fusion step in existing multi-modal biometric recognition methods is usually simplistic, typically a fixed weighting, which cannot fully exploit the strengths of the different modalities or achieve good recognition accuracy.
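For contrast, the fixed-weight fusion criticized above can be sketched in Python as follows; this is an illustrative sketch of the prior-art style only, and the function name and weight values are assumptions rather than anything taken from a cited document:

def weighted_fusion(score_1: float, score_2: float,
                    w1: float = 0.5, w2: float = 0.5) -> float:
    """Prior-art style fusion: a fixed weighted sum of the two per-modality
    comparison scores. The weights are hand-tuned constants and do not adapt
    to how informative each modality actually is."""
    return w1 * score_1 + w2 * score_2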
Disclosure of Invention
To address the problems that threshold adjustment in prior-art multi-modal biometric recognition is uncertain and that comparison-score fusion fails to exploit the strengths of different modalities, the invention provides a biometric multi-modal fusion recognition method, device, storage medium and equipment.
The technical scheme provided by the invention is as follows:
in a first aspect, the present invention provides a method for multimodal fusion recognition of biometric features, the method comprising:
sequentially identifying biological characteristic images of a first modality and a second modality;
when the first modality identification is passed, adjusting a comparison threshold value of a second modality according to the quality score of the biological feature image of the second modality;
when the comparison score of the second modality is not less than both the lowest threshold of the second modality and the adjusted comparison threshold of the second modality, and the quality score of the biological feature image of the second modality is not less than the decision threshold of the second modality, judging that the identification is passed;
when the comparison score of the second modality is not less than the lowest threshold of the second modality, and the comparison score of the second modality is less than the adjusted comparison threshold of the second modality or the quality score of the biological feature image of the second modality is less than the decision threshold of the second modality, fusing the comparison score of the first modality and the comparison score of the second modality to obtain a fusion score, and judging that the identification is passed if the fusion score is not less than the fusion score threshold; wherein
the fusing the comparison score of the first modality and the comparison score of the second modality to obtain a fused score comprises:
normalizing the comparison score of the first modality and the comparison score of the second modality;
forming a score pair by the normalized comparison score of the first modality and the normalized comparison score of the second modality, and performing polynomial kernel mapping on the score pair to obtain high-dimensional data;
and inputting the high-dimensional data into the trained logistic regression model to obtain a fused comparison score.
In a second aspect, the present invention provides a biometric multimodal fusion recognition apparatus, including:
the identification module is used for sequentially identifying the biological characteristic images of the first modality and the second modality;
the comparison threshold adjusting module is used for adjusting the comparison threshold of the second modality according to the quality score of the biological feature image of the second modality when the first modality passes the identification;
the first judging module is used for judging that the identification is passed when the comparison score of the second modality is not less than both the lowest threshold of the second modality and the adjusted comparison threshold of the second modality, and the quality score of the biological feature image of the second modality is not less than the decision threshold of the second modality;
the second judging module is used for fusing the comparison score of the first modality and the comparison score of the second modality to obtain a fusion score when the comparison score of the second modality is not less than the lowest threshold of the second modality and the comparison score of the second modality is less than the adjusted comparison threshold of the second modality or the quality score of the biological feature image of the second modality is less than the decision threshold of the second modality, and for judging that the identification is passed if the fusion score is not less than the fusion score threshold; wherein
the second judging module includes:
the normalization unit is used for normalizing the comparison score of the first mode and the comparison score of the second mode;
the mapping unit is used for forming a score pair by the normalized comparison score of the first modality and the normalized comparison score of the second modality, and performing polynomial kernel mapping on the score pair to obtain high-dimensional data;
and the fusion unit is used for inputting the high-dimensional data into the trained logistic regression model to obtain a fused comparison score.
In a third aspect, the present invention provides a computer readable storage medium for biometric multimodal fusion recognition, comprising a memory for storing processor executable instructions which, when executed by the processor, implement the steps comprising the biometric multimodal fusion recognition method of the first aspect.
In a fourth aspect, the present invention provides an apparatus for multi-modal fusion recognition of biometric features, comprising at least one processor and a memory storing computer-executable instructions, the processor implementing the steps of the biometric multi-modal fusion recognition method of the first aspect when executing the instructions.
The invention has the following beneficial effects:
according to the invention, firstly, the comparison threshold is dynamically adjusted according to the quality scores of the biological characteristic images, the identification result is subjected to auxiliary decision making by using the quality scores of the biological characteristic images, and meanwhile, the probability of identity authentication is gradually increased by combining the fusion of the comparison scores, so that the safety and reliability of biological identification identity identification are improved.
And during multi-modal identification, performing the same normalization and high-dimensional mapping treatment on the obtained multi-modal comparison scores, and inputting the obtained multi-modal comparison scores into a logistic regression model of the optimal parameters to obtain the fused comparison scores. The invention can give full play to the advantages of different modes and achieve good identification precision.
Drawings
FIG. 1 is a flow chart of a multi-modal fusion recognition method of biometric features of the present invention;
FIG. 2 is a flowchart of a logistic regression model training method in the multi-modal fusion recognition method for biological features shown in FIG. 1;
FIG. 3 is a flow chart of another specific example of the biometric multimodal fusion recognition method shown in FIG. 1;
FIG. 4 is a schematic diagram of the biometric multi-modal fusion recognition apparatus of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
the embodiment of the invention provides a biological feature multi-modal fusion recognition method, as shown in fig. 1, the method comprises the following steps:
s100: and sequentially identifying the biological characteristic images of the first modality and the second modality.
In this step, a first-modality biometric image and a second-modality biometric image of the user to be identified are obtained, and the first-modality biometric image and the second-modality biometric image are respectively identified according to a time sequence, wherein the first-modality biometric image may be a human face image, and the second-modality biometric image may be an iris image.
S110: and when the first modality identification is passed, adjusting the comparison threshold value of the second modality according to the quality score of the biological feature image of the second modality.
In this step, the biometric feature of the first-modality biometric image may be extracted and compared with the first-modality biometric template stored in the database to obtain the comparison score of the first modality, and whether the first-modality recognition passes is determined from that score. In the prior art, the comparison is generally considered to pass if the comparison score of the first modality is greater than the threshold set for normal recognition. However, as is well known to those skilled in the art, biometric features are easily affected during recognition by factors such as illumination and user movement, so a genuine user may fail recognition because of the surrounding environment. The present application therefore judges whether recognition passes by checking whether the comparison score of the first modality is not less than the lowest threshold of the first modality: the first-modality recognition passes when the comparison score of the first modality is greater than or equal to that lowest threshold. The lowest threshold of the first modality is lower than the threshold set for a normal pass of this modality. For example, if recognition normally passes when the score is 70 or above, the lowest threshold in this embodiment is set below 70, for example to 60; if the score falls below the lowest threshold, recognition is directly judged to fail. This prevents a genuine user from being misjudged because of factors such as the environment, image quality or movement.
When the first-modality recognition passes, the quality score of the second-modality biometric image is calculated, and the comparison threshold used for the second-modality biometric recognition is adjusted according to that quality score. For example, the higher the quality score of the second-modality biometric image, the higher the comparison threshold is adjusted.
S120: and judging that the identification is passed when the comparison score of the second modality is not less than the lowest threshold value of the second modality, the adjusted comparison threshold value of the second modality and the quality score of the biological feature image of the second modality is not less than the decision threshold value of the second modality.
In this step, after the comparison threshold of the second modality has been adjusted according to the quality score of the second-modality biometric image, the second-modality biometric image of the user to be identified is recognized: its biometric feature is extracted and compared with the second-modality biometric template in the database, and whether recognition passes is judged from the comparison score. When the comparison score of the second modality is greater than or equal to both the lowest threshold of the second modality and the adjusted comparison threshold of the second modality, and the quality score of the second-modality biometric image is greater than or equal to the decision threshold of the second modality, recognition is judged to pass. When the comparison score of the second modality is greater than or equal to the lowest threshold of the second modality, but the comparison score is smaller than the adjusted comparison threshold or the quality score of the second-modality biometric image is smaller than the decision threshold of the second modality, the comparison scores of the first and second modalities are fused into a fusion score and recognition is judged from that fusion score.
In this step, the lowest threshold of the second modality is lower than the threshold set when the second modality normally passes recognition; this can be understood by analogy with the lowest threshold of the first modality and is not repeated here.
When the comparison score of the second modality is smaller than the lowest threshold of the second modality, recognition is judged to fail. When it is greater than or equal to the lowest threshold, it is checked whether it is also greater than or equal to the adjusted comparison threshold of the second modality; if so, it is further checked whether the quality score of the second-modality biometric image is not less than the decision threshold of the second modality, and if so, biometric recognition is judged to pass.
In this way the comparison threshold is adjusted according to the quality score of the second-modality biometric image, the magnitude of the adjustment follows the quality score, the probability of successful identity authentication increases progressively during multi-modal biometric recognition, and the security and reliability of biometric recognition are improved.
S130: and when the comparison score of the second modality is not less than the lowest threshold of the second modality, the comparison score of the second modality is less than the adjusted comparison threshold of the second modality or the quality score of the biological feature image of the second modality is less than the decision threshold of the second modality, fusing the comparison score of the first modality and the comparison score of the second modality to obtain a fusion score, and if the fusion score is not less than the fusion score threshold, judging that the identification is passed.
In this step, when the comparison score of the second modality is greater than or equal to the lowest threshold of the second modality but lower than the adjusted comparison threshold, or the quality score of the second-modality biometric image is smaller than the decision threshold of the second modality, the comparison scores of the first and second modalities are fused in order to avoid false recognition. Whether recognition passes is then judged from the fusion score: the fusion score is compared with the fusion score threshold, and recognition passes if the fusion score is greater than or equal to the threshold and fails if it is smaller.
Fusing the comparison score of the first modality and the comparison score of the second modality to obtain the fusion score comprises the following steps:
s131: and normalizing the comparison score of the first mode and the comparison score of the second mode.
S132: and forming a score pair by the normalized comparison score of the first modality and the normalized comparison score of the second modality, and performing polynomial kernel mapping on the score pair to obtain high-dimensional data.
S133: and inputting the high-dimensional data into the trained logistic regression model to obtain a fused comparison score.
The method first adjusts the comparison threshold dynamically according to the quality score of the biometric image and uses that quality score to assist the recognition decision, while comparison-score fusion progressively increases the chance of successful identity authentication. When the comparison score of the second modality is lower than the adjusted comparison threshold, or the quality score of the second-modality biometric image is lower than the decision threshold, the multi-modal comparison scores undergo the same normalization and high-dimensional mapping and are input into the logistic regression model with the optimal parameters to obtain the fused comparison score, which improves the security and reliability of biometric identity recognition.
In the invention, when the comparison of the first mode passes, the comparison threshold of the second mode is adjusted in the following way:
The quality score of the biometric image of the second modality is divided into three intervals: [0, Qmin), [Qmin, Qref) and [Qref, Qmax], where Qmin, Qref and Qmax are respectively the lowest quality score, the reference quality score and the maximum quality score of the biometric image of this modality.
The lowest quality score Qmin corresponds to a lowest threshold Tmin, and the maximum quality score Qmax corresponds to a highest threshold Tmax.
If the quality score of the biometric image of the second modality is within [0, Qmin), the quality of the biometric image of the second modality is deemed not to meet the requirement and the image may be discarded.
If the quality score of the biometric image of the second modality is within [ Qmin, Qref), the quality of the biometric image of the second modality is considered to be in an uncertain state, and the comparison threshold of the second modality is not changed, that is, the comparison threshold of the second modality is not adjusted.
If the quality score Q of the biometric image of the second modality is within [ Qref, Qmax ], the comparison threshold T of the second modality increases as the quality score Q of the biometric image of the second modality increases.
When the quality score is within [Qref, Qmax], the quality of the biometric image of the second modality is considered to meet the requirement, and as the quality score of the biometric image of the second modality increases, the comparison threshold used when recognizing the biometric image of the second modality also increases;
The calculation formula is:

T = Tmin + (Tmax - Tmin) * f((Q - Qref) / (Qmax - Qref)), where Q ∈ [Qmin, Qmax].
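A minimal Python sketch of this quality-driven threshold adjustment is given below. The mapping f is not defined in the text reproduced above, so the sketch assumes f is simply the identity clipped to [0, 1]; the function and parameter names are illustrative:

from typing import Optional

def adjust_threshold(q: float, q_min: float, q_ref: float, q_max: float,
                     t_min: float, t_max: float,
                     t_default: float) -> Optional[float]:
    """Adjust the second modality's comparison threshold from its image
    quality score Q via T = Tmin + (Tmax - Tmin) * f((Q - Qref) / (Qmax - Qref))."""
    if q < q_min:
        return None                      # quality too low: the image may be discarded
    if q < q_ref:
        return t_default                 # uncertain quality: threshold left unchanged
    ratio = (q - q_ref) / (q_max - q_ref)
    f_val = min(max(ratio, 0.0), 1.0)    # assumed form of f: identity clipped to [0, 1]
    return t_min + (t_max - t_min) * f_val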
In the invention, when the comparison score of the first mode and the comparison score of the second mode are fused, firstly, the comparison score of the first mode and the comparison score of the second mode are normalized; forming a score pair by the normalized first modal comparison score and the normalized second modal comparison score, and performing polynomial kernel mapping on the score pair to obtain high-dimensional data; and inputting the high-dimensional data into the trained logistic regression model to obtain a fused comparison score.
As shown in fig. 2, fig. 2 is a flowchart of a logistic regression model training method in the multi-modal biometric fusion recognition method shown in fig. 1, where the training method includes:
s131': and acquiring a training set comprising a plurality of first mode comparison score samples and second mode comparison score samples, and normalizing the first mode comparison score samples and the second mode comparison score samples.
When training the logistic regression model, the samples including the first modal comparison score and the second modal comparison score are normalized.
S132': and constructing a sample pair consisting of the normalized first mode comparison fraction sample and the normalized second mode comparison fraction sample, and performing polynomial kernel mapping on the sample pair to obtain a high-dimensional data sample.
In this step, a polynomial kernel map (method) may map the low dimensional data to a high dimensional space.
And S133': and modeling the high-dimensional data sample by using a logistic regression algorithm, and training by using a loss function to obtain the optimal parameters of the logistic regression model so as to finish the training of the logistic regression model.
A logistic regression model is built and trained on the training set to obtain its optimal parameters, completing training. The first-modality and second-modality comparison scores of the user to be identified can then be input into the trained logistic regression model with the optimal parameters to obtain the fused comparison score.
In short, the multi-modal comparison scores are first normalized, then mapped to a high-dimensional space by polynomial kernel mapping, and finally the mapped scores are modeled with a logistic regression algorithm whose optimal parameters are computed through a loss function. During multi-modal recognition, the obtained multi-modal comparison scores undergo the same normalization and high-dimensional mapping and are input into the logistic regression model with the optimal parameters to obtain the fused comparison score.
The fused comparison score represents the probability that the samples come from the same person; for example, it may lie in the range [0, 1], with values closer to 1 indicating a greater probability of the same person and values closer to 0 a smaller probability.
As an improvement of the embodiment of the present invention, step S131' of normalizing the first-modality comparison score samples and the second-modality comparison score samples may include the following steps:
S1311': obtain first-modality comparison score samples F_i and second-modality comparison score samples G_i of N persons, and divide them equally into same-person sample pairs (F_i^s, G_i^s) and different-person sample pairs (F_i^d, G_i^d).
S1312': calculating the Mean _ F and standard deviation Std _ F of the first modal alignment score sample in different human sets and the Mean _ G and standard deviation Std _ G of the second modal alignment score sample in different human sets.
S1313': the score sample F is aligned for the first modality using the following formulaiAnd second mode alignment score sample GiAnd (3) carrying out normalization:
fi=(Fi-Mean_F)/std_F
gi=(Gi-Mean_G)/std_G。
where F_i and G_i are respectively the first-modality and second-modality comparison score samples of the i-th person, f_i and g_i are respectively the normalized first-modality and second-modality comparison score samples of the i-th person, i ∈ [1, N]; (F_i^s, G_i^s) are the first-modality and second-modality comparison score samples of the i-th same-person pair in the training set, and (F_i^d, G_i^d) are the first-modality and second-modality comparison score samples of the i-th different-person pair in the training set.
Correspondingly, in step S131, the comparison score of the first modality and the comparison score of the second modality are normalized as follows:
acquiring a first modal comparison score F and a second modal comparison score G, and normalizing the first modal comparison score F and the second modal comparison score G by using the following formula:
f = (F - Mean_F) / Std_F
g = (G - Mean_G) / Std_G
wherein f and g are respectively the normalized first modal comparison score and the normalized second modal comparison score.
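The normalization above can be sketched in Python as follows; the use of the standard statistics module, the population standard deviation, and the function names are assumptions of this sketch, not requirements of the method:

import statistics

def fit_normalization(diff_scores_f, diff_scores_g):
    """Estimate Mean_F/Std_F and Mean_G/Std_G from the different-person
    (impostor) comparison scores of the training set."""
    stats_f = (statistics.mean(diff_scores_f), statistics.pstdev(diff_scores_f))
    stats_g = (statistics.mean(diff_scores_g), statistics.pstdev(diff_scores_g))
    return stats_f, stats_g

def normalize_pair(score_f, score_g, stats_f, stats_g):
    """Apply f = (F - Mean_F) / Std_F and g = (G - Mean_G) / Std_G."""
    f = (score_f - stats_f[0]) / stats_f[1]
    g = (score_g - stats_g[0]) / stats_g[1]
    return f, g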
Further, the step S132' constructs a sample pair composed of the normalized first mode comparison score sample and the normalized second mode comparison score sample, and performs polynomial kernel mapping on the sample pair to obtain the high dimensional data sample, including:
S1321': construct sample pairs (f_i, g_i) composed of the normalized first-modality comparison score sample f_i and the normalized second-modality comparison score sample g_i.
S1322': performing polynomial kernel mapping on the sample pair through the following formula, and mapping the two-dimensional face iris score into a multidimensional data sample to obtain a high-dimensional data sample, in this embodiment, the example is illustrated with the polynomial kernel as 2:
(1,fi,gi,fi 2,figi,gi 2)=poly_kernel((fi,gi))
wherein, poly _ kernel () is polynomial core mapping, (1, f)i,gi,fi 2,figi,gi 2) For high-dimensional data samples, when the polynomial kernel selects 2, the two-dimensional multi-modal comparison score is transformed into 6-dimensional data by a polynomial kernel method.
Correspondingly, step S132 combines the normalized comparison score of the first modality and the normalized comparison score of the second modality into a score pair, and performs polynomial kernel mapping on the score pair to obtain high-dimensional data, including:
s1321: and combining the normalized first modal alignment score f and the normalized second modal alignment score g into a score pair (f, g).
S1322: and performing polynomial kernel mapping on the fraction pairs by the following formula to obtain high-dimensional data:
(1,f,g,f2,fg,g2)=poly_kernel((f,g))
wherein, (1, f, g, f)2,fg,g2) Is high dimensional data.
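The degree-2 polynomial kernel mapping used at both training and recognition time reduces to a single expression; the sketch below mirrors the poly_kernel() notation above:

def poly_kernel(f: float, g: float):
    """Map a normalized score pair (f, g) to the 6-dimensional vector
    (1, f, g, f^2, f*g, g^2)."""
    return (1.0, f, g, f * f, f * g, g * g)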
In the foregoing S133', modeling the high-dimensional data sample by using a logistic regression algorithm, and training by using a loss function to obtain an optimal parameter of the logistic regression model, where the training of the logistic regression model includes:
S1331': model the high-dimensional data samples with the logistic regression algorithm as follows:

X_i = θ_1 + θ_2*f_i + θ_3*g_i + θ_4*f_i^2 + θ_5*f_i*g_i + θ_6*g_i^2

where, if the i-th sample pair is from the same person, its label is y^(i) = 1; if the sample pair is from different persons, its label is y^(i) = 0.
S1332': and (4) training by using a loss function through an SGD (generalized Gaussian distribution) algorithm to obtain an optimal parameter theta of the logistic regression model.
The logistic regression model is:

h_θ(x_i) = 1 / (1 + e^(-θ^T * x_i))

where x_i = (1, f_i, g_i, f_i^2, f_i*g_i, g_i^2)^T is the high-dimensional data sample, θ^T = [θ_1, θ_2, θ_3, θ_4, θ_5, θ_6], and h_θ(x_i) is the predicted probability that the i-th sample pair comes from the same person.
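A short sketch of how the trained model turns a kernel-mapped vector into the fused score; the function name is assumed for illustration:

import math

def fused_score(x, theta):
    """h_theta(x) = 1 / (1 + exp(-theta^T x)): the predicted probability that
    the kernel-mapped score pair x comes from the same person."""
    z = sum(t * xi for t, xi in zip(theta, x))
    return 1.0 / (1.0 + math.exp(-z))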
After training is finished, in S133 the fused comparison score is obtained through the logistic regression model. The trained logistic regression model is:

h_θ(x) = 1 / (1 + e^(-θ^T * x))

where x = (1, f, g, f^2, f*g, g^2)^T is the high-dimensional data obtained in S132, and h_θ(x) represents the probability that the score pair comes from the same person, i.e. the fused comparison score.
The form of the loss function is not limited in the present invention. For example, the loss function may be a mean square error function or a cross-entropy function:

J(θ) = (1/(2m)) * Σ_{i=1..m} ( h_θ(x^(i)) - y^(i) )^2      (mean square error)

J(θ) = -(1/m) * Σ_{i=1..m} [ y^(i)*log(h_θ(x^(i))) + (1 - y^(i))*log(1 - h_θ(x^(i))) ]      (cross entropy)

where y^(i) is the label indicating whether the i-th sample pair is the same person (y^(i) = 1 for a same-person pair and y^(i) = 0 for a different-person pair), and m is the number of samples in the training set. Training with the mean square error or cross-entropy function through the SGD (stochastic gradient descent) algorithm yields the optimal model parameters θ_1, θ_2, θ_3, θ_4, θ_5, θ_6.
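A minimal SGD training loop for the six parameters under the cross-entropy loss is sketched below; the learning rate, epoch count and shuffling scheme are assumptions of this sketch, not values given by the patent:

import math
import random

def train_logreg(samples, labels, lr=0.1, epochs=100, seed=0):
    """Fit theta = [theta_1, ..., theta_6] by stochastic gradient descent on the
    cross-entropy loss. samples are 6-dimensional kernel-mapped score pairs;
    labels are 1 for same-person pairs and 0 for different-person pairs."""
    rng = random.Random(seed)
    theta = [0.0] * 6
    for _ in range(epochs):
        order = list(range(len(samples)))
        rng.shuffle(order)
        for i in order:
            x, y = samples[i], labels[i]
            z = sum(t * xi for t, xi in zip(theta, x))
            h = 1.0 / (1.0 + math.exp(-z))
            # per-sample gradient of the cross-entropy loss is (h - y) * x
            theta = [t - lr * (h - y) * xi for t, xi in zip(theta, x)]
    return theta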
In this way, when the comparison score of the second modality is smaller than the adjusted comparison threshold, or the quality score of the second-modality biometric image is lower than the decision threshold, the comparison scores of the first and second modalities are normalized and formed into a score pair, polynomial kernel mapping is applied to obtain high-dimensional data, the high-dimensional data is input into the trained logistic regression model with the optimal parameters to obtain the fused comparison score, and whether recognition passes is judged from that score: if the fused comparison score is not less than the preset threshold, recognition is considered to pass; otherwise it is considered to fail.
The invention first adjusts the comparison threshold dynamically according to the quality score of the biometric image and uses that quality score to assist the recognition decision; combined with comparison-score fusion, this progressively increases the chance of successful identity authentication and thus improves the security and reliability of biometric identity recognition.
During multi-modal recognition, the obtained multi-modal comparison scores undergo the same normalization and high-dimensional mapping and are then input into the logistic regression model with the optimal parameters to obtain the fused comparison score. The invention can thus make full use of the strengths of the different modalities and achieve good recognition accuracy.
In a specific example, as shown in FIG. 3 (a flowchart of another specific example of the biometric multi-modal fusion recognition method shown in FIG. 1), the method includes:
s10: and identifying the biological feature image P1 of the first modality and the biological feature template R1 of the first modality to obtain a comparison score S1 of the first modality, judging whether the comparison score S1 of the first modality is not smaller than the lowest threshold value T1min of the first modality, if so, executing S20, otherwise, judging that the identification is not passed.
P1 is a biometric image representing the identity of a human individual, such as a human face or an iris, which can be captured by a high-definition camera, a finger vein capture head, etc. of a dedicated image capture device. R1 is a pre-stored enrollment template for the user's biometric feature in the template library.
In this step, the acquired biometric image P1 of the first modality is compared with the pre-stored biometric registration template R1 of the user in the template library for similarity, and a comparison score S1 of the first modality is obtained.
The lowest threshold T1min of the first modality is lower than the threshold set for a normal recognition pass of P1. For example, if recognition normally passes when S1 is at least 70 points, in this embodiment T1min is set below 70 points, for example to 60 points; this prevents a genuine user from being misjudged because of factors such as the environment, image quality or movement.
When the user's first-modality comparison score S1 is less than 60 points, it is judged that the biometric image match of the current user fails and recognition does not pass; otherwise, step S20 is executed.
S20: calculating a quality score Q2 of the biometric image P2 of the second modality, and adjusting a comparison threshold T2 of the second modality according to the quality score Q2 of the biometric image of the second modality, wherein if the quality score Q2 of the biometric image of the second modality is higher, the comparison threshold T2 of the second modality is higher, and conversely, if the quality score Q2 is lower, the T2 is lower.
In this step, when S1 is greater than or equal to T1min, the quality score Q2 of P2 needs to be further analyzed by an image quality algorithm, and the comparison threshold T2 needs to be adjusted according to the quality score Q2.
S30: and identifying the biological feature image P2 of the second modality and the biological feature template R2 of the second modality to obtain a comparison score S2 of the second modality, and judging whether the comparison score S2 of the second modality is not smaller than the minimum threshold value T2min of the second modality, if so, executing S40, otherwise, judging that the identification is not passed.
In this step, after the comparison of the biometric image P1 of the first modality is completed and the comparison threshold T2 of the biometric image is adjusted according to the quality score Q2 of the biometric image P2 of the second modality, the biometric image P2 of the second modality is compared with the biometric registration template R2 of the user, which is stored in advance in the template library, to obtain the comparison score S2.
The lowest threshold T2min of the second modality is lower than the threshold set for a normal recognition pass of P2. For example, if recognition normally passes when S2 is at least 80 points, in this embodiment T2min is set below 80 points, for example to 70 points; this prevents a genuine user from being misjudged because of factors such as the environment, image quality or movement.
When the user's second-modality comparison score S2 is less than 70 points, it is judged that the biometric image match of the current user fails; otherwise, step S40 is executed.
T1min and T2min may be the same or different. For example, when P1 is a face image and P2 is an iris image, T1min may be set to 60 points, and since the iris alignment accuracy is high, T2min may be greater than T1min, for example, T2min may be set to 70 points.
S40: and judging whether the comparison score S2 of the second modality is not less than the adjusted comparison threshold T2 of the second modality and the quality score Q2 of the biological feature image of the second modality is not less than the decision threshold Q of the second modality, if so, judging that the identification is passed, otherwise, executing S50.
In this step, when the comparison score S2 of the second modality is greater than or equal to its minimum threshold T2min, it is further checked whether S2 is greater than or equal to the adjusted comparison threshold T2, in order to prevent false recognition. If so, it is then checked whether the quality score Q2 of P2 is greater than or equal to the decision threshold Q of the second modality, a preset quality score threshold; if Q2 is greater than or equal to the decision threshold, recognition is judged to pass.
The purpose of checking whether the quality score Q2 of P2 is greater than or equal to Q is to judge the reliability of the comparison threshold T2 adjusted by Q2: if it is, the adjusted T2 is reliable and recognition is considered to pass.
S50: and fusing the comparison score S1 of the first modality with the comparison score S2 of the second modality to obtain a first fusion score S12, and judging whether the first fusion score is smaller than a first fusion score threshold value T12, if so, judging that the identification is failed.
In this step, the comparison score S1 corresponding to P1 and the comparison score S2 corresponding to P2 are fused to obtain the first fusion score S12.
The method for fusing the alignment score S1 and the alignment score S2 can be understood by referring to the above embodiments, and will not be described herein.
S60: if the first fusion score S12 is not less than the first fusion score threshold T12, the recognition is determined to be passed.
Example 2:
An embodiment of the invention provides a biometric multi-modal fusion recognition device, which comprises:
and the identification module 100 is used for sequentially identifying the biological characteristic images of each modality.
And the comparison threshold adjusting module 110 is configured to adjust the comparison threshold of the second modality according to the quality score of the biometric image of the second modality when the first modality passes the identification.
The first determining module 120 is configured to determine that the identification is passed when the comparison score of the second modality is not less than the lowest threshold of the second modality and the adjusted comparison threshold of the second modality, and the quality score of the biometric image of the second modality is not less than the decision threshold of the second modality.
A second determining module 130, configured to fuse the comparison score of the first modality and the comparison score of the second modality to obtain a fusion score when the comparison score of the second modality is not less than the lowest threshold of the second modality, and the comparison score of the second modality is less than the adjusted comparison threshold of the second modality or the quality score of the biometric image of the second modality is less than the decision threshold of the second modality, and determine that the identification is passed if the fusion score is not less than the fusion score threshold, where the second determining module includes:
a normalization unit 131, configured to normalize the comparison score of the first modality and the comparison score of the second modality;
the mapping unit 132 is configured to combine the normalized comparison score of the first modality and the normalized comparison score of the second modality into a score pair, and perform polynomial kernel mapping on the score pair to obtain high-dimensional data;
and the fusion unit 133 is configured to input the high-dimensional data into the trained logistic regression model to obtain a fused comparison score.
The invention first adjusts the comparison threshold dynamically according to the quality score of the biometric image and uses that quality score to assist the recognition decision; combined with comparison-score fusion, this progressively increases the chance of successful identity authentication and thus improves the security and reliability of biometric identity recognition.
During multi-modal recognition, the obtained multi-modal comparison scores undergo the same normalization and high-dimensional mapping and are then input into the logistic regression model with the optimal parameters to obtain the fused comparison score. The invention can thus make full use of the strengths of the different modalities and achieve good recognition accuracy.
For both the first modality and the second modality, the lowest threshold is lower than the threshold used when the corresponding modality normally passes recognition.
For the second modality, the higher the quality score of the biometric image of that modality, the higher the comparison threshold adjustment of that modality.
Adjusting the comparison threshold of the second modality according to the quality score of the biological feature image of the second modality by:
The quality score of the biometric image of the second modality is divided into three intervals: [0, Qmin), [Qmin, Qref) and [Qref, Qmax], where Qmin, Qref and Qmax are respectively the lowest quality score, the reference quality score and the maximum quality score of the biometric image of the second modality.
And if the quality score of the biological characteristic image of the second modality is within [0, Qmin), discarding the biological characteristic image of the second modality.
If the quality score of the biometric image of the second modality is within [ Qmin, Qref ], the comparison threshold of the second modality is unchanged.
If the quality score of the biometric image of the second modality is within [ Qref, Qmax ], the comparison threshold of the second modality increases as the quality score of the biometric image of the second modality increases.
The device provided by the embodiment of the present invention has the same implementation principle and technical effect as the method embodiment 1, and for the sake of brief description, reference may be made to the corresponding content in the method embodiment 1 for the part where the embodiment of the device is not mentioned. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the unit described above may all refer to the corresponding processes in the above method embodiment 1, and are not described herein again.
Example 3:
the method provided by this specification and described in the foregoing embodiment 1 may implement the service logic through a computer program and record the service logic on a storage medium, where the storage medium may be read and executed by a computer, so as to implement the effect of the solution described in embodiment 1 of this specification. Accordingly, the present invention also provides a computer readable storage medium for biometric multimodal fusion recognition, comprising a memory for storing processor executable instructions which, when executed by the processor, implement the steps comprising the biometric multimodal fusion recognition method of embodiment 1.
The invention first adjusts the comparison threshold dynamically according to the quality score of the biometric image and uses that quality score to assist the recognition decision; combined with comparison-score fusion, this progressively increases the chance of successful identity authentication and thus improves the security and reliability of biometric identity recognition.
During multi-modal recognition, the obtained multi-modal comparison scores undergo the same normalization and high-dimensional mapping and are then input into the logistic regression model with the optimal parameters to obtain the fused comparison score. The invention can thus make full use of the strengths of the different modalities and achieve good recognition accuracy.
The storage medium may include a physical device for storing information, and typically, the information is digitized and then stored using an electrical, magnetic, or optical media. The storage medium may include: devices that store information using electrical energy, such as various types of memory, e.g., RAM, ROM, etc.; devices that store information using magnetic energy, such as hard disks, floppy disks, tapes, core memories, bubble memories, and usb disks; devices that store information optically, such as CDs or DVDs. Of course, there are other ways of storing media that can be read, such as quantum memory, graphene memory, and so forth.
The device described above may also include other implementations in accordance with the description of method embodiment 1. The specific implementation manner may refer to the description of the related method embodiment 1, and is not described in detail here.
Example 4:
the invention also provides a device for multi-modal fusion recognition of biological features, which can be a single computer, and can also comprise an actual operation device and the like using one or more methods or one or more embodiment devices of the specification. The apparatus for multi-modal fusion recognition of biometric features may comprise at least one processor and a memory storing computer-executable instructions that, when executed by the processor, implement the steps of the biometric multi-modal fusion recognition method of any one or more of embodiments 1 above.
The invention first adjusts the comparison threshold dynamically according to the quality score of the biometric image and uses that quality score to assist the recognition decision; combined with comparison-score fusion, this progressively increases the chance of successful identity authentication and thus improves the security and reliability of biometric identity recognition.
During multi-modal recognition, the obtained multi-modal comparison scores undergo the same normalization and high-dimensional mapping and are then input into the logistic regression model with the optimal parameters to obtain the fused comparison score. The invention can thus make full use of the strengths of the different modalities and achieve good recognition accuracy.
The above description of the device according to the method or apparatus embodiment may also include other implementation manners, and a specific implementation manner may refer to the description of related method embodiment 1, which is not described in detail herein.
It should be noted that, the above-mentioned apparatus or system in this specification may also include other implementation manners according to the description of the related method embodiment, and a specific implementation manner may refer to the description of the method embodiment, which is not described herein in detail. The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class, storage medium + program embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, refer to the partial description of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various modules by function, and the modules are described separately. Of course, when implementing one or more embodiments of the present description, the functions of each module may be implemented in one or more pieces of software and/or hardware, or a module implementing the same function may be implemented by a combination of multiple sub-modules or sub-units, etc. The above-described apparatus embodiments are merely illustrative; for example, the division into units is only one type of logical functional division, and other divisions may be used in practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or in other forms.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be conceived to be both a software module implementing the method and a structure within a hardware component.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of additional identical elements in the process, method or apparatus comprising the element.
One skilled in the art will appreciate that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description of the specification, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without being mutually inconsistent.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or substitute equivalents for some of their technical features; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the present invention, and they are intended to be covered by its protection scope. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method for multimodal fusion recognition of biometric features, the method comprising:
sequentially identifying biological characteristic images of a first modality and a second modality;
when the first modality identification is passed, adjusting a comparison threshold value of a second modality according to the quality score of the biological feature image of the second modality;
when the comparison score of the second modality is not less than the lowest threshold value of the second modality and the adjusted comparison threshold value of the second modality, and the quality score of the biological feature image of the second modality is not less than the decision threshold value of the second modality, judging that the identification is passed;
when the comparison score of the second modality is not smaller than the lowest threshold of the second modality, and the comparison score of the second modality is smaller than the adjusted comparison threshold of the second modality or the quality score of the biological feature image of the second modality is smaller than the decision threshold of the second modality, fusing the comparison score of the first modality and the comparison score of the second modality to obtain a fusion score, and judging that the identification is passed if the fusion score is not smaller than the fusion score threshold; wherein,
the fusing the comparison score of the first modality and the comparison score of the second modality to obtain a fused score comprises:
normalizing the comparison score of the first modality and the comparison score of the second modality;
forming a score pair by the normalized comparison score of the first modality and the normalized comparison score of the second modality, and performing polynomial kernel mapping on the score pair to obtain high-dimensional data;
and inputting the high-dimensional data into the trained logistic regression model to obtain a fused comparison score.
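A minimal sketch of the decision flow of claim 1 is given below for illustration; the helper functions, parameter names and the boolean return convention are assumptions of this sketch, not details taken from the patent text.

```python
# Illustrative decision flow for the second modality, called only after the
# first modality has already passed recognition (per claim 1).
def second_modality_decision(score1, score2, quality2,
                             lowest_thr, decision_thr, fusion_thr,
                             adjust_threshold, fuse_scores):
    if score2 < lowest_thr:
        return False                            # second-modality score below the lowest threshold
    adjusted_thr = adjust_threshold(quality2)   # threshold adjusted by image quality
    if score2 >= adjusted_thr and quality2 >= decision_thr:
        return True                             # recognition passes directly on the second modality
    # otherwise fall back to score-level fusion of the two modalities
    return fuse_scores(score1, score2) >= fusion_thr
```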
2. The multi-modal fusion recognition method of biometric features of claim 1, wherein the logistic regression model is trained by:
constructing a training set comprising a plurality of first modal comparison score samples and second modal comparison score samples, and normalizing the first modal comparison score samples and the second modal comparison score samples;
constructing a sample pair consisting of the normalized first modal comparison score sample and the normalized second modal comparison score sample, and performing polynomial kernel mapping on the sample pair to obtain a high-dimensional data sample;
and modeling the high-dimensional data sample by using a logistic regression algorithm, and training by using a loss function to obtain the optimal parameters of the logistic regression model so as to finish the training of the logistic regression model.
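The training procedure of claim 2 might look like the following sketch; the polynomial degree, the scikit-learn estimator and its default cross-entropy (log-loss) objective are illustrative assumptions.

```python
# Sketch of training the fusion model on normalized score-sample pairs.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression

def train_fusion_model(f_samples, g_samples, labels, degree=2):
    """f_samples, g_samples: normalized first/second-modality score samples;
    labels: 1 for same-person pairs, 0 for different-person pairs."""
    pairs = np.column_stack([f_samples, g_samples])       # score sample pairs
    poly = PolynomialFeatures(degree=degree, include_bias=False)
    high_dim = poly.fit_transform(pairs)                   # polynomial kernel mapping
    clf = LogisticRegression()                             # fitted by minimizing cross-entropy
    clf.fit(high_dim, labels)                              # yields the model's optimal parameters
    return poly, clf
```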
3. The method according to claim 2, wherein constructing a training set comprising a plurality of first and second modality comparison score samples, and normalizing the first and second modality comparison score samples comprises:
obtaining a first modality comparison score sample Fi and a second modality comparison score sample Gi for each of N persons, and dividing the samples into same-person sample pairs (Fi^gen, Gi^gen) and different-person sample pairs (Fi^imp, Gi^imp);
calculating the mean Mean_F and standard deviation Std_F of the first modality comparison score samples over the different-person pairs, and the mean Mean_G and standard deviation Std_G of the second modality comparison score samples over the different-person pairs;
normalizing the first modality comparison score sample Fi and the second modality comparison score sample Gi using the following formulas:
fi = (Fi - Mean_F)/Std_F
gi = (Gi - Mean_G)/Std_G
wherein Fi and Gi are respectively the first modality comparison score sample and the second modality comparison score sample of the i-th person, fi and gi are respectively the normalized first modality comparison score sample and the normalized second modality comparison score sample of the i-th person, and i ∈ [1, N]; Fi^gen and Gi^gen are respectively the first modality and second modality comparison score samples of the i-th same-person pair in the training set, and Fi^imp and Gi^imp are respectively the first modality and second modality comparison score samples of the i-th different-person pair in the training set;
the obtaining a first modal comparison score and a second modal comparison score and normalizing the first modal comparison score and the second modal comparison score includes:
acquiring a first modal comparison score F and a second modal comparison score G, and normalizing the first modal comparison score F and the second modal comparison score G by using the following formula:
f = (F - Mean_F)/Std_F
g = (G - Mean_G)/Std_G
wherein f and g are respectively the normalized first modal comparison score and the normalized second modal comparison score.
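A short sketch of the normalization in claim 3 follows: the mean and standard deviation are estimated on the different-person (impostor) pairs only and then applied to any incoming score pair; the function and array names are assumptions of this sketch.

```python
# Sketch of the claim-3 normalization; the statistics come from the
# different-person score samples only.
import numpy as np

def impostor_statistics(F_imp, G_imp):
    """F_imp, G_imp: first/second-modality scores of the different-person pairs."""
    return np.mean(F_imp), np.std(F_imp), np.mean(G_imp), np.std(G_imp)

def normalize_scores(F, G, mean_f, std_f, mean_g, std_g):
    """Apply the same z-score normalization to a raw score pair (F, G)."""
    return (F - mean_f) / std_f, (G - mean_g) / std_g
```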
4. The biometric multimodal fusion recognition method of claim 2, wherein the loss function is a mean square error function or a cross entropy function.
5. The multi-modal fusion recognition method of biometric features of claim 1, wherein the higher the quality score of the biometric image of the second modality, the higher the comparison threshold of the second modality is adjusted to be.
6. The multi-modal fusion recognition method of biological features according to claim 5, wherein the comparison threshold of the second modality is adjusted by:
dividing the quality score of the biological characteristic image of the second modality into three sections [0, Qmin), [Qmin, Qref], [Qref, Qmax]; wherein Qmin, Qref, and Qmax are the lowest quality score, the reference quality score, and the maximum quality score, respectively, of the biometric image of the second modality;
if the quality score of the biological characteristic image of the second modality is within [0, Qmin), discarding the biological characteristic image of the second modality;
if the quality score of the biological feature image of the second modality is within [ Qmin, Qref ], the comparison threshold of the second modality is unchanged;
if the quality score of the biological characteristic image of the second modality is within [ Qref, Qmax ], the comparison threshold of the second modality is increased as the quality score of the biological characteristic image of the modality is increased; the calculation formula is as follows:
T = Tmin + (Tmax - Tmin) × (Q - Qref)/(Qmax - Qref), for Q ∈ [Qref, Qmax]
wherein Q ∈ [Qmin, Qmax] is the quality score of the biometric image of the second modality, T is the adjusted comparison threshold of the second modality, and Tmin and Tmax are respectively the minimum comparison threshold and the maximum comparison threshold of the second modality.
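The quality-driven threshold adjustment of claim 6 could be sketched as below; the return of None for a discarded image and all bound names (Qmin, Qref, Qmax, Tmin, Tmax) are assumptions supplied by the caller, not values given in the patent.

```python
# Sketch of the claim-6 comparison-threshold adjustment.
def adjust_comparison_threshold(Q, Qmin, Qref, Qmax, Tmin, Tmax):
    if Q < Qmin:
        return None                      # image quality too low: discard the image
    if Q <= Qref:
        return Tmin                      # comparison threshold left unchanged
    # threshold grows linearly with quality on [Qref, Qmax]
    return Tmin + (Tmax - Tmin) * (Q - Qref) / (Qmax - Qref)
```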
7. A biometric multimodal fusion recognition apparatus, the apparatus comprising:
the identification module is used for sequentially identifying the biological characteristic images of the first modality and the second modality;
the comparison threshold adjusting module is used for adjusting the comparison threshold of the second modality according to the quality score of the biological feature image of the second modality when the first modality passes the identification;
the first judging module is used for judging that the identification is passed when the comparison score of the second modality is not less than the lowest threshold of the second modality and the adjusted comparison threshold of the second modality, and the quality score of the biological feature image of the second modality is not less than the decision threshold of the second modality;
the second judgment module is used for fusing the comparison score of the first modality and the comparison score of the second modality to obtain a fusion score when the comparison score of the second modality is not less than the lowest threshold of the second modality, and the comparison score of the second modality is less than the adjusted comparison threshold of the second modality or the quality score of the biological feature image of the second modality is less than the decision threshold of the second modality, and for judging that the identification is passed if the fusion score is not less than the fusion score threshold; wherein,
the second judging module includes:
the normalization unit is used for normalizing the comparison score of the first mode and the comparison score of the second mode;
the mapping unit is used for forming a score pair by the normalized comparison score of the first modality and the normalized comparison score of the second modality, and performing polynomial kernel mapping on the score pair to obtain high-dimensional data;
and the fusion unit is used for inputting the high-dimensional data into the trained logistic regression model to obtain a fused comparison score.
8. The device for multimodal fusion and recognition of biological features according to claim 7, wherein the logistic regression model in the fusion unit is trained by the following method:
constructing a training set comprising a plurality of first modal comparison score samples and second modal comparison score samples, and normalizing the first modal comparison score samples and the second modal comparison score samples;
constructing a sample pair consisting of the normalized first modal comparison score sample and the normalized second modal comparison score sample, and performing polynomial kernel mapping on the sample pair to obtain a high-dimensional data sample;
and modeling the high-dimensional data sample by using a logistic regression algorithm, and training by using a loss function to obtain the optimal parameters of the logistic regression model so as to finish the training of the logistic regression model.
9. A computer-readable storage medium for multi-modal fusion recognition of biometric features, comprising a memory for storing processor-executable instructions which, when executed by a processor, implement the steps of the biometric multi-modal fusion recognition method of any one of claims 1 to 6.
10. An apparatus for multi-modal fusion recognition of biometric features, comprising at least one processor and a memory storing computer-executable instructions, the processor implementing the steps of the method of multi-modal fusion recognition of biometric features as claimed in any one of claims 1 to 6 when executing the instructions.
CN202011202718.0A 2020-11-02 2020-11-02 Biological characteristic multi-mode fusion recognition method and device, storage medium and equipment Pending CN114519898A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011202718.0A CN114519898A (en) 2020-11-02 2020-11-02 Biological characteristic multi-mode fusion recognition method and device, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011202718.0A CN114519898A (en) 2020-11-02 2020-11-02 Biological characteristic multi-mode fusion recognition method and device, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN114519898A true CN114519898A (en) 2022-05-20

Family

ID=81594591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011202718.0A Pending CN114519898A (en) 2020-11-02 2020-11-02 Biological characteristic multi-mode fusion recognition method and device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN114519898A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116383795A (en) * 2023-06-01 2023-07-04 杭州海康威视数字技术股份有限公司 Biological feature recognition method and device and electronic equipment
CN116383795B (en) * 2023-06-01 2023-08-25 杭州海康威视数字技术股份有限公司 Biological feature recognition method and device and electronic equipment
CN117370933A (en) * 2023-10-31 2024-01-09 中国人民解放军总医院 Multi-mode unified feature extraction method, device, equipment and medium
CN117370933B (en) * 2023-10-31 2024-05-07 中国人民解放军总医院 Multi-mode unified feature extraction method, device, equipment and medium

Similar Documents

Publication Publication Date Title
US11244035B2 (en) Apparatus and methods for biometric verification
KR20170016231A (en) Multi-modal fusion method for user authentification and user authentification method
EP2523149A2 (en) A method and system for association and decision fusion of multimodal inputs
JP6798798B2 (en) Method and device for updating data for user authentication
US11449590B2 (en) Device and method for user authentication on basis of iris recognition
US10922399B2 (en) Authentication verification using soft biometric traits
US20190347472A1 (en) Method and system for image identification
Jaafar et al. A review of multibiometric system with fusion strategies and weighting factor
JP2002304626A (en) Data classifying device and body recognizing device
CN114519898A (en) Biological characteristic multi-mode fusion recognition method and device, storage medium and equipment
TWI325568B (en) A method for face varification
Gawande et al. Improving iris recognition accuracy by score based fusion method
US10713342B2 (en) Techniques to determine distinctiveness of a biometric input in a biometric system
Chaitanya et al. Verification of pattern unlock and gait behavioural authentication through a machine learning approach
CN113822308B (en) Multi-mode biological recognition comparison score fusion method, device, medium and equipment
Monwar A Multimodal Biometric System based on Rank Level fusion.
CN114529732A (en) Biological characteristic multi-mode fusion recognition method and device, storage medium and equipment
Jamdar et al. Implementation of unimodal to multimodal biometrie feature level fusion of combining face iris and ear in multi-modal biometric system
Turky et al. The use of SOM for fingerprint classification
CN114359952A (en) Multi-modal score fusion method, device, computer-readable storage medium and equipment
De Tré et al. Human centric recognition of 3D ear models
TW202217611A (en) Authentication method
CN114332905A (en) Biological characteristic multi-mode fusion recognition method and device, storage medium and equipment
Nguyen et al. User re-identification using clothing information for smartphones
CN113361554B (en) Multi-mode fusion method, device, storage medium and equipment for biological feature recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination