CN113469002A - Identity recognition method based on block chain mutual authentication, biological multi-feature recognition and multi-source data fusion - Google Patents

Identity recognition method based on block chain mutual authentication, biological multi-feature recognition and multi-source data fusion

Info

Publication number
CN113469002A
Authority
CN
China
Prior art keywords
feature
face
dis
characteristic
defining
Prior art date
Legal status: Pending
Application number
CN202110704146.4A
Other languages
Chinese (zh)
Inventor
朱全银
马天龙
高尚兵
徐莹莹
马思伟
朱燕妮
王媛媛
周泓
冯远航
章磊
魏丹丹
Current Assignee
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date: 2021-06-24
Filing date: 2021-06-24
Publication date: 2021-10-01
2021-06-24: Application filed by Huaiyin Institute of Technology
2021-06-24: Priority to CN202110704146.4A
2021-10-01: Publication of CN113469002A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/02Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/06Decision making techniques; Pattern matching strategies
    • G10L17/08Use of distortion metrics or a particular distance between probe pattern and reference templates

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Game Theory and Decision Science (AREA)
  • Business, Economics & Management (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses an identity recognition method based on blockchain mutual authentication, biological multi-feature recognition and multi-source data fusion, suitable for common identity recognition and for the check-in problem under blockchain mutual authentication. The ANP-based data fusion method extracts features with a convolutional neural network, classifies them with traditional machine learning algorithms, verifies them through blockchain mutual authentication and fuses the data: it receives the photos and voice information to be recognized that a user sends, calls a target detection algorithm to recognize the face information in the photos, calls a voiceprint recognition algorithm to recognize the voice, verifies the recognition result a second time through mutual authentication of network pictures, and fuses and stores the recognition results in a check-in system. The invention can effectively recognize biological features, accurately perform secondary verification through mutual authentication, fuse the verification data and accurately record check-ins.

Description

Identity recognition method based on block chain mutual authentication, biological multi-feature recognition and multi-source data fusion
Technical Field
The invention relates to a data fusion and feature recognition method, in particular to an identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion.
Background
In recent years, biometric identification technology has developed rapidly, and techniques that identify individuals through image processing and recognition have attracted increasing attention. The demand for identity authentication based on biometric technology keeps growing, yet traditional biometrics can be counterfeited, for example by forging fingerprints or faces, so single-modality biometric systems are limited in matching precision, difficulty and universality. A more robust method therefore has important practical significance in places such as schools and enterprises: it can provide schools with an effective set of class check-in results and reduce the safety hazards that arise when staff check in on behalf of absent colleagues.
In biometric authentication, most current research treats faces, fingerprints, irises, voice and similar problems in isolation; research on fusing multiple biological features is scarce, and information fusion remains single-source.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides an identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion, to solve the check-in problem under multiple biological features.
The technical scheme is as follows: to solve the above technical problem, the invention provides an identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion, characterized by comprising the following steps:
(1) setting the acquired initial image data set of the wireless network list as W and the text data set converted from the network pictures as WT; clustering and marking the network sample points generated in WT with the CURE algorithm, and computing the vote count from the outliers and cluster points, recorded as SC1;
(2) setting the acquired initial face image data set as Fa, performing feature value recognition between the submitted picture and Fa, and obtaining a similarity score recorded as SC2;
(3) inputting a speech signal data set S3; pre-emphasizing and framing the speech signals in S3 to obtain the MFCC feature parameters, and obtaining the voiceprint similarity score through a GMM Gaussian mixture model, recorded as SC3;
(4) taking the computed vote count SC1, similarity score SC2 and voiceprint similarity score SC3 as input, building a comparison matrix for pairwise comparison of the feature scores, and fusing SC1, SC2 and SC3 with the AHP method to obtain the AHP weight, recorded as N;
(5) fusing the check-in data according to the weight, building a data table, encrypting and storing the fusion result as the final check-in result, outputting it through the web page, and generating different check-in tables from the fused and per-feature check-in results for users to download.
Further, the step (1) specifically includes the following steps:
(1.1) inputting the initial wireless-network-list image data set W; defining the set X as the pictures uploaded by the objects to be identified and the function len(X) as the length of a set X; let W = {W_1, W_2, …, W_M}, where W_M is the M-th image in W and M ∈ [1, len(W)];
(1.2) defining a loop variable i1 for traversing W, i1 ∈ [1, len(W)], with initial value 1;
(1.3) if i1 ≤ len(W), entering step (1.4); otherwise entering step (1.10);
(1.4) denoising W_i1 to obtain Deno_W_i1;
(1.5) performing image enhancement on the denoised image Deno_W_i1 to obtain the enhanced image Enhance_W_i1;
(1.6) scaling the enhanced image Enhance_W_i1 to obtain the scaled image zom_W_i1;
(1.7) performing feature extraction on the scaled image zom_W_i1 to obtain the feature image sha_W_i1;
(1.8) performing character recognition on the feature image sha_W_i1 with a character classifier and putting the extracted text information into WT;
(1.9) i1 = i1 + 1; going to step (1.3);
(1.10) WIFI text extraction is complete;
(1.11) defining a loop variable Bt with initial value 0, and defining the maximum loop count Bn as the number of users who have currently sent pictures;
(1.12) defining a hash table FS to record the votes and information of the objects to be identified: each key SF is the picture information submitted by an object to be identified, and the value is another hash table Cm holding that object's votes, where each key of Cm is the name of the object corresponding to the currently sent picture and the value is that object's vote count;
(1.13) checking whether the SF corresponding to Bt exists in FS;
(1.14) if not, creating a new hash table Cmi and adding it to the parent table FS;
(1.15) checking whether the vote corresponding to Bt exists in Cm;
(1.16) if not, creating a new key for the voting object corresponding to Bt, setting its value to 1 and storing it in Cm;
(1.17) converting the acquired WT into different hotspots S and drawing random hotspots as random sample points S_i with the CURE algorithm;
(1.18) dividing the random sample points S_i into groups P_i; clustering P_i with the CURE algorithm, marking the cluster points as G_i and the outliers as O_i;
(1.19) adding +1 (cluster point) or -1 (outlier) to the corresponding vote in Cm, recording the obtained vote count as H and defining the total vote count as H1;
(1.20) recording the ratio of the obtained vote count H to the total vote count H1 as SC1;
(1.21) if SC1 < ω, the location verification fails, i.e. the network picture information submitted by the current object to be identified does not match that submitted by the other objects, where ω is a network-picture-information similarity threshold set from the total number of verifications and the vote count.
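To make the bookkeeping of steps (1.11)-(1.21) concrete, the following is a minimal Python sketch; the table names FS and Cm, the counts H and H1 and the threshold ω follow the text, while the cluster/outlier decision (made by CURE in the method) and all sample data are hypothetical stand-ins:

```python
# Minimal sketch of the vote bookkeeping of steps (1.11)-(1.21).
# FS maps submitted picture information (SF) to a vote table Cm;
# Cm maps an object's name to its vote count. Whether a point is a
# cluster point or an outlier is assumed decided elsewhere (by CURE).

def record_vote(FS, sf_key, name, in_cluster):
    Cm = FS.setdefault(sf_key, {})           # steps (1.13)-(1.14)
    Cm.setdefault(name, 0)                   # steps (1.15)-(1.16)
    Cm[name] += 1 if in_cluster else -1      # step (1.19): +1 / -1

def sc1(Cm, name, H1):
    """SC1 = obtained votes H over total votes H1 (step 1.20)."""
    return Cm.get(name, 0) / H1

FS = {}
record_vote(FS, "wifi_pic_A", "object_1", in_cluster=True)
record_vote(FS, "wifi_pic_A", "object_2", in_cluster=False)

omega = 0.5                                  # threshold of step (1.21)
score = sc1(FS["wifi_pic_A"], "object_1", H1=2)
print("verified" if score >= omega else "location verification fails")
```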
Further, the step (2) specifically includes the following steps:
(2.1) defining the face detection target objects and building a table storing the information of the objects to be identified and their face information, recorded as Fa; defining the function len(Fa) as the length of the set Fa; let Fa = {Fa_1, Fa_2, …, Fa_S}, where Fa_S is the S-th image in Fa and S ∈ [1, len(Fa)];
(2.2) defining a loop variable j1 for traversing Fa, j1 ∈ [1, len(Fa)], with initial value 1;
(2.3) traversing Fa: if j1 ≤ len(Fa), jumping to step (2.4); otherwise ending the traversal of Fa and jumping to step (2.21);
(2.4) processing Fa with Haar features;
(2.5) loading an Adaboost classifier, detecting and segmenting Fa, and cyclically detecting the faces of the objects;
(2.6) defining the face-acquisition flag d_flag of the current key frame: d_flag = 1 means a face has been detected for the object, d_flag = 0 means no face has been detected;
(2.7) if d_flag = 1, jumping to step (2.8); otherwise jumping to step (2.17);
(2.8) normalizing the face region Fa^f to obtain the normalized face region F;
(2.9) extracting face LBP (local binary pattern) features from the normalized face region F with an LBP feature operator to obtain the face feature histogram F^f;
(2.10) if the system has already detected the target object, jumping to step (2.11); otherwise jumping to step (2.16);
(2.11) inputting a detection image G and computing its face feature histogram G^f;
(2.12) computing, with the chi-square distance, the distance between G^f and each histogram in the face feature histogram set F^f = {F_1^f, F_2^f, …, F_n^f, …, F_N^f}, and normalizing the distances to obtain the face feature distance set DIS^f = {dis_1^f, dis_2^f, …, dis_n^f, …, dis_N^f}, where dis_n^f is the face feature distance between the face feature histogram G^f of the detection image G and the face feature histogram Fa^f of Fa;
(2.13) applying adaptive weighted fusion to the face feature distance set DIS^f and arranging the fused feature distances in ascending order to obtain the optimal feature distance set DIS_opt;
(2.14) if any element of DIS_opt is larger than the set distance threshold, jumping to step (2.16); otherwise jumping to step (2.15);
(2.15) recognition succeeds; returning the identity information of the person corresponding to the minimum distance in DIS_opt; sorting the initial feature distance set {dis_1^{Wf,Wp}, dis_2^{Wf,Wp}, …, dis_n^{Wf,Wp}, …, dis_N^{Wf,Wp}} in ascending order, and computing the mean Mean(Y) of the first Y feature distances and the mean Mean(N-Y) of the (Y+1)-th through N-th feature distances, 1 ≤ Y ≤ N;
performing adaptive reliability judgment with formula (1) to obtain the similarity score δ = Mean(N) - Mean(N-Y);
(2.16) creating new detection targets, creating a feature list for each detection target and storing it in Fa;
(2.17) if the system is already tracking the target object, jumping to step (2.19); otherwise jumping to step (2.20);
(2.18) adding the extracted features to the feature list of each detection target;
(2.19) predicting the position of each detection target in the next frame with a Kalman observer and clearing detectors that have not matched a target for a long time;
(2.20) j1 = j1 + 1; going to step (2.3);
(2.21) obtaining the video-frame face position set Fa = {Fa_1, Fa_2, …, Fa_S} and the feature similarity score SC2, where Fa_S is the S-th image in Fa.
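For illustration, the LBP-histogram and chi-square computations of steps (2.9)-(2.12) can be sketched as follows with scikit-image and NumPy; the random stand-in images, the P/R parameters and the helper names are assumptions, not the patent's exact implementation:

```python
# Minimal sketch of steps (2.9)-(2.12): LBP histogram of a normalized
# face region and chi-square distances to the enrolled histograms.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, P=8, R=1):
    """Face feature histogram F^f of a normalized face region."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    n_bins = P + 2                       # uniform patterns plus "other"
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

probe = np.random.randint(0, 256, (64, 64)).astype(np.uint8)    # image G
enrolled = [np.random.randint(0, 256, (64, 64)).astype(np.uint8)
            for _ in range(3)]                                  # set Fa

G_f = lbp_histogram(probe)
DIS_f = np.array([chi_square(G_f, lbp_histogram(fa)) for fa in enrolled])
best = int(np.argmin(DIS_f))         # minimum-distance identity (step 2.15)
print(best, DIS_f)
```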
Further, the step (3) specifically includes the following steps:
(3.1) inputting the speech signal data set S3;
(3.2) pre-emphasizing and framing the speech signals; applying windowing, Fourier transform, Mel filter-bank filtering and discrete cosine transform to each frame after framing to obtain the MFCC feature parameters, yielding the MFCC sequence;
(3.3) training the GMM Gaussian mixture model with the MFCC sequence to obtain its feature parameter sequence X3;
(3.4) framing the speech signal into T segments and computing the MFCC sequence of each segment, recorded as Y_t;
(3.5) processing the voiceprint feature vector sequence Y_t = {Y_1, Y_2, Y_3, …, Y_N} to obtain the feature parameter λ of the GMM Gaussian mixture model that maximizes the likelihood of the feature vector sequence Y_t;
(3.6) concatenating all T MFCC sequences Y_t into the user voiceprint feature sequence Y_a, feeding Y_a into the GMM Gaussian mixture model to compute the posterior probability, and obtaining the voiceprint similarity score SC3.
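A minimal sketch of this enrollment-and-scoring flow, assuming librosa for MFCC extraction and scikit-learn's GaussianMixture in place of the patent's unspecified GMM implementation; the file names, sample rate and component count are hypothetical:

```python
# Minimal sketch of steps (3.1)-(3.6): MFCC extraction and GMM scoring.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_sequence(path, n_mfcc=13):
    """Pre-emphasize, frame and return the MFCC sequence (frames x coeffs)."""
    y, sr = librosa.load(path, sr=16000)
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])    # pre-emphasis (step 3.2)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

# enrollment: fit the GMM so the likelihood of the features is maximal (step 3.5)
enroll = mfcc_sequence("enroll.wav")              # hypothetical file
gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(enroll)

# verification: concatenated test MFCCs scored by the model (step 3.6)
probe = mfcc_sequence("probe.wav")                # hypothetical file
SC3 = gmm.score(probe)                            # mean log-likelihood per frame
print("voiceprint similarity score SC3:", SC3)
```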
Further, the step (4) specifically includes the following steps:
(4.1) taking the face data, voiceprint recognition data and hotspot data as the criterion layers C1, C2 and C3;
(4.2) defining the check-in results K1, K2 and K3 as the scheme layer;
(4.3) defining the maximized final check-in rate O as the target layer;
(4.4) defining a judgment-matrix scaling method for comparing two factors and determining their relative importance;
(4.5) building judgment matrices for the criterion, scheme and target layers with the analytic hierarchy process, computing the maximum eigenvalue λmax, normalizing λmax (recorded as Nor) and computing the consistency ratio CR1;
(4.6) if CR1 < 0.1, going to step (4.7); otherwise returning to step (4.5) to rebuild the judgment matrix;
(4.7) performing total hierarchical ranking of the criterion layer against the target layer and computing the consistency ratio CR2;
(4.8) performing total hierarchical ranking of the criterion layer against the scheme layer and computing the consistency ratio CR3;
(4.9) if CR2 < 0.1 and CR3 < 0.1, entering step (4.10); otherwise returning to step (4.7);
(4.10) obtaining the weight value N from the decisions on CR2 and CR3.
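The eigenvector weighting and consistency check described here follow the standard AHP procedure; below is a minimal sketch in which the 1-9 pairwise judgments for C1-C3 are hypothetical:

```python
# Minimal AHP sketch for step (4): principal eigenvector of a pairwise
# comparison matrix plus the consistency ratio CR.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # random consistency index

def ahp_weights(A):
    """Return (normalized weights, consistency ratio) for matrix A."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    lam_max = eigvals.real[k]                      # maximum eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                # normalization ("Nor")
    n = A.shape[0]
    CI = (lam_max - n) / (n - 1)
    return w, (CI / RI[n] if RI[n] else 0.0)

# hypothetical judgments: C1 (face) vs C2 (voiceprint) vs C3 (hotspot)
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])
N, CR = ahp_weights(A)
print("weights N:", N, "CR:", CR)  # the matrix is accepted when CR < 0.1
```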
Further, the step (5) specifically includes the following steps:
(5.1) performing weighted fusion of the face, voiceprint and hotspot check-in information with the weight value N obtained in step (4) to obtain the final fusion result, recorded as A;
(5.2) defining the attendance-system database table names Sid, Sname, Swifi, Sage, Sface, Svoice and SFU as the ID, name, wireless-network-list picture, age, face picture, voice feature tag and attendance-data fusion table of a single object to be identified, satisfying St = {Sid, Sname, Swifi, Sage, Sface, Svoice, SFU};
(5.3) defining a loop variable St with initial value 0, and defining the maximum loop count Sn as the number of objects to be identified that have currently sent pictures;
(5.4) if St < Sn, going to step (5.5); otherwise going to step (5.11);
(5.5) creating the attendance data fusion table SFU;
(5.6) fusing the weight value N with the corresponding feature values and writing the result into the attendance data fusion table SFU;
(5.7) writing the vote count SC1, similarity score SC2 and voiceprint similarity score SC3 computed in steps (1), (2) and (3) into the tables Swifi, Sface and Svoice;
(5.8) setting δ as the fusion threshold for the vote count SC1, similarity score SC2 and voiceprint similarity score SC3; if A > δ, jumping to step (5.9); otherwise jumping to step (5.10);
(5.9) marking the check-in result of the object to be identified as successful and writing it into the database table SFU;
(5.10) marking the check-in result of the object to be identified as an error and writing it into the database table SFU;
(5.11) outputting the information in the database table St through the web page, and generating different check-in tables from the fused and per-feature check-in results for users to download.
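As a concrete illustration of the fusion decision of steps (5.1) and (5.8)-(5.10), here is a minimal sketch; the scores, AHP weights and threshold δ are hypothetical values:

```python
# Minimal sketch of the weighted fusion and threshold decision of step (5).
def fuse(scores, weights):
    """A = sum over i of weight_i * score_i for (SC1, SC2, SC3)."""
    return sum(w * s for w, s in zip(weights, scores))

SC = (0.8, 0.92, 0.85)    # SC1 (WIFI vote ratio), SC2 (face), SC3 (voiceprint)
N = (0.25, 0.45, 0.30)    # AHP weights from step (4)
delta = 0.7               # fusion threshold of step (5.8)

A = fuse(SC, N)
result = "success" if A > delta else "error"   # steps (5.9) / (5.10)
print(A, result)
```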
Advantageous effects:
Compared with the prior art, the invention has the following notable advantages: 1. It realizes data fusion recognition based on the blockchain and multiple biological features: the biometric similarity values are fused on the basis of the multi-feature recognition data and the ANP network, and the decentralization of the blockchain prevents check-ins by people who are not actually present, realizing fusion recognition of multi-source data. 2. It overcomes the limitation of single-feature biometric recognition: combined with the improved WIFI check-in recognition technique, it effectively obtains check-in result labels of higher accuracy, makes the user check-in recognition result more accurate under multi-feature recognition, and increases the practical value of face and voiceprint recognition in the target scenario.
Drawings
FIG. 1 is a flow chart of an identity recognition method based on blockchain and biological multi-feature recognition and multi-source data fusion according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating WIFI signal picture text extraction and similarity score thereof according to an embodiment of the present invention;
FIG. 3 is a flow chart of a face recognition subsystem according to an embodiment of the present invention;
FIG. 4 is a flowchart of normalization, feature extraction, and face similarity score of a face picture according to an embodiment of the present invention;
FIG. 5 is a flow chart of the pre-processing, feature extraction, and voiceprint similarity score of a speech signal according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a process of fusing voiceprint and face similarity scores in FIGS. 2 and 3 according to an embodiment of the present invention;
fig. 7 is a flowchart of a system for fusing feature data and displaying the feature data through a web page according to an embodiment of the present invention.
Detailed Description
The identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion combines the blockchain technique, the GMM Gaussian mixture model, the ANP algorithm, the CURE clustering algorithm and other technical means. It fuses the recognition similarity values of a user's multiple biological features and uses a decentralized, blockchain-based recognition mode to prevent users from checking in without truly participating, so that the object to be identified really takes part in the check-in; this increases the data value of biological multi-features in the target scenario and improves the credibility of check-ins.
Blockchain technique: blockchain is a term from information technology. In essence it is a shared database in which the stored data or information is decentralized, open, independent, secure and anonymous. Blockchain technology is a new distributed infrastructure and computing paradigm that verifies and stores data in a blockchain data structure, generates and updates data with a distributed-node consensus algorithm, secures data transmission and access cryptographically, and programs and manipulates data with smart contracts composed of automated script code. Blockchain can properly solve problems such as cross-party cooperation in business and establishing trust at low cost. Based on these characteristics, the text features of the mobile-phone wireless-network-list pictures are extracted and compared, and each user's network list serves as a data source. The core blockchain techniques comprise the distributed ledger, asymmetric encryption, the consensus mechanism and smart contracts.
GMM Gaussian mixture model: a Gaussian mixture model quantizes an object with Gaussian probability density functions (normal distribution curves), decomposing one object into several components formed from these functions. The invention preprocesses the audio with existing general techniques and obtains the voiceprint feature parameters through the GMM to generate a fusion judgment template; the information of the object to be identified is then judged by fusion after training and computation. GMMs have achieved good results in numerical approximation, speech recognition, image classification, image denoising, image reconstruction, fault diagnosis, video analysis, mail filtering, density estimation, and target recognition and tracking.
ANP algorithm: ANP first divides the system elements into two parts. The first part, called the control factor layer, contains the problem objective and the decision criteria; all decision criteria are considered mutually independent and governed only by the objective element. There may be no decision criteria among the control factors, but there is at least one objective. The weight of each criterion in the control layer can be obtained with the AHP method. The second part is the network layer: a mutually influencing network structure composed of all the element groups governed by the control layer. Its elements are interdependent and mutually dominating; elements and levels are not internally independent, and each criterion in the hierarchy is not a simple, internally independent element but part of an interdependent network structure with feedback.
CURE clustering algorithm: CURE adopts a hierarchical clustering strategy that lies between single-link and group-average, overcoming the defects of those two hierarchical clustering algorithms; it can handle large data sets, outliers and clusters of non-spherical shape and non-uniform size.
The algorithm first treats each data point as a class and then merges the closest classes until the number of classes reaches the desired value. It differs from the AGNES algorithm in that, instead of using all points or a center point plus a distance to represent a class, a fixed number of well-scattered points is extracted from each class as its representative points, and these representative points (typically 10) are shrunk toward the class center by an appropriate shrinking factor (typically set between 0.2 and 0.7).
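A minimal NumPy-only sketch of this representative-point idea follows; the farthest-point selection heuristic, the sample data and the parameter values are illustrative assumptions rather than the full CURE algorithm:

```python
# Minimal sketch of CURE's representative points: pick a few well-scattered
# points per cluster, then shrink them toward the centroid by alpha.
import numpy as np

def representatives(points, n_rep=10, alpha=0.3):
    centroid = points.mean(axis=0)
    # start with the point farthest from the centroid
    reps = [points[np.argmax(np.linalg.norm(points - centroid, axis=1))]]
    while len(reps) < min(n_rep, len(points)):
        # farthest-point heuristic: maximize the distance to chosen reps
        d = np.min([np.linalg.norm(points - r, axis=1) for r in reps], axis=0)
        reps.append(points[np.argmax(d)])
    reps = np.array(reps)
    return reps + alpha * (centroid - reps)   # shrink toward the center

cluster = np.random.randn(100, 2)             # stand-in cluster data
print(representatives(cluster, n_rep=5, alpha=0.3))
```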
The technical scheme of the invention is further elaborated below through a specific embodiment: multi-target face tracking for student check-in attendance in a campus classroom scene, with tracking, recognition and processing of the students' facial feature information.
As shown in fig. 1 to 7, an identity recognition method based on block chain, biological multi-feature recognition and multi-source data fusion includes the following steps:
1. Setting the acquired initial image data set of the wireless network list as W and the text data set converted from the network pictures as WT; clustering and marking the network sample points generated in WT with the CURE algorithm, and computing the vote count from the outliers and cluster points, recorded as SC1.
2. Setting the acquired initial face image data set as Fa, performing feature value recognition between the submitted picture and the data set Fa, and obtaining the similarity score SC2.
3. Inputting the voiceprint feature audio S, pre-emphasizing and framing the speech signal to obtain the MFCC feature parameters, and obtaining the voiceprint similarity score through the GMM Gaussian mixture model, recorded as SC3.
4. Taking the acquired feature score data and the wireless-network hotspot data as input, building a pairwise comparison matrix over the feature scores, and fusing the feature values with the AHP method to obtain the AHP weight, recorded as N.
5. Fusing the check-in data according to the weight, building a data table, encrypting and storing the fusion result as the final check-in result, outputting it through the web page, and generating different check-in tables from the fused and per-feature check-in results for users to download.
The detailed procedure for steps 1-5 is as follows:
Step 1 specifically comprises the following steps:
Step 1.1: inputting the image data set W; defining the set X as the pictures uploaded by the students and the function len(W) as the length of a set; let W = {W_1, W_2, …, W_M}, where W_M is the M-th image in W and M ∈ [1, len(W)];
Step 1.2: defining a loop variable i1 for traversing W, i1 ∈ [1, len(W)], with initial value 1;
Step 1.3: if i1 ≤ len(W), entering step 1.4; otherwise entering step 1.10;
Step 1.4: denoising W_i1 to obtain Deno_W_i1;
Step 1.5: performing image enhancement on the denoised image Deno_W_i1 to obtain the enhanced image Enhance_W_i1;
Step 1.6: scaling the enhanced image Enhance_W_i1 to obtain the scaled image zom_W_i1;
Step 1.7: performing feature extraction on the scaled image zom_W_i1 to obtain the feature image sha_W_i1;
Step 1.8: performing character recognition on the feature image sha_W_i1 with a character classifier and putting the extracted text information into WT;
Step 1.9: i1 = i1 + 1; going to step 1.3;
Step 1.10: WIFI text extraction is complete.
Step 1.11: defining a loop variable Bt with initial value 0, and defining the maximum loop count Bn as the number of users who have currently sent pictures;
Step 1.12: defining a hash table FS to record votes and student information: each key SF is the picture information submitted by a student, and the value is another hash table Cm holding the student's votes, where each key of Cm is the name of the student corresponding to the currently sent picture and the value is that student's vote count;
Step 1.13: checking whether the SF corresponding to Bt exists in FS;
Step 1.14: if not, creating a new hash table Cmi and adding it to the parent table FS;
Step 1.15: checking whether the vote corresponding to Bt exists in Cm;
Step 1.16: if not, creating a new key for the voting object corresponding to Bt, setting its value to 1 and storing it in Cm;
Step 1.17: converting the acquired WT into different hotspots S and drawing random hotspots as random sample points S_i with the CURE algorithm;
Step 1.18: dividing the random sample points S_i into groups P_i; clustering P_i with the CURE clustering algorithm, marking the cluster points as G_i and the outliers as O_i;
Step 1.19: adding +1 (cluster point) or -1 (outlier) to the corresponding vote in Cm, recording the obtained vote count as H and defining the total vote count as H1;
Step 1.20: recording the ratio of the obtained vote count H to the total vote count H1 as SC1;
Step 1.21: if SC1 < ω, the location verification fails, i.e. the network picture information submitted by the current student does not match that submitted by the other students, where ω is a network-picture-information similarity threshold set from the total number of verifications and the vote count.
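For steps 1.4-1.8 specifically, the following is a minimal preprocessing sketch assuming OpenCV for denoising, enhancement and scaling, with pytesseract standing in for the unspecified character classifier; the file name is hypothetical:

```python
# Minimal sketch of steps 1.4-1.8: denoise, enhance, scale, extract text.
import cv2
import pytesseract

img = cv2.imread("wifi_list.png", cv2.IMREAD_GRAYSCALE)    # W_i1
deno = cv2.fastNlMeansDenoising(img)                       # Deno_W_i1 (1.4)
enhance = cv2.equalizeHist(deno)                           # Enhance_W_i1 (1.5)
zoom = cv2.resize(enhance, None, fx=2.0, fy=2.0,
                  interpolation=cv2.INTER_CUBIC)           # zom_W_i1 (1.6)
text = pytesseract.image_to_string(zoom)                   # steps 1.7-1.8
WT = [line for line in text.splitlines() if line.strip()]  # text put into WT
print(WT)
```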
The step 2 specifically comprises the following steps:
Step 2.1: defining the face detection target objects and building a table storing student information and face information, recorded as Fa; defining the function len(Fa) as the length of the set Fa; let Fa = {Fa_1, Fa_2, …, Fa_S}, where Fa_S is the S-th image in Fa and S ∈ [1, len(Fa)];
Step 2.2: defining a loop variable j1 for traversing Fa, j1 ∈ [1, len(Fa)], with initial value 1;
Step 2.3: traversing Fa: if j1 ≤ len(Fa), jumping to step 2.4; otherwise ending the traversal of Fa and jumping to step 2.21;
Step 2.4: processing Fa with Haar features;
Step 2.5: loading an Adaboost classifier, detecting and segmenting Fa, and cyclically detecting the faces of the objects;
Step 2.6: defining the face-acquisition flag d_flag of the current key frame: d_flag = 1 means a face has been detected for the object, d_flag = 0 means no face has been detected;
Step 2.7: if d_flag = 1, jumping to step 2.8; otherwise jumping to step 2.17;
Step 2.8: normalizing the face region Fa^f to obtain the normalized face region F;
Step 2.9: extracting face LBP features from the normalized face region F with an LBP feature operator to obtain the face feature histogram F^f;
Step 2.10: if the system has already detected the target object, jumping to step 2.11; otherwise jumping to step 2.16;
Step 2.11: inputting a detection image G and computing its face feature histogram G^f;
Step 2.12: computing, with the chi-square distance, the distance between G^f and each histogram in the face feature histogram set F^f = {F_1^f, F_2^f, …, F_n^f, …, F_N^f}, and normalizing the distances to obtain the face feature distance set DIS^f = {dis_1^f, dis_2^f, …, dis_n^f, …, dis_N^f}, where dis_n^f is the face feature distance between the face feature histogram G^f of the detection image G and the face feature histogram Fa^f of Fa;
Step 2.13: applying adaptive weighted fusion to the face feature distance set DIS^f and arranging the fused feature distances in ascending order to obtain the optimal feature distance set DIS_opt;
Step 2.14: if any element of DIS_opt is larger than the set distance threshold, jumping to step 2.16; otherwise jumping to step 2.15;
Step 2.15: recognition succeeds; returning the identity information of the person corresponding to the minimum distance in DIS_opt; sorting the initial feature distance set
{dis_1^{Wf,Wp}, dis_2^{Wf,Wp}, …, dis_n^{Wf,Wp}, …, dis_N^{Wf,Wp}} in ascending order, and computing the mean Mean(Y) of the first Y feature distances and the mean Mean(N-Y) of the (Y+1)-th through N-th feature distances, 1 ≤ Y ≤ N;
performing adaptive reliability judgment with formula (1) to obtain the similarity score δ:
δ = Mean(N) - Mean(N-Y)   (1)
Step 2.16: creating new detection targets, creating a feature list for each detection target and storing it in Fa;
Step 2.17: if the system is already tracking the target object, jumping to step 2.19; otherwise jumping to step 2.20;
Step 2.18: adding the extracted features to the feature list of each detection target;
Step 2.19: predicting the position of each detection target in the next frame with a Kalman observer and clearing detectors that have not matched a target for a long time;
Step 2.20: j1 = j1 + 1; going to step 2.3;
Step 2.21: obtaining the video-frame face position set Fa = {Fa_1, Fa_2, …, Fa_S} and the feature similarity score SC2, where Fa_S is the S-th image in Fa.
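The Kalman prediction of step 2.19 can be sketched with OpenCV's KalmanFilter; the constant-velocity state model over the face-center coordinates and the sample measurements below are illustrative assumptions:

```python
# Minimal sketch of step 2.19: predict the next-frame face position with
# a constant-velocity Kalman filter over the face center (x, y).
import numpy as np
import cv2

kf = cv2.KalmanFilter(4, 2)            # state (x, y, vx, vy), measurement (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

for x, y in [(100, 80), (104, 83), (109, 85)]:   # detected face centers
    kf.correct(np.array([[x], [y]], np.float32))
    pred = kf.predict()                          # expected next-frame state
print("predicted next position:", pred[:2].ravel())
```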
The step 3 specifically comprises the following steps:
Step 3.1: inputting the speech signal data set S3;
Step 3.2: pre-emphasizing and framing the speech signal; applying windowing, Fourier transform, Mel filter-bank filtering and discrete cosine transform to each frame after framing to obtain the MFCC feature parameters, yielding the MFCC sequence;
Step 3.3: training the GMM Gaussian mixture model with the MFCC sequence to obtain its feature parameter sequence X3;
Step 3.4: framing the speech signal into T segments and computing the MFCC sequence of each segment, recorded as Yt;
Step 3.5: processing the voiceprint feature vector sequence Yt = {Y1, Y2, Y3, …, YN} to obtain the feature parameter λ of the GMM Gaussian mixture model that maximizes the likelihood of the feature vector sequence Yt;
Step 3.6: concatenating all T MFCC sequences Yt into the user voiceprint feature sequence Ya, feeding Ya into the GMM Gaussian mixture model to compute the posterior probability, and obtaining the voiceprint similarity score SC3.
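Step 3.2's front end (pre-emphasis, framing, windowing, Fourier transform) can be sketched with NumPy alone; the 0.97 pre-emphasis coefficient and the 25 ms / 10 ms frame sizes are conventional assumptions, and the Mel filter bank and DCT stages are omitted:

```python
# Minimal sketch of step 3.2: pre-emphasis, framing, Hamming window, FFT.
import numpy as np

def frames(signal, frame_len=400, hop=160):      # 25 ms / 10 ms at 16 kHz
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    n = 1 + max(0, (len(emphasized) - frame_len) // hop)
    out = np.stack([emphasized[i * hop : i * hop + frame_len]
                    for i in range(n)])
    return out * np.hamming(frame_len)           # windowing

x = np.random.randn(16000)               # one second of stand-in audio, 16 kHz
windowed = frames(x)
spectrum = np.abs(np.fft.rfft(windowed))  # per-frame Fourier transform
print(windowed.shape, spectrum.shape)
```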
The step 4 specifically comprises the following steps:
Step 4.1: taking the face data, voiceprint recognition data and hotspot data as the criterion layers C1, C2 and C3;
Step 4.2: defining the check-in results K1, K2 and K3 as the scheme layer;
Step 4.3: defining the maximized final check-in rate O as the target layer;
Step 4.4: defining the judgment-matrix scaling method, denoted i1-i9; the specific meanings are shown in Table 1;
Step 4.5: building judgment matrices for the different layers with the analytic hierarchy process, computing the maximum eigenvalue, normalizing it (recorded as Nor) and computing the consistency ratio CR1;
Step 4.6: if CR1 < 0.1, going to step 4.7; otherwise returning to step 4.5 to rebuild the judgment matrix;
Step 4.7: performing total hierarchical ranking of the criterion layer against the target layer and computing the consistency ratio CR2;
Step 4.8: performing total hierarchical ranking of the criterion layer against the scheme layer and computing the consistency ratio CR3;
Step 4.9: defining the judgment-matrix scale i_j, where j ∈ [1, 9];
Step 4.10: if CR < 0.1, entering step 4.11; otherwise entering step 4.2 to rebuild the judgment matrix;
Step 4.11: making the decision according to the total hierarchical ranking consistency ratio to obtain the weight value N;
the step 5 specifically comprises the following steps:
Step 5.1: performing weighted fusion of the face, voiceprint and hotspot check-in information with the weight value N obtained in step 4 to obtain the final fusion result, recorded as A;
Step 5.2: defining the attendance-system database table names Sid, Sname, Swifi, Sage, Sface, Svoice and SFU as the student number, name, wireless-network-list picture, age, face picture, voice and attendance-data fusion table of a single student, satisfying St = {Sid, Sname, Swifi, Sage, Sface, Svoice, SFU};
Step 5.3: defining a loop variable St with initial value 0, and defining the maximum loop count Sn as the number of students who have currently sent pictures;
Step 5.4: if St < Sn, going to step 5.5; otherwise going to step 5.11;
Step 5.5: creating the attendance data fusion table SFU;
Step 5.6: fusing the weights obtained in step 4 with the corresponding feature values and writing the result into the attendance data fusion table SFU;
Step 5.7: writing the vote count SC1, similarity score SC2 and voiceprint similarity score SC3 computed in steps 1, 2 and 3 into the tables Swifi, Sface and Svoice;
Step 5.8: setting δ as the fusion threshold for the vote count SC1, similarity score SC2 and voiceprint similarity score SC3; if A > δ, jumping to step 5.9; otherwise jumping to step 5.10;
Step 5.9: marking the student's check-in result as 'success' and writing it into the database table SFU;
Step 5.10: marking the student's check-in result as 'error' and writing it into the database table SFU;
Step 5.11: outputting the information in the database table St through the web page, and generating different check-in tables from the fused and per-feature check-in results for users to download.
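As a concrete illustration of steps 5.5-5.10, the sketch below uses SQLite in place of the unspecified attendance database; the column layout, scores, weights and threshold are hypothetical:

```python
# Minimal sketch of steps 5.5-5.10: create the fusion table SFU, fuse the
# scores with the AHP weights, and write the check-in result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE SFU (
    Sid INTEGER, Sname TEXT, SC1 REAL, SC2 REAL, SC3 REAL,
    fused REAL, result TEXT)""")                 # step 5.5

N = (0.25, 0.45, 0.30)                           # AHP weights from step 4
SC = (0.8, 0.92, 0.85)                           # SC1, SC2, SC3
A = sum(w * s for w, s in zip(N, SC))            # step 5.6
delta = 0.7                                      # fusion threshold (step 5.8)
result = "success" if A > delta else "error"     # steps 5.9 / 5.10

conn.execute("INSERT INTO SFU VALUES (?, ?, ?, ?, ?, ?, ?)",
             (1, "student_1", *SC, A, result))
print(conn.execute("SELECT * FROM SFU").fetchall())
```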
Table 1: variables involved in Steps 1-5
(Table 1 is reproduced only as images in the original publication; its contents are not recoverable here.)
To better illustrate the effectiveness of the method, 4956 key-frame sequences of student face information were processed: LBP was used for feature extraction to obtain face feature histograms, and similarity scores were computed with the chi-square distance. Text was extracted from 672 WIFI pictures in 224 groups and voted on to obtain the corresponding WIFI similarity scores. Features were extracted from 64 groups of speech signals and similarity values computed with the GMM Gaussian mixture model. Fusing these similarity values improves the accuracy of biometric check-in, reaching 98% accuracy on the check-in results.
The invention can be combined with a computer system to complete the multi-source data fusion of biological multi-feature recognition.
The invention creatively proposes an identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion, and the biological multi-feature check-in recognition results in a WIFI environment were obtained through repeated experiments.
The invention provides an identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion that can recognize the features of biological voiceprints and human faces in a WIFI environment and fuse the data of the recognition results.
The above description is only an example of the present invention and is not intended to limit the present invention. All equivalents which come within the spirit of the invention are therefore intended to be embraced therein. Details not described herein are well within the skill of those in the art.

Claims (6)

1. An identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion, characterized by comprising the following steps:
(1) setting the acquired initial image data set of the wireless network list as W and the text data set converted from the network pictures as WT; clustering and marking the network sample points generated in WT with the CURE algorithm, and computing the vote count from the outliers and cluster points, recorded as SC1;
(2) setting the acquired initial face image data set as Fa, performing feature value recognition between the submitted picture and Fa, and obtaining a similarity score recorded as SC2;
(3) inputting a speech signal data set S3; pre-emphasizing and framing the speech signals in S3 to obtain the MFCC feature parameters, and obtaining the voiceprint similarity score through a GMM Gaussian mixture model, recorded as SC3;
(4) taking the computed vote count SC1, similarity score SC2 and voiceprint similarity score SC3 as input, building a comparison matrix for pairwise comparison of the feature scores, and fusing SC1, SC2 and SC3 with the AHP method to obtain the AHP weight, recorded as N;
(5) fusing the check-in data according to the weight, building a data table, encrypting and storing the fusion result as the final check-in result, outputting it through the web page, and generating different check-in tables from the fused and per-feature check-in results for users to download.
2. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (1) specifically comprises the following steps:
(1.1) inputting the initial wireless-network-list image data set W; defining the set X as the pictures uploaded by the objects to be identified and the function len(X) as the length of a set X; let W = {W_1, W_2, …, W_M}, where W_M is the M-th image in W and M ∈ [1, len(W)];
(1.2) defining a loop variable i1 for traversing W, i1 ∈ [1, len(W)], with initial value 1;
(1.3) if i1 ≤ len(W), entering step (1.4); otherwise entering step (1.10);
(1.4) denoising W_i1 to obtain Deno_W_i1;
(1.5) performing image enhancement on the denoised image Deno_W_i1 to obtain the enhanced image Enhance_W_i1;
(1.6) scaling the enhanced image Enhance_W_i1 to obtain the scaled image zom_W_i1;
(1.7) performing feature extraction on the scaled image zom_W_i1 to obtain the feature image sha_W_i1;
(1.8) performing character recognition on the feature image sha_W_i1 with a character classifier and putting the extracted text information into WT;
(1.9) i1 = i1 + 1; going to step (1.3);
(1.10) WIFI text extraction is complete;
(1.11) defining a loop variable Bt with initial value 0, and defining the maximum loop count Bn as the number of users who have currently sent pictures;
(1.12) defining a hash table FS to record the votes and information of the objects to be identified: each key SF is the picture information submitted by an object to be identified, and the value is another hash table Cm holding that object's votes, where each key of Cm is the name of the object corresponding to the currently sent picture and the value is that object's vote count;
(1.13) checking whether the SF corresponding to Bt exists in FS;
(1.14) if not, creating a new hash table Cmi and adding it to the parent table FS;
(1.15) checking whether the vote corresponding to Bt exists in Cm;
(1.16) if not, creating a new key for the voting object corresponding to Bt, setting its value to 1 and storing it in Cm;
(1.17) converting the acquired WT into different hotspots S and drawing random hotspots as random sample points S_i with the CURE algorithm;
(1.18) dividing the random sample points S_i into groups P_i; clustering P_i with the CURE algorithm, marking the cluster points as G_i and the outliers as O_i;
(1.19) adding +1 (cluster point) or -1 (outlier) to the corresponding vote in Cm, recording the obtained vote count as H and defining the total vote count as H1;
(1.20) recording the ratio of the obtained vote count H to the total vote count H1 as SC1;
(1.21) if SC1 < ω, the location verification fails, i.e. the network picture information submitted by the current object to be identified does not match that submitted by the other objects, where ω is a network-picture-information similarity threshold set from the total number of verifications and the vote count.
3. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (2) specifically comprises the following steps:
(2.1) defining the face detection target objects and building a table storing the information of the objects to be identified and their face information, recorded as Fa; defining the function len(Fa) as the length of the set Fa; let Fa = {Fa_1, Fa_2, …, Fa_S}, where Fa_S is the S-th image in Fa and S ∈ [1, len(Fa)];
(2.2) defining a loop variable j1 for traversing Fa, j1 ∈ [1, len(Fa)], with initial value 1;
(2.3) traversing Fa: if j1 ≤ len(Fa), jumping to step (2.4); otherwise ending the traversal of Fa and jumping to step (2.21);
(2.4) processing Fa with Haar features;
(2.5) loading an Adaboost classifier, detecting and segmenting Fa, and cyclically detecting the faces of the objects;
(2.6) defining the face-acquisition flag d_flag of the current key frame: d_flag = 1 means a face has been detected for the object, d_flag = 0 means no face has been detected;
(2.7) if d_flag = 1, jumping to step (2.8); otherwise jumping to step (2.17);
(2.8) normalizing the face region Fa^f to obtain the normalized face region F;
(2.9) extracting face LBP (local binary pattern) features from the normalized face region F with an LBP feature operator to obtain the face feature histogram F^f;
(2.10) if the system has already detected the target object, jumping to step (2.11); otherwise jumping to step (2.16);
(2.11) inputting a detection image G and computing its face feature histogram G^f;
(2.12) computing, with the chi-square distance, the distance between G^f and each histogram in the face feature histogram set F^f = {F_1^f, F_2^f, …, F_n^f, …, F_N^f}, and normalizing the distances to obtain the face feature distance set DIS^f = {dis_1^f, dis_2^f, …, dis_n^f, …, dis_N^f}, where dis_n^f is the face feature distance between the face feature histogram G^f of the detection image G and the face feature histogram Fa^f of Fa;
(2.13) applying adaptive weighted fusion to the face feature distance set DIS^f and arranging the fused feature distances in ascending order to obtain the optimal feature distance set DIS_opt;
(2.14) if any element of DIS_opt is larger than the set distance threshold, jumping to step (2.16); otherwise jumping to step (2.15);
(2.15) recognition succeeds; returning the identity information of the person corresponding to the minimum distance in DIS_opt; sorting the initial feature distance set {dis_1^{Wf,Wp}, dis_2^{Wf,Wp}, …, dis_n^{Wf,Wp}, …, dis_N^{Wf,Wp}} in ascending order, and computing the mean Mean(Y) of the first Y feature distances and the mean Mean(N-Y) of the (Y+1)-th through N-th feature distances, 1 ≤ Y ≤ N;
performing adaptive reliability judgment with formula (1) to obtain the similarity score δ = Mean(N) - Mean(N-Y);
(2.16) creating new detection targets, creating a feature list for each detection target and storing it in Fa;
(2.17) if the system is already tracking the target object, jumping to step (2.19); otherwise jumping to step (2.20);
(2.18) adding the extracted features to the feature list of each detection target;
(2.19) predicting the position of each detection target in the next frame with a Kalman observer and clearing detectors that have not matched a target for a long time;
(2.20) j1 = j1 + 1; going to step (2.3);
(2.21) obtaining the video-frame face position set Fa = {Fa_1, Fa_2, …, Fa_S} and the feature similarity score SC2, where Fa_S is the S-th image in Fa.
4. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (3) specifically comprises the following steps:
(3.1) inputting the speech signal data set S3;
(3.2) pre-emphasizing and framing the speech signals; applying windowing, Fourier transform, Mel filter-bank filtering and discrete cosine transform to each frame after framing to obtain the MFCC feature parameters, yielding the MFCC sequence;
(3.3) training the GMM Gaussian mixture model with the MFCC sequence to obtain its feature parameter sequence X3;
(3.4) framing the speech signal into T segments and computing the MFCC sequence of each segment, recorded as Y_t;
(3.5) processing the voiceprint feature vector sequence Y_t = {Y_1, Y_2, Y_3, …, Y_N} to obtain the feature parameter λ of the GMM Gaussian mixture model that maximizes the likelihood of the feature vector sequence Y_t;
(3.6) concatenating all T MFCC sequences Y_t into the user voiceprint feature sequence Y_a, feeding Y_a into the GMM Gaussian mixture model to compute the posterior probability, and obtaining the voiceprint similarity score SC3.
5. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (4) specifically comprises the following steps:
(4.1) taking the face data, voiceprint recognition data and hotspot data as the criterion layers C1, C2 and C3;
(4.2) defining the check-in results K1, K2 and K3 as the scheme layer;
(4.3) defining the maximized final check-in rate O as the target layer;
(4.4) defining a judgment-matrix scaling method for comparing two factors and determining their relative importance;
(4.5) building judgment matrices for the criterion, scheme and target layers with the analytic hierarchy process, computing the maximum eigenvalue λmax, normalizing λmax (recorded as Nor) and computing the consistency ratio CR1;
(4.6) if CR1 < 0.1, going to step (4.7); otherwise returning to step (4.5) to rebuild the judgment matrix;
(4.7) performing total hierarchical ranking of the criterion layer against the target layer and computing the consistency ratio CR2;
(4.8) performing total hierarchical ranking of the criterion layer against the scheme layer and computing the consistency ratio CR3;
(4.9) if CR2 < 0.1 and CR3 < 0.1, entering step (4.10); otherwise returning to step (4.7);
(4.10) obtaining the weight value N from the decisions on CR2 and CR3.
6. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (5) specifically comprises the following steps:
(5.1) according to the weight value N obtained in the step (4), carrying out weighted fusion on the face, the voiceprint and the hot spot sign-in information to obtain a final fusion result, and marking the final fusion result as A;
(5.2) defining table names Sid, name, swift, Sage, Sface and SFU of the database of the attendance system as ID, name, wireless network list picture, age, face picture, feature tag in voice and attendance data fusion table of a single object to be identified respectively, and meeting St { Sid, name, swift, Sage, Sface and SFU };
(5.3) defining a cycle variable St, giving an initial value St as 0, and defining the maximum cycle number Sn as the number of the objects to be identified of the current sent picture;
(5.4) if St < Sn then go to step (5.5) otherwise go to step (5.11);
(5.5) creating an attendance data fusion table SFU;
(5.6) fusing the weighted value N and the corresponding characteristic value and writing the fused value into an attendance data fusion table SFU;
(5.7) calculating the number of votes SC obtained in the steps (1), (2) and (3)1Similarity score SC2Voice print similarity score SC3Writing the data into a table swift, Sface, Svoice;
(5.8) setting the number of votes SC to be counted1Similarity score SC2Voice print similarity score SC3If A is the fusion threshold of>δ; skipping to the step (5.9), otherwise, skipping to the step (5.10);
(5.9) marking the check-in result of the object to be identified as successful and writing it into the database table SFU;
(5.10) marking the check-in result of the object to be identified as failed and writing it into the database table SFU;
and (5.11) outputting the information in the database table St through the web page, and generating separate check-in tables from the fused check-in result and the per-feature check-in results for the user to download.
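As referenced in step (5.8), the sketch below shows one way to realize the weighted fusion of step (5.1) and the threshold test of steps (5.8)-(5.10). The min-max normalization, the score ranges, the example weight vector and the threshold δ = 0.6 are all assumptions for illustration; the claim only states that A > δ marks a successful check-in.

    # Sketch of steps (5.1) and (5.8)-(5.10): fuse the hotspot vote count
    # SC1, face similarity SC2 and voiceprint similarity SC3 with the AHP
    # weight vector N, then threshold the fused score A against delta.
    import numpy as np

    def fuse_scores(sc1, sc2, sc3, weights, bounds):
        """A = sum_i N_i * score_i, with each score min-max normalized."""
        raw = np.array([sc1, sc2, sc3], dtype=float)
        lo, hi = np.array(bounds, dtype=float).T
        normed = np.clip((raw - lo) / (hi - lo), 0.0, 1.0)
        return float(np.dot(weights, normed))

    weights = [0.12, 0.65, 0.23]          # illustrative N in (SC1, SC2, SC3) order
    bounds = [(0, 10), (0, 1), (-60, 0)]  # assumed raw ranges: votes, face, log-likelihood
    delta = 0.6                           # assumed fusion threshold

    A = fuse_scores(sc1=7, sc2=0.91, sc3=-18.5, weights=weights, bounds=bounds)
    result = "success" if A > delta else "error"   # steps (5.9)/(5.10)
    print(f"A = {A:.3f} -> check-in {result}")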
CN202110704146.4A 2021-06-24 2021-06-24 Identity recognition method based on block chain mutual authentication, biological multi-feature recognition and multi-source data fusion Pending CN113469002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110704146.4A CN113469002A (en) 2021-06-24 2021-06-24 Identity recognition method based on block chain mutual authentication, biological multi-feature recognition and multi-source data fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110704146.4A CN113469002A (en) 2021-06-24 2021-06-24 Identity recognition method based on block chain mutual authentication, biological multi-feature recognition and multi-source data fusion

Publications (1)

Publication Number Publication Date
CN113469002A true CN113469002A (en) 2021-10-01

Family

ID=77872731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110704146.4A Pending CN113469002A (en) 2021-06-24 2021-06-24 Identity recognition method based on block chain mutual authentication, biological multi-feature recognition and multi-source data fusion

Country Status (1)

Country Link
CN (1) CN113469002A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170300911A1 (en) * 2016-04-13 2017-10-19 Abdullah Abdulaziz I. Alnajem Risk-link authentication for optimizing decisions of multi-factor authentications
US20190171438A1 (en) * 2017-12-05 2019-06-06 Archemy, Inc. Active adaptation of networked compute devices using vetted reusable software components
CN109100440A (en) * 2018-08-20 2018-12-28 东南大学 On-line chromatograph control analysis system and its application method based on network server
CN110704531A (en) * 2019-04-25 2020-01-17 中国南方电网有限责任公司 Block chain-based electricity consumption client credit management method and system
KR102099234B1 (en) * 2019-07-24 2020-04-09 최상규 System for providing finance service with payment in advance of principal using blockchain based smart contract
CN111416847A (en) * 2020-03-12 2020-07-14 北京金山云网络技术有限公司 Scheme decision method and device and server
CN111541703A (en) * 2020-04-27 2020-08-14 平安银行股份有限公司 Terminal equipment authentication method and device, computer equipment and storage medium
CN111988381A (en) * 2020-08-07 2020-11-24 南通大学 HashGraph-based vehicle networking distributed trust system and trust value calculation method
CN112734258A (en) * 2020-12-02 2021-04-30 北京航空航天大学 Avionics system performance evaluation characterization system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LI Z et al.: "A sustainable production capability evaluation mechanism based on blockchain, LSTM, analytic hierarchy process for supply chain network", International Journal of Production Research, vol. 58, no. 24, 19 March 2020 (2020-03-19), pages 7399-7419 *
ZHOU QIANFEI et al.: "A multi-source heterogeneous big data analysis platform for smart elevators integrating blockchain", China Special Equipment Safety, vol. 36, no. 5, 30 May 2020 (2020-05-30), pages 1-6 *
YIN XIAOQI et al.: "Design of a noise monitoring system based on WiFi and virtual instruments", Computer Measurement & Control, vol. 23, no. 12, 25 December 2015 (2015-12-25), pages 4002-4004 *
PENG YONGYONG et al.: "Research on key technologies of trusted identity authentication based on blockchain application patterns", Network Security Technology & Application, no. 2, 31 December 2018 (2018-12-31), pages 1-2 *
MA TIANLONG: "Blockchain technology and treasury application scenarios: considerations based on the construction of the National Treasury Project", Sub-National Fiscal Research, no. 12, 15 December 2017 (2017-12-15), pages 26-32 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445052A (en) * 2022-04-07 2022-05-06 北京吉道尔科技有限公司 Intelligent education student attendance big data statistical method and system based on block chain
CN115273863A (en) * 2022-06-13 2022-11-01 广东职业技术学院 Compound network class attendance system and method based on voice recognition and face recognition

Similar Documents

Publication Publication Date Title
Khan et al. Deep unified model for face recognition based on convolution neural network and edge computing
Abozaid et al. Multimodal biometric scheme for human authentication technique based on voice and face recognition fusion
Faundez-Zanuy On-line signature recognition based on VQ-DTW
Tolosana et al. SVC-onGoing: Signature verification competition
CN113469002A (en) Identity recognition method based on block chain mutual authentication, biological multi-feature recognition and multi-source data fusion
CN112132030A (en) Video processing method and device, storage medium and electronic equipment
WO2023273616A1 (en) Image recognition method and apparatus, electronic device, storage medium
Xie et al. Writer-independent online signature verification based on 2D representation of time series data using triplet supervised network
Arısoy Signature verification using siamese neural network one-shot learning
Sharma et al. Sign language gesture recognition
Mehta et al. Cohort selection using mini-batch k-means clustering for ear recognition
Schlapbach Writer identification and verification
CN113723111B (en) Small sample intention recognition method, device, equipment and storage medium
Faundez-Zanuy et al. Online signature recognition: A biologically inspired feature vector splitting approach
Patel et al. Counterfeit currency detection using deep learning
Hung et al. Offline handwritten signature forgery verification using deep learning methods
Serdouk et al. A new handwritten signature verification system based on the histogram of templates feature and the joint use of the artificial immune system with SVM
Cenys et al. Genetic algorithm based palm recognition method for biometric authentication systems
Srivastava et al. Three-layer multimodal biometric fusion using SIFT and SURF descriptors for improved accuracy of authentication of human identity
Beritelli et al. Performance Evaluation of Multimodal Biometric Systems based on Mathematical Models and Probabilistic Neural Networks.
Santosh et al. Recent Trends in Image Processing and Pattern Recognition: Third International Conference, RTIP2R 2020, Aurangabad, India, January 3–4, 2020, Revised Selected Papers, Part I
Koch et al. One-shot lip-based biometric authentication: Extending behavioral features with authentication phrase information
Mishra et al. Integrating State-of-the-Art Face Recognition and Anti-Spoofing Techniques into Enterprise Information Systems
US11694463B2 (en) Systems and methods for generating document numerical representations
US12033415B2 (en) Systems and methods for generating document numerical representations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination