CN116229557A - Identity recognition method, device, equipment and storage medium


Info

Publication number
CN116229557A
Authority
CN
China
Prior art keywords
identity
user
iris
face
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310286620.5A
Other languages
Chinese (zh)
Inventor
张安
陈永录
李变
刘斐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202310286620.5A
Publication of CN116229557A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/197 - Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application provides an identity recognition method, an identity recognition device, identity recognition equipment and a storage medium, and relates to the technical field of biometric recognition. The method comprises the following steps: acquiring a first face image and a first iris image of a first user; determining, according to the first face image, a plurality of first candidate identities of the first user and the face matching degree between each first candidate identity and the first user; determining, according to the first iris image, a plurality of second candidate identities of the first user and the iris matching degree between each second candidate identity and the first user; and determining the target identity of the first user according to the plurality of first candidate identities, the plurality of second candidate identities, the face matching degree between each first candidate identity and the first user, and the iris matching degree between each second candidate identity and the first user. The method solves the problems of low recognition rate and low security that arise when identity recognition relies on face recognition alone.

Description

Identity recognition method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of biometric recognition technologies, and in particular, to an identity recognition method, apparatus, device, and storage medium.
Background
With the continuous development of biometric recognition technology, more and more enterprises apply it in their production processes.
In financial systems, when large money transfers are handled, performing the transfer after identification based on face recognition technology is widely used in view of security and convenience. Compared with traditional password-based transfers, transfers authorized by face recognition are much more secure and bring great convenience to people's lives.
Face recognition alone, however, suffers from a reduced recognition rate when a person's appearance changes and from reduced security because a face can be copied.
Disclosure of Invention
The application provides an identity recognition method, device, equipment and storage medium that solve the problems of low recognition rate and low security arising when identity recognition relies on face recognition alone, provide a convenient and easy-to-use service for users whose biological characteristics are damaged, and optimize the customer experience.
In one aspect, the present application provides an identification method, including:
acquiring a first face image and a first iris image of a first user;
determining a plurality of first identity marks to be selected of the first user and the face matching degree of each first identity mark to be selected and the first user according to the first face image;
Determining a plurality of second identity marks to be selected of the first user and iris matching degree of each second identity mark to be selected and the first user according to the first iris image;
and determining the target identity of the first user from the plurality of first identity marks to be selected and the plurality of second identity marks to be selected according to the plurality of first identity marks to be selected, the face matching degree of each first identity mark to be selected and the first user, and the iris matching degree of each second identity mark to be selected and the first user.
Optionally, determining the target identity of the first user in the first plurality of identity marks and the second plurality of identity marks according to the first plurality of identity marks, the second plurality of identity marks, the face matching degree of each first identity mark and the first user, and the iris matching degree of each second identity mark and the first user, includes:
determining at least one coincidence identity to be selected existing in the first identity to be selected and the second identity to be selected;
And determining the target identity in the at least one coincidence identity according to the face matching degree of each coincidence identity to be selected and the first user and the iris matching degree of each coincidence identity to be selected and the first user.
Optionally, determining the target identity in the at least one identity to be selected according to the face matching degree of each identity to be selected to the first user and the iris matching degree of each identity to be selected to the first user, including:
aiming at any one coincidence identity to be selected, determining the fusion matching degree of the coincidence identity to be selected and the first user according to the face matching degree of the coincidence identity to be selected and the first user and the iris matching degree of the coincidence identity to be selected and the first user;
and determining the target identity among the at least one coincident candidate identity according to the fusion matching degree between each coincident candidate identity and the first user.
Optionally, determining the target identity among the at least one coincident candidate identity according to the fusion matching degree between each coincident candidate identity and the first user includes:
And determining the coincidence candidate identity with the highest fusion matching degree with the first user in the at least one coincidence candidate identity as the target identity.
Optionally, determining a plurality of first to-be-selected identities of the first user and a face matching degree between each first to-be-selected identity and the first user according to the first face image includes:
acquiring a first face feature vector of the first face image;
acquiring the face similarity between the first face feature vector and each face feature vector in a face database;
according to the face similarity between the first face feature vector and each face feature vector in a face database, determining a plurality of face feature vectors to be selected in the face database;
the identity marks corresponding to the face feature vectors to be selected are determined to be the first identity marks to be selected;
and determining the similarity between the face feature vector to be selected corresponding to the first identity to be selected and the first face feature vector as the face matching degree between the first identity to be selected and the first user.
Optionally, determining the plurality of second candidate identities of the first user and the iris matching degree of each second candidate identity and the first user according to the first iris image includes:
Acquiring a first iris characteristic vector of the first iris image;
obtaining iris similarity between the first iris feature vector and each iris feature vector in an iris database;
determining a plurality of iris feature vectors to be selected in an iris database according to the iris similarity between the first iris feature vector and each iris feature vector in the iris database;
determining the identity marks corresponding to the iris feature vectors to be selected as the second identity marks to be selected;
and determining the similarity between the iris feature vector to be selected corresponding to the second identity to be selected and the first iris feature vector as the iris matching degree between the second identity to be selected and the first user.
Optionally, acquiring the first face image of the first user includes:
Acquiring an initial image of the first user acquired by the camera device;
performing at least one image processing operation on the initial image to obtain the first face image, the at least one image processing operation including: clipping processing, rotation processing, or angle adjustment processing.
In another aspect, the present application provides an identification device, including:
The acquisition module is used for acquiring a first face image and a first iris image of the first user;
the determining module is used for determining a plurality of first identity marks to be selected of the first user and the face matching degree of each first identity mark to be selected and the first user according to the first face image;
the determining module is further used for determining a plurality of second identity marks to be selected of the first user and the iris matching degree of each second identity mark to be selected and the first user according to the first iris image;
the determining module is further configured to determine a target identity of the first user from the plurality of first to-be-selected identities and the plurality of second to-be-selected identities according to the plurality of first to-be-selected identities, the plurality of second to-be-selected identities, the face matching degree of each first to-be-selected identity and the first user, and the iris matching degree of each second to-be-selected identity and the first user.
In one possible implementation manner, the determining module is specifically configured to:
determining at least one coincidence identity to be selected existing in the first identity to be selected and the second identity to be selected;
And determining the target identity in the at least one coincidence identity according to the face matching degree of each coincidence identity to be selected and the first user and the iris matching degree of each coincidence identity to be selected and the first user.
In one possible implementation manner, the determining module is specifically configured to:
aiming at any one coincidence identity to be selected, determining the fusion matching degree of the coincidence identity to be selected and the first user according to the face matching degree of the coincidence identity to be selected and the first user and the iris matching degree of the coincidence identity to be selected and the first user;
and determining the target identity in the at least one coincident identity to be selected according to the fusion matching degree of each repeated identity to be selected and the first user.
In one possible implementation manner, the determining module is specifically configured to:
and determining the coincidence candidate identity with the highest fusion matching degree with the first user in the at least one coincidence candidate identity as the target identity.
In one possible implementation manner, the determining module is specifically configured to:
acquiring a first face feature vector of the first face image;
Acquiring the face similarity between the first face feature vector and each face feature vector in a face database;
according to the face similarity between the first face feature vector and each face feature vector in a face database, determining a plurality of face feature vectors to be selected in the face database;
the identity marks corresponding to the face feature vectors to be selected are determined to be the first identity marks to be selected;
and determining the similarity between the face feature vector to be selected corresponding to the first identity to be selected and the first face feature vector as the face matching degree between the first identity to be selected and the first user.
In one possible implementation manner, the determining module is specifically configured to:
acquiring a first iris characteristic vector of the first iris image;
obtaining iris similarity between the first iris feature vector and each iris feature vector in an iris database;
determining a plurality of iris feature vectors to be selected in an iris database according to the iris similarity between the first iris feature vector and each iris feature vector in the iris database;
determining the identity marks corresponding to the iris feature vectors to be selected as the second identity marks to be selected;
And determining the similarity between the iris feature vector to be selected corresponding to the second identity to be selected and the first iris feature vector as the iris matching degree between the second identity to be selected and the first user.
In one possible implementation manner, the acquiring module is specifically configured to:
acquiring an initial image of the first user acquired by the camera device;
performing at least one image processing operation on the initial image to obtain the first face image, the at least one image processing operation including: clipping processing, rotation processing, or angle adjustment processing.
In a third aspect of the present application, there is provided an electronic device, comprising:
a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to cause the electronic device to perform the method of any one of the first aspects.
In a fourth aspect of the present application, there is provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, are adapted to carry out the identity recognition method of any one of the first aspects.
The embodiment provides an identity recognition method, device, equipment and storage medium, wherein the method comprises the steps of acquiring a first face image and a first iris image of a first user; determining a plurality of first to-be-selected identities of the first user and the face matching degree of each first to-be-selected identity and the first user according to the first face image; determining a plurality of second identity marks to be selected of the first user and the iris matching degree of each second identity mark to be selected and the first user according to the first iris image; and determining the target identity of the first user from the plurality of first to-be-selected identities and the plurality of second to-be-selected identities according to the plurality of first to-be-selected identities, the plurality of second to-be-selected identities, the face matching degree of each first to-be-selected identity and the first user, and the iris matching degree of each second to-be-selected identity and the first user. According to the method, the face image and the iris image of the first user are collected, the face matching degree and the iris matching degree are determined at the same time, and then the target identity is determined, so that the problems of low recognition rate and low safety when only using face recognition for identity recognition are solved, convenience and easy use services are provided for users with damaged biological characteristics, and customer experience is optimized.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a block diagram of an identification method provided herein;
FIG. 2 is a flowchart of a method for identity recognition according to an embodiment of the present application;
FIG. 3 is a second flowchart of an identification method according to an embodiment of the present application;
fig. 4 is a flowchart III of an identification method provided in an embodiment of the present application;
FIG. 5 is a flowchart of a method for identifying identity according to an embodiment of the present application;
FIG. 6 is a flow chart of the mobile banking large transfer provided by the present application;
fig. 7 is a schematic structural diagram of an identification device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
Fig. 1 is a block diagram of an identification method provided in the present application. As shown in fig. 1, the method of the present application firstly obtains a face image and an iris image of a user through an image acquisition device, and then processes the two images respectively. And on one hand, carrying out image preprocessing on the face image to obtain a normalized image capable of carrying out feature extraction. And carrying out feature extraction on the normalized face image on the basis to obtain the face features. And then carrying out face recognition according to the face characteristics to obtain the face similarity. The face similarity here refers to the similarity between the face features of the user and the face features stored in the database in advance. The process of processing the iris image of the user to obtain the iris similarity is the same as the above process, and will not be repeated here. And finally, carrying out fusion matching processing on the face similarity and the iris similarity to obtain fusion similarity, and further determining the identity of the user.
When transfers are made after identification based on face recognition alone, a reduced recognition rate or low security can easily occur. As people age, their appearance changes to a certain extent, and the recognition rate of a face recognition algorithm differs between age groups. If a person's face is injured, made up or surgically altered, the facial features change to some degree, which affects the accuracy of face recognition and may even make the face unrecognizable. With the wide use of cameras in daily life, new challenges arise for the security of personal facial features, so large transfers authorized by face-scan identification alone carry a considerable security risk.
The method comprises the steps of acquiring a first face image and a first iris image of a first user, determining face matching degree and iris matching degree at the same time, and further determining a target identity, so that the problems of low recognition rate and low safety when only using face recognition for identity recognition are solved, convenience and easy use are provided for users with damaged biological characteristics, and customer experience is optimized.
The identity recognition method provided by the application aims at solving the technical problems in the prior art.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of an identification method provided in an embodiment of the present application. As shown in fig. 2, the method of the present embodiment includes:
s201, acquiring a first face image and a first iris image of a first user;
the execution body of the embodiment of the application may be an electronic device, or may be an identification device disposed in the electronic device. Alternatively, the identification means may be implemented in software, or in a combination of software and hardware.
In this embodiment, the first face image and the first iris image of the first user are generally acquired by different devices. The face image can be acquired by an ordinary camera. The iris texture, however, is mostly fine and low in contrast, so it is not very clear; with an ordinary color camera under visible light, the light intensity cannot be too strong because human eyes are sensitive to visible light, and it is therefore difficult to obtain a clear iris image with obvious contrast. A dedicated iris image acquisition device must therefore be used, comprising an infrared optical imaging system, an electronic control unit and appropriate software algorithms. The iris image acquisition device is also a feedback-type device: during acquisition it feeds the captured image back to the user, so that the user can conveniently adjust position and angle to suit the iris image taken by the device.
Optionally, acquiring the first face image of the first user includes:
Acquiring an initial image of a first user acquired by a camera device;
performing at least one image processing operation on the initial image to obtain a first face image, the at least one image processing operation comprising: clipping processing, rotation processing, or angle adjustment processing.
In this embodiment, the position of the camera device for acquiring the initial image relative to the first user is random, for example, the monitoring camera is mostly located above the user, and when the user acquires the face image by using the mobile phone, the pose and angle of the image acquired by different users are different. This results in poor feature extraction of the original image.
Therefore, after the initial image of the first user acquired by the image pickup device is acquired, at least one image processing operation such as a cropping process, a rotation process, or an angle adjustment process needs to be performed on the initial image. For example, if the face image of the first user and the face images of other users are included in the initial image at the same time, a cropping process is required. When the face image of the first user in the initial image is not at the applicable angle, rotation processing or angle adjustment processing is also required.
It will be appreciated by those skilled in the art that the image processing operation of the initial image of the first user includes, but is not limited to, a cropping process, a rotation process, or an angle adjustment process, and also includes operations such as adjusting the brightness of the image or denoising the image.
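As a minimal illustrative sketch of such preprocessing (not the implementation of this application), the following Python code crops the detected face region and resizes it to a fixed size; the Haar-cascade detector and the 112x112 target size are assumptions chosen for the example:

```python
import cv2

def preprocess_face(initial_image_path, size=(112, 112)):
    """Crop and normalize an initial image into a face image (illustrative sketch only)."""
    image = cv2.imread(initial_image_path)
    if image is None:
        raise ValueError("could not read the initial image")

    # Detect the face region with a stock Haar cascade (assumed detector, not from the application)
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found in the initial image")

    # Clipping processing: keep only the first detected face region
    x, y, w, h = faces[0]
    face = image[y:y + h, x:x + w]

    # Angle adjustment is represented here only by a fixed-size resize; a full system
    # would also rotate the crop so that the eyes are horizontal (alignment omitted)
    return cv2.resize(face, size)
```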
It will be appreciated by those skilled in the art that after the initial image of the first user's iris is obtained, the initial image of the iris also needs to be processed to obtain an image that can be used to obtain characteristics of the iris.
In this embodiment, the preprocessing of the first face image needs to locate key points such as eyes, nose, mouth, and the like, and compensation for illumination and posture changes needs to be considered during image normalization. Meanwhile, the first iris image is normalized by considering the compensation illumination and the posture change. The iris image preprocessing generally comprises living body detection, quality evaluation and removal of images with poor quality and unrecognizable quality, and internal and external circle positioning and normalization of the iris. Wherein, iris positioning refers to determining the positions of an inner circle, an outer circle and a quadratic curve in an image. Wherein the inner circle is the boundary between the iris and the pupil, the outer circle is the boundary between the iris and the sclera, and the quadratic curve is the boundary between the iris and the upper eyelid and the lower eyelid. The iris image normalization refers to adjusting the iris size in the image to a fixed size set by the recognition system.
The iris positioning algorithm based on Hough transformation can be adopted to perform preprocessing operations such as normalization and positioning on the iris.
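A minimal sketch of Hough-based inner/outer circle localization is given below, assuming OpenCV's HoughCircles on a grayscale eye image; all radius and threshold parameters are illustrative assumptions rather than values from this application:

```python
import cv2
import numpy as np

def locate_iris_circles(eye_gray):
    """Roughly locate the pupil/iris boundary (inner circle) and the iris/sclera
    boundary (outer circle) with the circular Hough transform (illustrative)."""
    blurred = cv2.medianBlur(eye_gray, 5)

    # Inner circle: iris/pupil boundary, searched over small radii
    inner = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                             param1=100, param2=30, minRadius=15, maxRadius=60)
    # Outer circle: iris/sclera boundary, searched over larger radii
    outer = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                             param1=100, param2=30, minRadius=60, maxRadius=150)
    if inner is None or outer is None:
        return None
    # Each result is [x, y, r]; a later step would unwrap the ring between the
    # two circles to a fixed size to normalize the iris.
    return np.uint16(np.around(inner[0][0])), np.uint16(np.around(outer[0][0]))
```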
S202, determining a plurality of first identity marks to be selected of a first user and the face matching degree of each first identity mark to be selected and the first user according to the first face image;
in this embodiment, face feature extraction is required after preprocessing and normalizing the first face image. Face feature extraction, also known as face characterization, is a process of feature modeling of a face. The face feature extraction method which can be adopted comprises two main types: one is a knowledge-based characterization method; the other is a characterization method based on algebraic features or statistical learning. The knowledge-based characterization method mainly obtains feature data which are helpful for face classification according to the shape description of face organs and the distance characteristics between the face organs, wherein feature components generally comprise Euclidean distance, curvature, angle and the like among feature points. The characterization method based on algebraic features or statistical learning mainly utilizes algebraic features to extract face identity information. It can be broadly divided into two general classes of extraction methods, linear and nonlinear features.
After the face features of the first face image are acquired, the face features of the first user need to be matched with the face features in the database. The database pre-acquires face images of all users, and then extracts face features from the face images and stores the face features, so that a feature library is constructed. After the face features of the first user are matched with the feature library in the database, a plurality of face features with higher face feature matching degree with the first user and corresponding face matching degree are obtained, and then user identifications corresponding to the face features, namely first identity identifications to be selected, are determined.
S203, determining a plurality of second identity marks to be selected of the first user and iris matching degree of each second identity mark to be selected and the first user according to the first iris image;
in this embodiment, after preprocessing and normalizing the first iris image, iris feature extraction is required. The iris is primarily a texture feature, which is typically extracted using a texture analysis method. Texture analysis methods that may be employed include statistical, structural, spectral methods. The statistical method is described by the moment of the gray level histogram or gray level co-occurrence matrix. The basic idea of the structural approach is that complex textures can be constructed from simple textures in a regular manner, i.e. assuming that the texture pattern consists of a spatial arrangement of texture primitives, and are identified by means of syntactic analysis. Spectral methods describe the texture characteristics of periodic or near-periodic 2D images by means of the frequency characteristics of fourier transforms.
After the iris features of the first iris image are acquired, the iris features of the first user need to be matched with the iris features in the database. The database acquires iris images of all users in advance, and then extracts iris features from the iris images and stores them, so that a feature library is constructed. After the iris features of the first user are matched against the feature library in the database, a plurality of iris features with a higher degree of match to the first user and the corresponding iris matching degrees are obtained, and the user identifications corresponding to these iris features, namely the second candidate identities, are then determined.
S204, determining target identity marks of the first user in the first identity marks and the second identity marks according to the first identity marks, the second identity marks, the face matching degree of each first identity mark and the first user and the iris matching degree of each second identity mark and the first user.
In this embodiment, after the plurality of first candidate identities and the plurality of second candidate identities are determined, the face matching degree between each first candidate identity and the first user and the iris matching degree between each second candidate identity and the first user need to be considered when further determining the target identity of the first user. Using the features of both the face and the iris to determine the target identity of the first user improves the accuracy of the identity recognition result, and overcomes the drawback that users whose biological characteristics are damaged cannot be identified conveniently and quickly when biometric recognition relies on the face alone.
The ranges of the first identity to be selected and the second identity to be selected can be completely overlapped or incompletely overlapped. When the ranges of the first identity to be selected and the second identity to be selected are coincident, for example, three first identities to be selected are user a, user B and user C, and three second identities to be selected are also user a, user B and user C. At this time, the target identity can be determined according to the face matching degree and the iris matching degree of the identity to be selected.
The target identity can be determined by adopting a matching degree fusion method. The fusion at this time belongs to fractional layer fusion, and the flow comprises the following steps: firstly, the matching degree of each mode is obtained, then, the matching degree is normalized to the same scale, and finally, the fused matching degree is obtained through operations such as minimum, maximum, mean value and the like.
The obtained matching degrees can also be regarded as a feature vector, and the target identity is determined by training a classifier on that basis; support vector machines, likelihood ratios and the like are commonly used. On the basis of the fused matching degree, the candidate identity whose fused face and iris matching degrees score best is determined as the target identity. In a financial system, a user whose identity has been recognized through the above operations can then perform further operations such as large transfers.
The embodiment provides an identity recognition method, which comprises the steps of obtaining a first face image and a first iris image of a first user; determining a plurality of first to-be-selected identities of the first user and the face matching degree of each first to-be-selected identity and the first user according to the first face image; determining a plurality of second identity marks to be selected of the first user and the iris matching degree of each second identity mark to be selected and the first user according to the first iris image; and determining the target identity of the first user from the plurality of first to-be-selected identities and the plurality of second to-be-selected identities according to the plurality of first to-be-selected identities, the plurality of second to-be-selected identities, the face matching degree of each first to-be-selected identity and the first user, and the iris matching degree of each second to-be-selected identity and the first user. According to the method, the first face image and the first iris image of the first user are acquired, and the face matching degree and the iris matching degree are determined at the same time, so that the target identity is determined, the problems of low recognition rate and low safety when only face recognition is used for identity recognition are solved, convenience and easy use services are provided for users with damaged biological characteristics, and customer experience is optimized.
Fig. 3 is a flowchart of a second identification method provided in an embodiment of the present application. As shown in fig. 3, the method of this embodiment, based on the embodiment shown in fig. 2, describes in detail a process of determining, according to a plurality of first to-be-selected identities, a plurality of second to-be-selected identities, a face matching degree between each first to-be-selected identity and the first user, and an iris matching degree between each second to-be-selected identity and the first user, a target identity of the first user among the plurality of first to-be-selected identities and the plurality of second to-be-selected identities.
S301, determining at least one coincidence identity to be selected existing in a plurality of first identity to be selected and a plurality of second identity to be selected;
when the range of the first identity to be selected and the range of the second identity to be selected are not completely coincident, at least one coincident identity to be selected needs to be determined. For example, if the three first identity identifiers to be selected are user a, user B and user C and the three second identity identifiers to be selected are user a, user D and user F, the identity identifiers to be selected are coincident to be user a.
On the basis of determining the identity identification to be selected, determining a target identity identification in at least one identity identification to be selected according to the face matching degree and the iris matching degree of each identity identification to be selected and the first user.
S302, aiming at any one identity mark to be selected, determining the fusion matching degree of the identity mark to be selected and the first user according to the face matching degree of the identity mark to be selected and the first user and the iris matching degree of the identity mark to be selected and the first user;
in this embodiment, a method of fractional layer fusion is used to determine the fusion matching degree of the first user.
A fusion recognition algorithm based on a C-SVC linear kernel function can be adopted to determine the fusion matching degree between the coincident candidate identity and the first user. The kernel function implies a mapping from a low-dimensional space to a high-dimensional space, and this mapping can turn two classes of points that are linearly inseparable in the low-dimensional space into linearly separable ones.
A fusion recognition algorithm based on kernel canonical correlation analysis (Kernel Canonical Correlation Analysis, KCCA) may also be employed to determine the fusion matching degree between the coincident candidate identity and the first user. The KCCA algorithm maps the sample set into a high-dimensional feature space and replaces the inner product operation in the feature space with a predefined reproducing kernel, which provides a new nonlinear learning approach without increasing the amount of computation and can accurately and efficiently express the nonlinear relations contained in the samples.
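As an assumed illustration of the C-SVC score-level fusion mentioned above (not the exact model of this application), the normalized face and iris matching degrees of a candidate can be treated as a two-dimensional score vector and classified by a linear-kernel SVM trained on labelled genuine/impostor score pairs; the training scores below are hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: each row is (face matching degree, iris matching degree),
# already normalized to [0, 1]; label 1 = genuine pair, 0 = impostor pair.
scores = np.array([[0.92, 0.88], [0.85, 0.90], [0.88, 0.79], [0.95, 0.93],
                   [0.30, 0.42], [0.55, 0.20], [0.25, 0.33], [0.48, 0.51]])
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])

fusion_clf = SVC(kernel="linear", probability=True)  # C-SVC with a linear kernel
fusion_clf.fit(scores, labels)

# The fused matching degree of a coincident candidate identity can then be taken as
# the probability of the "genuine" class for its (face, iris) score pair.
fused = fusion_clf.predict_proba([[0.80, 0.75]])[0][1]
print(f"fused matching degree: {fused:.3f}")
```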
It is worth noting that, because the face matching degree and the iris matching degree between a coincident candidate identity and the first user are obtained separately, the resulting matching scores are similar in their type of distribution but do not lie in the same numerical range, so they need to be mapped to the same numerical range before fusion. A minimum-maximum normalization is adopted to perform the linear transformation, with the formula:
S_N = x + (y - x) * (S - S_min) / (S_max - S_min)
wherein S_N is the matching score value after normalization; S is the score value before processing; S_min is the minimum value of the scores of the same kind of biological feature; S_max is the maximum value of the scores of the same kind of biological feature; x represents the lower limit of the transformed interval and is set to 0 in this application; and y represents the upper limit of the transformed interval and is set to 1 in this application.
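A direct transcription of this normalization into code, as a sketch with illustrative variable names and sample values:

```python
def min_max_normalize(score, score_min, score_max, lower=0.0, upper=1.0):
    """Map a raw matching score into the interval [lower, upper] using the
    min-max normalization described above."""
    if score_max == score_min:
        raise ValueError("score_max and score_min must differ")
    return lower + (upper - lower) * (score - score_min) / (score_max - score_min)

# e.g. a raw face score of 78 on a 40..95 scale becomes roughly 0.69
print(round(min_max_normalize(78, 40, 95), 2))
```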
After the fusion matching degree between each coincident candidate identity and the first user is determined, the target identity is determined among the at least one coincident candidate identity according to the fusion matching degree between each coincident candidate identity and the first user.
S303, determining the coincidence candidate identity with the highest fusion matching degree with the first user in at least one coincidence candidate identity as a target identity.
The fusion matching degree of the identity mark to be selected and the first user is generally a vector, and the vector comprises the face matching degree and the iris matching degree after fusion. The determination of the identity to be selected with the highest fusion matching degree with the first user may take a variety of forms.
For example, suppose there are three coincident candidate identities, user A, user B and user C, with corresponding fusion matching degrees (a1, b1), (a2, b2) and (a3, b3), where ai represents the fused face matching degree and bi represents the fused iris matching degree. With the weight of the face matching degree set to w1 and the weight of the iris matching degree set to w2, the weighted sum ai*w1 + bi*w2 is calculated for each candidate, and the coincident candidate identity with the largest weighted sum is determined as the target identity.
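A sketch of this weighted-sum selection; the weights and score values below are hypothetical examples, not values fixed by this application:

```python
def select_target_identity(fused_scores, w1=0.5, w2=0.5):
    """fused_scores maps each coincident candidate identity to its
    (face matching degree, iris matching degree) after fusion; the identity
    with the largest weighted sum a*w1 + b*w2 is returned as the target."""
    return max(fused_scores, key=lambda identity: w1 * fused_scores[identity][0]
                                                  + w2 * fused_scores[identity][1])

# Hypothetical example mirroring users A, B and C above
scores = {"user_A": (0.91, 0.87), "user_B": (0.83, 0.90), "user_C": (0.78, 0.70)}
print(select_target_identity(scores))  # -> 'user_A' with equal weights
```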
The embodiment provides an identity recognition method, which comprises: determining at least one coincident candidate identity present in both the plurality of first candidate identities and the plurality of second candidate identities; for any coincident candidate identity, determining the fusion matching degree between the coincident candidate identity and the first user according to the face matching degree between the coincident candidate identity and the first user and the iris matching degree between the coincident candidate identity and the first user; and determining the coincident candidate identity with the highest fusion matching degree with the first user among the at least one coincident candidate identity as the target identity. The method allows for the case where the ranges of the first candidate identities and the second candidate identities do not completely coincide: the coincident candidate identities are determined first, and fusion is then performed to determine the target identity, which greatly improves the practicability of the method.
Fig. 4 is a flowchart of an identification method provided in an embodiment of the present application. As shown in fig. 4, the method of the present embodiment describes in detail, based on the embodiment shown in fig. 2, a process of determining a plurality of first candidate identities of the first user according to the first face image, and a face matching degree between each first candidate identity and the first user.
S401, acquiring a first face feature vector of a first face image;
in this embodiment, a face feature extraction method is adopted to obtain a first face feature vector of a first face image. Characterization methods based on statistical learning may be employed. Characterization methods based on statistical learning are roughly classified into two major types, namely linear and nonlinear, wherein linear feature extraction methods mainly comprise principal component analysis (Principal Component Analysis, PCA), linear discriminant analysis (Linear Discriminant Analysis, LDA), independent component analysis (Independent Component Analysis, ICA) and the like, and nonlinear feature extraction methods comprise kernel principal component analysis (Kernel Principal Component Analysis, KPCA), local linear embedding (Locally Linear Embedding, LLE) and the like.
The LDA-LLE characteristic of the face image can be extracted by adopting a nonlinear dimension reduction algorithm combining LLE and LDA.
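A rough sketch of such an LLE-plus-LDA pipeline using scikit-learn is shown below; the component counts, neighbour count and the random placeholder data are assumptions for illustration only:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.pipeline import make_pipeline

# X: flattened, preprocessed face images (n_samples x n_pixels); y: identity labels.
rng = np.random.default_rng(0)
X = rng.random((60, 28 * 28))          # placeholder data standing in for real faces
y = np.repeat(np.arange(6), 10)

# LLE performs the nonlinear dimension reduction, LDA then learns a discriminative projection.
face_embedder = make_pipeline(
    LocallyLinearEmbedding(n_neighbors=12, n_components=8, random_state=0),
    LinearDiscriminantAnalysis(n_components=5),
)
face_features = face_embedder.fit_transform(X, y)
print(face_features.shape)  # (60, 5)
```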
S402, acquiring the face similarity between the first face feature vector and each face feature vector in a face database;
the method for feature matching recognition is used for obtaining the face similarity between the first face feature vector and each face feature vector in the face database. The feature matching identification is to input the extracted features into a classification decision device to carry out final training decision, thereby completing face recognition. The principle is that the extracted characteristic information is compared and classified mutually, and if the similarity of the two samples is higher, the two samples are judged to be similar. The feature matching methods that can be employed can be roughly classified into a distance-based classification method and a classification method based on statistical properties of sample distribution.
The face similarity between the first face feature vector and each face feature vector in the face database may be calculated with a classification method based on the Euclidean distance. A feature point is first selected in the first image, the two feature points with the smallest Euclidean distance to it are then searched among all database images, and the feature matching rate is used to reflect the similarity between images. The Euclidean distance is computed as:
d_E = sqrt( sum_{i=1}^{k} (x_i - y_i)^2 )
wherein d_E represents the Euclidean distance; x and y represent face feature vectors; and k represents the feature vector dimension.
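The distance computation can be sketched as follows; converting the distance into a similarity in (0, 1] with a 1/(1+d) mapping is an added illustrative assumption, since only the distance formula is given above:

```python
import numpy as np

def euclidean_distance(x, y):
    """d_E between two k-dimensional face feature vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sqrt(np.sum((x - y) ** 2)))

def face_similarity(x, y):
    """Turn the distance into a similarity score in (0, 1]; the 1/(1+d) mapping
    is an illustrative choice, not specified by the application."""
    return 1.0 / (1.0 + euclidean_distance(x, y))

print(round(face_similarity([0.1, 0.4, 0.3], [0.2, 0.5, 0.1]), 3))
```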
S403, determining a plurality of face feature vectors to be selected in the face database according to the face similarity between the first face feature vector and each face feature vector in the face database;
after the face similarity between the first face feature vector and each face feature vector in the face database is determined, the face feature vector corresponding to the well-represented face similarity can be selected as the face feature vector to be selected.
The method of setting the threshold value can be adopted, the similarity of the faces is compared with the threshold value, and when the similarity of the faces is larger than the threshold value, the corresponding face feature vector is the face feature vector to be selected.
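A sketch of this threshold step, assuming the face database is a simple mapping from identity to feature vector; the threshold value and the 1/(1+distance) similarity mapping are illustrative assumptions:

```python
import numpy as np

def candidate_face_vectors(first_vector, face_database, threshold=0.8):
    """Return {identity: similarity} for every face feature vector in the database
    whose similarity to the first face feature vector exceeds the threshold."""
    first_vector = np.asarray(first_vector, dtype=float)
    candidates = {}
    for identity, vector in face_database.items():
        distance = float(np.linalg.norm(first_vector - np.asarray(vector, dtype=float)))
        similarity = 1.0 / (1.0 + distance)   # same illustrative mapping as above
        if similarity > threshold:
            candidates[identity] = similarity
    return candidates

# Hypothetical database of enrolled face feature vectors
db = {"user_A": [0.10, 0.41, 0.30], "user_B": [0.90, 0.10, 0.70]}
print(candidate_face_vectors([0.12, 0.40, 0.28], db, threshold=0.9))  # keeps only user_A
```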
As will be appreciated by those skilled in the art, the method of determining a plurality of candidate face feature vectors in the face database includes, but is not limited to, the above-described method based on the face similarity of the first face feature vector to each of the face feature vectors in the face database.
S404, determining the identity marks corresponding to the face feature vectors to be selected as a plurality of first identity marks to be selected;
in the database, all face feature vectors are stored, and the mapping relation between the face feature vectors and the corresponding identity marks is also stored. Therefore, after the plurality of face feature vectors to be selected are determined, the identity marks corresponding to the face feature vectors to be selected can be queried, and then the identity marks to be selected are determined as the first identity marks.
S405, determining the similarity between the face feature vector to be selected corresponding to the first identity to be selected and the first face feature vector as the face matching degree between the first identity to be selected and the first user.
The embodiment provides an identity recognition method, which comprises the steps of obtaining a first face feature vector of a first face image; acquiring the face similarity between the first face feature vector and each face feature vector in a face database; according to the face similarity between the first face feature vector and each face feature vector in the face database, determining a plurality of face feature vectors to be selected in the face database; the identity marks corresponding to the face feature vectors to be selected are determined to be a plurality of first identity marks to be selected; and determining the similarity between the face feature vector to be selected corresponding to the first identity to be selected and the first face feature vector as the face matching degree between the first identity to be selected and the first user. According to the method, the face matching degree of the first identity to be selected and the first user is determined by carrying out feature vector extraction and feature matching identification operation on the first face image, so that the method has higher accuracy and practicability.
Fig. 5 is a flowchart of an identification method provided in an embodiment of the present application. As shown in fig. 5, the method of the present embodiment, based on the embodiment shown in fig. 2, describes in detail a process of determining a plurality of second candidate identities of the first user according to the first iris image, and the iris matching degree of each second candidate identity and the first user.
S501, acquiring a first iris characteristic vector of a first iris image;
in this embodiment, a first iris feature vector of a first face image is obtained by using an iris feature extraction method. Phase two-dimensional Gabor filtering-based, zero-crossing-point-description-based wavelet zero-crossing detection, texture analysis-based laplacian pyramid method, shape-based and other analysis methods can be adopted.
A Gabor and PCA combined algorithm may be used to extract feature vectors of the iris image.
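A rough sketch of the Gabor-plus-PCA idea; the filter-bank parameters, the normalized iris size and the number of principal components are assumptions, as the application only names the combination:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def gabor_responses(iris_gray, orientations=4):
    """Filter a normalized iris image with a small Gabor filter bank and
    return the concatenated responses as one raw feature vector."""
    features = []
    for i in range(orientations):
        theta = i * np.pi / orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(iris_gray.astype(np.float32), cv2.CV_32F, kernel)
        features.append(response.ravel())
    return np.concatenate(features)

# Placeholder normalized iris images (e.g. 64x256 after unwrapping to a fixed size)
rng = np.random.default_rng(0)
irises = np.stack([gabor_responses(rng.random((64, 256))) for _ in range(20)])

# PCA compresses the Gabor responses into compact iris feature vectors
pca = PCA(n_components=10)
iris_feature_vectors = pca.fit_transform(irises)
print(iris_feature_vectors.shape)  # (20, 10)
```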
S502, obtaining iris similarity between a first iris feature vector and each iris feature vector in an iris database;
in this embodiment, the iris similarity between the first iris feature vector and each iris feature vector in the iris database is obtained by using a feature matching recognition method. The iris similarity between the first iris feature vector and each iris feature vector in the iris database can be calculated by using the euclidean distance classification method, and the process is described above, and will not be repeated here.
S503, determining a plurality of iris feature vectors to be selected in the iris database according to the iris similarity between the first iris feature vector and each iris feature vector in the iris database;
after the iris similarity between the first iris feature vector and each iris feature vector in the iris database is determined, the iris feature vector corresponding to the well-represented iris similarity can be selected as the iris feature vector to be selected.
The method for setting the threshold value can be adopted, the iris similarity is compared with the threshold value, the iris similarity is larger than the threshold value, and the iris feature vector corresponding to the iris similarity is the iris feature vector to be selected.
S504, determining the identity marks corresponding to the iris feature vectors to be selected as a plurality of second identity marks to be selected;
s505, determining the similarity between the iris feature vector to be selected corresponding to the second identity to be selected and the first iris feature vector as the iris matching degree between the second identity to be selected and the first user.
The embodiment provides an identity recognition method, which comprises the steps of obtaining a first iris characteristic vector of a first iris image; obtaining iris similarity between a first iris feature vector and each iris feature vector in an iris database; determining a plurality of iris feature vectors to be selected in an iris database according to the iris similarity between the first iris feature vector and each iris feature vector in the iris database; the identity marks corresponding to the iris feature vectors to be selected are determined to be a plurality of second identity marks to be selected; and determining the similarity between the iris feature vector to be selected corresponding to the second identity to be selected and the first iris feature vector as the iris matching degree between the second identity to be selected and the first user. According to the method, the first iris image is used for carrying out feature vector extraction and feature matching identification operation, and the iris matching degree of the second identity to be selected and the first user is determined, so that the method has higher accuracy and practicability.
The following describes the technical solution of the present application in detail with a specific embodiment.
Fig. 6 is a flow chart of the mobile banking large transfer provided in the present application. As shown in fig. 6, a scenario in which a bank performs identification of a large transfer using the multi-modal biometric technology is taken as an example, and the following conditions need to be satisfied before the method is used: firstly, when the bank card opens an account, face information and iris information of the person who opens the account are acquired and stored in a face and iris characteristic database of the bank. And setting threshold information for judging the large transfer amount in a banking system, and setting threshold information for the fusion matching degree of the human face and the iris. When the user uses the mobile banking APP to transfer the large amount of money, the user enters a mobile phone main interface and clicks the transfer function. After the account number, the name and the transfer amount of the transferred user are filled in by the user, the mobile phone bank APP judges whether the input amount belongs to large transfer or not. If the money amount of the transfer is judged to belong to large-amount transfer and the user agrees, acquiring a face image and an iris image of the user through a mobile phone camera for the transfer during the transfer, and performing identity verification according to the face image and the iris image.
Fig. 7 is a schematic structural diagram of an identification device according to an embodiment of the present application. The apparatus of this embodiment may be in the form of software and/or hardware. As shown in fig. 7, an identification device 700 provided in an embodiment of the present application includes an obtaining module 701 and a determining module 702,
an acquiring module 701, configured to acquire a first face image and a first iris image of a first user;
a determining module 702, configured to determine a plurality of first to-be-selected identities of the first user and a face matching degree between each of the first to-be-selected identities and the first user according to the first face image;
the determining module 702 is further configured to determine a plurality of second to-be-selected identities of the first user and an iris matching degree between each second to-be-selected identity and the first user according to the first iris image;
the determining module 702 is further configured to determine a target identity of the first user from the plurality of first to-be-selected identities and the plurality of second to-be-selected identities according to the plurality of first to-be-selected identities, the plurality of second to-be-selected identities, the face matching degree of each first to-be-selected identity and the first user, and the iris matching degree of each second to-be-selected identity and the first user.
In one possible implementation manner, the determining module is specifically configured to:
determining at least one coincidence identity to be selected existing in the first identity to be selected and the second identity to be selected;
and determining a target identity in at least one identity to be selected according to the face matching degree of each identity to be selected to the first user and the iris matching degree of each identity to be selected to the first user.
In one possible implementation manner, the determining module is specifically configured to:
for any coincident candidate identity, determining a fusion matching degree between the coincident candidate identity and the first user according to the face matching degree between the coincident candidate identity and the first user and the iris matching degree between the coincident candidate identity and the first user;
and determining the target identity from the at least one coincident candidate identity according to the fusion matching degree between each coincident candidate identity and the first user.
In one possible implementation manner, the determining module is specifically configured to:
and determining, from the at least one coincident candidate identity, the coincident candidate identity with the highest fusion matching degree with the first user as the target identity.
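A compact Python sketch of this intersection-and-fusion step is given below. The equal-weighted sum used to compute the fusion matching degree is only one plausible choice, since this application does not fix the fusion formula, and the function name select_target_identity is hypothetical.

def select_target_identity(face_candidates: dict, iris_candidates: dict,
                           face_weight: float = 0.5):
    # face_candidates / iris_candidates map a candidate identity to its matching degree.
    # Step 1: keep only the coincident candidate identities present in both lists.
    coincident = face_candidates.keys() & iris_candidates.keys()
    if not coincident:
        return None  # no identity appears in both lists; recognition fails
    # Step 2: fuse the two matching degrees for each coincident candidate identity.
    fusion = {
        identity: face_weight * face_candidates[identity]
                  + (1 - face_weight) * iris_candidates[identity]
        for identity in coincident
    }
    # Step 3: the candidate with the highest fusion matching degree is the target identity.
    return max(fusion, key=fusion.get)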
In one possible implementation manner, the determining module is specifically configured to:
acquiring a first face feature vector of the first face image;
acquiring the face similarity between the first face feature vector and each face feature vector in a face database;
determining a plurality of candidate face feature vectors in the face database according to the face similarity between the first face feature vector and each face feature vector in the face database;
determining the identities corresponding to the candidate face feature vectors as a plurality of first candidate identities;
and determining the similarity between the candidate face feature vector corresponding to each first candidate identity and the first face feature vector as the face matching degree between that first candidate identity and the first user.
In one possible implementation manner, the determining module is specifically configured to:
acquiring a first iris feature vector of the first iris image;
acquiring the iris similarity between the first iris feature vector and each iris feature vector in an iris database;
determining a plurality of candidate iris feature vectors in the iris database according to the iris similarity between the first iris feature vector and each iris feature vector in the iris database;
determining the identities corresponding to the candidate iris feature vectors as a plurality of second candidate identities;
and determining the similarity between the candidate iris feature vector corresponding to each second candidate identity and the first iris feature vector as the iris matching degree between that second candidate identity and the first user.
In one possible implementation manner, the acquiring module is specifically configured to:
acquiring an initial image of a first user acquired by a camera device;
performing at least one image processing operation on the initial image to obtain the first face image, the at least one image processing operation including: cropping, rotation, or angle adjustment.
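As an illustration of these image processing operations, the following Pillow-based sketch crops and rotates the initial image. In practice the crop box and rotation angle would come from a face detection step; the fixed default values here are placeholders rather than part of this application.

from PIL import Image

def preprocess_initial_image(path: str, crop_box=(100, 100, 400, 400), angle: float = 0.0):
    # Apply the cropping / rotation / angle-adjustment operations described above.
    img = Image.open(path).convert("RGB")
    img = img.crop(crop_box)                  # cropping to the detected face region
    if angle:
        img = img.rotate(angle, expand=True)  # rotation / angle adjustment
    return img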
The identity recognition device provided in this embodiment may be used to perform the above method embodiments; its implementation principle and technical effects are similar and are not repeated here.
An embodiment of the present application provides a schematic structural diagram of an electronic device. Referring to Fig. 8, the electronic device 20 may include a processor 21 and a memory 22. Illustratively, the processor 21 and the memory 22 are interconnected by a bus 23.
The memory 22 stores computer-executable instructions;
the processor 21 executes the computer-executable instructions stored in the memory 22, so that the electronic device performs the identity recognition method described above.
It should be understood that the processor 21 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with this application may be directly executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The memory 22 may include a high-speed random access memory (RAM) and may further include a non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, or an optical disk.
An embodiment of the present application correspondingly provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when the computer-executable instructions are executed by a processor, they are used to implement the identity recognition method described above.
It should be noted that the identity recognition method and device provided in the present application can be used in the financial field, and can also be used in any field other than the financial field. The application field of the identity recognition method and device is not limited in the present application.
It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties. The collection, use and processing of the related data shall comply with the relevant laws, regulations and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose to authorize or refuse.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. An identity recognition method, comprising:
acquiring a first face image and a first iris image of a first user;
determining, according to the first face image, a plurality of first candidate identities of the first user and a face matching degree between each first candidate identity and the first user;
determining, according to the first iris image, a plurality of second candidate identities of the first user and an iris matching degree between each second candidate identity and the first user;
and determining a target identity of the first user from the plurality of first candidate identities and the plurality of second candidate identities according to the plurality of first candidate identities, the plurality of second candidate identities, the face matching degree between each first candidate identity and the first user, and the iris matching degree between each second candidate identity and the first user.
2. The method according to claim 1, wherein determining the target identity of the first user from the plurality of first candidate identities and the plurality of second candidate identities according to the plurality of first candidate identities, the plurality of second candidate identities, the face matching degree between each first candidate identity and the first user, and the iris matching degree between each second candidate identity and the first user comprises:
determining at least one coincident candidate identity that is present in both the first candidate identities and the second candidate identities;
and determining the target identity from the at least one coincident candidate identity according to the face matching degree between each coincident candidate identity and the first user and the iris matching degree between each coincident candidate identity and the first user.
3. The method according to claim 2, wherein determining the target identity from the at least one coincident candidate identity according to the face matching degree between each coincident candidate identity and the first user and the iris matching degree between each coincident candidate identity and the first user comprises:
for any coincident candidate identity, determining a fusion matching degree between the coincident candidate identity and the first user according to the face matching degree between the coincident candidate identity and the first user and the iris matching degree between the coincident candidate identity and the first user;
and determining the target identity from the at least one coincident candidate identity according to the fusion matching degree between each coincident candidate identity and the first user.
4. The method according to claim 3, wherein determining the target identity from the at least one coincident candidate identity according to the fusion matching degree between each coincident candidate identity and the first user comprises:
determining, from the at least one coincident candidate identity, the coincident candidate identity with the highest fusion matching degree with the first user as the target identity.
5. The method according to any one of claims 1-4, wherein determining, according to the first face image, a plurality of first candidate identities of the first user and a face matching degree between each first candidate identity and the first user comprises:
acquiring a first face feature vector of the first face image;
acquiring the face similarity between the first face feature vector and each face feature vector in a face database;
determining a plurality of candidate face feature vectors in the face database according to the face similarity between the first face feature vector and each face feature vector in the face database;
determining the identities corresponding to the candidate face feature vectors as the plurality of first candidate identities;
and determining the similarity between the candidate face feature vector corresponding to each first candidate identity and the first face feature vector as the face matching degree between that first candidate identity and the first user.
6. The method according to any one of claims 1-5, wherein determining, according to the first iris image, a plurality of second candidate identities of the first user and an iris matching degree between each second candidate identity and the first user comprises:
acquiring a first iris feature vector of the first iris image;
acquiring the iris similarity between the first iris feature vector and each iris feature vector in an iris database;
determining a plurality of candidate iris feature vectors in the iris database according to the iris similarity between the first iris feature vector and each iris feature vector in the iris database;
determining the identities corresponding to the candidate iris feature vectors as the plurality of second candidate identities;
and determining the similarity between the candidate iris feature vector corresponding to each second candidate identity and the first iris feature vector as the iris matching degree between that second candidate identity and the first user.
7. The method according to any one of claims 1-6, wherein acquiring the first face image of the first user comprises:
acquiring an initial image of the first user collected by a camera device;
and performing at least one image processing operation on the initial image to obtain the first face image, the at least one image processing operation including: cropping, rotation, or angle adjustment.
8. An identity recognition device, comprising:
an acquisition module, configured to acquire a first face image and a first iris image of a first user;
a determining module, configured to determine, according to the first face image, a plurality of first candidate identities of the first user and a face matching degree between each first candidate identity and the first user;
the determining module is further configured to determine, according to the first iris image, a plurality of second candidate identities of the first user and an iris matching degree between each second candidate identity and the first user;
the determining module is further configured to determine a target identity of the first user from the plurality of first candidate identities and the plurality of second candidate identities according to the plurality of first candidate identities, the plurality of second candidate identities, the face matching degree between each first candidate identity and the first user, and the iris matching degree between each second candidate identity and the first user.
9. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to cause the electronic device to perform the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and when the computer-executable instructions are executed by a processor, they are used to implement the identity recognition method according to any one of claims 1 to 7.
CN202310286620.5A 2023-03-22 2023-03-22 Identity recognition method, device, equipment and storage medium Pending CN116229557A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310286620.5A CN116229557A (en) 2023-03-22 2023-03-22 Identity recognition method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116229557A true CN116229557A (en) 2023-06-06

Family

ID=86587386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310286620.5A Pending CN116229557A (en) 2023-03-22 2023-03-22 Identity recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116229557A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination