CN117114690A - Payment verification method based on iris recognition and related products - Google Patents


Info

Publication number
CN117114690A
CN117114690A
Authority
CN
China
Prior art keywords
iris
image
human eye
payment verification
trained
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311076876.XA
Other languages
Chinese (zh)
Inventor
Zhou Bo (周博)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202311076876.XA
Publication of CN117114690A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4014 Identity check for transactions
    • G06Q20/40145 Biometric identity checks
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/30 Noise filtering
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G06V40/197 Matching; Classification


Abstract

The application discloses a payment verification method based on iris recognition and related products, applicable to the artificial intelligence and financial fields. The method comprises the following steps: obtaining a human eye image of qualified quality; extracting an iris image from the human eye image using a pre-trained iris semantic segmentation network; extracting a feature vector from the iris image using a pre-trained iris feature extraction network; and, during payment verification, comparing the feature vector with feature samples stored in a server, wherein the payment verification is considered successful if the similarity reaches a first preset threshold and failed if it does not. Iris recognition is thus realized through a deep-learning algorithm, improving both the efficiency and the security of payment verification.

Description

Payment verification method based on iris recognition and related products
Technical Field
The application relates to the field of artificial intelligence, in particular to a payment verification method based on iris recognition and a related product.
Background
With the development of the internet, more and more people choose to complete transfers, payments and other operations through online banking. Compared with the traditional bank counter, online banking offers greater flexibility and convenience, saving waiting time and unnecessary trouble.
Existing online banking payment verification methods include SMS verification codes, passwords, dynamic passwords, USB security keys (U-shields) and the like. Owing to their differing usage scenarios, these methods suffer from low verification efficiency and poor security.
How to improve the security of payment verification while maintaining its efficiency is therefore an urgent problem for those skilled in the art.
Disclosure of Invention
To address these problems, the application provides a payment verification method based on iris recognition and related products, which realize iris recognition through a deep-learning algorithm and overcome the low verification efficiency and poor security of the prior art.
In a first aspect, the present application provides a payment verification method based on iris recognition, including:
obtaining a human eye image with qualified quality;
extracting an iris image from the human eye image by utilizing a pre-trained iris semantic segmentation network;
extracting feature vectors from the iris image by utilizing a pre-trained iris feature extraction network;
and during payment verification, comparing the feature vector with a feature sample stored in a server, if the similarity reaches a first preset threshold, considering that the payment verification is successful, and if the similarity does not reach the first preset threshold, considering that the payment verification is failed.
Optionally, before the obtaining the qualified human eye image, the method further includes:
collecting human eye images of the target object by using an image collecting device;
comparing the human eye image with a preset evaluation standard;
marking the human eye image which accords with the preset evaluation standard as a human eye image with qualified quality;
and marking the human eye images which do not meet the preset evaluation standard as human eye images with unqualified quality.
Optionally, before the image acquisition device acquires the eye image of the target object, the method further includes:
performing living body detection on a target object by using an image acquisition device;
if the living body of the target object passes, continuing to acquire the human eye image of the target object;
and stopping acquiring the human eye image of the target object if the living body detection of the target object does not pass.
Optionally, the extracting the iris image from the human eye image by using the pre-trained iris semantic segmentation network includes:
segmenting the human eye image using a pre-trained iris semantic segmentation network to obtain an iris image containing target noise;
separating out the target noise using the pre-trained iris semantic segmentation network, thereby extracting the iris image;
wherein the target noise includes: the pupil, the sclera and the eyelid.
Optionally, before extracting the feature vector from the iris image by using the pre-trained iris feature extraction network, the method further includes:
performing image interception on the iris image, after the target noise has been separated out by the pre-trained iris semantic segmentation network, according to an interception strategy, so as to obtain a first intercepted image;
wherein the interception strategy takes a square image block in the 6 o'clock direction of the iris image, one side of which is tangent to the inner edge of the iris.
Optionally, the extracting feature vectors from the iris image by using the pre-trained iris feature extraction network includes:
and extracting feature vectors from the first intercepted image of the iris image by utilizing the pre-trained iris feature extraction network.
Optionally, before the feature vector is compared with the feature sample stored in the server during payment verification, wherein the payment verification is considered successful if the similarity reaches a first preset threshold and failed if it does not, the method further includes:
judging whether the target object corresponding to the human eye image is using the method for the first time;
if yes, uploading the feature vector extracted from the iris image by utilizing the pre-trained iris feature extraction network to an iris sample library for storage;
if not, continuing to carry out payment verification.
In a second aspect, the present application provides a payment verification device based on iris recognition, including:
the acquisition module is used for acquiring human eye images with qualified quality;
the first extraction module is used for extracting an iris image from the human eye image by utilizing a pre-trained iris semantic segmentation network;
the second extraction module is used for extracting feature vectors from the iris image by utilizing a pre-trained iris feature extraction network;
and the verification module is used for comparing the feature vector with the feature sample stored in the server during payment verification, if the similarity reaches a first preset threshold value, the payment verification is considered to be successful, and if the similarity does not reach the first preset threshold value, the payment verification is considered to be failed.
In a third aspect, the present application provides a payment verification device based on iris recognition, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the iris recognition based payment verification method as described in any one of the above when executing the computer program.
In a fourth aspect, the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the iris recognition based payment verification method as described in any of the above.
Compared with the prior art, the application has the following advantages:
the application firstly acquires the human eye image with qualified quality. And then extracting an iris image from the human eye image by utilizing a pre-trained iris semantic segmentation network, and extracting feature vectors from the iris image by utilizing a pre-trained iris feature extraction network. And finally, comparing the feature vector with a feature sample stored in the server during payment verification, if the similarity reaches a first preset threshold value, considering that the payment verification is successful, and if the similarity does not reach the first preset threshold value, considering that the payment verification is failed. Therefore, iris recognition is realized by means of a high-precision algorithm of deep learning, the efficiency and the safety of online bank payment verification are improved, and better payment experience is brought to users.
Drawings
FIG. 1 is a flow chart of a payment verification method based on iris recognition provided by the application;
fig. 2 is a schematic structural diagram of a payment verification device based on iris recognition.
Detailed Description
As described above, existing online banking payment verification methods suffer from low verification efficiency and poor security. Specifically, existing online banking payment modes include SMS verification codes, passwords, dynamic passwords, USB security keys (U-shields) and the like; all of them require the user to perform some operation, which lengthens the overall verification time and lowers verification efficiency. Moreover, most of these are single-factor credentials that are easily stolen by others, resulting in poor security.
In order to solve the above problems, the present application provides a payment verification method based on iris recognition, comprising: firstly, obtaining a human eye image with qualified quality. And then extracting an iris image from the human eye image by utilizing a pre-trained iris semantic segmentation network, and extracting feature vectors from the iris image by utilizing a pre-trained iris feature extraction network. And finally, comparing the feature vector with a feature sample stored in the server during payment verification, if the similarity reaches a first preset threshold value, considering that the payment verification is successful, and if the similarity does not reach the first preset threshold value, considering that the payment verification is failed.
Therefore, iris recognition is realized by means of a high-precision algorithm of deep learning, the efficiency and the safety of online bank payment verification are improved, and better payment experience is brought to users.
It should be noted that the payment verification method based on iris recognition and the related products provided by the application can be applied to the artificial intelligence field and the financial field. The foregoing is merely exemplary, and the application fields of the iris recognition-based payment verification method and related products provided by the present application are not limited.
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a flowchart of a payment verification method based on iris recognition according to an embodiment of the present application. Referring to fig. 1, the payment verification method based on iris recognition provided by the present application may include:
s101: and obtaining the human eye image with qualified quality.
In practical application, image quality determines image definition. To ensure the accuracy of the subsequent iris image extraction, a quality-qualified human eye image must be obtained before payment verification continues.
In addition, since a quality-qualified human eye image can be obtained in more than one way, the application is described below in terms of one possible acquisition manner.
In one case, the question is how to obtain a quality-qualified human eye image. Correspondingly, before the quality-qualified human eye image is obtained, the method further includes:
collecting human eye images of the target object by using an image collecting device;
comparing the human eye image with a preset evaluation standard;
marking the human eye image which accords with the preset evaluation standard as a human eye image with qualified quality;
and marking the human eye images which do not meet the preset evaluation standard as human eye images with unqualified quality.
In practical application, suppose that when user A (the target object) performs payment verification, the iris recognition-based payment verification device photographs user A's whole eye with the image acquisition device, acquires a human eye image of user A, and then evaluates its image quality. For example, full-reference image quality evaluation may be adopted: an ideal image is selected as the reference, the difference between the image under evaluation and the reference image is compared, and the degree of distortion of the evaluated image is analyzed to obtain its quality evaluation. Specifically, the acquired human eye image is compared with a pre-stored ideal image and its degree of distortion is analyzed. If the preset evaluation standard sets the allowable distortion at 20%, a human eye image whose analyzed distortion is below 20% is considered to meet the standard and is marked as quality-qualified; otherwise it is marked as quality-unqualified.
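The full-reference evaluation described above can be sketched as follows. This is a minimal illustration assuming a simple RMSE-based distortion measure normalised by the reference image's dynamic range; the patent does not specify which distortion metric is used, and the function names are hypothetical.

```python
import numpy as np

def distortion_ratio(candidate: np.ndarray, reference: np.ndarray) -> float:
    """Relative distortion of a candidate image against an ideal reference:
    root-mean-square error normalised by the reference's dynamic range."""
    candidate = candidate.astype(np.float64)
    reference = reference.astype(np.float64)
    rmse = np.sqrt(np.mean((candidate - reference) ** 2))
    span = reference.max() - reference.min()
    return float(rmse / span) if span > 0 else 0.0

def is_quality_qualified(candidate, reference, max_distortion=0.20):
    # Mark the image "quality qualified" when distortion stays below 20%.
    return distortion_ratio(candidate, reference) < max_distortion
```

A perfect copy of the reference yields zero distortion and passes; a heavily shifted image exceeds the 20% standard and is marked unqualified.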
In addition, the application describes one possible way to ensure that the acquired human eye image comes from a live subject.
In one case, the question is how to obtain a quality-qualified human eye image from a live target object. Correspondingly, before the image acquisition device is used to acquire the human eye image of the target object, the method further includes:
performing living body detection on a target object by using an image acquisition device;
if the living body of the target object passes, continuing to acquire the human eye image of the target object;
and stopping acquiring the human eye image of the target object if the living body detection of the target object does not pass.
In practical application, to improve security and prevent others from using a stolen human eye image for payment verification, the iris recognition-based payment verification device may use the image acquisition device to perform liveness detection on the target object before acquiring the human eye image. Specifically, detection can be achieved by comprehensively analyzing color texture, non-rigid motion deformation, material, and image and video quality. If the liveness detection of the target object passes, acquisition of the human eye image continues; if it does not pass, acquisition of the human eye image stops and an alarm is raised.
S102: and extracting an iris image from the human eye image by using the pre-trained iris semantic segmentation network.
In practical application, the iris, as a human biometric feature, is highly distinctive and therefore well suited as an input for payment verification. Specifically, after the quality-qualified human eye image is obtained, the acquired human eye image is segmented using the trained iris semantic segmentation network to obtain the iris image required for subsequent processing. It should be noted that the backbone of the pre-trained semantic segmentation network is based on the classical semantic segmentation network FCN. As an improvement to that network, a CBAM attention module is added to the backbone after each of the first five pooling layers: the output of each of these pooling layers is processed by the Channel Attention Mechanism (CAM) and Spatial Attention Mechanism (SAM) of the CBAM attention module before being input to the next layer.
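The CBAM processing described above can be illustrated with a minimal sketch. This is not the trained module: the learned shared MLP and convolution of a real CBAM block are replaced here by simple pooled sums passed through a sigmoid, so the sketch only shows how channel attention and then spatial attention re-weight a pooling layer's output before it enters the next layer.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_sketch(feature: np.ndarray) -> np.ndarray:
    """Apply CBAM-style channel then spatial attention to a (C, H, W) map.
    Learned weights are omitted; only the data flow is illustrated."""
    # Channel attention: squeeze spatial dims by avg- and max-pooling,
    # combine, and pass through a sigmoid to get one weight per channel.
    avg_pool = feature.mean(axis=(1, 2))          # (C,)
    max_pool = feature.max(axis=(1, 2))           # (C,)
    channel_w = _sigmoid(avg_pool + max_pool)     # (C,)
    refined = feature * channel_w[:, None, None]
    # Spatial attention: squeeze the channel dim the same way to derive
    # one weight per spatial location.
    avg_map = refined.mean(axis=0)                # (H, W)
    max_map = refined.max(axis=0)                 # (H, W)
    spatial_w = _sigmoid(avg_map + max_map)       # (H, W)
    return refined * spatial_w[None, :, :]
```

The output keeps the input's shape, so the attended map can be fed to the next layer exactly where the raw pooling output would have gone.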
In addition, since the iris image can be extracted in more than one way, the application is described below in terms of one possible extraction manner.
In one case, the question is how to extract the iris image. Correspondingly, S102, extracting the iris image from the human eye image using the pre-trained iris semantic segmentation network, specifically includes the following steps:
segmenting the human eye image using a pre-trained iris semantic segmentation network to obtain an iris image containing target noise;
separating out the target noise using the pre-trained iris semantic segmentation network, thereby extracting the iris image;
wherein the target noise includes: the pupil, the sclera and the eyelid.
In practical application, extracting the iris image means first segmenting the image with the pre-trained iris semantic segmentation network, which yields an iris image containing noise such as the pupil, sclera and eyelid; the network then separates out that noise, completing the extraction of the iris image. In other words, when the pre-trained iris semantic segmentation network is used to extract the iris image, the input is a human eye image and the output is an iris image from which noise such as the pupil, sclera and eyelid has been removed.
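The input/output behaviour described above (human eye image in, noise-separated iris image out) can be sketched as a masking step over the segmentation network's per-pixel labels. The class ids below are assumptions for illustration; the patent does not define a label scheme.

```python
import numpy as np

# Hypothetical class ids produced by the segmentation network; the patent
# only states that the pupil, sclera and eyelid are separated as "target noise".
CLASSES = {"background": 0, "iris": 1, "pupil": 2, "sclera": 3, "eyelid": 4}

def extract_iris(eye_image: np.ndarray, label_map: np.ndarray) -> np.ndarray:
    """Zero out every pixel the segmentation network did not label as iris."""
    iris_mask = (label_map == CLASSES["iris"])
    return np.where(iris_mask, eye_image, 0)
```

Only pixels labelled as iris survive; pupil, sclera and eyelid regions are blanked out before feature extraction.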
S103: and extracting feature vectors from the iris image by using a pre-trained iris feature extraction network.
In practical application, after the iris image has been extracted with the pre-trained iris semantic segmentation network, features must be extracted from it. Specifically, a pre-trained iris feature extraction network is used to extract a feature vector from the iris image, where the feature vector is derived from a small portion of the iris image. The pre-trained iris feature extraction network is based on the classical ResNeXt network. As an improvement to that network, the convolution operations of the convolutional layers in the original structure are replaced with dilated (hole) convolutions, and the output of the last fully connected layer of the ResNeXt network is taken as the iris feature vector.
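The dilated (hole) convolution that replaces the ordinary convolutions can be sketched as follows. This is a naive single-channel implementation for illustration only, not the ResNeXt layer itself.

```python
import numpy as np

def dilated_conv2d(x: np.ndarray, kernel: np.ndarray, dilation: int = 2) -> np.ndarray:
    """Naive 2-D dilated ("hole") convolution, valid padding, stride 1.
    With dilation d, a k x k kernel covers an effective (k-1)*d + 1 window,
    enlarging the receptive field without adding parameters."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1
    eff_w = (kw - 1) * dilation + 1
    out_h = x.shape[0] - eff_h + 1
    out_w = x.shape[1] - eff_w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Sample the input with gaps of `dilation` between kernel taps.
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

For a 2x2 kernel with dilation 2 the effective window is 3x3, which is why the larger receptive field comes at no extra parameter cost.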
In addition, since the feature vector can be extracted in more than one way, the application is described below in terms of one possible extraction manner.
In one case, the question is how to extract the feature vector from the iris image. Correspondingly, before the feature vector is extracted from the iris image using the pre-trained iris feature extraction network, the method further includes:
performing image interception on the iris image, after the target noise has been separated out by the pre-trained iris semantic segmentation network, according to an interception strategy, so as to obtain a first intercepted image;
wherein the interception strategy takes a square image block in the 6 o'clock direction of the iris image, one side of which is tangent to the inner edge of the iris.
In practical application, the whole iris image is too large for efficient feature extraction, so it must be intercepted. Specifically, the interception strategy takes a square image block in the 6 o'clock direction of the whole iris image, tangent to the inner edge of the iris; based on this strategy, the pre-trained iris semantic segmentation network performs image interception on the iris image from which noise such as the pupil, sclera and eyelid has been separated, yielding a first intercepted image. This first intercepted image, smaller than the full iris image, serves as the target of subsequent processing toward the final payment verification.
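The interception strategy above can be sketched geometrically. The coordinate convention (x rightward, y downward, so 6 o'clock is straight below the centre) and the parameter names are assumptions; the patent only states that the block lies in the 6 o'clock direction with one side tangent to the inner edge of the iris.

```python
import numpy as np

def intercept_six_oclock(iris_image: np.ndarray, center_xy, inner_radius: int,
                         block: int) -> np.ndarray:
    """Cut a `block` x `block` square in the 6 o'clock direction, with its
    top edge tangent to the iris inner (pupil) boundary and centred
    horizontally on the pupil centre."""
    cx, cy = center_xy
    top = cy + inner_radius       # tangent to the inner edge of the iris
    left = cx - block // 2        # square centred on the vertical axis
    return iris_image[top:top + block, left:left + block]
```

The returned patch is the "first intercepted image" fed to the feature extraction network.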
In addition, since the feature vector can be extracted in more than one way, the application is described below in terms of one possible extraction manner.
In one case, the question is how to extract the feature vector from the iris image. Correspondingly, S103, extracting the feature vector from the iris image using the pre-trained iris feature extraction network, specifically includes the following steps:
and extracting feature vectors from the first intercepted image of the iris image by utilizing the pre-trained iris feature extraction network.
As explained above, the whole iris image is too large for feature extraction, so it is intercepted; the first intercepted image, smaller than the iris image, is used as the target of subsequent processing, and the pre-trained iris feature extraction network extracts the feature vector from it. For iris digital images with a sufficient number of recognizable textures, different eyes are distinguished by annotating a binary mask map of the iris region of the original image and numbering the eye photographs.
S104: and during payment verification, comparing the feature vector with a feature sample stored in a server, if the similarity reaches a first preset threshold, considering that the payment verification is successful, and if the similarity does not reach the first preset threshold, considering that the payment verification is failed.
In a practical application, a plurality of feature samples are stored in a server for comparison at the time of payment verification. Specifically, after the feature vector is extracted from the iris image by using the pre-trained iris feature extraction network, the payment verification device based on iris recognition compares the extracted feature vector with the feature sample stored in the server. If the first preset threshold value is set to be 99%, when the similarity between one sample A in the feature samples and the feature vector reaches 99.5%, the target object corresponding to the current feature vector is determined to reach a payment verification condition, namely the payment verification is successful. Otherwise, if the similarity between all the samples in the feature samples and the feature vector is less than 99%, the target object corresponding to the current feature vector is determined to not reach the payment verification condition, namely the payment verification fails.
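The comparison step can be sketched as follows, assuming cosine similarity as the measure; the patent does not name the similarity function, so this is an illustrative choice, and the 0.99 default mirrors the 99% first preset threshold in the example above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_payment(feature: np.ndarray, samples: list, threshold: float = 0.99) -> bool:
    """Payment verification succeeds if any stored feature sample reaches
    the first preset similarity threshold."""
    return any(cosine_similarity(feature, s) >= threshold for s in samples)
```

With threshold 0.99, a sample at 99.5% similarity (like sample A in the example) passes, while a set of samples all below 99% fails.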
In addition, since first-time and subsequent payment verifications differ, the application is described below in terms of the possible verification modes.
In one case, the question is how to handle the different verification flows. Correspondingly, before the feature vector is compared with the feature sample stored in the server during payment verification, wherein the payment verification is considered successful if the similarity reaches a first preset threshold and failed if it does not, the method further includes:
judging whether the target object corresponding to the human eye image is using the method for the first time;
if yes, uploading the feature vector extracted from the iris image by utilizing the pre-trained iris feature extraction network to an iris sample library for storage;
if not, continuing to carry out payment verification.
In practical application, after a target user has used the iris recognition-based payment verification device, the device records that user's information so that the payment verification process can start directly next time. Accordingly, when the device is used, it first verifies the identity of the target user and judges whether the current target user is using the device for the first time. If so, the device obtains a quality-qualified human eye image of the target user, extracts an iris image from it using the pre-trained iris semantic segmentation network, extracts a feature vector from the iris image using the pre-trained iris feature extraction network, and finally uploads the feature vector to the iris sample library, that is, to the server, for storage. When the target user uses the device again, that is, when it is verified that the target object is not a first-time user, the payment verification process is entered: a quality-qualified human eye image of the target object is obtained, an iris image is extracted from it using the pre-trained iris semantic segmentation network, and a feature vector is extracted from the iris image using the pre-trained iris feature extraction network. Finally, the feature vector is compared with the feature samples stored in the server; if the similarity reaches the first preset threshold, the payment verification is considered successful, and if it does not, the payment verification is considered failed.
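The first-use branch described above can be sketched as a small enrol-or-verify gate. All names here are hypothetical, and the similarity function is injected rather than fixed, so the sketch stays independent of the actual feature comparison.

```python
class IrisPaymentGate:
    """First use enrolls the feature vector into the sample store;
    subsequent uses run payment verification against the stored sample."""

    def __init__(self, similarity, threshold: float = 0.99):
        self.samples = {}            # user id -> stored feature vector
        self.similarity = similarity
        self.threshold = threshold

    def handle(self, user_id, feature) -> str:
        if user_id not in self.samples:      # first use: enrol only
            self.samples[user_id] = feature
            return "enrolled"
        ok = self.similarity(self.samples[user_id], feature) >= self.threshold
        return "verified" if ok else "rejected"
```

A first call for a given user stores the feature; later calls compare against it and succeed or fail by the preset threshold.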
In summary, the application firstly obtains the human eye image with qualified quality. And then extracting an iris image from the human eye image by utilizing a pre-trained iris semantic segmentation network, and extracting feature vectors from the iris image by utilizing a pre-trained iris feature extraction network. And finally, comparing the feature vector with a feature sample stored in the server during payment verification, if the similarity reaches a first preset threshold value, considering that the payment verification is successful, and if the similarity does not reach the first preset threshold value, considering that the payment verification is failed. Therefore, iris recognition is realized by means of a high-precision algorithm of deep learning, the efficiency and the safety of online bank payment verification are improved, and better payment experience is brought to users.
Based on the payment verification method based on iris recognition provided by the embodiment, the application also provides a payment verification device based on iris recognition. The iris recognition-based payment verification apparatus is described below with reference to the embodiments and drawings, respectively.
Fig. 2 is a schematic structural diagram of a payment verification device based on iris recognition according to an embodiment of the present application. Referring to fig. 2, an iris recognition-based payment verification apparatus 200 according to an embodiment of the present application includes:
an acquisition module 201, configured to obtain a human eye image of qualified quality;
a first extraction module 202, configured to extract an iris image from the human eye image using a pre-trained iris semantic segmentation network;
a second extraction module 203, configured to extract a feature vector from the iris image using a pre-trained iris feature extraction network;
and a verification module 204, configured to compare the feature vector with the feature samples stored in the server during payment verification; the payment verification is deemed successful if the similarity reaches a first preset threshold, and deemed to have failed if it does not.
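A minimal sketch of the verification module (204): the query vector is compared against one or more feature samples held server-side. The use of cosine similarity and the function names are assumptions; the patent only requires a similarity measure and a first preset threshold.

```python
# Illustrative verification step: best similarity over stored samples must
# reach the first preset threshold. Cosine similarity is an assumption.
import numpy as np

def verify(query, samples, threshold=0.85):
    """Return True if any stored sample is similar enough to the query."""
    best = max(
        float(np.dot(query, s) / (np.linalg.norm(query) * np.linalg.norm(s)))
        for s in samples
    )
    return best >= threshold
```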
As an embodiment, the iris-recognition-based payment verification device 200 further includes a capture module, a comparison module, and a marking module;
the capture module is used to capture a human eye image of the target object with an image acquisition device;
the comparison module is used to compare the human eye image against a preset evaluation standard;
the marking module is used to mark a human eye image that meets the preset evaluation standard as a human eye image of qualified quality, and to mark a human eye image that does not meet the preset evaluation standard as a human eye image of unqualified quality.
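The patent leaves the "preset evaluation standard" open. As one hedged illustration, a quality check might combine a brightness range with a Laplacian-variance focus measure; the criteria and thresholds below are assumptions, not the patent's standard.

```python
# Illustrative quality evaluation: mean-intensity range plus a
# Laplacian-variance sharpness measure. All thresholds are assumptions.
import numpy as np

def laplacian_variance(gray):
    """Focus measure: variance of a 4-neighbour Laplacian response."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def is_quality_qualified(gray, min_mean=40, max_mean=220, min_focus=50.0):
    """Mark an image qualified only if it is neither too dark/bright nor blurred."""
    mean = float(gray.mean())
    return (min_mean <= mean <= max_mean) and laplacian_variance(gray) >= min_focus
```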
As an embodiment, the iris-recognition-based payment verification device 200 further includes a detection module;
the detection module is used to perform liveness detection on the target object with the image acquisition device;
if the target object passes the liveness detection, capture of the human eye image of the target object continues;
and if the target object fails the liveness detection, capture of the human eye image of the target object stops.
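The liveness check simply gates acquisition. In the sketch below, `detect_liveness` is a stand-in for an actual detector (for example blink or pupil-response analysis); the patent does not fix a detection method.

```python
# Liveness detection gates image acquisition: continue only on pass.
# `capture_frame` and `detect_liveness` are hypothetical stand-ins.
def acquire_if_live(capture_frame, detect_liveness):
    frame = capture_frame()
    if not detect_liveness(frame):
        return None   # liveness failed: stop acquiring the human eye image
    return frame      # liveness passed: continue to eye-image acquisition
```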
As an embodiment, the first extraction module 202 is specifically configured to:
segment the human eye image using the pre-trained iris semantic segmentation network to obtain an iris image containing target noise;
and separate the target noise using the pre-trained iris semantic segmentation network to extract the iris image;
the target noise includes the pupil, sclera, and eyelid.
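Given a per-pixel class map from a semantic segmentation network, separating the target noise amounts to keeping only iris-labelled pixels. The label ids below are assumptions for illustration.

```python
# Illustrative noise separation: zero out pupil, sclera, and eyelid pixels,
# keeping only iris pixels. Class ids are assumed, not from the patent.
import numpy as np

IRIS, PUPIL, SCLERA, EYELID = 1, 2, 3, 4  # illustrative label ids

def separate_noise(eye_image, label_map):
    """Return the eye image with non-iris (target noise) pixels set to zero."""
    return np.where(label_map == IRIS, eye_image, 0)
```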
As an embodiment, the iris-recognition-based payment verification device 200 further includes a cropping module;
the cropping module is used to crop, according to a cropping strategy, the iris image from which the target noise has been separated by the pre-trained iris semantic segmentation network, obtaining a first cropped image;
the cropping strategy takes a square image block in the 6 o'clock direction of the iris image, with one side of the square block tangent to the inner edge of the iris.
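Geometrically, the strategy places a square below the pupil whose top side touches the inner (pupil) boundary. The sketch below assumes image coordinates with y growing downward; the side length is a parameter, since the patent does not fix it.

```python
# Illustrative 6 o'clock cropping: a side x side block below the pupil whose
# top edge is tangent to the inner iris edge. Side length is an assumption.
import numpy as np

def six_oclock_crop(image, cx, cy, inner_r, side):
    """Crop a square block in the 6 o'clock direction of the iris.

    (cx, cy) is the iris centre and inner_r the inner-edge (pupil) radius.
    """
    top = cy + inner_r        # tangent to the inner edge at 6 o'clock
    left = cx - side // 2     # centred horizontally on the iris
    return image[top:top + side, left:left + side]
```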
As an embodiment, the second extraction module 203 is specifically configured to:
extract the feature vector from the first cropped image of the iris image using the pre-trained iris feature extraction network.
As an embodiment, the iris-recognition-based payment verification device 200 further includes a judging module;
the judging module is used to judge whether the target object corresponding to the human eye image is using the device for the first time; if so, the feature vector extracted from the iris image by the pre-trained iris feature extraction network is uploaded to the iris sample library for storage; if not, the payment verification continues.
In summary, the application first obtains a human eye image of qualified quality, then extracts an iris image from the human eye image using a pre-trained iris semantic segmentation network and extracts a feature vector from the iris image using a pre-trained iris feature extraction network. During payment verification, the feature vector is compared with the feature samples stored in the server; if the similarity reaches a first preset threshold, the payment verification is deemed successful, and if it does not, the payment verification is deemed to have failed. Iris recognition is thus realized by means of high-accuracy deep learning algorithms, which improves the efficiency and security of online banking payment verification and gives users a better payment experience.
In addition, the application also provides a payment verification device based on iris recognition, which includes: a memory for storing a computer program; and a processor for implementing the steps of the iris-recognition-based payment verification method described in any one of the above when executing the computer program.
In addition, the application further provides a readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the iris-recognition-based payment verification method described in any one of the above.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A payment verification method based on iris recognition, the method comprising:
obtaining a human eye image of qualified quality;
extracting an iris image from the human eye image by utilizing a pre-trained iris semantic segmentation network;
extracting feature vectors from the iris image by utilizing a pre-trained iris feature extraction network;
and during payment verification, comparing the feature vector with a feature sample stored in a server, wherein the payment verification is deemed successful if the similarity reaches a first preset threshold, and deemed to have failed if the similarity does not reach the first preset threshold.
2. The method of claim 1, further comprising, prior to the obtaining of the human eye image of qualified quality:
capturing a human eye image of the target object with an image acquisition device;
comparing the human eye image against a preset evaluation standard;
marking a human eye image that meets the preset evaluation standard as a human eye image of qualified quality;
and marking a human eye image that does not meet the preset evaluation standard as a human eye image of unqualified quality.
3. The method of claim 2, further comprising, prior to the capturing of the human eye image of the target object with the image acquisition device:
performing liveness detection on the target object with the image acquisition device;
if the target object passes the liveness detection, continuing to capture the human eye image of the target object;
and if the target object fails the liveness detection, stopping capture of the human eye image of the target object.
4. The method of claim 1, wherein the extracting of the iris image from the human eye image using the pre-trained iris semantic segmentation network comprises:
segmenting the human eye image using the pre-trained iris semantic segmentation network to obtain an iris image containing target noise;
and separating the target noise using the pre-trained iris semantic segmentation network to extract the iris image;
wherein the target noise includes the pupil, sclera, and eyelid.
5. The method of claim 4, further comprising, before the extracting of the feature vector from the iris image using the pre-trained iris feature extraction network:
cropping, according to a cropping strategy, the iris image from which the target noise has been separated by the pre-trained iris semantic segmentation network, to obtain a first cropped image;
wherein the cropping strategy takes a square image block in the 6 o'clock direction of the iris image, with one side of the square block tangent to the inner edge of the iris.
6. The method of claim 5, wherein the extracting of the feature vector from the iris image using the pre-trained iris feature extraction network comprises:
extracting the feature vector from the first cropped image of the iris image using the pre-trained iris feature extraction network.
7. The method according to claim 1, further comprising, before the comparing of the feature vector with the feature sample stored in the server during payment verification:
judging whether the target object corresponding to the human eye image is using the device for the first time;
if so, uploading the feature vector extracted from the iris image by the pre-trained iris feature extraction network to an iris sample library for storage;
and if not, continuing with the payment verification.
8. A payment verification device based on iris recognition, comprising:
the acquisition module is used to obtain a human eye image of qualified quality;
the first extraction module is used to extract an iris image from the human eye image using a pre-trained iris semantic segmentation network;
the second extraction module is used to extract a feature vector from the iris image using a pre-trained iris feature extraction network;
and the verification module is used to compare the feature vector with the feature sample stored in the server during payment verification, the payment verification being deemed successful if the similarity reaches a first preset threshold and deemed to have failed if the similarity does not reach the first preset threshold.
9. A payment verification device based on iris recognition, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the iris recognition based payment verification method as claimed in any one of claims 1 to 7 when executing said computer program.
10. A readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the steps of the iris recognition based payment verification method according to any one of claims 1 to 7.
Application CN202311076876.XA, filed 2023-08-24 (priority date 2023-08-24), published as CN117114690A (status: Pending): Payment verification method based on iris recognition and related products.
